Although seemingly different, the various approaches to defining tensors describe the same geometric concept using different language and at different levels of abstraction. A type (p, q) tensor can be defined as a multilinear map that is linear in each of its arguments; equivalently, it is a multidimensional array of components together with a transformation law that details how the components change under a change of basis, the primed indices denoting components in the new coordinates and the unprimed indices those in the old. The collection of all ordered bases of an n-dimensional vector space is a principal homogeneous space for GL(n), and when the transformation law uses the rational representations of the general linear group, this gives the usual definition of tensors as multidimensional arrays. The order of a tensor is the dimensionality of the array of numbers needed to represent it with respect to a specific basis, or equivalently the number of indices needed to label each component in that array; a type (n, m) tensor, in the sense defined previously, is also a tensor of order n + m in this more general sense. More generally, V can be taken over an arbitrary field of numbers F (e.g. the complex numbers), and the construction can be extended to arbitrary modules over a ring, though the mathematics literature usually reserves the term tensor for an element of a tensor product of any number of copies of a single vector space V and its dual. For infinite-dimensional vector spaces, inequivalent topologies lead to inequivalent notions of tensor, and the various isomorphisms may or may not hold depending on what exactly is meant by a tensor (see topological tensor product).

On components, addition and scalar multiplication are simply performed component-wise, and where the same index appears as both a superscript and a subscript, summation is implied. The tensor product, which allows products of arbitrary tensors, combines two tensors into one whose order is the sum of the orders of the factors; this expansion shows the way higher-order tensors arise naturally in the subject matter. Tensor contraction is an operation that reduces a type (n, m) tensor to a type (n − 1, m − 1) tensor, of which the trace is a special case; the dot product, which maps two vectors to a scalar, and the projection of one vector onto another vector are familiar examples. Penrose's diagrammatic notation replaces the symbols for tensors with shapes and requires no symbols for the indices.

In applications, it is common to study situations in which a different tensor can occur at each point of an object; for example, the stress within an object may vary from one location to another, and is describable by the forces acting on the faces of a cube-shaped infinitesimal segment. Such an object is a tensor field, and in some areas tensor fields are so ubiquitous that they are often simply called "tensors". Well-known examples in differential geometry are quadratic forms such as metric tensors; in general relativity the metric tensor plays the role of the gravitational potential, and the current density of electromagnetism is another standard example. Objects obeying more general kinds of transformation laws include tensor densities (under a rescaling of coordinates by a factor of 100, a density of weight one transforms as ρ′ = 100⁻³ρ), jets and, more generally still, natural bundles.

Tensor calculus was made accessible to many mathematicians by the publication of Ricci-Curbastro and Tullio Levi-Civita's 1900 classic text Méthodes de calcul différentiel absolu et leurs applications (Methods of absolute differential calculus and their applications); Einstein had learned about it, with great difficulty, from Levi-Civita. The exterior algebra of Hermann Grassmann, from the middle of the nineteenth century, is itself a tensor theory, and highly geometric, but it was some time before it was seen, with the theory of differential forms, as naturally unified with tensor calculus. Tensors of higher order do capture ideas important in science and engineering, as has been shown successively in numerous areas as they develop: in nonlinear optics, under extreme electric fields the polarization waves generated are related to the generating electric fields through the nonlinear susceptibility tensor χ(2), the polarization no longer being linearly proportional to the electric field E; in electron paramagnetic resonance, the computation of the g-tensor of a paramagnetic center can be based on density functional theory (DFT) and the use of gauge-including atomic orbitals (GIAO), the g values being obtained from rotations around three arbitrarily chosen but accurately known axes, and a basic knowledge of crystal physics is assumed.

Tensors have also been defined and discussed for statistical and machine learning applications, where a tensor is a type of data structure used to represent various kinds of objects including scalars, vectors, arrays, matrices and other tensors, though "rank" generally has another meaning there than in mathematics. TensorFlow's tensors are multi-dimensional arrays with uniform type; they are very similar to NumPy arrays and are immutable, meaning that they cannot be altered once created, so one can only create a new copy incorporating the edits. In PyTorch, a tensor is a computational representation over a storage, and the tensor struct itself records metadata such as sizes, strides and offset into the storage.

In the Maple Physics package, tensors can have spacetime and space indices, represented by letters. When Physics is loaded, the dimension of spacetime is set to 4 and the metric is automatically set to be galilean, representing a Minkowski spacetime with signature (-, -, -, +), so time in the fourth place; the simplifier returns zero whenever the metric is of Minkowski type. Some predefined sets of values for the spacetime metric, for instance the Schwarzschild metric or the metrics of Chapter 12 of "Exact Solutions of Einstein's Field Equations" (second edition), can be used by giving the metric name or a portion of it, and after setting the spacetime metric there is no need to set the coordinates again. When an algebraic expression involving tensors is a nested expression, the simplification of contracted indices is performed recursively, and the checking is concerned with possible unexpected values of the indices; this functionality is particularly useful when handling larger expressions where you want contraction, or any more selective simplification, to be performed only in some places.
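To make contraction and the change-of-basis behavior concrete, here is a minimal NumPy sketch; the component values and the basis matrix R below are arbitrary illustrations, not taken from any of the sources above.

```python
import numpy as np

# A type-(1,1) tensor on a 3-dimensional space, stored as a 3x3 array.
T = np.arange(9.0).reshape(3, 3)

# Contracting the single upper index with the single lower index yields a
# scalar: the trace, a special case of tensor contraction.
print(np.trace(T))            # 12.0
print(np.einsum('ii->', T))   # the same contraction written via einsum

# Change of basis: for an invertible matrix R, the components of a
# type-(1,1) tensor transform as T' = R T R^{-1}; the full contraction
# (a scalar, i.e. a type-(0,0) tensor) is unchanged by the transformation.
R = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0]])
T_prime = R @ T @ np.linalg.inv(R)
print(np.allclose(np.trace(T_prime), np.trace(T)))  # True
```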
# Special Aircraft Service
## Battlefield - Airborne - Tactical (BAT) => BAT Tech Help => Topic started by: MrStitch on September 22, 2017, 12:46:29 PM
Title: Missing missions and Vp modpack install for B.A.T. [not a bug]
Post by: MrStitch on September 22, 2017, 12:46:29 PM
Hi guys, I'm a bit new here but wanted to ask for some help. I've looked around the forums and have come close to an answer, but am still a bit lost. I have my IL-2 1946 installed and updated to Operation Sealion, but when I try to play some of the single player missions I start off as a crash. A good example would be the Luftwaffe German mission Bf109-N1. Also, I want to upgrade the graphics for WAW to VP Modpack standards. I have seen that it is possible, but the guide I found here https://www.sas1946.com/main/index.php/topic,54755.0.html
is a bit disorienting, especially since the modpack has so much inside of it. Can someone guide me through this so I can get it working? Also, no log file, sorry, it hasn't been generated due to the fresh install. Thanks for any help. Aim high. Out.
Title: Re: Missing missions and Vp modpack install for B.A.T.
Post by: SAS~vampire_pilot on September 22, 2017, 12:51:39 PM
A logfile is generated for each IL-2 session, so "no logfile because new install" is no excuse.
If you "crash on spawn", the mission you want to play is not compatible with the modpack you are in at that point.
Most likely it is a missing or changed loadout denomination or a whole aircraft missing.
Sadly that is an issue with the many years of service and versions of Il2 out there now. Can't be helped.
Title: Re: Missing missions and Vp modpack install for B.A.T.
Post by: MrStitch on September 22, 2017, 02:15:19 PM
Thanks vampire_pilot for the swift answer. I managed to create a log.txt, but it's crazy to read in Notepad. I know that there is a way or a program that makes it more legible, but I can't find it. I still need to know how to install the VP Modpack on top of BAT; any help would be appreciated. Here is my log.txt: pastebin.com/0cuUhXnG (I did it with pastebin.com).
Title: Re: Missing missions and Vp modpack install for B.A.T.
Post by: SAS~Storebror on September 23, 2017, 12:29:18 AM
Okay let's see...
Code: [Select]
[7:49:41 PM] java.lang.ClassNotFoundException
[7:49:41 PM] 	at com.maddox.rts.ObjIO.classForName(ObjIO.java:138)
[7:49:41 PM] 	at com.maddox.il2.gui.GUIBriefingGeneric._enter(GUIBriefingGeneric.java:1379)
GUIBriefingGeneric line 1379 is:
Code: [Select]
Class oriplaneClass = ObjIO.classForName(Main.cur().campaign.originalPlaneName());
That means the game couldn't load a plane required for your campaign.
The campaign you're trying to play is:
Code: [Select]
[7:48:02 PM] Mission: campaign/fi/DGen_F_Finland41doe0/10625.mis is Playing
That's the "Dynamic campaign for Finnish fighter pilot", living in the file "campaignsFiF.dat" inside the "DGen" folder.
"10625" is the very first mission according to the campaing schedule from "Finland41.DB" in the same folder.
The first mission set according to "campaignsFiF.dat" is "Finland41 Continuation War 1941".
In "planesFiF.dat" in the same folder, the regarding section reads:
Code: [Select]
[Finland41]
GLADIATOR1J8A
G50
F2A_B239
HurricaneMkIa
All 4 planes listed there do exist in BAT Operation Sealion.
This means that you have installed something on top of your BAT game that screwed up everything.
Either you know what you did (but forgot to tell us), in that case unroll it.
Otherwise: Start from scratch. We cannot know which mod(s) you've put on top unless you clearly tell us the full truth.
Best regards - Mike
Title: Re: Missing missions and Vp modpack install for B.A.T.
Post by: MrStitch on September 23, 2017, 03:32:09 AM
Thank you Mike for the reply. I did play the Finnish dynamic campaign before posting my log file, and it seemed to work fine. I did have issues with the first mission in the German Luftwaffe Bf109-N1 single player missions and a couple of others. After installing Operation Sealion, I went into my backup before I sent my log file and imported the missions folder. It was still crashing on startup, so I didn't notice a difference. I'm not the most tech-savvy guy, so I didn't think it would make all that much difference. Honestly, I don't think anything really changed, but I don't know. I just want to know how to get the VP Modpack to run alongside BAT, if someone could help me with that. Thanks for all the advice.
Title: Re: Missing missions and Vp modpack install for B.A.T.
Post by: DougW60 on September 23, 2017, 08:29:55 AM
MrStitch
Strongly advise not to attempt to merge VP with BAT, regardless of how little it may seem. If you have the hard drive space, create two distinct IL-2 games - one for BAT and one for VP.
Title: Re: Missing missions and Vp modpack install for B.A.T.
Post by: SAS~Storebror on September 23, 2017, 08:39:54 AM
Let me add that the basic procedure is to get one thing to work smoothly first, and only then consider adding another thing on top.
The issue report is somewhat confusing.
You write that you've played the Finnish campaign successfully before creating the log?
Why did you do that? We weren't asking for a log of a successful campaign, we were asking for an error log.
You write that the issues occurred on the first mission in the "German Luftwaffe Bf109-N1" single player missions. Why didn't you post a log of that one?
What's "a couple others"?
Sorry to say, but you won't go to your car dealer either with the following error report:
"Sir, my Mercedes is doing weird things, can't say exactly what, and a few other cars in my garage do the same. Here is my VW, it worked fine yesterday. Please fix my Mercedes and the others."
Best regards - Mike
Title: Re: Missing missions and Vp modpack install for B.A.T.
Post by: MrStitch on September 25, 2017, 10:18:11 AM
Hey guys, hope all is well with you all. I deleted my IL-2 1946 folder and reinstalled everything again up to Operation Sealion. It took a day or so. Things look a bit better this time around; however, I'm still having issues with the first single player mission for the Luftwaffe Bf109-N1, specifically the Mdtt Bf109 1943 that the mission defaults to. I can see it in the viewer, but every time I load the mission I insta-crash. Also, while running through the viewer I realized that the Piper J3 Cub 1938 is missing, the Mshi A7M2 Sam 1944 is purple (I think it's missing textures), and there is something at the bottom called Kubelwagen 1936 and Jeep 1936 that look like broken land vehicles with propellers on top and wheels in the air. Don't know what that's about. If anyone knows how to fix these issues it would mean a lot to me. My other question is this: I have been told not to install the VP Modpack on top of BAT, so I won't, but is there a way to get the effects and hi-res textures that the mod provides and move them to BAT? Also, here is a new log file from my current install with the first Luftwaffe mission: https://pastebin.com/XpDiJUsN. Any help is appreciated.
Title: Re: Missing missions and Vp modpack install for B.A.T.
Post by: SAS~Storebror on September 26, 2017, 02:12:05 AM
Okay I see.
The issue is caused by a name change in the weapon declaration for the Bf 109G-6.
I have no idea what the idea or reason behind changing the names of existing loadouts is, but as a matter of fact, it breaks existing missions and campaigns.
The cause of this is that BAT includes the Bf-109 Ultimate Pack v4 (https://www.sas1946.com/main/index.php/topic,37634.0.html), with enhanced loadouts, but unfortunately also with some identical loadouts under different names.
Here is a list of Bf 109G-6 loadouts for example:
Code: [Select]
Stock 4.12.2         Bf 109 Pack
default              default
R1-SC250             R1-SC250 R1-AB250
R1-SC500             R1-SC500 R1-AB500
R2-SC50              R2-SC50 R2-AB23
2xWfrGr21            R2-WfrGr21
R3-DROPTANK          R3-DROPTANK
R5-MK108             R4-2XMK108
R6-MG151-20          R6-2XMG151-20
2xWfrGr21-R3         R2R3-TANKWfrGr21
R3R6-MG151-20        R3R6-MG151-20 R3R4-2XMK108
U3-MK108             U3-MK108 U4R2-MK108WfrGr21
U4R3-MK108           U4R3-TANK1XMK108 U4R4-3XMK108
U3R6-MG151-20        U4R6-MK1082XMG151-20 U4R3R4-3XMK108 U4R3R6-MK1082XMG151-20
U3-NOMG131           U3R3-TANK-NOMG131 R2-RECON R2-RECON-DROPTANK R3-RECON-2XDROPTANK
none                 none
In your particular case, the mission wants to load a Bf 109G-6 with "U3R6-MG151-20" loadout, but that doesn't exist in the Bf-109 Ultimate Pack v4 (https://www.sas1946.com/main/index.php/topic,37634.0.html) (and therefore also not in BAT), it would have to be renamed to "U4R6-MK1082XMG151-20" in the mission file.
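If many missions are affected, such renames can be batch-applied with a small script. A rough sketch follows; the folder path and the RENAMES mapping below are just examples, not anything shipped with BAT, so back up your Missions folder first.
Code: [Select]
# Sketch: batch-rename loadout declarations in IL-2 .mis files.
# The path and the mapping below are illustrative examples only.
from pathlib import Path

RENAMES = {
    "U3R6-MG151-20": "U4R6-MK1082XMG151-20",  # stock name -> Bf-109 Ultimate Pack v4 name
}

def patch_mission(path):
    text = path.read_text(encoding="latin-1")   # .mis files are plain text
    fixed = text
    for old, new in RENAMES.items():
        fixed = fixed.replace("weapons " + old, "weapons " + new)
    if fixed != text:
        path.with_suffix(".bak").write_text(text, encoding="latin-1")  # keep a backup
        path.write_text(fixed, encoding="latin-1")
        return True
    return False

for mis in Path("Missions").rglob("*.mis"):     # adjust to your install's mission folder
    if patch_mission(mis):
        print("patched", mis)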
Best regards - Mike
Title: Re: Missing missions and Vp modpack install for B.A.T.
Post by: MrStitch on September 26, 2017, 07:59:39 AM
Thanks Mike. Does that mean that I would have to go to the mission folder open said .mis and edit the file. If so how would i go about doing that?
Title: Re: Missing missions and Vp modpack install for B.A.T.
Post by: vpmedia on September 26, 2017, 08:05:22 AM
Open the file with notepad and change 'weapons U3R6-MG151-20' to 'weapons default' and the mission will work. This is a bug in Bf-109 Ultimate Pack v4 because I'm getting the same crash in that mission.
Title: Re: Missing missions and Vp modpack install for B.A.T.
Post by: MrStitch on September 27, 2017, 01:42:24 AM
I tried what you mentioned, but I am still crashing on startup. I also changed it to U4R6-MK1082XMG151-20 as Mike suggested, with the same result. I will send a picture to check whether I did it correctly. Could somebody link me to a fix for my other issues? I really want to stabilize the sim.
(https://s26.postimg.cc/ug0gbvfix/mission_edit.jpg)
Title: Re: Missing missions and Vp modpack install for B.A.T.
Post by: MrStitch on September 28, 2017, 05:46:31 PM
I have been trying to tweak this all week now. I think I'm going to revert to vanilla until BAT patches itself up and resolves its issues. There is a lot of content, but many things are not cohesive or just downright broken, and I will put off the 50-plus gigs until it's gotten the attention that it needs. I think I might give up on this for now, or maybe try another modpack. I really would like to get this up in the air, but alas.
Title: Re: Missing missions and Vp modpack install for B.A.T.
Post by: PhantomII on September 28, 2017, 07:08:55 PM
Hi MrStitch, I just did a test on the mission you are trying to run. If you are still exploding when the mission starts try changing this section also:
[g0111]
Planes 1
Skill 3
Class air.BF_109G6
Fuel 100
weapons U4R6-MK1082XMG151-20
I was having the same results as you until I changed this section also.
Title: Re: Missing missions and Vp modpack install for B.A.T.
Post by: MrStitch on September 30, 2017, 07:05:29 AM
Quote from: PhantomII on September 28, 2017
Hi MrStitch, I just did a test on the mission you are trying to run. If you are still exploding when the mission starts try changing this section also:
[g0111]
Planes 1
Skill 3
Class air.BF_109G6
Fuel 100
weapons U4R6-MK1082XMG151-20
I was having the same results as you until I changed this section also.
OMG, dude, you are the man. This totally worked and fixed an age-old headache. I reverted back to vanilla and then reinstalled BAT in one folder and the VP Media modpack in another, and it works in both cases. Thank you, thank you, sweet baby Jesus.
## Thoughts on Google in China
I stumbled upon this MIT Technology Review article: https://www.technologyreview.com/s/612601/how-google-took-on-china-and-lost/. It was quite accurate and well-written. I have put it in my reprints section. Link: https://gmachine1729.com/reprints/how-google-took-on-china-and-lost/.
I remember seeing Google used back in its infancy in 1999. Moreover, probably around 2000, my mom showed me some newspaper article on the two Google founders, Larry Page and Sergey Brin. Her words were something like, "they're so young and already so rich and successful through this. If you're really good at math, in the future maybe you could become like them too." I was very little at that time.
There isn't actually any serious math in most software engineering. Sure, PageRank has some math involved, with matrices and eigenvectors applied to "link analysis," but overall it's more of an engineering discipline, with the math just a tool. My math ability is quite strong but nothing spectacular, and my software engineering ability is probably quite mediocre, though certainly good enough to be a software engineer at a top company, as I've already been.
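As an aside, the eigenvector computation behind PageRank is small enough to sketch. Here is a toy power iteration in Python; the three-page link graph is invented for illustration, and 0.85 is the damping factor from the original PageRank paper:

```python
import numpy as np

# Toy link graph: adjacency[i][j] = 1 if page i links to page j.
adjacency = np.array([[0, 1, 1],
                      [1, 0, 0],
                      [0, 1, 0]], dtype=float)

# Column-stochastic transition matrix: follow an outlink uniformly at random.
M = (adjacency / adjacency.sum(axis=1, keepdims=True)).T

d = 0.85                      # damping factor from the original paper
n = M.shape[0]
G = d * M + (1 - d) / n       # random surfer with uniform teleportation

# Power iteration converges to the dominant eigenvector (eigenvalue 1),
# whose entries are the PageRank scores.
rank = np.full(n, 1.0 / n)
for _ in range(100):
    rank = G @ rank
print(rank / rank.sum())
```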
Speaking of math, Sergey Brin's father was a math professor at the University of Maryland. Due to his being Jewish, he wasn't able to officially be a graduate student in the USSR, but the system there back then was flexible enough to let him earn his PhD by passing some exams and writing a thesis with some original work in his spare time while he worked at Gosplan, if I remember correctly, the institution involved in economic planning. His dad was unable to get a full-time job doing math despite the PhD, and despite being quite a good mathematician. Eventually, their family made the difficult move of immigrating to the US, and Sergey ended up hating the USSR for "totalitarianism."
We all know that the USSR very much sided against Israel during the Cold War, so Jews there were by default persona non grata, though you could become an exception if you really proved yourself not too Jewish in your politics or whatnot, as did Iosif Kobzon (the baritone singer of Soviet red songs considered the Russian Frank Sinatra) and some others. In any case, the USSR didn't let Jews fuck up the country for their own benefit as the US has done, which is quite respectable. The Jews there made enormous contributions to the arts and sciences with their talents, though not in a way that was so much "for the Jewish interest," as has been the case in America.
I don’t exactly blame Sergey for his political stance. He’s a Jew, not a Russian. I bet he never really felt Russian, just like how I never felt American despite growing up in America. To align with the US over the USSR is very natural for a Jew, for reasons too obvious.
Before I developed some knowledge and credentials, I naturally regarded Google very highly, almost blindly so. But over time, I saw some not all that great people becoming software engineers there, which is only natural given how many people they hire. A PhD student told me, to my great surprise, during my second year of college, that I'm definitely smarter than the average Google developer. IQ-wise that almost certainly is the case, but being a successful software engineer there is about much more than IQ.
Now I obviously don't hold Google in any awe. Almost certainly, it has the best distributed systems and AI technology. It has the most active users of any internet company in the world (its search engine, Gmail, Chrome, etc.). I know and have interacted substantially with many engineers there. 90% of its money comes from advertising, and because advertising is so lucrative when you are such a huge media platform, it can afford to pay its employees better, even if most of its engineers do pretty mundane work. Google has also done quite well at marketing; it has come across as so cool and sexy that anybody who challenged it would mostly be viewed as rather strange and uncool.
Larry and Sergey founded the company as graduate students at Stanford. They made a prototype search engine (pretty much a toy project), and I read they almost sold it for a million dollars (the offer was rejected because the other party probably didn't find their thing all that great). But after persisting with it and turning it into a company, they managed to secure enough funding and credibility that they could hire some really top-notch engineers to make a top-notch technical product.
Yahoo was number one before Google (and was even close to acquiring it), but eventually Google triumphed. One could say that Jerry Yang and David Filo could have become Larry and Sergey. Or maybe not. Larry and Sergey had a better background: US venture capitalists would naturally prefer Jews, especially a Jew from the Soviet Union who denounced it. Quality of technology is only one aspect of success; connections and marketing tend to matter way more, and usually once you have enough of the latter, you can more or less buy the former. Larry and Sergey certainly weren't the best at technology themselves, but they managed to hire the people who were, to create the real Google. In fact, people have told me there are still traces on the Internet of them asking some really naive technical questions.
As written in that article from Technology Review linked above, the Chinese government has gained much more power and credibility over the past decade, though it is still disliked in the West. A decade ago, the Chinese government felt that China really needed Google and the Silicon Valley giants for their technology and expertise, and thus had to make certain concessions; now, that is no longer the case. A decade ago, people in China still really looked up to America, and challenging America's credibility, especially that of its top institutions, like Google, like Harvard, would have earned you some really funny looks in China. Now, with the benefit of China's sizable advance in economy and technology, the trend seems to be turning. People are thinking more critically in the face of authority, myself included, reaching conclusions that were politically difficult to accept a decade ago.
From my reading and talking with people in China: as China gradually opened up in the 1980s, with more Chinese going to America and American media spreading in China, many in China lost confidence in the home country and eventually questioned its ideology and political system. The difference in level of technology and standard of living was one between heaven and earth. For instance, back then, cars were something that pretty much only organizations could afford. For the best of that generation, success meant being able to go to America for graduate school. Of course, the difference between the US and China in 1980 was far smaller than in 1950, but people then did not think that way; they only saw, superficially, that the material standard of living in America was leagues higher. It was such that people even looked up to the four Asian tigers of South Korea, Taiwan, Hong Kong, and Singapore as examples to learn from, let alone Japan. After the '89 incident, the political climate in some sense went further in that direction despite the government crackdown, and more people wanted to leave the country.
I've come to realize more and more over the past year or two that over the 40 years of opening and reform, China did not get all that much from America, nothing that came close to outweighing the risk of being dragged into a fire, which it managed to (one could say, narrowly) escape. In contrast, what the Soviets gave to China in the 1950s, industry- and technology-wise, provided China's modern foundation; it has been decisive to China's success today. Moreover, the political and cultural influence of the Soviet Union on China is actually a durable one, which has drastically transformed the inner soul of the Chinese people and nation for the better. It is remarkable that forty years of direct exposure to and interaction with a powerful and subversive America could not defeat it, with the trend now turning in the other direction.
I view America and the Anglo world as powerful but shortsighted. There is a culture and system wherein the elites, to enrich themselves, in the long run screw over their entire system, which inevitably burns them too. For instance, the colonists in America didn't have enough people to do farm work, so they imported slaves from Africa, and so we have today the black problem in America. More consequentially, there is the deindustrialization of America, which will be very difficult to reverse, a side effect of the outsourcing through which the elites could reap higher profits. American elites judge too much based on the virtual economy and artificial economic indexes, whereas the Russians and Chinese, worse at self-deception, judge more on actual capability and quality in technology and physical production.
I've seen other Chinese judge the Jews similarly. By the way, Google is largely run by Jews, so one can view Google as somewhat a reflection of the Jewish way of thinking and doing things. You see, Google has almost religious faith in the whole "democracy and openness are necessary for innovation" bullshit through which the American political mainstream judges China. In the Anglo media, you see many a Jewish pundit, supported behind the scenes by Jewish finance and media, linking America's creativity and innovation to free speech, the market economy, and the democratic system. I think it has much more to do with the Industrial Revolution having started in Great Britain (as for why, that's very complex) and with the Anglo world's colonization and exploitation of resources and labor than with any of those meaningless political buzzwords. The latecomers, Germany, Japan, the Soviet Union, and China, have exceeded the Anglo world in industry and technology in many respects, and all four of them developed very rapidly when they were "totalitarian dictatorships."
Recently, a Chinese mother (who spent a decade in the West but returned to raise her kid) cracked me up by saying in Chinese:
I feel like there's no comparison between Jews and Chinese. Not even in the same league: one is a fox, the other is a lion.
Though a bit of an exaggeration, there is much truth in this, and I can see such a metaphor in Google's interaction with the Chinese government as well. Sergey Brin and those Jews and Jew ass-kissers within Google, many of them technically brilliant, thought they were so smart and crafty and bound to win, and the test of time only made fools of them. Same with the pro-West faction in China during opening and reform. For a while, many people bought into them, but the long-term outcome was that they became more despised by the mainstream in China once it became more manifest how lacking in foresight they were.
Jews are brilliant and they are crafty. They can produce genius scientists and artists, and they know how to do business and gain political power in Western culture. For that, they can be quite full of themselves. But collectively, they lack the deep foresight and wisdom possessed by the Chinese, who have actually had their own country for millennia. Jews throughout history basically ran around leeching off other peoples, while the Chinese, whose civilization started in the so-called Central Plain, gradually expanded their culture to pretty much all the land area of the People's Republic of China today. Despite conquest by the Mongols and Manchus in the Yuan and Qing dynasties, Han Chinese culture in reverse culturally conquered the conquerors. And despite modern science having been developed in the West while the country was semi-colonized and plundered following the Opium Wars, China managed to produce a Mao, a Communist Party, and a modern China that is set to triumph over the Jewish-run West on virtually all fronts.
This difference, I believe, is very much a fundamental difference in the collective genes of the two groups, of which the genes of the individuals, later moulded by the collective cultural environment, are constituents that interact in a complex fashion. It is inherent and cannot be changed. On this, I regard myself as yet another data point, a true outlier not so much in my ability as in my perspective, which enables me, with the benefit of reading and experience, to independently come to the conclusions expressed and elaborated in this very piece, and in others on my blog, despite my having been supposed to be culturally moulded by the other side by virtue of growing up in America.
Finally, something related to Google that I'd like to point out is that I have an extension published there for organizing browser tabs: https://chrome.google.com/webstore/detail/tab-organizer/mbmmpilinpiapfcmknmjgikgoeicadka. You're welcome to check it out. I mentioned this to a colleague at work in China, and some people in the company use it. While doing software engineering, I can easily have 100+ tabs open, so this tool helps me easily find and navigate to the tab I want, and select and delete the unwanted ones in bulk. Though I dislike the top leadership and political values of the company, I still contributed to their developer community, and not only for free: publishing it required paying a $10 fee to create a developer account. Chrome was certainly an extremely successful product, and if I remember correctly, the fact that Pichai spearheaded it helped him become CEO.

## 卖国骗子李开复 (Kai-Fu Lee, the country-selling fraud)

Regarding him, in September 2018 I posted an answer on Zhihu to the question of why people increasingly dislike Kai-Fu Lee (李开复).

Yesterday, I saw on Sohu the piece 别了，李开复–奇特"导师"不为人知的二三事 (Farewell, Kai-Fu Lee: a few little-known things about a peculiar "mentor"), which naturally reminded me of my earliest impressions of him. I also promoted that article on WeChat, with the following text as introduction:

I remember how in my first year of high school, knowing nothing, I was taken in by one of his books. At the time I lived almost entirely within the American school environment, did not understand the internet industry, and held it in a certain awe, so I naturally believed what a person like that wrote. Back then I also unquestioningly thought very highly of top American schools and technology companies, and understood much of China through American or Chinese-American eyes. Later, slowly, through accumulated knowledge and experience, I dared to draw conclusions unimaginable in America: China got very little of its core technology from America, far more came from the former Soviet Union; Microsoft and Google do not count as all that core a technology, the barrier to entry is not that high, and the latter in particular depends mostly on the English-language internet; Kai-Fu Lee is severely polluting the worldview of Chinese youth; it is best to keep some distance from America, some trade and academic exchange being enough, and to put more energy and resources into developing China's own companies and institutions, thoroughly marginalizing those who serve America as compradors.

That book, 《与未来同行》, was one my mother was reading and suggested I read. My Chinese was not very good then either, but the book was written entirely in very plain language. I remember my mother telling me at the time that Kai-Fu Lee had been a VP at Microsoft and then a VP at Google. Neither of my parents are software developers; they did not study computing and know very little about computers, and at the time I knew nothing about computers either, but my math counted as good among my classmates, so naturally my mother may have thought I would become interested in computers in the future, and wanted to steer me into this relatively lucrative industry. Later, I did work as a programmer at big Silicon Valley companies, and I received software engineering offers from both of the giants where Kai-Fu Lee had been an executive; I know quite a few people at those places and understand what they are about.

In any case, I remember that what Kai-Fu Lee wrote in that book was all his philosophy of education and philosophy of life, nothing of real substance, mostly mythologizing the successful people of Silicon Valley, saying how good America is in this way and that, and how flawed Chinese education and schools are in this way and that. Much of that I probably believed at the time.

Later, I studied computer science and wrote code that directly affected tens of thousands of users, and came to understand what this industry is about. At the same time, I took graduate math courses and won some prizes in high school and college math competitions, not big ones but not negligible either, which at least proved I had a certain ability; I got to know some math PhDs and so on, and became much harder to fool. Likewise, my Chinese improved greatly, giving me access to information that people in America can rarely access, and a more correct understanding of China's historical background.

Speaking of China's historical background, that Kai-Fu Lee is a descendant of the Kuomintang is something I only truly learned recently; before, I knew only that he is a Taiwanese who came to America in middle school. His father, moreover, was a "historian" who smeared the Chinese Communist Party, and his father's brothers were both executed by the people's government in 1951. Quite obviously, there are powerful forces behind him supporting his infiltration of China; this too I only truly recognized in 2018.

Growing up in America, taking American ideology as the standard is naturally the default. Growing up, what I heard about China was all that Chinese manufacturing is poor quality, that China lacks innovation, that Chinese need to run to America to have more creativity; I saw that the best ethnic Chinese scientists did their work in America, and what Kai-Fu Lee wrote ran basically in this direction. I came to America because my parents came; much of what I heard growing up was that people who could make it to America all had a certain ability, and that it was because we had ability that you could enjoy a superior life in America. What one got was always a sense that China was no good: that they were too poor in childhood, while Americans of their generation all had the resources to live far richer lives, which is why Chinese all want to run to America and must all learn from America. A simple example: so many Chinese parents in America make their kids learn piano because they themselves never had that opportunity as children; they see it as one of the standards of being a successful parent, and they compare with other parents over it.

Later, I gradually found that the outlook of most of these first-generation immigrants I encountered had very serious problems. In the China of their childhood, individuals were poor, without money, and they could not learn piano, but that does not mean there would not be an extremely small number of children, from special families or with exceptional talent, who could obtain some state resources and become concert pianists (and in any case, for a person without strong musical talent, learning piano has little meaning or value). They look at many things very superficially and lack foresight. They feel that merely having a connection to America makes them remarkable, even if they have no real status and are just very ordinary engineers, not realizing that whites basically look down on them. And then an ethnic Chinese who has made a bit of a name in America, like Kai-Fu Lee, becomes really something in such people's eyes.

Honestly, substance matters far, far more: it depends on what you understand and what you have done. America has its lower rungs too, including Chinese who went there to wash dishes. The American brand truly had value back in the day, but none of this is eternal; the American brand is now worth far less than before, and it will keep moving in that direction. With this trend, Kai-Fu Lee is finished too. Chinese people are now more awakened, increasingly recognizing that he is just a lackey of Microsoft and of anti-China Google, an agent in China. His way of doing things cannot possibly give him a permanent position. Because he once fooled the ignorant child that I was, the grown-up, worldly me can only be all the more repelled by him; let him reap what he has sown.

Kai-Fu Lee is in his bones an American comprador. The mission in his heart is to wreck China's political situation into one forever under others' control, so as to realize his dream of "foreigners first, themselves second, Chinese at the bottom." He is a descendant of the political cancer that imperialism nurtured and fostered in China before liberation; much of that cancer was killed off, yet he fantasizes about its revival. As a Chinese who grew up in America through no choice of his own, I support the Chinese people in wholeheartedly and thoroughly sweeping away vermin like Kai-Fu Lee!

I remember once mentioning Kai-Fu Lee and Steve Hsu (a distant relative of Chiang Kai-shek who took CIA money and used his internet security company to infiltrate China) as very successful Chinese-Americans. The response was: "the problem is that once America applies a certain pressure on them, they will betray us and hurt us without hesitation." And not just Kai-Fu Lee and Steve Hsu: basically the same judgment can be passed on any first-generation immigrant raising, in America, banana children with no way back. There are exceptions, and to those exceptions and their children my advice is simple: find a way to detach as soon as possible.

These people are all representative legacies of the Nationalist-Communist split, and American society has kept promoting people like them, using them to divide China. The greatest threat to China is not anti-China whites but these neither-Chinese-nor-Western compradors with warped values. I personally have kept thinking about how to make my life more meaningful and more valuable. Perhaps, on my talent, I could make some name in a STEM field, but I feel that in the environment and with the opportunities I received in America, my talent in that direction was never fully brought into play. Even if that road had gone smoothly, sooner or later it would still meet a certain awkwardness, because my experience tells me that no matter how outstanding an individual becomes, the Chinese cannot sustain themselves in America across generations; that environment heavily suppresses the generation-to-generation continuity of the Chinese, and it is hard for such collective factors not to affect the individual.

So I would rather use my ability and background to help China take down traitors like Kai-Fu Lee, strengthen the cohesion of the Chinese nation, and win for the Chinese nation a more harmonious and beautiful future!
## 柳传志，倪光南，联想，柳青，滴滴出行 (Liu Chuanzhi, Ni Guangnan, Lenovo, Jean Liu, Didi Chuxing)

I interviewed at Didi Chuxing. What left the deepest impression was that the question the final interviewer asked me was one from the USACO training pages: given a natural number $n$, how many distinct equal-sum two-part splits of $\{1,2,\ldots,n\}$ are there; in more mathematical language,

$\left|\{I, J : I \cap J = \emptyset, I \cup J = \{1,2,\ldots,n\}, \sum_{i \in I} i = \sum_{j \in J} j\}\right|.$

I had done this problem in my second year of high school in America. A classmate of mine, a Russian whose parents were both programmers (while mine are not), understood something of real programming, whereas I of course had no concept of computers or software development, only the math ability; yet he could not figure this problem out even after a week. I, on the other hand, realized before long that one can compute the coefficients of $\prod_{i=1}^n (1+x^i)$ by dynamic programming, the coefficient of index $n(n+1)/4$ being the value we want, and before long I wrote out this dynamic program and submitted it successfully.

Honestly, the concept of dynamic programming is really quite trivial; even the usual method of computing Fibonacci numbers counts as dynamic programming. To anyone with mathematical thinking, dynamic programming feels quite natural.
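Here is a minimal Python sketch of that DP (not the code I submitted back then). One detail worth spelling out: the coefficient of index $n(n+1)/4$ counts each unordered split twice, once as $I$ and once as its complement, so the number of distinct splits, which is what USACO asks for, is that coefficient halved.

```python
# Minimal sketch: coefficients of prod_{i=1}^{n} (1 + x^i) by dynamic programming.
def equal_sum_splits(n: int) -> int:
    total = n * (n + 1) // 2
    if total % 2:                # odd total sum: no equal split exists
        return 0
    target = total // 2          # = n(n+1)/4
    coeff = [0] * (target + 1)   # coeff[s] = number of subsets of {1,...,i} summing to s
    coeff[0] = 1
    for i in range(1, n + 1):
        for s in range(target, i - 1, -1):  # iterate downward so each i is used at most once
            coeff[s] += coeff[s - i]
    # Each unordered split {I, J} is counted twice (as I and as its complement J),
    # so the number of distinct splits is the coefficient halved.
    return coeff[target] // 2

print(equal_sum_splits(7))   # 4, the answer for the USACO sample input n = 7
```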
There is not much more to say about that problem. What I would rather talk about is that at the time I did not know the background of the company Didi Chuxing, especially of its top leadership.

Only after a while did I learn that Jean Liu (柳青) is Didi's president, that she is the daughter of Lenovo founder Liu Chuanzhi, and that her path ran Peking University, then Harvard, then Goldman Sachs, then Didi. I thought at the time: for someone of her background, getting ahead at Goldman must be especially easy. I also remember that she got cancer in 2015 and was cured within two months, and that she is married and the mother of three children. The internet writes of how she works 100 hours a week and sleeps only two or three hours a night. How to put it: I find that hard to believe. People with the stamina to actually do that are extremely rare, the top 0.1% at least. And besides, what counts as working time? We all know that even at the office, some of one's time is spent not really working, or in a half-working state.

I remember the internet wrote the same about Marissa Mayer, Yahoo's female CEO and former Google executive: how she worked 90 hours a week, how hard-driving she was. Yet after Yahoo's collapse, her reputation really did not feel so good anymore. A former colleague of mine, a white American, said a friend of his had even worked as Marissa Mayer's household chef and, after leaving that job, told him some frightening stories, such as "she never touched her baby." That colleague's feeling about Marissa Mayer, in the words he said to me, was: "But she's just such a sociopath!" Many have said that it's mostly sociopaths at the top.

On the point of sociopaths/psychopaths, I welcome the reader to consult some references; the last one, on East Asian sociopaths, reminds me that sociopathy among East Asians is definitely comparatively low. In America the most sociopathic are the Jews and the Indians, then the British, then the continental Western Europeans, then the Eastern Europeans, with East Asians the lowest. Steve Hsu, himself a distant relative of Chiang Kai-shek, has a bit of sociopathy too; after all, he founded a company and became its CEO, though he is no match for the upper ranks of Silicon Valley.

Later I learned that in China, Lenovo is already seen as a comprador, country-selling enterprise flying the state-owned flag, and Liu Chuanzhi as a businessman who is either short-sighted or out only for his own gain. To quote a certain person:

Lenovo says it is not a Chinese company; its good products are supplied to Americans first. On Zhihu they are cursed as traitors every day. Being Liu Chuanzhi's daughter does not look like a positive asset right now. I in any case think father and daughter are both conscienceless capital machines. With scoundrels, the harder they work, the more destructive they are.

I observe that Jean Liu is indeed quite pretty and has the look of a successful career woman, much like Marissa Mayer, and her father too comes across as an alpha-male leader, in both appearance and tone.

Someone also told me of online rumors that Jean Liu got into Peking University through the back door. If that is true, then this world really is too dark, truly a psychopaths' world.

Naturally, I also wonder whether the Liu family carries psychopath genes: the father being what he is, the daughter and the niece being presidents of Didi and of Uber China respectively, with the whole family managing to appear, on the surface, free of nepotism; all that "born better off than you, and works harder than you too."

Reading material on Liu Chuanzhi also taught me about Ni Guangnan, who was Lenovo's chief engineer for over a decade. Once Lenovo had accumulated some capital, he pushed for developing core technology, such as chips and an operating system, fell out with Liu Chuanzhi and the rest of Lenovo's top ranks over it, and was forced to leave. By now, history has essentially proven Ni Guangnan right and, on the contrary, Liu Chuanzhi to be a comprador traitor who cared only about making money.

No need to rehash Didi's huge losses and layoffs; fortunately I did not go there to serve as cannon fodder for Jean Liu's crowd. The Uber/Didi business model is very hard to make money on. The value it creates is at most making ride-hailing somewhat more convenient and cheaper: essentially, venture capital money subsidizing convenient, cheap rides, while the ones who suffer are taxi drivers and taxi companies. Uber too has been losing money heavily all along; even so, America, whose people are not yet as awakened as the Chinese masses, does not yet seem to have dealt Uber a real reputational blow. Uber and Didi have each raised over ten billion dollars, which shows that many moneyed institutions still believe in them. My guess is that what they want is to use temporarily low prices, funded by the money they raised, to squeeze out traditional taxis, and then, once they need to make money, pull prices back up. I suspect this is the latent arrangement of the interest groups backing them: a way of shifting a great deal of money from the hands of taxi companies into the hands of people with venture capital backgrounds. Pretty dark indeed, the classic socially harmful conduct of psychopaths. What the future holds, only time will tell. In China, whether Didi ultimately prevails or gets cut down by the Party is hard to say. Perhaps, if Meng Wanzhou is extradited to the US, the Party will use that as a pretext to purge the pro-America interest groups in China. We shall see.

From Wuyouzhixiang (乌有之乡): 柳传志涉嫌资敌罪，国家应该调查联想集团 (Liu Chuanzhi is suspected of the crime of aiding the enemy; the state should investigate Lenovo Group).

## YouTube removing subscribers of American Nathan Rich who speaks the truth about China (油管在把一位讲中国真实的美国人的订阅者减掉)

Video link (on Weiyun, accessible inside the wall): https://share.weiyun.com/5aOdhYn

Last week, someone studying in America posted the YouTube link to that video in a WeChat group, and only then did I learn of this person. I remember seeing 900-some or 1000-some subscribers at the time; now there are only 597.

It seems that in time China's official YouTube accounts, CCTV's, Xinhua's and the like, will be taken down too. I seem to recall seeing some news that many mainland YouTube accounts were recently blocked.

What is the basic principle here? Power is the foundation. This is media controlled by others; they can do whatever they want with it. China still lacks power in the world. Few countries genuinely listen to China, really just Pakistan and North Korea, and even to North Korea China only counts as the second brother, with Russia, heir of the Soviet Union, still the eldest. The Huawei affair tells us that the most fundamental thing is the ability to pull other countries into China's system and camp. Economics alone is not enough for that; most fundamental is political influence, and the basis of political influence is more military and political relationships. Chinese people need to make a much greater effort on this front, and gradually, through experience, find more effective methods.

## vKontakte (вКонтакте) and a return to Russian

On Sunday, while taking a break, I decided to see what's up on Unz Review. Most memorable was https://www.unz.com/ishamir/banned-by-facebook-for-telling-the-truth/ by Israel Shamir. Ultimately, that inspired me to make a list of alternatives to media sites and internet services controlled by Israel/Jews/Zionists/Hasbara. vKontakte, which means "in contact," was on that list. And so, with some free time, I made an account for myself and chatted on there with a Russian tenured professor in the US with similar views. I haven't touched Russian (the language) for a while, but vKontakte sort of brought me back to it. Just as one can't avoid the ads within one's Facebook Messenger contact list, one can't avoid Russian content on vKontakte. I saw plenty of posts in Russian with comments in Russian. So, yet another distraction. It's quite unlikely that I'll ever use the language very directly, especially in a way that helps my career. The main benefit of knowing it is to access some content: the better I am at reading it, the easier it will be for me to learn about that other really rich and powerful cultural world. I remember long ago somebody told me that Russian is useless, because Russia's economy sucks and they're really only good at aerospace, but they're not going to give you their state-of-the-art aerospace technology anyway. To be fair, the content in Russian, both in science and engineering and in politics and the arts, is quite substantial; it rivals or even exceeds that of the English-language world. I am already quite a fan of their music after all, which was what led me to the language in the first place. And it's some actually high culture, not the spiritually poisoning garbage in the English media.

I'm not sure where vKontakte will lead me. Maybe I will find it more or less useless. Maybe, more positively, I'll actually meet some more interesting people on there, and in addition enjoy some more authentic content in Russian. More generally, it is quite the peace of mind to be far away from the suffocating and mind-killing cultural and political environment that is America, that is the English-language mainstream media, that is a manipulation by a group that is a political cancer of the planet. I only want to dissociate further from it. I encourage more dissatisfied people to do what they can to get away, or at least to explore outside it a bit, which is much more effectual than simply moaning on those very channels and media. Readers of this blog are now welcome to add me on vKontakte: https://vk.com/id527440648.

## 周末过得还相当充实，做了些翻译和业余的编程 (A rather full weekend of translation and hobby programming)

Friday night, it occurred to me that I could translate into Chinese the article about the Meng Wanzhou affair that Ron Unz published on his media site, Unz Review. So that is what I did on Saturday. The few passages that did not translate well, I left out. The result:

=================================================================

Averting World Conflict with China
避免与中国的全球冲突

The PRC Should Retaliate by Targeting Sheldon Adelson's Chinese Casinos
中华人民共和国应当以针对谢尔登·阿德尔森的中国赌场回击

RON UNZ • DECEMBER 13, 2018

As most readers know, I'm not a casual political blogger and I prefer producing lengthy research articles rather than chasing the headlines of current events. But there are exceptions to every rule, and the looming danger of a direct worldwide clash with China is one of them.

如大多读者所知，我不是一个一般的政治博主，我更倾向于创造较长的研究性文章而非追随时事中的标题。不过每一个规律都有意外，与中国直接全球化冲突的风险也是其中之一。

Consider the arrest last week of Meng Wanzhou, the CFO of Huawei, the world's largest telecom equipment manufacturer. While flying from Hong Kong to Mexico, Ms. Meng was changing planes in the Vancouver International Airport when she was suddenly detained by the Canadian government on an August US warrant. Although now released on $10 million bail, she still faces extradition to a New York City courtroom, where she could receive up to thirty years in federal prison for allegedly having conspired in 2010 to violate America's unilateral economic trade sanctions against Iran.
Although our mainstream media outlets have certainly covered this important story, including front page articles in the New York Times and the Wall Street Journal, I doubt most American readers fully recognize the extraordinary gravity of this international incident and its potential for altering the course of world history. As one scholar noted, no event since America’s deliberate 1999 bombing of China’s embassy in Belgrade, which killed several Chinese diplomats, has so outraged both the Chinese government and its population. Columbia’s Jeffrey Sachs correctly described it as “almost a US declaration of war on China’s business community.”
Such a reaction is hardly surprising. With annual revenue of $100 billion, Huawei ranks as the world's largest and most advanced telecommunications equipment manufacturer as well as China's most internationally successful and prestigious company. Ms. Meng is not only a longtime top executive there, but also the daughter of the company's founder, Ren Zhengfei, whose enormous entrepreneurial success has established him as a Chinese national hero.

这种反应一点不出乎预料。有千亿的年营业额，华为是世界最大的，最先进的通讯设备供应商及中国国际上最成功最有威望的公司。孟女士不仅是在那儿资深的高管，也是该公司创始人任正非中国国家英雄的女儿。

Her seizure on obscure American sanction violation charges while changing planes in a Canadian airport almost amounts to a kidnapping. One journalist asked how Americans would react if China had seized Sheryl Sandberg of Facebook for violating Chinese law…especially if Sandberg were also the daughter of Steve Jobs.

她的在加拿大转机时根据不明确的美国制裁违反的公诉的拘捕接近于一个绑架。一位记者问了美国人如果中国为了违反中国法律拘捕了雪梨·桑德伯格会如何反应，尤其假设桑德伯格又是乔布斯的女儿。

Since the end of the Cold War, the American government has become increasingly delusional, regarding itself as the Supreme World Hegemon. As a result, local American courts have begun enforcing gigantic financial penalties against foreign countries and their leading corporations, and I suspect that the rest of the world is tiring of this misbehavior. Perhaps such actions can still be taken against the subservient vassal states of Europe, but by most objective measures, the size of China's real economy surpassed that of the US several years ago and is now substantially larger, while also still having a far higher rate of growth. Our totally dishonest mainstream media regularly obscures this reality, but it remains true nonetheless.

冷战结束以来，美国政府在变得更加妄想，将自己当为世界霸主。结果是美国本地的法庭已开始实施针对其他国家以及其主要公司巨大的罚款，我认为世界对这种不当做法已耐烦透了。或许这种行为可被施加在欧洲的服从诸侯国，可是据大多客观指标，中国经济的规模好几年前就已超过了美国的而现在已经大很多了，同时也有远远更高的增长率。美国的完全不真实的主流媒体通常掩盖该事实，但它依然是事实。

Since a natural reaction to international hostage-taking is retaliatory international hostage-taking, the newspapers have reported that top American executives have decided to forego visits to China until the crisis is resolved. These days, General Motors sells more cars in China than in the US, and China is also the manufacturing source of nearly all our iPhones, but Tim Cook, Mary Barra, and their higher-ranking subordinates are unlikely to visit that country in the immediate future, nor would the top executives of Google, Facebook, Goldman Sachs, and the leading Hollywood studios be willing to risk indefinite imprisonment.

因为对于国际逮捕的自然反应是报复性的国际逮捕，报纸已报道美国顶尖高管已决定放弃了对中国的访问，直到该危机被解决。今天，通用汽车在中国卖的车比在美国还多，并且中国也是我们几乎所有的苹果手机的生产源，不过提姆·库克，玛丽·巴拉，以及其他们的高层下属在近期未来访问中国的可能性很小，同样，谷歌，脸书，高盛和顶级好莱坞影片厂也不会愿意冒险无期徒刑。

Canada had arrested Ms. Meng on American orders, and this morning's newspapers reported that a former Canadian diplomat had suddenly been detained in China, presumably as a small bargaining-chip to encourage Ms. Meng's release. But I very much doubt such measures will have much effect. Once we forgo traditional international practices and adopt the Law of the Jungle, it becomes very important to recognize the true lines of power and control, and Canada is merely acting as an American political puppet in this matter. Would threatening the puppet rather than the puppet-master be likely to have much effect?

加拿大已服美国命令拘捕了孟女士，而且今天早晨的报纸报道一位原加拿大外交家已突然被中国扣留，为了赢得小的鼓励加拿大释放孟女士的一张牌。不过我一点都不觉得这种措施会起多大的效应。一旦美国放弃传统的国际标准并引用"丛林法则"，关键就在于识别真正权力和控制的渠道，并且加拿大不过在该事务上为美国政治傀儡。威胁傀儡而非傀儡的主人会有多大的效果么？
Similarly, nearly all of America's leading technology executives are already quite hostile to the Trump Administration, and even if it were possible, seizing one of them would hardly be likely to sway our political leadership. To a lesser extent, the same thing is true about the overwhelming majority of America's top corporate leaders. They are not the individuals who call the shots in the current White House.

同样，接近所有美国顶级技术高管已对特朗普行政部门比较反感，即使可能，逮捕他们之一也难以影响美国的政治领导。以更小的程度而言，大多数美国公司的领导也是一样的。他们不是那些在当日的白宫具有决定权的人。

Indeed, is President Trump himself anything more than a higher-level puppet in this very dangerous affair? World peace and American national security interests are being sacrificed in order to harshly enforce the Israel Lobby's international sanctions campaign against Iran, and we should hardly be surprised that the National Security Adviser John Bolton, one of America's most extreme pro-Israel zealots, had personally given the green light to the arrest. Meanwhile, there are credible reports that Trump himself remained entirely unaware of these plans, and Ms. Meng was seized on the same day that he was personally meeting on trade issues with Chinese President Xi. Some have even suggested that the incident was a deliberate slap in Trump's face.

在该事件中，特朗普总统高于一个高层傀儡么？为了强硬实施以色列游说集团对伊朗的国际制裁，在牺牲着世界和平以及美国国家安全，我们对美国国家安全顾问约翰·博尔顿，美国最激进的亲以色列人之一，批准了该拘捕的消息应当一点不惊奇。同时，有不少可靠的报告称特朗普自己都完全不晓得这些计划，并且孟女士被拘捕是在特朗普和中国习总统亲自为贸易问题会见的同一天。有人都说该事件是对特朗普故意进行的打击。

But Bolton's apparent involvement underscores the central role of his longtime patron, multi-billionaire casino-magnate Sheldon Adelson, whose enormous financial influence within Republican political circles has been overwhelmingly focused on pro-Israel policy and hostility towards Iran, Israel's regional rival.

不过博尔顿的参与却强调了他多年的资助人亿万富翁赌场巨头谢尔登·阿德尔森的中心作用，该人在美国共和党政治圈的极大金融影响力一直大多针对亲以色列政策并对伊朗的敌视。

Although it is far from clear whether the very elderly Adelson played any direct personal role in Ms. Meng's arrest, he surely must be viewed as the central figure in fostering the political climate that produced the current situation. Perhaps he should not be described as the ultimate puppet-master behind our current clash with China, but any such political puppet-masters who do exist are certainly operating at his immediate beck and call. In very literal terms, I suspect that if Adelson placed a single phone call to the White House, the Trump Administration would order Canada to release Ms. Meng that same day.

虽然高龄的阿德尔森是否对孟女士的拘捕有个人直接影响不明确，他绝对可以被视为产生创造当日情况的政治气候的中心人。或许他不能被描述为当今美国与中国当今冲突的绝对傀儡主，不过任何存在的政治傀儡主觉得在他的即使指令下操作。非常直译的形容，我怀疑如果阿德尔森给白宫打了一个电话，特朗普行政部将命令加拿大同一天释放孟女士。

Adelson's fortune of $33 billion ranks him as the 15th wealthiest man in America, and the bulk of his fortune is based on his ownership of extremely lucrative gambling casinos in Macau, China. In effect, the Chinese government currently has its hands around the financial windpipe of the man ultimately responsible for Ms. Meng's arrest and whose pro-Israel minions largely control American foreign policy. I very much doubt that they are fully aware of this enormous, untapped source of political leverage.
Over the years, Adelson’s Chinese Macau casinos have been involved in all sorts of political bribery scandals, and I suspect it would be very easy for the Chinese government to find reasonable grounds for immediately shutting them down, at least on a temporary basis, with such an action having almost no negative repercussions to Chinese society or the bulk of the Chinese population. How could the international community possibly complain about the Chinese government shutting down some of their own local gambling casinos with a long public record of official bribery and other criminal activity? At worst, other gambling casino magnates would become reluctant to invest future sums in establishing additional Chinese casinos, hardly a desperate threat to President Xi’s anti-corruption government.
I don't have a background in finance and I haven't bothered trying to guess the precise impact of a temporary shutdown of Adelson's Chinese casinos, but it wouldn't surprise me if the resulting drop in the stock price of Las Vegas Sands Corp would reduce Adelson's personal net worth by $5-10 billion within 24 hours, surely enough to get his immediate personal attention. Meanwhile, threats of a permanent shutdown, perhaps extending to Chinese-influenced Singapore, might lead to the near-total destruction of Adelson's personal fortune, and similar measures could also be applied as well to the casinos of all the other fanatically pro-Israel American billionaires, who dominate the remainder of gambling in Chinese Macau.

我没有金融背景，也没顾得上猜测阿德尔森的中国赌场的临时关闭将产生的具体影响，不过非出乎我预料的是他的公司的股价的猛跌能在二十四小时之内将他的身价降50到100亿美金，绝对足以引起他本人立即的注意。同时，永久关闭的威胁，或许延续到中国影响的新加坡，有可能导致阿德尔森资产接近彻底的毁灭，而且类似措施可以被实施到所有其他狂热亲以色列的美国亿万富翁，那些人占有着中国澳门其他的赌场。

The chain of political puppets responsible for Ms. Meng's sudden detention is certainly a complex and murky one. But the Chinese government already possesses the absolute power of financial life-or-death over Sheldon Adelson, the man located at the very top of that chain. If the Chinese leadership recognizes that power and takes effective steps, Ms. Meng will immediately be put on a plane back home, carrying the deepest sort of international political apology. And future attacks against Huawei, ZTE, and other Chinese technology companies would not be repeated.

决定孟女士的突然扣留的政治傀儡链绝对是即复杂又隐蔽。不过中国政府已经对该链首位谢尔登·阿德尔森的经济死活具有绝对把握。如果中国领导层认识到这一点并执行有效措施，孟晚舟将能够立即回家，并带来最深厚的一种国际政治抱歉。而且，未来对华为中兴以及其他中国技术公司的攻击不会重演。

China actually holds a Royal Flush in this international political poker game. The only question is whether they will recognize the value of their hand. I hope they do for the sake of America and the entire world.

中国其实具有该国际政治扑克游戏的童话大顺。问题就是他们会不会意识到他们的牌价。我希望他们能，为了美国，为了全世界。

=================================================================

Having finished the translation, I Baidu'd Adelson and only then learned that someone on the Tianya forums had already translated it; the link is 《UNZ》华为反击，中国握有一张王牌：阿德尔森的赌场(转载)_国际观察_论坛_天涯社区.

Then, Saturday night: I had originally planned to go out Sunday and look around, but then realized I could internationalize the Chrome tab-management extension I published so that it supports Chinese, especially having learned that there are now people in China using it. So that is what I did on Sunday; the new version with Chinese support can be downloaded at https://chrome.google.com/webstore/detail/tab-organizer/mbmmpilinpiapfcmknmjgikgoeicadka. Of course, that is a Google subdomain, blocked by the wall; if anyone knows a place to download and install Chrome extensions that is not walled off, please do tell me. As expected, publishing this still required logging into Google, with a dedicated Google developer account for the purpose. In the process, I also tried to add my Tencent email address on my GitHub account, but GitHub surprisingly does not accept it; I wonder whether that is because of the 2015 DDoS attack on GitHub said to have originated from China. I also wonder whether China has a GitHub clone, meaning something with essentially GitHub's functionality, but with basically only Chinese users and completely under Chinese control.

I also want to say that I now mostly jump the wall only when I need to. I basically no longer use Google or Facebook, and no longer need to, except in special cases such as making a small contribution to the Chrome community and Chinese internationalization through a Google account. Of American English-language media, I hardly read anything anymore besides Unz Review and Steve Hsu's blog. In fact, living in China's environment, where that spiritual pollution in English is walled out, is a comfort I can feel quite consciously. China has done well on this point: it neither gives American companies so much information, data, and advertising money for free, nor fails to build up its own internet ecosystem properly. Russia has not managed even this; for instance, I have yet to see in Russia a large video site like iQiyi or Youku, and searching Yandex for Soviet songs turns up results that are all on YouTube.

About the extension's internationalization (internationalization -> i18n), it is like this: if your operating system language is not Chinese (zh), it will display English by default; to make it display Chinese, you have to change your operating system's language setting, which is exactly how I tested it myself. As one would expect, the Chinese and English strings are all placed in some configuration files, and the text to display is read according to the system's language setting.
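For the curious, Chrome's extension i18n works by reading strings from `_locales/<locale>/messages.json` files and picking the locale from the browser/OS language; the manifest declares a `default_locale` and can reference strings as `__MSG_name__`, while code calls `chrome.i18n.getMessage`. Here is a small Python sketch of generating such configuration files; the message names are illustrative, not the actual ones my extension uses:

```python
# Sketch: generate the _locales/<locale>/messages.json files Chrome's i18n
# system expects. The message names below are illustrative examples only.
import json
from pathlib import Path

MESSAGES = {
    "en": {"extName": {"message": "Tab Organizer"},
           "sortTabs": {"message": "Sort tabs"}},
    "zh": {"extName": {"message": "标签页管理器"},
           "sortTabs": {"message": "排序标签页"}},
}

for locale, msgs in MESSAGES.items():
    out = Path("_locales") / locale / "messages.json"
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(json.dumps(msgs, ensure_ascii=False, indent=2), encoding="utf-8")

# The manifest then sets "default_locale": "en"; at runtime Chrome resolves
# chrome.i18n.getMessage("sortTabs") against the user's language, falling back
# to the default locale, which is why changing the OS language setting is the
# way to test the Chinese strings.
```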
I also don't like this clause:

> Submitted comments become the property of The Unz Review and may be republished elsewhere at the sole discretion of the latter

There are some particular people on the site I'd like to talk with more in private, like AnonFromTN and Vidi. I've already exchanged a fair bit publicly on that site with the former, a Russian immigrant biologist. Do you have his contact information?

There is also the fact that the Unz Review is really high latency (i.e., very slow). It was in the US too. After all, images plus hundreds of comments have to be loaded all at once. I actually prefer not to turn on my VPN while in China. Chinese sites hosted in China load slower if I use it (even when I use its Hong Kong configuration), and turning it on and off is an annoyance. Sadly, if I want to listen to some Soviet songs, I basically have to go on YouTube. Even Yandex video results are almost all YouTube. Now that is something that Russia didn't do right, not making their own large video site (tell me if there is one).

In that Chinese comment of mine, in reply to a guy almost certainly Chinese, and very pro-Chinese-communist judging by his comments (text below, along with a screenshot in case you want to copy-paste it into an online translator), I wrote that even though the Unz Review is contrarian towards the American mainstream, it's still American media, in English, and that if he likes the Chinese communists so much, he would do much better to support some Chinese companies, maybe work for one, than to comment in English, on a fringe media site, with political viewpoints few English readers really want to hear.

I increasingly realize how much power these media companies have due to their control over the dissemination of information. America obviously wants to bring the world into its media monoculture, with Google/Facebook/YouTube/Twitter and also English as the de facto international language. Their possession and control of all that user data, as well as of the media platforms themselves, gives them tremendous leverage. China has done remarkably well at resisting that, much better than Russia has. I've come to the conclusion that Chinese being a very different language is quite an advantage for the Chinese. It makes the Chinese population much harder to culturally conquer, a perfect political shield. Unlike Russia, which is culturally kind of halfway between East and West, closer to the West, so it's far less immune to the Western toxin.

I find myself so much happier without Google, without Facebook, within the Chinese internet bubble. Search, including technical search, I can get from Baidu, and instant messaging I get from Weixin. Life is good interacting in person with only Chinese in Chinese, and online with the occasional non-Chinese like you who I actually enjoy talking with. I don't have to give a fuck about what an American or Indian or Jew thinks, unlike in the US.

I'm encouraging people of the right background in the Anglo world who are dissatisfied with it to detach from it instead of arguing/fighting within an Anglo system controlled by the other people. That is the best way to show contempt and exert leverage. Those Russians could transfer the time and energy spent reading and commenting on the Unz Review to doing things which directly support Russia (like reading and commenting on RT instead, which is actually controlled by Russia). Arguing on somebody else's media, in their language, on their turf, against them is but a losing game.
You're Chinese, right? How should I put it? There are mainlanders too who think the CCP used the Japanese army to wear down the Nationalists' strength, and of course there are also people who think that's exaggerated. Honestly, there isn't much point in arguing over this. If you love the Communist Party that much, you can directly support Chinese companies: use some Chinese internet services and electronics, and work for a Chinese company if you can. That would be far better than propagandizing in English about how great the Communist Party is. I've personally put that choice into practice, as a small example. Detaching and paying no attention is the best way to show contempt for others.

At the same time, you can find people who share, or may share, your path and influence organizations in a targeted way, wasting less time on people who don't agree with you. The Unz Review is a place contrarian to the American mainstream, so you can find some people there who support China, but it is still an English-language American media outlet, and real Chinese rarely comment on it. I took a quick look at your comment history and am a bit curious about you; feel free to email me at gmachine1729 at foxmail.com, and then we can add each other on WeChat, get better acquainted, and I can introduce you to more like-minded people.

Since you disdain the Nationalists, do you also disdain South Korea, like Kong Qingdong does? A poem he wrote satirizing South Korea is just too brilliant:

Standing alone in the Korean autumn, the Han River flowing north, Confucius scratches his head. See the gaudily dressed men and women parading through the streets, fat cats and scrawny dogs drifting jauntily along with them. Drinking soybean-paste soup when thirsty, eating kimchi when hungry, not even free to have hotpot. Belts tightened, they ask the ginger, scallions and garlic: who rules over rise and fall?

A hundred companions brought along on the tour, just now speaking of the sorrows of the austere years. Sigh for the jobless women in the flower of their youth, and the idle old men cursing away in full vigor. With half a country's rivers and mountains and broken, inscribed steles, they still dare to raise their brows in pride before the five continents. Do you remember: beneath Shangganling, ten thousand bones piled into a mound!

## My Huawei phone arrived

On Sunday I went out to meet an old friend at a restaurant in Beijing, and while I was navigating, my iPhone suddenly died in the cold and couldn't turn on for a while. After about five minutes it finally came on, but at 10% battery when it had been at 70% before. So I thought it was time to order a Chinese phone instead. Sunday night I ordered one online on JD 京东, and today at noon it was delivered. I spent only 1000 RMB (less than $200) on it. Let's see how long it lasts, being so much cheaper. I ordered a cheap one after my physics professor friend in China told me he never goes beyond 1000 RMB when ordering his Huawei phones. There were Huawei models at something like 4000 RMB, iirc.
For those of you still stuck in America who can't return in the near future and who want to support China in this trade war, you can start by using a Xiaomi phone and boycotting Google and Facebook. Americans go out of their way to devalue Chinese companies, and there is every reason for Chinese to do the same to American ones in return.
## A call to boycott Jewish media
A few days ago, on WeChat, somebody sent me a screenshot which just goes to show how egregious censorship really is in America.
So, I have some American friends who I wish to tell some things, but I am hesitant to do so over Gmail/Facebook, the two most common means of communication in America now, for the reason that I don't want a permanent record of the information stored within an American institution run by people I have no reason to trust with that information.
This is something I've been aware of for quite a long time but have mostly kept to myself. You see, there are guys like Andrey Martyanov who are very much against the virtual Jewish control of America, yet ironically, he uses Blogspot (which was acquired by Google, I believe) to blog and Gmail as his email. If he is so against Zionism, why is he trusting an arguably Zionist institution with his information and communications, thereby indirectly endorsing it? He is a Russian who came to the US in the 90s, during the economic crisis in Russia. Why can't he use a Russian email instead?
Now, when a Russian from Russia emailed me with a mail.ru email, I felt much more comfortable communicating with him. It feels very different talking with a Russian in Russia. Unlike with a Russian in America, I don’t have to worry that he’s some idiot US loving liberal. Or at the very least, I don’t have to worry that some American boss can extort him or at least influence him into some degree of submission. He’s in Russia where he doesn’t have to give a fuck about a US law that would by default side against him, where there are no US taxes or English as the official language.
Having been in China for a while interacting almost exclusively with locals, I no longer view most Chinese in America as truly Chinese. Especially if they refuse to use a Chinese medium of communication like WeChat, and especially if they insist on using Gmail or Facebook, both explicitly blacklisted in China. In that case, I will refuse to share any serious information with them. If they insist on trusting Jewish-controlled American institutions with their personal information, then they will have to bear the consequences. The longer they persist with this, the less likely their ancestral home country will accept them. They will have placed their fate into the hands of people who have basically no reason to care about their wellbeing.
Somewhat predictably, some of my friends in America seem reluctant to register for WeChat to communicate with me, or with others on there with common views whom they might be interested in connecting with. They may be similarly unhappy with much of America right now, in particular its ruling class, but they either do not care enough to do anything at all, don't know how to go about it, or are afraid to. Yes, venting on Quora and Reddit are options, but experience has told us that being banned from Quora on trumped-up charges for writing eloquent answers which displease its VCs is a very real possibility. That has in fact happened to a former top writer with 8500+ followers.
My main message is that if you dislike Silicon Valley or the American Jewish establishment so much, you’re not completely stuck with them. You can get out of America, though it might be difficult, as it requires finding a job in and moving to another country. For as long as you are stuck in America, you don’t have to use Facebook or Gmail either, except when really necessary. For your private personal communications, you can register for and use a non-American email provider or messenger, like WeChat.
As for monitoring by the Chinese government, they absolutely won’t give a damn unless you try to organize some serious anti-China political activity on there. Even if you talk the cliche human rights, communist dictatorship crap on a small scale on there, there’s basically a 100% guarantee that nobody will care. Nobody internal will care enough to go to the trouble to read your messages. In fact, both Tencent and the Chinese government might well be happy to see more internationalization of their product.
By boycotting certain mainstream American internet products, you not only transfer a tad of both data and advertising revenue to whatever you are using in their place. You also send a political message that encourages more people like you to do the same, thereby making it more socially acceptable behavior. Not to mention that, through that, you might meet some interesting people who may lead you to some good opportunities, as happened to me.
I shall conclude by saying that there is little point in fighting from within if your political views are marginal or directly at odds with the American mainstream, as societies are naturally top-down. In some sense, you cannot achieve anything serious without being part of the mainstream of whatever organization or society you belong to. Yes, go find the minority of people in America who think like you, but beyond that you are likely to reap bigger fruit connecting with people outside America, which is not difficult in today's internet world. Do what you need to do for minimum survival, but past that, the best way to protest is to ignore and detach, not to argue with or try to influence people fundamentally opposed to you.
# What are some good ways of presenting web designs to a client?
I'd like to ask how you manage presenting web designs. Usually, I send a JPG file or files directly to the client by mail, but this isn't the best approach. In most cases it does the job, but I've experienced some cases where it didn't.
Let's say I've set up a design 1980px wide. The width is important because the client can see all the background that shows up, but the actual design is 960px wide. The problem is that in 90% of cases the client opens the image in the default OS image viewer, and the image is scaled to fit the view. Moreover, if my client has, for example, a 1440px-wide screen, the 1980px scales down to 1440px and the 960px design to only about 690px. As we know, when an image is scaled down we cannot see all the details, because the image viewer anti-aliases the image and all the sharp details are gone. What if my client opens the same design on a 1280px-wide screen, or narrower? It gets worse.
The second problem is the order of images. I've had this situation a couple of times: I sent three images with different design approaches, my mail software orders them alphabetically, while my client's mail orders attachments by (for example) modification time. As a result, the client calls me and says, "The first design is best. I'd like to choose this one." The problem is, I've no idea which one is first in his mail. Confusing, right?
The solution could be to place the image on the web, write a couple of lines of HTML and CSS code, and send a link. But this approach also has some pitfalls. Let's say I wrote the code. The width of the img must be 1980px: img { width: 1980px; } and in fact it is. But there is another problem. When the client opens the browser and the resolution of his display is less than 1920px, the image starts from the left and ends up somewhere in the middle. This is because there is a horizontal scrollbar, and its default position is the left, not the center. OK, so I can use some JavaScript to force the proper scroll position, but the situation turns out to be ridiculous. It's like using a machine gun on an ant.
The fact is, all those situations can happen, and as a matter of fact I've experienced them a couple of times. The best option would be to show the image at its original size, centered, with the possibility to move left/right.
Any clues, ideas, tools? :)
However, if you want to completely avoid shrinkage or fonts rendering differently I'd recommend that you start doing your designs directly in HTML + CSS. What the client sees is what he/she will get. Showing a real website makes it much easier to understand navigability and to improve usability. I first do some wireframing (then get the client's feedback), then start trying things in the browser.
I'm not sure what you mean by your width having to be 1980px; that sounds quite large. Having some code instead would solve this: the resolution will depend on the user, but it should look good at any resolution. For responsive design, I'd again recommend you plan and show the client some wireframes.
The problem you are encountering--which is common--is that you aren't presenting the web site.
You're presenting static images of what the web site will look like in a certain setting.
Ideally, you just don't do that. You instead present the web site as a web site...something that can be clicked, tapped, resized, etc. Instead of showing PhotoShop files, you're showing HTML, CSS and JS.
Granted, that's just not always doable depending on the process and teams in use.
So if you have to show static images, I strongly recommend that you never send them out blind via email or a link. Instead, present them 'in person' even if that means via a conference call or screen share. That way you can verbally add the all important context that would otherwise be missing. This achieves three important things:
1. It lets you show off. You're the expert. Talk like one. :)
2. It saves countless hours wasted where the client goes off on their own and dwells on low priority details and fails to address the larger questions you may have that you could have handled verbally right then and there.
3. You get a 'feel' for what their feedback is. The problem with written feedback is that it's sometimes hard to get the full context and intent. It's best to get feedback directly so you can quickly seek any clarification needed right then and there.
I've had the same issue and came up with a more elegant solution based on the HTML+CSS example. Instead of using an img tag, I just create a div with its background image set to the image I want to present to my client.
Here's the code for this:
body {
    margin: 0;
}
#website {
    background: url('website-homepage.jpg') top center no-repeat; /* change to your jpg */
    width: 100%;
    height: 1040px; /* change to your height */
}
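The markup side is then just a single element whose id matches the CSS above (a minimal sketch; use whatever file name you put in the background rule):

<body>
  <div id="website"></div>
</body>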
And even better, I've also created a WordPress page template that will automatically set the featured image as the page background and it will automatically take the image height.
I have dealt with this in a few different ways. If I am trying to show "the flow" of the site, then I will use a program such as Adobe Muse, b/c I can bang out a quick wireframe that the client can click through.
If I am trying to show the overall "design" of the site, I will build the pages in Photoshop and then make a multi-page PDF file that puts the pages in order, forcing the client to scroll through in the order I want the pages to be shown. Plus, in this scenario my type is crisp and doesn't get pixelated, as a save-for-web JPG is known to do.
If you expect your client to open the default image viewing program, place your screen shots inside of a browser template.
This way, if they're looking at the design at 50% of its size, they'll recognize the browser image wrapped around it and understand they're looking at the image scaled down.
Of course, if I'm presenting screen shots in person, I'll use clean screenshots with no browser wrapped around them, and of course open them in an actual browser for presentation. The browser template may look sloppy to you, but it's fairly dummy proof if you're not sure how the client will open the image.
Try the service maquetter.com. It solves the problems you described. See an example (mqttr.com/example).
This service is meant for convenient presentation of layouts to a client, and the client is not required to have any technical skills to review the project. You can easily load all of your layouts to demonstrate them to the client afterwards. The ability to set several layouts for each page lets the client choose the most suitable option. All layouts are indexed (for example http://mqttr.com/example#0103), so the client can always tell you the exact number of any given layout, or point out desired changes.
MathSciNet bibliographic data: MR507929 46L05 (47G05 58G12). Dynin, Alexander. Inversion problem for singular integral operators: $C^{\ast}$-approach. Proc. Nat. Acad. Sci. U.S.A. 75 (1978), no. 10, 4668–4670.
# Look for airplane design: Take off@12000 feet
### Help Support HomeBuiltAirplanes.com:
#### gouxin
##### Active Member
Why? Very simple: I want to fly in the Tibetan area and bring up kids in that area to let them see their hometown from an aspect where they've never seen before. To realize this dream I need an airplane with the following attributes:
1. be able to take off with two on board and some fuel at 12,000 feet (this is just the starting point of the Tibetan area), with a takeoff distance of less than 1,500 feet.
2. two seater
3. folding wings or easily removable wings (this is a must requirement for flying in that area).
4. relatively low cost to acquire and operate.
It seems a turbocharged engine and light wing loading should make this happen. I understand a turbo engine should work with a constant-speed prop, but since the mission profile of this airplane is limited to high altitude most of the time, I can use a ground-adjustable prop and set the pitch for high-altitude use most of the time, so a constant-speed prop is not a requirement.
How about a two-seat quicksilver or Xenos motorglider fitted with HKS 700T or turbo VW engine?
This is not hangar talk or daydreaming. My friends and I are going to do it.
Xin Gou
From Chengdu, China
#### Joe Fisher
##### Well-Known Member
One thing is that high altitude is only part of your challenge. Near the ground you will find extreme turbulence, and the control and performance of an ultralight-type aircraft will not be able to keep it from being slammed into the ground. Be careful and study up on mountain flying.
#### TFF
##### Well-Known Member
Over Everest; Emil Wick
The tough part is that 12,000 ft was close to the world-record landing height up until the '70s.
#### Aircar
##### Banned
Having done a study on the feasibility of operating a special-design aircraft into Humla province (at 4000 m altitude), I can say that it is a very difficult task, and a Quicksilver or Xenos will be limited to at the very best one seat, and will need a large prop diameter etc. etc. The air density at 4000 m is only 62% of sea level, so plug that into your lift formula, then add your 50 ft obstacle clearance to the takeoff distance, and I think you are going to be out of the ballpark. And the question is: WHERE can you find a 1500 ft strip above 12,000 ft in Nepal? (There is a video subtitled "the world's most dangerous airport?"; the name is short, something like Lukla. It shows landings and takeoffs in the Nepalese hinterland: a well-sloped strip, one way only, and reverse thrust needed on landing, though admittedly these are turboprop commuter aircraft. New Guinea is very similar in terms of terrain and, to a lesser degree, altitude, and also features lots of 'impossible' airstrips.) Lots of power and turbocharging would help a great deal at these heights, and I suspect that weight and balance on something like a Xenos will be an issue for adding a lot more power. The ideal aircraft will have both very low wing loading and span loading, a very big prop, etc., at least in its landing and takeoff configuration.
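To put rough numbers on that density point, here is a quick standard-atmosphere sketch in Python (ISA values only; the 62% quoted above presumably allows for a warmer-than-standard day):

import math

h = 4000.0                                  # altitude in metres
sigma = (1.0 - 2.25577e-5 * h) ** 4.2559    # ISA density ratio rho/rho0
print(round(sigma, 2))                      # ~0.67 of sea-level density
print(round(1.0 / math.sqrt(sigma), 2))     # ~1.22: true stall speed vs. indicated

Since ground roll grows roughly with the square of the true airspeed, that alone stretches the takeoff distance by half, before engine power loss is even counted.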
There is, in my view, a need for an aircraft with the sort of high-altitude capability involved here (and STOL at that height), for such places as Bolivia, New Guinea and other poorly served communities in mountainous terrain (which means no roads, also).
Good luck with your efforts .
#### litespeed
##### Well-Known Member
A very hard ask, but it will also depend on your ability to get fuel, and in quantities to suit the aircraft needed.
Why do you really need folding or removable wings? That can really limit the available aircraft designs that will suit.
I think you really need -
something with low weight, big power, STOL abilities and a big wing area.
How about a Criquet Storch with at least a turbo Rotax?
Or a Hornet from AAK with similar engine or even better the turboprop he has available?
These are the sort of aircraft needed- tough,light,powerful and STOL.
#### gouxin
##### Active Member
Thanks for the reply. Now it seems a turbine is the way to go, but there's no way I can afford that for now. The reason for folding or removable wings is that private flying is still very restricted in China, and we have to trailer the airplane into the area and take the "fly and go" approach.
##### Well-Known Member
Why not one of the "conventional" STOL-designs? CH701/750/801?
#### topspeed100
##### Banned
I just made some sketches for a very small aircraft with high AR... I figured it cannot do the speed record... but it looks like it could be OK for high-altitude flying.
I also acquired an aerodynamics book today, and possibly I could figure out how high it could soar with the longer wings (I originally thought of two pairs of wings for different purposes).
This would be a very aerodynamic and light 3-seater with an optional 4th-seat bench at the rear (or 80 lbs of luggage).
How busy are you with it ?
#### Dana
##### Super Moderator
Staff member
A powered paraglider trike might do it, though you'd be limited to rare light wind conditions... though just anything that could take off in the distance and altitude you specify would be similarly limited.
-Dana
"No man's life, liberty, or property is safe while the legislature is in session." (Judge Gideon J. Tucker, 1866)
#### Hot Wings
##### Grumpy Cynic
HBA Supporter
Log Member
Since cruise speed doesn't seem to be a high priority, rather than a fixed wing airplane how about a blimp?
#### topspeed100
##### Banned
TFF wrote:
> Over Everest; Emil Wick (http://www.flymicro.com/everest/index.cfm?page=docs/History/Emil_Wick.htm)
> The tough part is that 12,000 ft was close to the world-record landing height up until the '70s.
Interesting, since I planned my design to have a fully retractable main ski!
So 5700 meters is the highest a man has ever landed and taken off with a fixed-wing aircraft.
This is the engine for Himalaya/Tibetan operations for a low-cost mountain "soarer":
Rotax 914 UL specifications, performance, weights and documentation
Another engine where the turbo is better positioned for aerodynamics: Limbach-Flugmotoren.de - L 2400 DT.X
The Rotax 914 ULS, on the other hand, has a smaller overall size... they could be even in "drag forming shapes".
Last edited:
#### gouxin
##### Active Member
Yes, Tibet is right next to Nepal. I think it's OK to have a CH701-type STOL airplane equipped with a turbo engine for now, restricted to relatively calm weather conditions of course.
#### topspeed100
##### Banned
Where is the highest possible landing strip in the Himalayas? For a skiplane, I mean.
#### gouxin
##### Active Member
Bangda airport in eastern Tibet, which lies 15,548 feet above sea level, is the highest airport in the world, if that's what you were asking. If your skiplane is capable enough, there are plenty of flat and straight open wild fields to use as landing strips above this height.
#### topspeed100
##### Banned
OK... I once had a uni-ski R/C model plane of my own design... it was the most fun model aeroplane I ever had. The ski can be made to create lift, unlike the wheel.
# Conduction in a Plane Sheet
Note
To view this project in FLAC3D, use the menu command Help ► Examples…. Choose “Thermal/PlaneSheetConduction” and select “PlaneSheetConduction.f3dprj” to load. The project’s main data files are shown at the end of this example.
A plane sheet of thickness $$L$$ = 1 m is initially at a constant temperature of 0°C. One side of the sheet is exposed to a constant temperature of 100°C, while the other side is kept at 0°C. The sheet eventually reaches an equilibrium state at a constant heat flux and unchanging temperature distribution.
The analytical solution to this problem for the transient temperature distribution is given by Crank (1975):
$\begin{split}T(z,t)\ = \ T_1 &+ {z \over {L}} (T_2-T_1) + {2 \over {\pi}} \sum_{n=1}^\infty e^{-\kappa n^2\pi^2t/L^2}\left({T_2 \cos (n \pi) - T_1 \over n}\right) \sin {n \pi z \over L} \\ &+ {2 \over L} \sum_{n=1}^\infty e^{-\kappa n^2\pi^2t/L^2} \left( \int_0^L f(z')\sin {{n\pi z'} \over L} \mathrm{d}z' \right) \sin {n \pi z \over L}\end{split}$
where:
$$T_1$$ is the fixed temperature at $$z$$ = 0 of the sheet;
$$T_2$$ is the fixed temperature at $$z$$ = L of the sheet;
$$L$$ is the width of the sheet;
$$t$$ is time;
$$z$$ is distance across the sheet;
$$f(z)$$ is the initial temperature distribution function in the range of $$0 < z < L$$; and
$$\kappa$$ is equal to $$k/({\rho}C_p)$$, where $$k$$ is the thermal conductivity, $$\rho$$ is the density, and $$C_p$$ is the specific heat.
Particularly for $$f(z) = T_2$$ (the case in this example), the solution becomes
${{T(z,t)-T_2} \over {T_1-T_2}}\ =\ 1 - {z \over {L}} - {2 \over {\pi}} \sum_{n=1}^\infty e^{-\kappa n^2\pi^2t/L^2}\left({\sin {n \pi z \over L} \over n}\right)$
The thermal conductivity for this example is 1.6 W/m °C, the specific heat is 0.2 J/kg °C, the mass density of the material is 1000 kg/m³, and the temperature, $$T$$, is 100°C.
The analytical solution is programmed as a FISH function for direct comparison to the numerical results at selected thermal times. The analytical and numerical temperature results for these times are stored in tables.
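For reference, the same series can also be evaluated outside FLAC3D. The following short Python sketch (not part of the project files; property values as given above, so $$\kappa = k/(\rho C_p) = 0.008$$ m²/s) computes the temperature from the $$f(z) = T_2$$ form of the solution:

import numpy as np

k, Cp, rho = 1.6, 0.2, 1000.0
L, T1, T2 = 1.0, 100.0, 0.0
kappa = k / (rho * Cp)            # thermal diffusivity, 0.008 m^2/s

def temperature(z, t, nterms=200):
    n = np.arange(1, nterms + 1)
    series = (np.exp(-kappa * n**2 * np.pi**2 * t / L**2)
              * np.sin(n * np.pi * z / L) / n)
    normalized = 1.0 - z / L - (2.0 / np.pi) * series.sum()   # (T - T2)/(T1 - T2)
    return T2 + (T1 - T2) * normalized

for t in (2.0, 12.0, 72.0):       # the three thermal times compared in the figures
    print(t, temperature(0.5, t))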
In the FLAC3D model, the sheet is defined as a column of 25 zones. A constant temperature boundary of 100°C is applied at the face located at $$z$$ = 0, and a constant temperature boundary of 0 °C is applied at the face located at $$z$$ = 1. The model grid is shown in Figure 1:
Figure 1: FLAC3D grid for conduction in a plane sheet.
This example contains solutions solved by FLAC3D using both explicit and implicit formulations. The comparison of analytical and numerical temperatures at three thermal times for the explicit solution is shown in Figure 2, and that for the implicit solution in Figure 3. Normalized temperature ($$(T(z,t)-T_2)/(T_1-T_2)$$) is plotted versus normalized distance ($$z/L$$) in the two figures, where Tables 2, 4, and 6 contain the analytical solution for temperatures, and Tables 1, 3, and 5 contain the FLAC3D solutions. The three thermal times are 2, 12, and 72 seconds for both the explicit and implicit solutions. The solution has reached the equilibrium thermal state by the last time in each case. For both solution formulations, the difference between analytical and numerical temperatures at steady state is less than 0.1%. Note that for the explicit solution, the timestep is approximately 0.07 seconds, while for the implicit solution, the timestep is set to 0.1 seconds.
Figure 2: Comparison of temperatures for the explicit-solution algorithm (analytical values = crosses; numerical values = lines).
Figure 3: Comparison of temperatures for the implicit-solution algorithm (analytical values = crosses; numerical values = lines).
Reference
Crank, J. The Mathematics of Diffusion, 2nd Ed. Oxford: Oxford University Press (1975).
Data File
; Thermal conduction in a plane sheet
; Compares explicit and implicit methods
model new
model large-strain off
fish automatic-create off
model title 'Conduction in a Plane Sheet'
model configure thermal
; --- main computation ---
zone create brick size 1 1 25 point 1 (0.1,0,0) ...
point 2 (0,0.1,0) point 3 (0,0,1)
; -- thermal model
zone thermal cmodel isotropic
zone thermal property conductivity 1.6 specific-heat 0.2
zone initialize density 1000
zone face apply temperature 100. range position-z 0.0
zone face apply temperature 0. range position-z 1.0
; settings
model mechanical active off
model thermal active on
model save 'psheet-ini'
; -- explicit method
; test
model solve time-total 1.5
model save 'psheet-exp-015'
model solve time-total 7
model save 'psheet-exp-070'
model solve time-total 70
model save 'psheet-exp-700'
; -- implicit method
model restore 'psheet-ini'
Volume is the amount of space something occupies.
## Volume of a Prism
$\text{area of cross section} \times \text{length}$
## Volume of a Sphere
$\frac{4}{3} \pi r^3$
## Volume of a Cone
$\frac{1}{3} \pi r^2 h$
## What is a Map
A Map data structure allows you to associate data with a key.
## Before ES6
ECMAScript 6 (also called ES2015) introduced the Map data structure to the JavaScript world, along with Set.
Before its introduction, people generally used objects as maps, by associating some object or value to a specific key value:
const car = {}
car['color'] = 'red'
car.owner = 'Flavio'
console.log(car['color']) //red
console.log(car.color) //red
console.log(car.owner) //Flavio
console.log(car['owner']) //Flavio
## Enter Map
ES6 introduced the Map data structure, providing us a proper tool to handle this kind of data organization.
A Map is initialized by calling:
const m = new Map()
### Add items to a Map
You can add items to the map by using the set method:
m.set('color', 'red')
m.set('age', 2)
### Get an item from a map by key
And you can get items out of a map by using get:
const color = m.get('color')
const age = m.get('age')
### Delete an item from a map by key
Use the delete() method:
m.delete('color')
### Delete all items from a map
Use the clear() method:
m.clear()
### Check if a map contains an item by key
Use the has() method:
const hasColor = m.has('color')
### Find the number of items in a map
Use the size property:
const size = m.size
## Initialize a map with values
You can initialize a map with a set of values:
const m = new Map([['color', 'red'], ['owner', 'Flavio'], ['age', 2]])
## Map keys
Just as any value (object, array, string, number) can be used as the value of a key-value entry of a map item, any value can be used as the key, even objects.
If you try to get a non-existing key using get() out of a map, it will return undefined.
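For example (the key object here is made up for illustration):

const m = new Map()
const key = { id: 1 } // an object used as a key
m.set(key, 'test')

console.log(m.get(key)) //test
console.log(m.get({ id: 1 })) //undefined, a different object even if it looks the same
console.log(m.get('missing')) //undefined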
## Weird situations you’ll almost never find in real life
const m = new Map()
m.set(NaN, 'test')
m.get(NaN) //test
const m = new Map()
m.set(+0, 'test')
m.get(-0) //test
## Iterating over a map
### Iterate over map keys
Map offers the keys() method we can use to iterate on all the keys:
for (const k of m.keys()) {
console.log(k)
}
### Iterate over map values
The Map object offers the values() method we can use to iterate on all the values:
for (const v of m.values()) {
console.log(v)
}
### Iterate over map key, value pairs
The Map object offers the entries() method we can use to iterate on all the key-value pairs:
for (const [k, v] of m.entries()) {
console.log(k, v)
}
which can be simplified to
for (const [k, v] of m) {
console.log(k, v)
}
## Convert to array
### Convert the map keys into an array
const a = [...m.keys()]
### Convert the map values into an array
const a = [...m.values()]
## WeakMap
A WeakMap is a special kind of map.
In a map object, items are never garbage collected. A WeakMap instead lets all its items be freely garbage collected. Every key of a WeakMap is an object. When the reference to this object is lost, the value can be garbage collected.
Here are the main differences:
1. you cannot iterate over the keys or values (or key-values) of a WeakMap
2. you cannot clear all items from a WeakMap
3. you cannot check its size
A WeakMap exposes those methods, which are equivalent to the Map ones:
• get(k)
• set(k, v)
• has(k)
• delete(k)
The use cases of a WeakMap are less evident than the ones of a Map, and you might never find the need for them, but essentially it can be used to build a memory-sensitive cache that is not going to interfere with garbage collection, or for careful encapsulation and information hiding.
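For example, such a cache might look like this (expensiveComputation is a made-up placeholder):

const cache = new WeakMap()

function computeFor(obj) {
  if (!cache.has(obj)) {
    cache.set(obj, expensiveComputation(obj)) // computed once per object
  }
  return cache.get(obj)
}

// once every other reference to obj is gone, its cache entry can be collected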
# Rao-Blackwell-Kolmogorov theorem
A proposition from the theory of statistical estimation on which a method for the improvement of unbiased statistical estimators is based.
Let $X$ be a random variable with values in a sample space $( \mathfrak X , {\mathcal B} , {\mathsf P} _ \theta )$, $\theta \in \Theta$, such that the family of probability distributions $\{ { {\mathsf P} _ \theta } : {\theta \in \Theta } \}$ has a sufficient statistic $T = T ( X)$, and let $\phi = \phi ( X)$ be a vector statistic with finite matrix of second moments. Then the mean ${\mathsf E} _ \theta \{ \phi \}$ of $\phi$ exists and, moreover, the conditional mean $\phi ^ {*} = {\mathsf E} _ \theta \{ \phi \mid T \}$ is an unbiased estimator for ${\mathsf E} _ \theta \{ \phi \}$, that is,
$${\mathsf E} _ \theta \{ \phi ^ {*} \} = {\mathsf E} _ \theta \{ {\mathsf E} _ \theta \{ \phi \mid T \} \} = {\mathsf E} _ \theta \{ \phi \} .$$
The Rao–Blackwell–Kolmogorov theorem states that under these conditions the quadratic risk of $\phi ^ {*}$ does not exceed the quadratic risk of $\phi$, uniformly in $\theta \in \Theta$, i.e. for any vector $z$ of the same dimension as $\phi$, the inequality
$$z \, {\mathsf E} _ \theta \{ ( \phi - {\mathsf E} _ \theta \{ \phi \} ) ^ {T} ( \phi - {\mathsf E} _ \theta \{ \phi \} ) \} \, z ^ {T} \geq z \, {\mathsf E} _ \theta \{ ( \phi ^ {*} - {\mathsf E} _ \theta \{ \phi ^ {*} \} ) ^ {T} ( \phi ^ {*} - {\mathsf E} _ \theta \{ \phi ^ {*} \} ) \} \, z ^ {T}$$
holds for any $\theta \in \Theta$. In particular, if $\phi$ is a one-dimensional statistic, then for any $\theta \in \Theta$ the variance ${\mathsf D} _ \theta \phi ^ {*}$ of $\phi ^ {*}$ does not exceed the variance ${\mathsf D} _ \theta \phi$ of $\phi$.
In the most general situation the Rao–Blackwell–Kolmogorov theorem states that averaging over a sufficient statistic does not lead to an increase of the risk with respect to any convex loss function. This implies that good statistical estimators should be looked for only in terms of sufficient statistics, that is, in the class of functions of sufficient statistics.
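As a quick numerical illustration of the averaging step, here is a small Monte Carlo sketch in a simpler Bernoulli model (not the normal-law example below): with $\phi = X_1$ and sufficient statistic $T = X_1 + \dots + X_n$, one has ${\mathsf E} \{ \phi \mid T \} = T/n$.

import numpy as np

rng = np.random.default_rng(0)
n, p, reps = 10, 0.3, 200_000

x = rng.binomial(1, p, size=(reps, n))
phi = x[:, 0]               # trivial unbiased estimator of p (first observation only)
t = x.sum(axis=1)           # sufficient statistic
phi_star = t / n            # E{phi | T} = T/n, the averaged estimator

print(phi.mean(), phi_star.mean())   # both near 0.3: unbiasedness is preserved
print(phi.var(), phi_star.var())     # ~0.21 vs ~0.021: the risk drops by a factor of n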
In case the family $\{ {\mathsf P} _ \theta T ^ {-1} \}$ is complete, that is, when the only unbiased estimator of zero based on $T$ is the function of $T$ that is almost-everywhere equal to zero, the unbiased estimator with uniformly minimal risk provided by the Rao–Blackwell–Kolmogorov theorem is unique. Thus, the Rao–Blackwell–Kolmogorov theorem gives a recipe for constructing best unbiased estimators: one has to take some unbiased estimator and then average it over a sufficient statistic. That is how the best unbiased estimator for the distribution function of the normal law is constructed in the following example, which is due to A.N. Kolmogorov.
Example. Given a realization of a random vector $X = (X_1, \dots, X_n)$ whose components $X_i$, $i = 1, \dots, n$, $n \geq 3$, are independent random variables subject to the same normal law $N_1(\xi, \sigma^2)$, it is required to estimate the distribution function
$$\Phi \left ( \frac{x - \xi } \sigma \right ) = \ \frac{1}{\sqrt {2 \pi } \sigma } \int\limits _ {- \infty } ^ { x } e ^ {- ( u - \xi ) ^ {2} / 2 \sigma ^ {2} } \ d u ,\ | \xi | < \infty ,\ \ \sigma > 0 .$$
The parameters $\xi$ and $\sigma ^ {2}$ are supposed to be unknown. Since the family
$$\left \{ \Phi \left ( \frac{x - \xi }{ \sigma } \right ) : | \xi | < \infty ,\ \sigma > 0 \right \}$$
of normal laws has a complete sufficient statistic $T = ( \overline{X}\; , S ^ {2} )$, where
$$\overline{X} = \frac{X _ {1} + \dots + X _ {n} }{n}$$
and
$$S ^ {2} = \frac{1}{n} \sum _ {i=1} ^ {n} ( X _ {i} - \overline{X} ) ^ {2} ,$$
the Rao–Blackwell–Kolmogorov theorem can be used for the construction of the best unbiased estimator for the distribution function $\Phi ( ( x - \xi ) / \sigma )$. As an initial statistic $\phi$ one may use, e.g., the empirical distribution function constructed from an arbitrary component $X _ {1}$ of $X$:
$$\phi = \left \{ \begin{array}{ll} 0 & \textrm{ if } x < X _ {1} , \\ 1 & \textrm{ if } x \geq X _ {1} . \\ \end{array} \right .$$
This is a trivial unbiased estimator for $\Phi ( ( x - \xi ) / \sigma )$, since
$${\mathsf E} \{ \phi \} = {\mathsf P} \{ X _ {1} \leq x \} = \Phi \left ( \frac{x - \xi } \sigma \right ) .$$
Averaging of $\phi$ over the sufficient statistic $T$ gives the estimator
$$\tag{1 } \phi ^ {*} = {\mathsf E} \{ \phi \mid T \} = {\mathsf P} \{ X _ {1} \leq x \mid \overline{X} , S ^ {2} \} = {\mathsf P} \left \{ \frac{X _ {1} - \overline{X} }{S} \leq \frac{x - \overline{X} }{S} \;\Big|\; \overline{X} , S ^ {2} \right \} .$$
Since the statistic
$$V = \left ( \frac{X _ {1} - \overline{X} }{S} , \dots, \frac{X _ {n} - \overline{X} }{S} \right ) ,$$
which is complementary to $T$, has a uniform distribution on the $( n - 2 )$-dimensional sphere of radius $\sqrt{n}$ and, therefore, depends neither on the unknown parameters $\xi$ and $\sigma ^ {2}$ nor on $T$, the same is true for $( X _ {1} - \overline{X} ) / S$ and
$$\tag{2 } {\mathsf P} \left \{ \frac{X _ {1} - \overline{X} }{S} \leq u \right \} = T _ {n-2} ( u) , \qquad | u | < \sqrt{n-1} ,$$
where
$$\tag{3 } T _ {f} ( u) = \frac{1}{\sqrt {\pi ( f + 1 ) } } \frac{\Gamma ( ( f + 1) / 2 ) }{\Gamma ( f / 2 ) } \int\limits _ {- \sqrt {f + 1 } } ^ { u } \left ( 1 - \frac{t ^ {2} }{f + 1} \right ) ^ {( f - 2) / 2 } \mathrm{d}t$$
is the Thompson distribution with $f$ degrees of freedom. Thus, (1)–(3) imply that the best unbiased estimator for $\Phi ( ( x - \xi ) / \sigma )$ obtained from $n$ independent observations $X _ {1} \dots X _ {n}$ is
$$\phi ^ {*} = T _ {n-2} \left ( \frac{x - \overline{X} }{S} \right ) = S _ {n-2} \left ( \frac{x - \overline{X} }{S} \sqrt { \frac{n-2}{n - 1 - ( ( x - \overline{X} ) / S ) ^ {2} } } \right ) ,$$
where $S _ {f} ( \cdot )$ is the Student distribution with $f$ degrees of freedom.
#### References
[1] A.N. Kolmogorov, "Unbiased estimates", Izv. Akad. Nauk SSSR Ser. Mat., 14 : 4 (1950), pp. 303–326 (in Russian)
[2] C.R. Rao, "Linear statistical inference and its applications", Wiley (1965)
[3] B.L. van der Waerden, "Mathematische Statistik", Springer (1957)
[4] D. Blackwell, "Conditional expectation and unbiased sequential estimation", Ann. Math. Stat., 18 (1947), pp. 105–110
# A question about invertible matrices
A square matrix $A$ over the reals is said to be invertible in practice if there exists a matrix $B$ of the same size such that all the entries of $AB$ differ from the corresponding entries of the identity matrix $E$ by at most $10^{-10}$. Does there exist an invertible-in-practice matrix which is not invertible?
If this definition corresponds to how things are done in practice, however, then the set of matrices that are invertible in practice should be a strict subset of the invertible matrices. – Omnomnomnom Jul 28 at 20:10
@angryavian if that were the case, then $A$ would be invertible in practice if and only if it were invertible – Omnomnomnom Jul 28 at 20:13
@ Omnomnomnom: Why do you think so? – user64494 Jul 28 at 20:13
(Retyped): Let $\|\cdot\|$ be the column-sum (or row-sum) norm. Assume we're talking about $n \times n$ matrices. Let $M$ be a matrix such that $|M_{ij} - I_{ij}| \leq 10^{-10}$. Then $\|M - I\| \leq n/{10^{10}}$. It follows that $\sigma(M) \subset [1-n/{10^{10}},1+n/{10^{10}}]$. It follows that $M$ must be invertible as long as $n < 10^{10}$. – Omnomnomnom Jul 28 at 20:23
Yes, if that's really interesting to you. I would remind you, however, that in discussing $10^{10}\times 10^{10}$-sized matrices, we have deviated drastically from practice. Could you elaborate on the motivation for this definition and this question? – Omnomnomnom Jul 28 at 20:29
## 2 Answers
The set of $n \times n$ matrices that are "invertible in practice" is exactly the set of $n \times n$ matrices that are invertible, as long as $n < 10^{10}$.
For $n \geq 10^{10}$, invertible matrices are invertible in practice, but not the other way around. For a counterexample, consider the matrix given by $$A_{ij} = \begin{cases} 1 - 1/n & i=j\\ -1/n & i \neq j \end{cases}$$ Noting that the row sums of $A$ are all zero, we may conclude that $A$ is not invertible. Nevertheless, it is "invertible in practice".
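A quick numerical check of this construction, with a modest $n$ (the $10^{-10}$ threshold itself only bites once $n \geq 10^{10}$, but the structure is the same):

import numpy as np

n = 1000
A = np.eye(n) - np.ones((n, n)) / n   # A_ij = 1 - 1/n on the diagonal, -1/n off it

print(np.abs(A - np.eye(n)).max())    # 1/n: with B = I, AB differs from I by at most 1/n
print(np.linalg.matrix_rank(A))       # n - 1, so A is singular
print(np.abs(A.sum(axis=1)).max())    # row sums are 0, up to rounding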
+1: Nice. We must have $\|A-I\| \ge 1$ for any sub-multiplicative norm. – copper.hat Jul 28 at 20:44
(My point was that it is curious how dimension plays into sub-multiplicative matrix norms.) – copper.hat Jul 28 at 21:00
@copper.hat thank you! And agreed; it's one of the many counter-intuitive things about $n$-dimensional space. – Omnomnomnom Jul 28 at 21:07
Technically, yes, but practically, no.
Technical answer:
Consider the $n\times n$ matrix
$$M = \left[\begin{array}{ccccc}1 & x & x & \ldots & x\\x & 1 & 0 & \ldots & 0\\x & 0 & 1 & \ldots & 0\\ \vdots & & & \ddots & \\ x & 0 & 0 &\ldots & 1\end{array}\right].$$
By row-reduction, the determinant of this matrix is $1 - (n-1)x^2$; in particular, $M$ is singular when $x = \sqrt{1/(n-1)}$. You can make $x$ arbitrarily small (in particular, less than $10^{-10}$) by making $n$ arbitrarily large; and then if $AB = M$, at least one of $A$ or $B^T$ is singular while being "practically invertible."
Practical answer: suppose your matrix is $n\times n$, where $n < 10^{10}$. Then any matrix $M$ that is close to the identity matrix, in the sense you describe, is strictly diagonally dominant and hence nonsingular. Therefore any $A, B$ with $AB=M$ are both nonsingular as well.
@ user7530: +1. The question arises: does there exist a counterexample of a different kind? – user64494 Jul 28 at 20:47
yatish gosai
☆
India,
2019-05-22 07:49
Posting: # 20289
Views: 3,235
## Impartial Witness in BABE study [GxP / QC / QA]
Can Husband act as an Impartial witness for Screening and Enrollment of his wife in BABE study?
Yatish Gosai
QA Professional
Ohlbe
★★★
France,
2019-05-22 10:48
@ yatish gosai
Posting: # 20290
Views: 2,891
## Impartial Witness in BABE study
Dear Yatish Gosai,
» Can Husband act as an Impartial witness for Screening and Enrolment of his wife in BABE study?
I could not find a definition of an "impartial witness" in the New Drugs and Clinical Trials Rules, 2019. ICH GCP § 1.26 defines the impartial witness as
A person, who is independent of the trial, who cannot be unfairly influenced by people involved with the trial, who attends the informed consent process if the subject or the subject’s legally acceptable representative cannot read, and who reads the informed consent form and any other written information supplied to the subject.
In a Phase 3 trial I would say yes, as there is no financial benefit. In a BA/BE trial, one may object that the husband has a direct interest in having his wife participate (he will benefit from the money). The husband is not impartial, as he may be influenced by the financial incentive... The wife's consent may not be freely given, particularly in the case of illiterate subjects, which is exactly the situation where you would need an impartial witness, and where wives are used to obeying their husbands.
Regards
Ohlbe
yatish gosai
☆
India,
2019-05-22 11:04
@ Ohlbe
Posting: # 20292
Views: 2,913
## Impartial Witness in BABE study
Thanks for valuable opinion
Yatish Gosai
Helmut
★★★
Vienna, Austria,
2019-05-24 12:20
@ Ohlbe
Posting: # 20300
Views: 2,812
## New Drugs and Clinical Trials Rules: Wrong definition of BE
Dear Ohlbe and all,
» I could not find a definition of an "impartial witness" in the New Drugs and Clinical Trials Rules, 2019.
Thank you for pointing me to this reference! Unfortunately BE is wrongly defined (Chapter I, 2. Definitions (f), page 148):
“bioequivalence study” means a study to establish the absence of a statistically significant difference in the rate and extent of absorption of an active ingredient from a pharmaceutical formulation in comparison to the reference formulation having the same active ingredient when administered in the same molar dose under similar conditions;
(my emphases)
One error and a doubtful term:
1. Statistically significant? No way. Maybe the CDSCO's gurus had a look at the FDA's definition given in 21 CFR §320.23(b)(1):
• Two drug products will be considered bioequivalent drug products if they are pharmaceutical equivalents or pharmaceutical alternatives whose rate and extent of absorption do not show a significant difference when administered at the same molar dose of the active moiety under similar experimental conditions, either single dose or multiple dose.
Note the absence of ‘statistically’. In the FDA’s definition ‘significant’ is used in its common meaning (1 or 2a). If BE would require “absence of a statistically significant difference”, should consider a union with .*
2. Similar conditions? Nope. They should be the same (food, beverages, time of administration, physical activity, ). Though similar is also stated in the FDA’s definition, it is not mentioned in the EMA’s BE-GL (common sense!)…
This reminds me of a story Salomon Stavchansky once told me. He more or less single-handedly wrote ANVISA's first guidances, only to discover that a wrong definition of bioavailability was stated not only in the guidance (Resolução) but also in the law (Legislação). Whilst the former could be corrected rather quickly, it took Brazil two years to change the latter.
• Example for a drug with low variability. Minimum sample size (in India only if justified 12), 14 subjects dosed, no dropouts; consequently extremely high power:
library(PowerTOST)
CV <- 0.1
n  <- 14
pe <- seq(0.84, 1, length.out=100)
pe <- sort(unique(c(pe, 1/pe)))
res <- data.frame(pe=100*pe, lower=NA, upper=NA, BE=FALSE, PE=FALSE, p=NA)
for (j in seq_along(pe)) {
  res[j, 2:3] <- round(100*CI.BE(pe=pe[j], CV=CV, n=n), 2)
  if (res$lower[j] >= 80 & res$upper[j] <= 125) res$BE[j] <- TRUE
  if (res$lower[j] < 100 & res$upper[j] > 100) res$PE[j] <- TRUE
  res$p[j] <- pvalue.TOST(pe=pe[j], CV=CV, n=n)
}
op <- par(no.readonly=TRUE)
par(pty="s")
plot(pe, res$p, type="n", log="x", xlab="point estimate", ylab="p", las=1)
grid(); abline(h=0.05)
box()
lines(pe, res$p, lwd=3, col="red")
lines(res$pe[res$BE == TRUE]/100, res$p[res$BE == TRUE], lwd=3, col="magenta")
lines(res$pe[res$PE == TRUE]/100, res$p[res$PE == TRUE], lwd=3, col="blue")
legend("top", inset=0.02, box.lty=0, bg="white", lwd=3,
       col=c("red", "magenta", "blue"),
       legend=c("fails BE", "passes BE", "n.s. (CI includes 1)"))
par(op)
Studies with point estimates of 85.6–116.8% pass everywhere except in , where the PE has to lie within 93.5–106.9%. Bizarre.
Dif-tor heh smusma 🖖
Helmut Schütz
The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
Helmut
★★★
Vienna, Austria,
2019-05-22 10:52
@ yatish gosai
Posting: # 20291
Views: 2,915
## Husband ≠ impartial
Hi Yatish,
» Can Husband act as an Impartial witness for Screening and Enrollment of his wife in BABE study?
Look up impartial in a dictionary. A married couple is a paradigm of not being impartial.
See the definition of impartial witness in the ‘Ethical Guidelines for Biomedical Research on Human Subjects’ of the Indian Council of Medical Research:
A literate person, who is independent of the research and would not be unfairly influenced by people involved with the study, who attends the informed consent process if the participant […] cannot read, and understand the informed consent form and any other written information supplied to the participant.
Dif-tor heh smusma 🖖
Helmut Schütz
The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
yatish gosai
☆
India,
2019-05-22 11:04
@ Helmut
Posting: # 20293
Views: 2,893
## Husband ≠ impartial
Thanks for valuable opinion
Yatish Gosai
nobody
nothing
2019-05-22 15:56
@ yatish gosai
Posting: # 20294
Views: 2,862
## Husband ≠ impartial
These are no "opinions" (i.e. subjected to matters of taste or which can be discussed), these are facts. And it's amazing (to say the least) that such a question arises in the first place in a forum for BA/BE experts...
Kindest regards, nobody
ElMaestro
★★★
Belgium?,
2019-05-22 16:11
@ yatish gosai
Posting: # 20295
Views: 2,860
## Impartial Witness in BABE study
Hi yatish gosai
and all posters on this thread,
to be honest I think this is a great, great question and it is one that made me think very hard for a prolonged moment.
The mere fact that such a question is asked should give rise to a lot of reflection on the part of regulators. What is obvious to a regulator or someone from one geographical region is obviously not obvious to others. At the end of the day guidelines are not intended solely for regulators but for all involved parties.
So here's something to work with.
I could be wrong, but...
Best regards,
ElMaestro
"Pass or fail" (D. Potvin et al., 2008) | |
# Android, Pyweek and Separate Axis Theorem
[BLURB:] Sooooo... Perhaps I should work on smaller projects... hmmm. In any case, I spent some time taking a look into Python on Android, tried the pyweek challenge, and NAILED THE HELL out of SATs... not the school education kind.
[DEVELOPMENT: Hell]
Hmmm... Archetype needs to take a little break (not that I haven't already put it on the back burner). I've thought this many times, but still, I'll say it... I need to work on smaller projects. To that end, I tried my hand at the pyweek challenge that just occurred. For those that don't know, the challenge is to write a game, either alone or in a team, within one week, based off a theme that isn't revealed until the moment the challenge begins. This challenge was "Nine Times".
So... did I succeed...
Ummm, not exactly.
Turns out, having to help move family from one apartment to another takes a hell of a lot of time (8 hours of day 1... lost).
Nine hours a day due to my full time job doesn't help much either.
Then, of course, my father's computer fails two days before the end of the challenge and who's the only one in the family that can fix it? Yeeeah...
Nonetheless, I still spent most of my free hours working on something, and while I didn't have anything playable until 5 minutes AFTER the end of the challenge, I learned a lot!
One thing I learned was that Android's SL4A is not a viable platform for writing games. I'm thrilled that there's a method for writing applications for Android outside of Java and C++, but as I looked into the SL4A project, I soon realized that the Android graphics library was not exposed to the scripting languages. This is a HUGE oversight of the project, I think, and, as such, I lost a few hours of the pyweek challenge because I didn't realize this issue until I was preparing to develop my pyweek game on Android. DRAT! With any luck, SL4A will get a graphics hook and then we'll REALLY be cooking! HA!
[TUTORIAL: SATs]
Pyweek wasn't a total loss, though, as I learned and nailed Separate Axis Theorem collisions!! There's a bit of documentation out on SATs and it's all pretty good, but I figured I'd share my discoveries with my wide blog audience for no other reason than to solidify my understanding of the system.
• Firstly, I'd like to point out that this is for calculating collisions on 2D objects. While SAT works on 3D, I only really worked with 2D, and that's what I'm working with.
• Secondly... I haven't worked with sphere (or rounded) objects. It wasn't what I was focusing on when I was developing my application, and so, for the time being, I'm skipping rounded objects.
• Thirdly, SAT ONLY WORKS WITH CONVEX OBJECTS! These are objects such that if you drew a straight line through them, the line could NEVER intersect the object in more than 2 locations, regardless of where you drew the line.

What is Projection:
To begin, let's talk about projection. What is projection? It's a shadow. For a 3D object projected onto a plane, the result is a 2D shape (or, if you think of the real world... a shadow). For a 2D object, the projection is a line, with a length long enough to encompass the entire 2D object when viewed from the axis onto which one is projecting.
Huh?
Well... let's think of a rectangle. The axises of a rectangle are the X and Y axis, and the projection of a rectangle upon the X axis would be the same as the length of the rectangle.
Projection in regards to SATs:
Let's stick with the rectangle for a little while and start talking about SATs...
With the Separate Axis Theorem we can determine that two objects are colliding with one another if the projections of the two objects onto ALL of the axises in question overlap each other. If even a single pair of projections doesn't, then there is no collision.
Think of the two rectangles... I dare you to try drawing two rectangles in such a way that the shadows on the X axis AND the Y axis for BOTH rectangles touch but the rectangles AREN'T colliding. Go ahead. I'll sit here and wait for you...
...
...
No? Couldn't do it? And that's the point. If all shadows for both objects are overlapping each other, then your objects are colliding. If even a single pair of shadows is NOT overlapping, then the objects ARE NOT colliding. That's the Separate Axis Theorem. Take your objects and find all the axises onto which you need to project. If even a single axis has a pair of projections that DO NOT overlap, then there is no collision and you're done.
Finding Axises to Project Upon:
Great! Now you may be wondering... "If I'm supposed to project upon some axises, which axises do I need to use?"... As it turns out, the axises you need to project against are simply the normal vectors of each side of your 2D object. Let's look at a little Python code that does this...
Code Example: 1.0
import math

# This is a 10x10 rectangle centered at the origin.
points = [[-5, 5], [-5, -5], [5, -5], [5, 5]]
axises = []  # an empty list at the moment

# We loop through the edges of the object... which just so happens to be the
# same as the number of points.
for p in range(0, len(points)):
    if p == len(points) - 1:
        edge = [points[0][0] - points[p][0], points[0][1] - points[p][1]]
    else:
        edge = [points[p+1][0] - points[p][0], points[p+1][1] - points[p][1]]
    # Now that we have the edge, we need to find its normal... There are actually
    # TWO normals you can use, depending on whether you use the left-handed or the
    # right-handed normal. For the most part, it doesn't matter which you use, as
    # long as you're consistent. I'm going to use left-handed normals...
    norm = [edge[1], -edge[0]]  # Or... (y, -x)... it's that simple.
    # At this point we've found our normal, which is more or less our axis. I say
    # "more or less" only because, for simplicity, our axis should be a UNIT vector.
    nlength = math.sqrt((norm[0]**2) + (norm[1]**2))
    axis = [norm[0]/nlength, norm[1]/nlength]  # <--- That's our axis for this edge!
    axises.append(axis)  # and we add it to our list of axises for this object.
HEY! HEY! Your example uses a rectangle and creates FOUR axises! When you were talking to us about projection, you said rectangles only have TWO!!!
Ummm... well, yeah, sort of. Because the rectangle has four sides, it technically has four axises to test... however, two of those axises are parallel to the other two. In essence, it's like we have the same axis twice (even though the two may point in opposite directions). To solve this problem, you can change the last line of the code above to...
Code Example: 1.1
AddAxis = True
# We need to loop through our existing axises.
for a in axises:
    # Calculating the dot product of the two axises (both are unit vectors)...
    dp = (axis[0]*a[0]) + (axis[1]*a[1])
    # If the two axises are parallel, the dot product of two unit vectors will be
    # either 1.0 or -1.0, so we check its absolute value. If it is (nearly) 1.0,
    # we DON'T want to add the axis to the list, since a similar one already exists.
    if abs(dp) > 0.9999:
        AddAxis = False
        break
if AddAxis:
    axises.append(axis)
Yes, it's a bit more complicated and eats more CPU cycles, but, depending on the complexity of your shape, it shouldn't affect you too much, and doing the above cuts down on the number of tests you need to do in the upcoming examples.
One important thing to keep in mind is that we need to calculate the projections of BOTH objects over BOTH objects' axes! We'll see this in Code Example 3.0.
Calculating Projections:
So... now that we have our axes to test against, how do we calculate the projection of an object over an axis?
Well... we take the dot product of each point in our object against our axis, storing the minimum and maximum values.
Code Example: 2.0
# These are our minimum and maximum projection values.
# Initially, we don't have any.
proj_min = None
proj_max = None

# We loop through each point in our object.
for p in points:
    # Calculate the dot product.
    # NOTE: While the axis should be normalized (as we did in Code Example 1.0),
    # we don't need to, nor should we, normalize the point.
    dp = (axis[0]*p[0]) + (axis[1]*p[1])

    if proj_min is None:
        proj_min = dp
        proj_max = dp
    else:
        if dp < proj_min:
            proj_min = dp
        if dp > proj_max:
            proj_max = dp

projection = [proj_min, proj_max]
Keep in mind, we have to do this for ALL axes from BOTH objects ON BOTH objects.
Huh?
I'll explain next...
Putting it All Together... Simply:
So now you've seen how we get our axes to test upon and how to calculate projections... let's put it all together!
Code Example: 3.0
# obj1 and obj2 are assumed to be lists of points like the one used in Code Example 1.0
def collides(obj1, obj2):
    # Assume the CalculateAxises function does the same as Code Example 1.0
    # and returns the axises list.
    # Calculate the axises for the first object.
    o1axises = CalculateAxises(obj1)
    for axis in o1axises:
        # Assume the CalculateProjection function does the same as Code Example 2.0
        # and returns the [min, max] projection.
        # Get the projection of obj1 over the axis.
        o1proj = CalculateProjection(axis, obj1)
        # Get the projection of obj2 over the axis.
        o2proj = CalculateProjection(axis, obj2)
        # Assume the Overlaps function returns True if the two projections
        # overlap and False otherwise.
        if not Overlaps(o1proj, o2proj):
            # As soon as ONE pair of projections DON'T overlap,
            # we KNOW there's no collision. Done.
            return False

    # NOPE! Not done yet. We now have to do the SAME THING for the axises of the
    # OTHER object.
    # Calculate the axises for the second object and repeat what we did above.
    o2axises = CalculateAxises(obj2)
    for axis in o2axises:
        o1proj = CalculateProjection(axis, obj1)
        o2proj = CalculateProjection(axis, obj2)
        if not Overlaps(o1proj, o2proj):
            return False

    # We've now looped over all axises for both objects. If we're still here, then
    # ALL PROJECTIONS OVERLAP!
    # We've COLLIDED!
    return True
And now you know, using the Separating Axis Theorem, whether or not the two objects collide.
Of course, you usually want to know a little more than that... like, if they've collided, how do you break OUT of the collision? Turns out, that's not too much harder than what we've already done.
Putting it All Together... MTV Style:
No... not the TV station. In this case, MTV stands for Minimum Translation Vector... or, more simply... What's the quickest way out of here!!!
What we want is a vector showing us the way to non-collision safety. All we need for that is the axis on which the minimum overlap was found. Let's go to code, shall we?
Code Example: 4.0
def collides(obj1, obj2):
    # These will hold the information we need to find our MTV.
    # For now, they're None... meaning we haven't found anything yet.
    MTVOverlap = None
    MTVAxis = None

    # Calculate the axises for the first object.
    o1axises = CalculateAxises(obj1)
    for axis in o1axises:
        # Get the projections of both objects over the axis
        # (CalculateProjection works as in Code Example 2.0).
        o1proj = CalculateProjection(axis, obj1)
        o2proj = CalculateProjection(axis, obj2)
        # We're getting rid of the Overlaps function from before and using the
        # Overlap function (we dropped the 's'). Overlap returns a scalar
        # value equal to the amount of overlap between the two projections.
        # If there is no overlap, then Overlap returns 0.0.
        ol = Overlap(o1proj, o2proj)
        if ol == 0.0:
            # We have no overlap... meaning we have no collision... meaning
            # we have NO MTV. We're done.
            return None
        # Here's where we do some new stuff... keep the smallest overlap seen.
        if MTVOverlap is None or ol < MTVOverlap:
            MTVOverlap = ol
            MTVAxis = axis

    # Calculate the axises for the second object and repeat what we did above.
    o2axises = CalculateAxises(obj2)
    for axis in o2axises:
        o1proj = CalculateProjection(axis, obj1)
        o2proj = CalculateProjection(axis, obj2)
        ol = Overlap(o1proj, o2proj)
        if ol == 0.0:
            return None
        if MTVOverlap is None or ol < MTVOverlap:
            MTVOverlap = ol
            MTVAxis = axis

    # Ok... we've gotten this far, which means all projections overlap between
    # the two objects, so we want to return the MTV. We've already captured
    # it, so we return it as a single vector...
    return [MTVAxis[0]*MTVOverlap, MTVAxis[1]*MTVOverlap]
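By the way, here's one possible way to write the Overlaps and Overlap helpers I keep assuming (the names come from the examples above; the bodies are just my guess at a reasonable interval test):

def Overlaps(proj1, proj2):
    # Two [min, max] intervals overlap unless one ends before the other starts.
    return proj1[0] <= proj2[1] and proj2[0] <= proj1[1]

def Overlap(proj1, proj2):
    # Amount of overlap between two [min, max] intervals... 0.0 if they don't touch.
    return max(0.0, min(proj1[1], proj2[1]) - max(proj1[0], proj2[0]))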
And we're done... Yeah.. ummm... for the most part...
MTV Directionality:
This little bit caught me for a while when I was figuring this out. I had done everything right, but when I tested two objects at certain angles, instead of the MTV moving the objects out of collision, it'd send them further IN! It occurred to me, after a couple hours of frustration... what if the MTV was pointing in the same direction as the colliding object?
First thing I had to realize is that the winding of my object's points was important. Meaning, were they clockwise or counter-clockwise? For instance, in Code Example 1.0, the points in the "points" variable are counter-clockwise. There are probably ways to determine this programmatically, but, for simplicity, just say all objects must be drawn with the same winding. In my case, I was drawing in a CCW direction as well, so...
I thought to myself... "self", I said, "during the collision, one object has to be moving and the other not" (simple situation). So, I decided that I would calculate the vector between the two objects...
vector = collideeObj.position - colliderObj.position
...and normalize it...
vector.normalize()
Then I would get a normal vector of my MTV...
MTVNorm = MTV.normalize(returnCopy=True)
... find the dot product between my direction vector and my MTV vector...
dp = vector.dot(MTVNorm)
... and, for CCW winding...
if dp < 0.0:
    # This inverts the vector. Same path, different direction.
    MTV = -MTV
... and finally, reposition the collider by the MTV
collider.position += MTV
And there you have it! OBJECTS COLLIDING PROPERLY!!
Caveats:
Like I said, this works for N-sided objects. I didn't even look at rounded objects because I wasn't focusing on those (I was pressed for time). Also, the objects MUST be CONVEX, but that's more a caveat of SAT than of my code. Lastly, there's no compensation here for fast moving objects or complete containment (one object totally within another).
If you want to do concave objects, simply create a group of convex objects and treat them as one object, except during collision testing.
Conclusion:
I hope at least someone finds this useful... but, at the very least, I can always look this back up if I ever need to code up SAT objects again. If anyone would like, I could post up a simple example program (in Python), but for now, I need to eat dinner.
| |
# Probability measure on $\mathcal{P}(\mathbb{R})$
This question has been bugging me for a while. Does there exist a probability measure on the measurable space $\bigl(\mathbb{R},\mathcal{P}(\mathbb{R})\bigr)$? If so, what is it?
Yes, $\delta_0$ for example. (Dirac at $0$). I guess you want additional conditions. – Davide Giraudo Sep 22 '12 at 11:43
As @DavideGiraudo said, you probably want more conditions. I am still looking for a natural measure (on $\mathcal{P}([0,1])$). Though I think I now know such an example. Let me know if you're interested in that. – Quinn Culver Sep 22 '12 at 12:42
To read about the space $\mathcal{P}(\mathcal{P}(\mathbb{R}))$, check out Billingsley's Convergence of Probability Measures and Parthasarathy's Probability Measures on Metric Spaces. – Quinn Culver Sep 22 '12 at 12:45
If you drop the axiom of choice then you can have the Lebesgue measure... – Asaf Karagila Sep 22 '12 at 13:19
@Quinn I think he means power set by ${\cal P}$, not the space of probability measures. – Byron Schmuland Sep 22 '12 at 14:49
He proves the following result due to Banach and Kuratowski: Assuming the continuum hypothesis, there is no measure $\mu$ defined on all subsets of $I:=[0,1]$ with $\mu(I)=1$ and $\mu(\{x\})=0$ for all $x\in I$. | |
With the school year starting, I can’t keep up with the one-post-a-day frequency anymore. Still, I want to keep plowing ahead towards class field theory.
Today’s main goal is to show that under certain conditions, we can always extend valuations to bigger fields. I’m not aiming for maximum generality here though.
Dedekind Domains and Extensions
One of the reasons Dedekind domains are so important is
Theorem 1 Let ${A}$ be a Dedekind domain with quotient field ${K}$, ${L}$ a finite separable extension of ${K}$, and ${B}$ the integral closure of ${A}$ in ${L}$. Then ${B}$ is Dedekind.
This can be generalized to the Krull-Akizuki theorem.
I’ll sketch the proof. We need to check that ${B}$ is Noetherian, integrally, closed, and of dimension 1.
• Noetherian. Indeed, ${B}$ is a finitely generated ${A}$-module. The ${K}$-bilinear map ${(\cdot,\cdot): L \times L \rightarrow K}$, ${(a,b) \mapsto \mathrm{Tr}(ab)}$, is nondegenerate since ${L}$ is separable over ${K}$. Let ${F \subset B}$ be a free ${A}$-module spanned by a ${K}$-basis for ${L}$. Then since traces preserve integrality and ${A}$ is integrally closed, we have ${B \subset F^*}$, where ${F^* := \{ x \in L: (x,F) \subset A \}}$. Now ${F^*}$ is ${A}$-free on the dual basis of ${F}$, so ${B}$ is a submodule of a f.g. ${A}$-module, hence a f.g. ${A}$-module (as ${A}$ is Noetherian).
• Integrally closed. It is an integral closure, and integrality is transitive.
• Dimension 1. Indeed, integral extensions preserve dimension by lying over, going up, and a corollary.
So, consequently the ring of algebraic integers (integral over ${\mathbb{Z}}$) in a number field (finite extension of ${\mathbb{Q}}$) is Dedekind.
Extensions of discrete valuations
The real result we care about is:
Theorem 2 Let ${K}$ be a field, ${L}$ a finite separable extension. Then a discrete valuation on ${K}$ can be extended to one on ${L}$.
Indeed, let ${R \subset K}$ be the ring of integers. Then ${R}$ is a DVR, hence Dedekind, so the integral closure ${S \subset L}$ is Dedekind too (though in general it is not a DVR). Now as above, ${S}$ is a finite ${R}$-module, so if ${\mathfrak{m} \subset R}$ is the maximal ideal, then
$\displaystyle \mathfrak{m} S \neq S$
by Nakayama. So ${\mathfrak{m} S}$ is contained in a maximal ideal ${\mathfrak{M}}$ of ${S}$ with, therefore, ${\mathfrak{M} \cap R = \mathfrak{m}}$. (This is indeed the basic argument behind lying over, which I could have just invoked.) Now ${S_{\mathfrak{M}} \supset R_{\mathfrak{m}}}$ is a DVR as it is the localization of a Dedekind domain at a prime ideal. So there is a discrete valuation on ${S_{\mathfrak{M}}}$. Restricted to ${R}$, it will be a power of the given ${R}$-valuation. This is ok, since a power of a discrete valuation is a discrete valuation too.
This completes the proof. Note that there is a one-to-one correspondence between extensions of the valuation on ${K}$ and primes of ${S}$ lying above ${\mathfrak{m}}$. Indeed, the above proof indicated a way of getting valuations from primes. For an extension of the valuation on ${K}$ to ${L}$, let ${\mathfrak{M} := \{ x \in S: \left| x \right| < 1\}}$.
Also, I think the above result can be extended to purely inseparable extensions (hence all algebraic extensions): if $L/K$ is purely inseparable of degree $p^i$, then define the valuation on $L$ by raising to the power $p^i$, taking the valuation in $K$, and raising the resulting real number to the power $1/p^i$.
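Concretely, writing that recipe as a formula (a sketch; here ${x^{p^i} \in K}$ because ${L/K}$ is purely inseparable of degree ${p^i}$):

$\displaystyle \left| x \right|_L := \left( \left| x^{p^i} \right|_K \right)^{1/p^i}, \qquad x \in L.$

Multiplicativity is clear, the ultrametric inequality follows from the Frobenius identity $(x+y)^{p^i} = x^{p^i} + y^{p^i}$ in characteristic $p$, and on ${K}$ this recovers the original valuation.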
Next up: Ramification. | |
## What an Author needs to know about LaTeX
21 Apr
This post is for authors whose publisher likes to use LaTeX for layout, and who want to make changes to the marked-up source directly. It is quite easy to edit LaTeX files, as long as you know a few things to avoid. Here is the minimum you need to know to be effective at editing the content.
You are the author, and you are responsible for the content of the book. You are not a LaTeX expert and you don't need to be. Mixed in with the content are a bunch of LaTeX commands which will ultimately cause the book to be formatted properly. Having someone else responsible for the layout is a good idea because LaTeX commands are capricious and there are a lot of esoteric rules you need to know to alter the way a particular publication will look. But you can leave that to the LaTeX experts! Your job is to focus on the content, and get the written word correct regardless of the formatting and style of the printed layout.
## Basics
The source file is a text file. That means you can use any text editor, whatever is your favorite. If you have no preference, MS WordPad will do fine. Better editors like TextPad offer coloring of commands for TeX files, so look for that since it will help you avoid mistakes.
Most of the words and sentences in your book will appear as exactly the same words and sentences in the source file. A blank line separates one paragraph from another. A paragraph can all be on one line, or as many lines as you like, as long as there is no blank line in the middle, because that would start a new paragraph.
This is a paragraph on one long line.
This is
a paragraph
on multiple
lines.
## Special Characters
You are free to include most characters in your text. There are just ten special ones you need to be careful about. These are $, %, #, \, {, }, ^, _, ~, and &. If you want one of these characters to appear in your text, you need to escape it, usually by putting a backslash before it. For a $ to appear, you must type \$ into the source. Most of the others work the same way: \%, \#, \&, \_, \{, and \}. The last three are slightly different: type \^{} for ^, \~{} for ~, and \textbackslash for \.
## Commands
When you first look at a LaTeX source file, you will probably immediately notice commands in and among the text, and these commands start with a backslash. The general form is a backslash, then the command name, then an open curly brace, some text, and a close curly brace. The following are some commands that you might see:
• \chapter{Chapter Titles are Important}
• \section{The Section Title}
• \index{Newton, Isaac}
• \begin{quote}
• \end{quote}
• \emph{This phrase will print in italics}
The first starts a new chapter, and the text in the curly braces is the title of the new chapter. The second starts a section, again with the section title in the braces. The third will cause an entry in the index to point back to this location in the book. The fourth and fifth are placed around block quotes. The sixth will cause a span of text to be set in italic font. Some commands have no parameters, some have multiple parameters, and some have optional parameters in square brackets:
• \newline
• \setmainfont[Ligatures={Common,TeX}, Numbers={OldStyle}]{Palatino Linotype}
One important thing to know is that commands without any braces will consume any white space after them. To avoid this, put a pair of braces with nothing between them after the command. Thus \newline{} and \newline are the same thing, except the latter will consume the white space and it will be as if there was no space there. For the most part, stay away from the complicated instructions. Those are not likely to have any significant content text in them.
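For instance, here is a tiny made-up snippet illustrating the white space rule (the \LaTeX logo command is the classic example; the document is otherwise minimal):

\documentclass{article}
\begin{document}
\LaTeX is fun.   % the space after the command is consumed: prints "LaTeXis fun."
\LaTeX{} is fun. % the empty braces stop the gobbling: prints "LaTeX is fun."
\end{document}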
## Quote Characters
LaTeX is set up to have proper quotation marks where the marks that start the phrase are different than the ones that end the phrase, but the writer needs to do this manually — there are no smart quotes. It is pretty easy: there is a “before” single quote character and an “after” single quote character. For double quotes, just use two of them. It works like this:
He yelled ``Watch out!'' before jumping in the pool.
The BEFORE quote character is the key that is usually on the top left of the keyboard, above the tab key, and on the same key with the tilde character. The AFTER quote character is the normal apostrophe which is usually on the right of the keyboard next to the enter key. LaTeX will transform these characters into the appropriate begin and end quotes supported by the font you have selected.
You should not use the double-quote character (") at all, and you cannot use the special unicode before-double-quote or the special unicode after-double-quote.
## Hyphens and Dashes
LaTeX has three kinds of dashes.
1. The shortest is a simple hyphen, which appears between words to make a single hyphenated word. To do this use a single hyphen character, as you would normally expect.
2. An N-Dash is a bit longer, and is used to specify a range of things, such as pages 32–35. For an N-Dash use two hyphens together.
3. An M-Dash is a longer dash that is used to separate a part of a sentence from another part. For this use three hyphens in a row.
Here are some examples:
The state-of-the-art video-game failed on first-run.
Mr. Jobs---no relation to Steve Jobs---was the first in line.
It is pretty straight-forward.
## Conclusion
So if you can use a text editor, can handle a few special characters, and some commands that start with backslash, that is all you need to know to edit the content of your LaTeX formatted book source. | |
Microbial 'omics
# anvi-experimental-organization [program]
Create an experimental clustering dendrogram.
See program help menu or go back to the main page of anvi’o programs and artifacts.
## Usage
This program can use an anvi’o clustering-configuration file to access various data sources in anvi’o databases to produce a hierarchical clustering dendrogram for items.
It is especially powerful when the user wishes to create a hierarchical clustering of contigs or gene clusters using only a specific set of samples. If you would like to see an example usage of this program see the article on combining metagenomics with metatranscriptomics.
| |
# Verify correctness of quantifier elimination, using SAT
Let $x=(x_1,\dots,x_n)$ and $y=(y_1,\dots,y_n)$ be $n$-vectors of boolean variables. I have a boolean predicate $Q(x,y)$ on $x,y$. I give my friend Priscilla $Q(x,y)$. In response, she gives me $P(x)$, a boolean predicate on $x$, and she claims that
$$P(x) \equiv \exists y . Q(x,y),$$
or in other words, that
$$\forall x . [P(x) \Leftrightarrow \exists y . Q(x,y)].$$
I would like to verify her claim somehow. How can Priscilla help me verify this claim?
You can assume that both $P$ and $Q$ are represented as CNF formulas, and that they're not too large (polynomial size, or something).
In an ideal world, it'd be awesome if I could reduce the problem of verifying this claim to SAT: I have a SAT solver, and it'd be great if I can use the SAT solver to verify this claim. However, I'm pretty sure that it's not going to be possible to formulate the problem of verifying this claim directly as a SAT instance; testing the validity of a 2QBF formula is almost certainly harder than SAT. (The $\Leftarrow$ direction is easy to formulate as a SAT instance, but the $\Rightarrow$ direction is hard because it inherently involves two alternating quantifiers.)
But suppose Priscilla could give me some additional evidence to support her claim. Is there some additional evidence or witness Priscilla could give me, which would make it easy for me to verify her claim? In particular, is there some additional evidence or witness she could give me, which would make it easy for me to formulate the problem of verifying her claim as an instance of SAT (which I can then apply my SAT solver to)?
One unusual aspect of my setting is that I'm assuming (heuristically) that I have an oracle for SAT. If you like complexity theory, you can think about it this way: I am taking the role of a machine that can compute things in $P^{NP}$ (i.e., in $\Delta^P_2$), and I'm looking to verify Priscilla's claim using an algorithm in $P^{NP}$. My thanks to mdx for this way of thinking about things.
My motivation/application: I'm looking to do formal verification of a system (e.g., symbolic model checking), and a key step in the reasoning involves quantifier elimination (i.e., starting from $Q$, obtain $P$). I'm hoping for some clean way to verify that the quantifier elimination was done correctly.
If there's no solution that works for all possible $P,Q$, feel free to suggest a solution that is "sound but not complete", i.e., a technique that for many $P,Q$ lets me verify the claimed equivalence. (Even if it fails to verify the claim on some $P,Q$ that do satisfy the claim, I can still try this as a heuristic, as long as it never inappropriately claims to have verified a false claim. On any given $P,Q$, it might work, or it might not; if it doesn't work, I'm no worse off than where I started.)
• If we give Priscilla a $Q(x,y)$ where y is irrelevant, are we not effectively solving $\text{TAUT}\in \text{coNP}$? If so, then there is no certificate that Priscilla could give you that could help unless $\text{NP}=\text{coNP}$. – mdxn Oct 5 '13 at 1:27
• @mdx, the peculiar thing about this setting is that I have a SAT solver, which (empirically) seems to almost always work on the predicates that I run into in practice. So, if I'm given $P(x),Q(x)$ and want to verify $\forall x . P(x) \Leftrightarrow Q(x)$, I can feed $(P(x) \land \neg Q(x)) \lor (\neg P(x) \land Q(x))$ into my SAT solver; if it finds this is not satisfiable, I've verified $\forall x . P(x) \Leftrightarrow Q(x)$ is true. So, even though that's effectively solving $\text{TAUT}$, it's still OK in practice. Or have I misunderstood the gist of your comment? – D.W. Oct 5 '13 at 1:33
• Ah, so I assume you are taking the role of a machine deciding problems in $\text{P}^\text{NP}=\Delta^P_2$ (or the heuristic equivalent of)? – mdxn Oct 5 '13 at 2:08
• @mdx, yeah, now that you mention it, that's a nice way to think about it. Thank you for suggesting that perspective! – D.W. Oct 5 '13 at 17:14
• I don't think the first-order-logic tag is justified. The question is all about quantified boolean formulas. – kne May 1 '18 at 16:29
Here are two techniques I've been able to identify:
• Identify an explicit Skolem function. Suppose Priscilla can identify an explicit function $f$ such that
$$\forall x . P(x) \Leftrightarrow Q(x,f(x))$$
holds. Then it follows that Priscilla's claim is correct.
This means that Priscilla can help us verify her claim by providing a function $f$ so that the above proposition holds. We can confirm that the above proposition holds by testing the following formula for satisfiability:
$$\neg (P(x) \Leftrightarrow Q(x,f(x))).$$
If this formula is not satisfiable, then Priscilla's claim has been verified.
One caveat is that Priscilla needs to be able to identify a suitable function $f$. A further caveat is that we need $f$ to be concretely representable in some concise form, say, as a polynomial-sized boolean circuit. However, if those conditions are met, then this technique should work. (A small sketch of this check appears after this list.)
• A hybrid argument. Consider the special case of this problem, where we are quantifying over a one-bit variable (rather than an $n$-bit variable); it turns out the problem is easy to solve in this case. This suggests that we try to chain that technique $n$ times, each time removing one more bit of $y$. It turns out that this idea will sometimes work, but not always.
Let me explain how to verify Priscilla's claim in the case where $y=(y_1)$ is a one-bit variable. Then $\exists y . Q(x,y)$ is equivalent to $Q(x,\text{False}) \lor Q(x,\text{True})$. The latter formula is at most twice as large as $Q$, so still polynomial sized. Now we can use our SAT solver to test whether $Q(x,\text{False}) \lor Q(x,\text{True})$ is equivalent to $P(x)$; the equivalence holds exactly if the following formula is not satisfiable:
$$\neg (P(x) \Leftrightarrow (Q(x,\text{False}) \lor Q(x,\text{True}))).$$
So, if we're quantifying over a single bit, this gives a way to verify that the quantifier elimination was done correctly (see the sketch after this list).
To solve the original problem, apply this multiple times. Priscilla's job will be to give us $n+1$ boolean predicates $R_0,R_1,R_2,\dots,R_n$ such that
$$R_i(x,(y_{i+1},\dots,y_n)) \equiv \exists y_1,y_2,\dots,y_i . Q(x,y).$$
Our task will be to verify whether all of these boolean predicates were correctly generated. We can do this by testing whether $Q(x,y) \equiv R_0(x,y)$, $P(x) \equiv R_n(x)$,
$$R_{i+1}(x,(y_{i+2},\dots,y_n)) \equiv \exists y_{i+1} . R_i(x,(y_{i+1},\dots,y_n)) \qquad \text{for } i=0,1,\dots,n-1.$$
Notice that the latter is an instance of quantifier elimination with a single bit, so we've already described how to test that it was done correctly using a SAT solver. We can also test whether $Q \equiv R_0$ and $P \equiv R_n$ using a SAT solver straightforwardly. So, we can check whether Priscilla generated $R_0,\dots,R_n$ correctly. If she did, then we've verified that $P$ was generated suitably.
One caveat is that Priscilla needs to be able to generate the $R_i$'s. A bigger caveat is that the size of all the $R_i$'s needs to be reasonable (say, polynomial-sized). If Priscilla generates the $R_i$'s naively, their size might grow exponentially with $i$, which is no good. So, Priscilla will need a way to simplify at each stage; there needs to exist some sequence of $R_0,\dots,R_n$ that are all polynomial-sized, and Priscilla needs to be able to find such a sequence. That is by no means guaranteed. That said, if Priscilla can do this, then this technique should work.
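As a concrete illustration of both checks, here is a minimal sketch using the z3-solver Python package (an assumption on my part: any SAT/SMT solver with a similar API would do; $Q$, $P$ and the Skolem function $f$ are tiny made-up examples, not formulas from a real quantifier-elimination run):

from z3 import Bools, BoolVal, And, Or, Not, Xor, Solver, substitute, unsat

x1, x2, y = Bools('x1 x2 y')
Q = Or(And(x1, y), And(x2, Not(y)))   # Q(x, y)
P = Or(x1, x2)                        # Priscilla's claimed "exists y . Q(x, y)"

# Check 1: a claimed Skolem function f(x); here f = x1.
Q_of_f = substitute(Q, (y, x1))       # Q(x, f(x))
s = Solver()
s.add(Xor(P, Q_of_f))                 # SAT iff P(x) != Q(x, f(x)) for some x
print("skolem check:", "verified" if s.check() == unsat else "failed")

# Check 2: the one-bit expansion, "exists y . Q(x,y)" == Q(x,False) | Q(x,True).
expanded = Or(substitute(Q, (y, BoolVal(False))),
              substitute(Q, (y, BoolVal(True))))
s = Solver()
s.add(Xor(P, expanded))
print("one-bit check:", "verified" if s.check() == unsat else "failed")

Both checks hinge on the same trick: the XOR (miter) of two formulas is satisfiable exactly when they disagree on some assignment, so an unsat answer from the solver is a proof of equivalence.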
I'm not fully satisfied with these techniques -- they are incomplete heuristics, and they might fail on some/many problem instances -- so I would still be interested to see other ways of approaching this problem. | |
## haganmc Group Title: how to solve this diff equation?? $dx/dt - x^3 = x$
1. lgbasallote Group Title
have you tried Bernoulli's?
2. Algebraic! Group Title
dx/dt = x+x^3 divide by x+x^3 multiply by dt
3. haganmc Group Title
the final answer should be $x=\pm \sqrt{Ce^{2t}/(1-Ce^{2t})}$
4. ironmanjimbo Group Title
Multiplying by dt is impossible! Rates of change are as is.
5. Algebraic! Group Title
lol
6. haganmc Group Title
you get $\frac{ 1 }{ x+x^3 }dx=dt$
7. haganmc Group Title
but how do you integrate this??
8. ironmanjimbo Group Title
haganmc is right on, what he did is separation of variables which is not the same as multiplying by a component of a rate of change. Well done. Now go with factoring and partial fraction decomposition and you are on your way to a solution
9. Algebraic! Group Title
@haganmc ... looks good to me.
12. haganmc Group Title
no now You have to integrate both sides.. i am trying to get x by itself
13. haganmc Group Title
but how do you do partial fraction decomposition?
14. Algebraic! Group Title
use partial frac. expansion on 1/(x+x^3)
15. Algebraic! Group Title
A / x + Bx /(x^2+1)
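(Filling in the step the thread is heading toward: with $A = 1$ and $B = -1$, the integral becomes $\int \left( \frac{1}{x} - \frac{x}{1+x^2} \right) dx = \ln|x| - \frac{1}{2}\ln(1+x^2) = t + c$, which exponentiates to $\frac{x^2}{1+x^2} = Ce^{2t}$, i.e. $x = \pm\sqrt{Ce^{2t}/(1-Ce^{2t})}$, matching the answer quoted earlier.)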
16. haganmc Group Title
thats where i messed up... i didnt put an x after the B
17. ironmanjimbo Group Title
Algebraic, YES, nice going!
18. Algebraic! Group Title
_Multiplying by dt is impossible! Rates of change are as is._
19. ironmanjimbo Group Title
That is correct. What you are actually doing is separation of variables, It is an important distinction!!!
20. Algebraic! Group Title
go learn the chain rule
21. ironmanjimbo Group Title
I don't mind if you do not want to agree with me, but I will simply suggest that you look into what I'm pointing out regarding how the rate of change is separated in such equations. Simply put, it LOOKS like dt was multiplied, but it was not. | |
# Creating a TeXForm/TraditionalForm/etcForm - like function
How could we go about creating a function that behaves like those?
After a while thinking, my best try is with CellPrint printing an Output cell with the famous CellLabel of Out[blah]//myForm. This solution is good enough for me for now, but I'm using it as an excuse to understand all these issues better... This mimics the behaviour, it's not the same... For example, you have to manually get the $Line, you get the GeneratedCell option set to True, and I don't know what else I'm missing. In fact, the kernel actually doesn't seem to do anything in the real form functions. So this solution would behave wrongly, for example, if I wrapped it in other things... FullForm[myForm[stuff]] should return myForm[stuff] in a FullForm-tagged cell.

For extending TeXForm (rather than creating a completely new one), there's some useful information here. – Szabolcs May 25 '12 at 9:59

## 3 Answers

This is far from fully integrated into the system but it is a first order approximation of the behavior of other forms. Perhaps it will inspire someone else with a better method.

formfunc = StringReplace[
  ToString @ #,
  {"[" -> "(", "]" -> ")", x : DigitCharacter .. :> "-->" <> x <> "<--"}
] &;

MakeBoxes[myForm[expr_], StandardForm] :=
  InterpretationBox[#, expr] & @ ToBoxes @ formfunc @ expr

ToString[expr_, myForm] ^:= formfunc[expr]

This provides output that can be re-evaluated to recover the original expression:

2^(1/2) // myForm

"Sqrt(-->2<--)"

This produces a normal String:

ToString[2^(1/2), myForm]

"Sqrt(-->2<--)"

This might be of help: $OutputForms is a list of the formatting functions that get stripped off when wrapped around the output.
$OutputForms = {InputForm, OutputForm, TextForm, CForm, Short, Shallow, MatrixForm, TableForm, TreeForm, FullForm, NumberForm, EngineeringForm, ScientificForm, QuantityForm, PaddedForm, AccountingForm, BaseForm, DisplayForm, StyleForm, FortranForm, MathMLForm, TeXForm, StandardForm, TraditionalForm}

I recently discovered this while working on a way to better show rational matrices.

They include the PrintForms which include the BoxForms. I did some digging at the time. Thanks! I think the output forms are those heads that end up showing applied to the cell label such as in Out[34]//InputForm. Try Unprotect@$OutputForms; AppendTo[$OutputForms, la]; la[5] – Rojo Feb 8 at 14:33

@Rojo Nice! I tried to figure out those cell tags once and failed. Handy! Thanks, unlikely. – Mr.Wizard Feb 8 at 16:34

unlikely, this solves something I wondered about for a while: (44189) -- you can now answer this question. Please do so! – Mr.Wizard Feb 8 at 16:39

Understanding now that you want to define a completely new format, but still not sure what you want that format to look like, perhaps this does what you want:

Format[myForm[x_]] := {x, x}

x = blah;
y = myForm[x]
Head[y]

{blah, blah}
myForm

Note that the result printed as {blah, blah} but the Head of the result is myForm.

First attempt

I'm not sure exactly how you want myForm to behave but the standard way to do this is to define a value of Format. For example,

Unprotect[Log];
Format[Log[x_], TraditionalForm] := ln[x]
Protect[Log];

Now, TraditionalForm[Log[x]] will print like $ln(z)$:
Alternatively, you could define an UpValue for Log:
Unprotect[Log];
Log /: MakeBoxes[Log[x_], TraditionalForm] :=
RowBox[{"ln", "(", MakeBoxes[x, TraditionalForm], ")"}];
Protect[Log];
The result should be the same. Of course, you could do something similar of your myForm.
I was actually thinking of another form. In fact, the Format documentation suggests "You can add your own forms for formatted output. " apart from CForm, FortranForm, and friends... – Rojo Mar 1 '12 at 0:28
@Rojo I'm not sure I understand your comment. Does my edit help clarify? – Mark McClure Mar 1 '12 at 0:32
Mark, MakeBoxes is assignable without the need to use UpValues. Just use MakeBoxes[something] = somethingElse (see the docs). I also found it less confusing than format, but never managed to figure out either of them completely. – Szabolcs Mar 1 '12 at 0:33
@Mark He means make a MyForm so that ToString[expr, MyForm] will work. E.g. make a PythonForm which outputs things formatted correctly for Python, or something similar. – Szabolcs Mar 1 '12 at 0:34
@Szabolcs I see. Yes my suggestion is not quite sufficient. – Mark McClure Mar 1 '12 at 0:42 | |
#### Volume 12, issue 4 (2012)
Mutation and $\mathrm{SL}(2,\mathbb{C})$–Reidemeister torsion for hyperbolic knots | |
Successive approximation of neutral functional stochastic differential equations with jumps. (English) Zbl 1196.60114
Existence and uniqueness of càdlàg mild solutions of a stochastic delay equation
$$d[x(t)+g(t,x(t-r))]=[Ax(t)+f(t,x_t)]\,dt+\sigma(t,x_t)\,dW(t)+\int_{\mathcal{U}}h(t,x_t,u)\,\tilde{N}(dt,du)$$
in a Hilbert space $H$ with an initial condition $x(t)=\varphi(t)$ for $t\in[-r,0]$ is proved. Here $x_t(s)=x(t+s)$, $s\in[-r,0]$, $A$ generates a holomorphic semigroup of contractions on $H$, $W$ is a cylindrical Wiener process, $\tilde{N}$ is a compensated Poisson martingale measure generated by a stationary Poisson point process in a $\sigma$-finite measure space $(\mathcal{U},\mathcal{E},\nu)$, the nonlinearities $g$, $f$, $\sigma$ and $h$ are defined on suitable spaces and, roughly speaking, $g$ is Lipschitz of at most linear growth and the modulus of continuity of $f$, $\sigma$ and $h$ is at most $\varepsilon\max\{1,|\rho(\varepsilon)|\}$, where $\rho$ is of multiples-of-iterated-logarithms growth near the origin.
##### MSC:
60H15 Stochastic partial differential equations
34G20 Nonlinear ODE in abstract spaces
60J65 Brownian motion
60J75 Jump processes
| |
## Monday, February 19, 2007
### Andrei Linde: eternal feast
This press release of Stanford University is certainly more serious than the "solution" to the twin paradox but it is still kind of amusing:
While Alan Guth has discovered that the Universe is the ultimate free lunch, Andrei Linde has improved this theory. He argues that the Universe is an eternal feast because all possible dishes are being served all the time. The menu offers 10^{1000} different tasty meals, previously known as the landscape.
Figure 1: The landscape, 2007 edition. For the sake of simplicity, (10^{999}-1) x 10 vacua were omitted.
It's somewhat entertaining that this evolution of the popular metaphors proposed by the two famous Gentlemen kind of mimics the evolution of the actual discoveries within inflationary cosmology. | |
# Simple probability proof using conditional probabilities
• February 12th 2010, 03:08 PM
sirellwood
Simple probability proof using conditional probabilities
I'm terrible at proofs...
Use the definition of conditional probability to prove that for any events A, B, C, D, E and F,
$$P(A\cap B\cap C\cap D\cap E\cap F) = P(A\cap B\cap C\cap D \mid E\cap F)\,P(E\cap F)$$
and
$$P(A\cap B\cap C\cap D\cap E\cap F) = P(A\cap B \mid C\cap D\cap E\cap F)\,P(C\cap D\cap E \mid F)\,P(F)$$
Also, can anyone form a similar identity?
• February 12th 2010, 04:09 PM
johanS
they all come from the definition of conditional probability:
$P(A | B) = \frac{P(A \cap B)}{P(B)} \Rightarrow P(A \cap B) = P(A|B)P(B)$
and the associative law of intersection of sets:
$(A \cap B) \cap C = A \cap ( B \cap C )$
The last law allows you to write the left or right side of the equality as $A \cap B \cap C$.
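One similar identity of the kind asked for, obtained by iterating the same definition: $P(A \cap B \cap C) = P(A \mid B \cap C)\,P(B \mid C)\,P(C)$; more generally, $P(A_1 \cap \cdots \cap A_n) = P(A_1 \mid A_2 \cap \cdots \cap A_n) \cdots P(A_{n-1} \mid A_n)\,P(A_n)$. | |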
## Christopher Duffy ; Sonja Linghui Shan - On the existence and non-existence of improper homomorphisms of oriented and $2$-edge-coloured graphs to reflexive targets
dmtcs:6773 - Discrete Mathematics & Theoretical Computer Science, March 29, 2021, vol. 23 no. 1 - https://doi.org/10.46298/dmtcs.6773
On the existence and non-existence of improper homomorphisms of oriented and $2$-edge-coloured graphs to reflexive targets
Authors: Christopher Duffy ; Sonja Linghui Shan
We consider non-trivial homomorphisms to reflexive oriented graphs in which some pair of adjacent vertices have the same image. Using a notion of convexity for oriented graphs, we study those oriented graphs that do not admit such homomorphisms. We fully classify those oriented graphs with tree-width $2$ that do not admit such homomorphisms and show that it is NP-complete to decide if a graph admits an orientation that does not admit such homomorphisms. We prove analogous results for $2$-edge-coloured graphs. We apply our results on oriented graphs to provide a new tool in the study of chromatic number of orientations of planar graphs -- a long-standing open problem.
Volume: vol. 23 no. 1
Section: Graph Theory
Published on: March 29, 2021
Accepted on: February 26, 2021
Submitted on: September 11, 2020
Keywords: Computer Science - Discrete Mathematics,Mathematics - Combinatorics,05C60 | |
# Cebu to Bohol
Cebu and Bohol are both located in the Central Visayas region, 90 kilometers apart. When talking about going from Cebu to Bohol, we mean going from Cebu City, on Cebu Island, to the port in the capital of Bohol Island, Tagbilaran. The two popular destinations are separated by the Cebu Strait, and unless you are a great long-distance swimmer, going by ferry is your best choice.
The two most popular ferry companies are 2GO Supercat and OceanJet. The trip with either of them takes around 2 hours and costs about the same (PhP 400–1000), depending on class and time. OceanJet has more departures, but their ferries are older than the 2GO Supercat ferries.
On average, the ferry trip to go from Cebu to Bohol is around 2 hours long.
The direct route between Cebu and Bohol is approximately 90 kilometers.
Depending on what ferry company you choose, ferry tickets normally cost between PhP 400 (~8 USD) and PhP 1000 (~20 USD).
## Daily schedule for Cebu to Bohol
Ferry Cebu – Bohol: ₱634–1,288, about 2h. Tourist Class, Open-Air, and Business Class all share the same departures: 05:10, 06:00, 07:00, 08:00, 08:20, 09:20, 10:40, 11:40, 13:00, 14:00, 15:20, 16:20, 17:40, 18:40.
## Cebu to Bohol Online Booking
Use our tool to check available tickets and compare prices on all transportation modes from Cebu to Bohol. Remember that the earlier you book, the cheaper the price will be.
Return date can be selected after searching, in case you are looking for a round-trip. | |
Mathematics » "Optimization Algorithms - Methods and Applications", book edited by Ozgur Baskan, ISBN 978-953-51-2593-8, Print ISBN 978-953-51-2592-1, Published: September 21, 2016 under CC BY 3.0 license. © The Author(s).
# Genetic Algorithm-Based Approaches for Solving Inexact Optimization Problems and their Applications for Municipal Solid Waste Management
By Weihua Jin, Zhiying Hu and Christine W. Chan
DOI: 10.5772/62475
## Overview
Figure 1. Optimistic scheme, $f^+$.
Figure 2. Zoom-in of the optimistic scheme, $f^+$.
Figure 3. Conservative scheme, $f^-$.
Figure 4. Zoom-in of the conservative scheme, $f^-$.
Figure 5. Case study of municipalities and waste management facilities.
Figure 6. System cost comparisons.
## Abstract
This chapter proposes a genetic algorithm (GA)-based approach as an all-purpose problem-solving method for optimization problems with uncertainty. This chapter explains the GA-based method and presents details on the computation procedures involved for solving the three types of inexact optimization problems, which include the ILP, inexact quadratic programming (IQP) and inexact nonlinear programming (INLP) optimization problems.
In the three-stage GA-based method for solution of ILP problems, also called GAILP, the upper and lower bounds of the inexact numbers of coefficients can be calculated directly without any uncertainty in the coefficients by substituting the initial suboptimal decision variables into the objective function. The GAILP has been extended to solve the IQP problems and the more complicated INLP problems. The implementation of these approaches was performed using the Genetic Algorithm Solver of MATLAB.
The proposed GA-based approaches were applied for management of a set of case scenarios related to municipal solid waste management. A comparison of the results generated by the proposed GA-based optimization approach with those produced by the traditional interactive binary analysis method reveals that the proposed approach has fewer limitations and involves less complex procedures in solving the inexact optimization problems.
Keywords: genetic algorithms, inexact optimization problem, linear programming, quadratic programming, nonlinear programming
## 1. Introduction
Linear and nonlinear programming are considered powerful optimization tools suitable for modeling and solving complex optimization problems in engineering. To handle uncertainty in real-world data, inexact parameters and constraints are combined with various kinds of optimization techniques. Often a detailed solution of an inexact programming optimization problem involves a large number of direct comparisons to interactively identify the uncertain relationships among the objective function and decision variables, whether the problems are medium-sized or large-scale. When these methods are applied to complicated and nonlinear problems, the number of direct comparisons can grow exponentially.
The genetic algorithm (GA) method is a suitable optimization approach especially for solving problems that involve nonsmooth and multimodal search spaces. The GA-based optimization technique is suitable for solving linear and nonlinear programming optimization problems with inexact information; and the fields of application include operations research, industrial engineering and management science.
This chapter is organized as follows. Section 2 presents the background and literature review of this research. Section 3 discusses the proposed GA-based methods for solving inexact linear programming (ILP), inexact quadratic programming (IQP) and inexact nonlinear programming (INLP) problems. Section 4 presents the case study of using GAINLP in the solution of an INLP problem of solid waste disposal planning. Section 5 is the conclusion.
## 2. Background and literature review
Economic optimization in the operation programming of solid waste management was first proposed in the 1960s [1]. Different models of waste management planning have been developed in the following decades. The primary considerations involved include cost control, environmental sustainability and waste reutilization. The techniques employed include linear programming [2–5], mixed integer linear programming [6], multiobjective programming [7–9], nonlinear programming [10, 11], as well as their hybrids, which involve probability, fuzzy sets and inexact analysis [12–16]. Due to the complexity of nonlinear programming problems for solid waste management, research works in the area are scant; some exceptions include [17, 18].
The approach of operational programming with inexact analysis often treats the uncertain parameters as intervals with known lower and upper bounds and unclear distributions. In real-life problems, while the available information is often inadequate and the distribution functions are often unknown, it is generally possible to represent the obtained data with inexact numbers that can be readily used in the inexact programming models. For decision makers, it is usually more feasible to represent uncertain information as inexact data than to specify distributions of fuzzy sets or probability functions. Hence, various kinds of inexact programming such as ILP, IQP, inexact integer programming (IIP), inexact dynamic programming (IDP) and inexact multiobjective programming (IMOP) have been developed and are well discussed [10, 11, 19]. It can be observed from these studies that applications of inexact models to practical solid waste planning systems are effective. These research reports demonstrate the substantial effort that has been devoted to traditional binary analysis for ILP and IQP. However, traditional binary analysis methods for ILP and IQP involve unavoidable simplifications and assumptions, which often increase the chance of error in the problem-solving process and adversely affect the quality of the results. Moreover, a more complex model often increases the chance of error in the solution. It has been observed that more complex models often produce less optimal results, and studies that focus on INLP problems are scarce. For example, in [20], the methodology mainly focused on combining endpoint values of the inexact parameters to form a set of deterministic problems, which will only work for particular monotone functions within a small-scale model. Therefore, a more flexible problem-solving method for general inexact optimization problems is desired.
Engineering problems that have traditionally been formulated as IQP or INLP problems often involve large and uneven search spaces, for which a global optimal solution is often not required. GA is a suitable optimization tool especially for solving complex and nonlinear problems, which involve nonsmooth and multimodal search spaces. Therefore, we suggest a GA-based method as a more effective problem-solving approach than the traditional inexact programming methods.
For implementation of GA, the Genetic Algorithm Solver of Global Optimization Toolbox (GASGOT), developed for MATLAB (trademark of MathWorks), has been adopted. GASGOT implements simulated evolution in the MATLAB environment using both binary and floating point representations and the ordered base representation. This enables flexible implementation of the genetic operators, selection functions, termination functions and evaluation functions. GASGOT was developed by the Department of Industrial Engineering of North Carolina State University as a toolbox of MATLAB. Hence, it runs in a MATLAB workspace and can be easily invoked by other programs.
In this study, the GA linear program solving engine of GASGOT has been adopted for ILP problems and GA nonlinear program solving engine of GASGOT has been adopted for IQP and INLP problems.
## 3. Methodology
### 3.1. GA-based method for solving ILP problems (GAILP)
A typical ILP problem can be expressed as follows:
$$\max f^{\pm}=\sum_{j=1}^{n} c_j^{\pm} x_j^{\pm} \qquad (1)$$
$$\text{s.t.}\quad \sum_{j=1}^{n} a_{ij}^{\pm} x_j^{\pm} \le b_i^{\pm},\quad i=1,2,\ldots,m$$
$$x_j^{\pm}\ge 0,\quad j=1,2,\ldots,n$$
where $a_{ij}^{\pm}$, $b_i^{\pm}$, $c_j^{\pm}$ are inexact parameters and $x_j^{\pm}$ are inexact variables. It is assumed that an optimal solution exists. For an inexact number $g^{\pm}\in[g^-,g^+]$, $g^+$ and $g^-$ are the upper and lower bounds, respectively.
GA has been adopted for solving the ILP problem. In this GA approach, the upper and lower bounds of the inexact coefficients $a_{ij}^{\pm}$, $b_i^{\pm}$, $c_j^{\pm}$ can be determined by substituting the initial suboptimal decision variables into the objective function. $f^+$ and $f^-$ can then be calculated directly without any uncertainty in the coefficients. This approach is called the GA-based method for solving ILP problems, or the GAILP method.
GAILP has been designed to include three stages, which are discussed as follows:
The objective of the first stage is to get an initial suboptimal solution $x_j^s$ for the following problem, which is transformed from the ILP problem defined in Eq. (1):
$$\max f=\sum_{j=1}^{n} c_j^{r} x_j \qquad (2)$$
$$\text{s.t.}\quad \sum_{j=1}^{n} a_{ij}^{r} x_j \le b_i^{r},\quad i=1,2,\ldots,m$$
$$x_j\ge 0,\quad j=1,2,\ldots,n$$
where $a_{ij}^r$, $b_i^r$, $c_j^r$ are random numbers drawn from the continuous uniform distribution on the intervals $[a_{ij}^-,a_{ij}^+]$, $[b_i^-,b_i^+]$ and $[c_j^-,c_j^+]$, respectively. Then the problem is solved by the GA linear program solving engine of GASGOT, which uses the objective function in Eq. (2) as the positive term of the fitness function and the constraints of Eq. (1) as negative punishment terms. Thus, a suboptimal solution $f^s$ can be identified and the corresponding decision variables $x_j^s$ are also obtained.
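To make the stage-1 idea concrete, the following minimal Python sketch (an illustration only; the original study used the MATLAB toolbox GASGOT) samples one random realization of the coefficients and scores candidate solutions with a penalty fitness that a GA would maximize. The interval data are those of the sample problem of Eq. (10) below; the penalty weight is an assumed tuning parameter.

import random

c_lo, c_hi = [26.0, -6.0], [30.0, -5.5]
A_lo = [[8.0, -14.0], [2.4, 3.4]]
A_hi = [[10.0, -12.0], [2.8, 4.0]]
b_lo, b_hi = [3.8, 6.5], [4.2, 6.5]

# One random realization of the coefficients (the "r" superscripts).
c = [random.uniform(lo, hi) for lo, hi in zip(c_lo, c_hi)]
A = [[random.uniform(lo, hi) for lo, hi in zip(row_lo, row_hi)]
     for row_lo, row_hi in zip(A_lo, A_hi)]
b = [random.uniform(lo, hi) for lo, hi in zip(b_lo, b_hi)]

PENALTY = 1e3  # assumed weight for constraint violations

def fitness(x):
    # Objective value minus a penalty for each violated constraint.
    obj = sum(cj * xj for cj, xj in zip(c, x))
    violation = sum(max(0.0, sum(aij * xj for aij, xj in zip(row, x)) - bi)
                    for row, bi in zip(A, b))
    return obj - PENALTY * violation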
In the second stage, the inexact coefficients $a_{ij}^{\pm}$, $b_i^{\pm}$, $c_j^{\pm}$ will be determined. Let the determined coefficients corresponding to $f^+$ be $a_{ij}^{\pm+}$, $b_i^{\pm+}$, $c_j^{\pm+}$ and those corresponding to $f^-$ be $a_{ij}^{\pm-}$, $b_i^{\pm-}$, $c_j^{\pm-}$. These two sets of coefficients can be obtained using the following method.
Substituting $x_j^s$ into Eq. (1) converts it into Eq. (3):

$$\max f^{\pm}=\sum_{j=1}^{n} c_j^{\pm} x_j^{s} \qquad (3)$$
$$\text{s.t.}\quad \sum_{j=1}^{n} a_{ij}^{\pm} x_j^{s} \le b_i^{\pm},\quad i=1,2,\ldots,m$$
To identify the coefficients $a_{ij}^{\pm}$, $b_i^{\pm}$, $c_j^{\pm}$ corresponding to $f^{\pm}$, a set of objective functions needs to be constructed and solved. Since the $x_j^s$ are suboptimal variables, which tend to make the objective function closer to $f^+$, consider $a_{ij}^{\pm}$, $b_i^{\pm}$, $c_j^{\pm}$ as variables; then the objective function of Eq. (4) can be constructed so as to find $c_j^{\pm+}$:
$$\max f^{\pm}=\sum_{j=1}^{n} c_j^{\pm} x_j^{s} \qquad (4)$$
$$\text{s.t.}\quad \sum_{j=1}^{n} a_{ij}^{\pm} x_j^{s} \le b_i^{\pm},\quad i=1,2,\ldots,m$$
The coefficients $c_j^{\pm+}$ so obtained are considered to correspond to $f^+$.
Meanwhile, the objective function presented in Eq. (5) can be constructed so as to find $c_j^{\pm-}$:

$$\min f^{\pm}=\sum_{j=1}^{n} c_j^{\pm} x_j^{s} \qquad (5)$$
$$\text{s.t.}\quad \sum_{j=1}^{n} a_{ij}^{\pm} x_j^{s} \le b_i^{\pm},\quad i=1,2,\ldots,m$$
There are two kinds of decision schemes for inexact programming problems, which are the conservative scheme and the optimistic scheme [21]. The former assumes less risk than the latter, so that for a maximization objective function, planning for the lower bound of an objective value represents the conservative scheme and planning for the upper bound of an objective value represents the optimistic scheme. In terms of constraints, the conservative scheme involves more rigorous or stringent constraints and the optimistic scheme adopts more tolerant ones.
Thus, the problem of searching for the $a_{ij}^{\pm+}$, $b_i^{\pm+}$ of the optimistic scheme, corresponding to the upper bound $f^+$ of the objective value, can be represented as follows:

$$\max \sum_{i=1}^{m}\left|\sum_{j=1}^{n} a_{ij}^{\pm} x_j^{s}-b_i^{\pm}\right| \qquad (6)$$
$$\text{s.t.}\quad \sum_{j=1}^{n} a_{ij}^{\pm} x_j^{s} \le b_i^{\pm},\quad i=1,2,\ldots,m$$
The problem

$$\min \sum_{i=1}^{m}\left|\sum_{j=1}^{n} a_{ij}^{\pm} x_j^{s}-b_i^{\pm}\right| \qquad (7)$$
$$\text{s.t.}\quad \sum_{j=1}^{n} a_{ij}^{\pm} x_j^{s} \le b_i^{\pm},\quad i=1,2,\ldots,m$$

will give the $a_{ij}^{\pm-}$, $b_i^{\pm-}$ of the conservative scheme, corresponding to the lower bound $f^-$ of the objective value.
Hence, the values of $a_{ij}^{\pm+}$, $b_i^{\pm+}$, $c_j^{\pm+}$ and $a_{ij}^{\pm-}$, $b_i^{\pm-}$, $c_j^{\pm-}$ can be calculated.
In the third stage, the problem represented in Eq. (1) is converted into the following two subproblems:
For $$f^{+}$$:

$$\mathrm{Max}\ f^{+}=\sum_{j=1}^{n} c_j^{\pm+} x_j^{\pm} \qquad (8)$$

$$\mathrm{s.t.}\ \sum_{j=1}^{n} a_{ij}^{\pm+} x_j^{\pm} \le b_i^{\pm+},\quad i=1,2,\ldots,m$$

$$x_j^{\pm}\ge 0,\quad j=1,2,\ldots,n$$

For $$f^{-}$$:

$$\mathrm{Max}\ f^{-}=\sum_{j=1}^{n} c_j^{\pm-} x_j^{\pm} \qquad (9)$$

$$\mathrm{s.t.}\ \sum_{j=1}^{n} a_{ij}^{\pm-} x_j^{\pm} \le b_i^{\pm-},\quad i=1,2,\ldots,m$$

$$x_j^{\pm}\ge 0,\quad j=1,2,\ldots,n$$
This step eliminates the inexact parameters in Eq. (1) and generates instead Eq. (8) and Eq. (9) as typical linear programming (LP) problems, which can be solved easily.
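Once the bounds are fixed, each subproblem is an ordinary LP and any standard routine applies. As a quick check, the $$f^{+}$$ subproblem that GAILP derives below for the sample problem in Eq. (10) can be solved with SciPy's `linprog`:

```python
from scipy.optimize import linprog

# GAILP's f+ subproblem for the sample problem (Eq. (10)):
# max 30*x1 - 5.5*x2  s.t.  8*x1 - 14*x2 <= 4.2,  2.4*x1 + 3.4*x2 <= 6.5,  x >= 0.
res = linprog(c=[-30.0, 5.5],                      # linprog minimizes, so negate
              A_ub=[[8.0, -14.0], [2.4, 3.4]],
              b_ub=[4.2, 6.5],
              bounds=[(0, None), (0, None)])
print(-res.fun, res.x)  # about 48.15 at x = (1.73, 0.69), matching the result below
```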
Generally speaking, the interactive binary algorithm (IBA) proposed in [19, 22] can solve inexact linear problems reliably and relatively quickly. However, the binary algorithm has some limitations; for example, the upper and lower bounds of an inexact coefficient cannot have different signs. In contrast, GAILP does not have this kind of limitation, because the GA method does not depend on any assumed distribution of the inexact parameters. Hence, the GAILP method effectively extends the scope of problems solvable by ILP methods, and it is more adaptable for real-world optimization problems with uncertainty.
A sample ILP problem in [22] is as follows,
$$\mathrm{Max}\ f^{\pm}=c_1 x_1^{\pm}+c_2 x_2^{\pm} \qquad (10)$$

$$\mathrm{s.t.}\ a_{11} x_1^{\pm}+a_{12} x_2^{\pm}\le b_1$$

$$a_{21} x_1^{\pm}+a_{22} x_2^{\pm}\le b_2$$

where $$c_1=[26, 30]$$, $$c_2=[-6, -5.5]$$, $$a_{11}=[8, 10]$$, $$a_{12}=[-14, -12]$$, $$b_1=[3.8, 4.2]$$, $$a_{21}=[2.4, 2.8]$$, $$a_{22}=[3.4, 4]$$, $$b_2=6.5$$.
By using the traditional IBA method [22], two submodels are obtained,
$$\mathrm{Max}\ f^{+}=30x_1^{+}-5.5x_2^{-}$$

$$\mathrm{s.t.}\ 8x_1^{+}-14x_2^{-}\le 4.2$$

$$2.4x_1^{+}+4x_2^{-}\le 6.5$$

$$x_1^{+}\ge 0,\quad x_2^{-}\ge 0$$
and
$$\mathrm{Max}\ f^{-}=26x_1^{-}-6.0x_2^{+}$$

$$\mathrm{s.t.}\ 10x_1^{-}-12x_2^{+}\le 3.8$$

$$2.8x_1^{-}+3.4x_2^{+}\le 6.5$$

$$x_1^{-}\ge 0,\quad x_2^{+}\ge 0$$
The results were $$f^{+}=45.78$$, $$x_1^{+}=1.64$$, $$x_2^{-}=0.64$$; $$f^{-}=30.77$$, $$x_1^{-}=1.37$$, $$x_2^{+}=0.79$$.
By using the GAILP, the results can be calculated with the following objective functions:
$$\mathrm{Max}\ f^{+}=30x_1^{+}-5.5x_2^{+}$$

$$\mathrm{s.t.}\ 8x_1^{+}-14x_2^{+}\le 4.2$$

$$2.4x_1^{+}+3.4x_2^{+}\le 6.5$$

$$x_1^{+}\ge 0,\quad x_2^{+}\ge 0$$
and
$$\mathrm{Max}\ f^{-}=26x_1^{-}-6.0x_2^{-}$$

$$\mathrm{s.t.}\ 10x_1^{-}-12x_2^{-}\le 3.8$$

$$2.8x_1^{-}+4x_2^{-}\le 6.5$$

$$x_1^{-}\ge 0,\quad x_2^{-}\ge 0$$
The results were $$f^{+}=48.15$$, $$x_1^{+}=1.73$$, $$x_2^{+}=0.69$$; $$f^{-}=29.15$$, $$x_1^{-}=1.29$$, $$x_2^{-}=0.72$$.
The GAILP method generates a solution different from that obtained using the IBA proposed in [22]. The comparison is as follows: for the $$f^{+}$$ optimistic scheme, the GAILP method generates a result that is guaranteed to be as close as possible to the upper bound of the constraints; hence, the maximized value of the objective function is greater than that produced by the IBA. For the $$f^{-}$$ conservative scheme, the GAILP method has a higher probability of satisfying the constraints as close as possible to their lowest limit; hence, the maximized objective value is smaller.
### Figure 1.
Optimistic scheme, f+ .
### Figure 2.
Zoom-in of the optimistic scheme, f+ .
In Figures 1 to 4, the bold lines denote the boundaries of the constraints, which limit the possible values of $$x_1, x_2$$ to the lower-left area. The constraint $$a_{11}x_1^{\pm}+a_{12}x_2^{\pm}\le b_1$$ is shown in these figures as the grey bold solid lines, which are the same for both the IBA and GAILP methods. The dark bold dotted lines represent the constraint $$a_{21}x_1^{\pm}+a_{22}x_2^{\pm}\le b_2$$ given by the IBA, and the dark bold solid lines represent the same constraint given by the proposed GAILP method.
The boundaries, together with the $$x_1, x_2$$ axes, enclose the entire area defined by the constraints. The objective functions $$f^{+}=30x_1^{+}-5.5x_2^{+}$$ and $$f^{-}=26x_1^{-}-6x_2^{-}$$ are families of parallel lines, shown in Figures 1 to 4 as the thin solid and dotted lines. With different values of $$x_1$$ and $$x_2$$, these objective function lines produce different intercepts on the two axes. The constraints restrict the objective function lines to the feasible area, so that at some vertex the objective function reaches its extreme (i.e., maximized or minimized) value.
In Figures 1 to 4, the thin dotted lines are the objective function lines given by the IBA, and the thin solid lines represent the objective functions given by the proposed GAILP method. The legends for Figures 1 to 4 are listed in Table 1.
### Figure 3.
Conservative scheme, f .
### Figure 4.
Zoom-in of the conservative scheme, f .
- The constraint $$a_{21}x_1^{\pm}+a_{22}x_2^{\pm}\le b_2$$ given by IBA
- The constraint $$a_{21}x_1^{\pm}+a_{22}x_2^{\pm}\le b_2$$ given by GAILP
- The constraint $$a_{11}x_1^{\pm}+a_{12}x_2^{\pm}\le b_1$$
- Objective function line given by IBA
- Objective function line given by GAILP
### Table 1.
Legends for Figures 1 to 4.
### 3.2. GA-based method for solving IQP problems (GAIQP)
The GAILP method can be extended to solve the IQP problems or other more complicated INLP problems.
A typical IQP problem is formulated as follows:
$$\mathrm{Max}\ f^{\pm}=\sum_{j=1}^{n}\left[c_j^{\pm}x_j^{\pm}+d_j^{\pm}(x_j^{\pm})^{2}\right] \qquad (11)$$

$$\mathrm{s.t.}\ \sum_{j=1}^{n} a_{ij}^{\pm}x_j^{\pm}\le b_i^{\pm},\quad i=1,2,\ldots,m$$

$$x_j^{\pm}\ge 0,\quad j=1,2,\ldots,n$$
where $$a_{ij}^{\pm}, b_i^{\pm}, c_j^{\pm}, d_j^{\pm}$$ are inexact parameters and $$x_j^{\pm}$$ is an inexact variable.
In stage one, an initial suboptimal $$x_j^{s}$$ is obtained from a problem transformed from the IQP problem:

$$\mathrm{Max}\ f=\sum_{j=1}^{n}\left[c_j^{r}x_j+d_j^{r}(x_j)^{2}\right] \qquad (12)$$

$$\mathrm{s.t.}\ \sum_{j=1}^{n} a_{ij}^{r}x_j\le b_i^{r},\quad i=1,2,\ldots,m$$

$$x_j\ge 0,\quad j=1,2,\ldots,n$$
where $$a_{ij}^{r}, b_i^{r}, c_j^{r}, d_j^{r}$$ are random numbers drawn from continuous uniform distributions on the intervals $$[a_{ij}^{-},a_{ij}^{+}]$$, $$[b_i^{-},b_i^{+}]$$, $$[c_j^{-},c_j^{+}]$$ and $$[d_j^{-},d_j^{+}]$$. Then, a suboptimal solution $$f^{s}$$ can be identified, and the corresponding decision variables $$x_j^{s}$$ are also obtained.
In the second stage, $$x_j^{s}$$ is substituted into Eq. (11) in order to determine the coefficients $$a_{ij}^{\pm}, b_i^{\pm}, c_j^{\pm}, d_j^{\pm}$$ corresponding to $$f^{\pm}$$:
$$\mathrm{Max}\ f^{\pm}=\sum_{j=1}^{n}\left[c_j^{\pm}x_j^{s}+d_j^{\pm}(x_j^{s})^{2}\right] \qquad (13)$$

$$\mathrm{s.t.}\ \sum_{j=1}^{n} a_{ij}^{\pm}x_j^{s}\le b_i^{\pm},\quad i=1,2,\ldots,m$$

and

$$\mathrm{Min}\ f^{\pm}=\sum_{j=1}^{n}\left[c_j^{\pm}x_j^{s}+d_j^{\pm}(x_j^{s})^{2}\right] \qquad (14)$$

$$\mathrm{s.t.}\ \sum_{j=1}^{n} a_{ij}^{\pm}x_j^{s}\le b_i^{\pm},\quad i=1,2,\ldots,m$$
To determine the $$a_{ij}^{\pm+}, b_i^{\pm+}$$ of the optimistic scheme, corresponding to the upper limit $$f^{+}$$ of the objective value:

$$\mathrm{Max}\ \sum_{j=1}^{n}\left|a_{ij}^{\pm}x_j^{s}-b_i^{\pm}\right| \qquad (15)$$

$$\mathrm{s.t.}\ \sum_{j=1}^{n} a_{ij}^{\pm}x_j^{s}\le b_i^{\pm},\quad i=1,2,\ldots,m$$

To obtain $$a_{ij}^{\pm-}, b_i^{\pm-}$$:

$$\mathrm{Min}\ \sum_{j=1}^{n}\left|a_{ij}^{\pm}x_j^{s}-b_i^{\pm}\right| \qquad (16)$$

$$\mathrm{s.t.}\ \sum_{j=1}^{n} a_{ij}^{\pm}x_j^{s}\le b_i^{\pm},\quad i=1,2,\ldots,m$$
In the third stage, the problem expressed in Eq. (11) is converted into the following two subproblems:

For $$f^{+}$$:

$$\mathrm{Max}\ f^{+}=\sum_{j=1}^{n}\left[c_j^{\pm+}x_j^{\pm}+d_j^{\pm+}(x_j^{\pm})^{2}\right] \qquad (17)$$

$$\mathrm{s.t.}\ \sum_{j=1}^{n} a_{ij}^{\pm+}x_j^{\pm}\le b_i^{\pm+},\quad i=1,2,\ldots,m$$

$$x_j^{\pm}\ge 0,\quad j=1,2,\ldots,n$$

For $$f^{-}$$:

$$\mathrm{Max}\ f^{-}=\sum_{j=1}^{n}\left[c_j^{\pm-}x_j^{\pm}+d_j^{\pm-}(x_j^{\pm})^{2}\right] \qquad (18)$$

$$\mathrm{s.t.}\ \sum_{j=1}^{n} a_{ij}^{\pm-}x_j^{\pm}\le b_i^{\pm-},\quad i=1,2,\ldots,m$$

$$x_j^{\pm}\ge 0,\quad j=1,2,\ldots,n$$
The inexact information has been incorporated in these two subproblems. These two subproblems, as typical nonlinear programming problems, can be solved by the GA nonlinear program solving engine of GASGOT.
### 3.3. GA-based method for solving inexact nonlinear problems (GAINLP)
Quadratic programming problems are specific cases of nonlinear programming problems. Due to the lack of generally applicable algorithms for handling the nonlinear structure and the inexact information embedded in it, most nonlinear programming problems are difficult to solve. The IBA method proposed in [11, 22] is not intended for generic nonlinear problems. In contrast, the GA-based method can be used as a general solver for this type of problem, because for a GA there is little difference between treating the term $$x_i^{2}$$ in a quadratic programming problem and terms such as $$x_i x_j$$ or $$x_i^{0.28}$$ in a generic nonlinear programming problem. GAIQP can thus be modified to solve generic inexact nonlinear programming problems.
In the following, a computation experiment will be conducted to illustrate how the GAINLP method can handle complicated inexact nonlinear problems. A sample INLP problem is as follows:
$$\mathrm{Max}\ f^{\pm}=c_1^{\pm}x_1^{\pm}-c_2^{\pm}(x_1^{\pm})^{0.3}-d_1^{\pm}x_2^{\pm}+d_2^{\pm}(x_1^{\pm}x_2^{\pm}) \qquad (19)$$

$$\mathrm{s.t.}\ a_{11}^{\pm}(x_1^{\pm})^{0.5}+a_{12}^{\pm}x_2^{\pm}\le b_1^{\pm},$$

$$x_1^{\pm}+a_2^{\pm}x_2^{\pm}\le b_2^{\pm},$$

$$x_j^{\pm}\ge 0,\quad j=1,2.$$

where $$a_{ij}^{\pm}, b_i^{\pm}, c_j^{\pm}, d_j^{\pm}$$ are inexact parameters and $$x_j^{\pm}$$ is an inexact variable. In this experiment,

$$[c_1^{-},c_1^{+}]=[16,18];\ [c_2^{-},c_2^{+}]=[12,14];\ [d_1^{-},d_1^{+}]=[4,5];\ [d_2^{-},d_2^{+}]=[14,15];$$

$$[a_{11}^{-},a_{11}^{+}]=[4.5,5.5];\ [a_{12}^{-},a_{12}^{+}]=[1.8,2.2];\ [b_1^{-},b_1^{+}]=[1.8,2.1];\ [a_2^{-},a_2^{+}]=[1.8,2.2];\ [b_2^{-},b_2^{+}]=[0.9,1.1].$$
GAINLP has been designed to include the three stages of problem solving.
In stage one, to obtain the initial suboptimal $$x_j^{s}$$, random numbers $$a_{ij}^{r}, b_i^{r}, c_j^{r}, d_j^{r}$$ were drawn from continuous uniform distributions on the intervals $$[a_{ij}^{-},a_{ij}^{+}]$$, $$[b_i^{-},b_i^{+}]$$, $$[c_j^{-},c_j^{+}]$$ and $$[d_j^{-},d_j^{+}]$$ to transform this INLP problem into an NLP problem:
$$\mathrm{Max}\ f^{s}=c_1^{r}x_1^{s}-c_2^{r}(x_1^{s})^{0.3}-d_1^{r}x_2^{s}+d_2^{r}(x_1^{s}x_2^{s}) \qquad (20)$$

$$\mathrm{s.t.}\ a_{11}^{r}(x_1^{s})^{0.5}+a_{12}^{r}x_2^{s}\le b_1^{r},$$

$$x_1^{s}+a_2^{r}x_2^{s}\le b_2^{r},$$

$$x_j^{s}\ge 0,\quad j=1,2.$$
Then, the heuristic search algorithm of the GA nonlinear program solving engine of GASGOT can be used to identify a suboptimal solution $$f^{s}$$ and the corresponding decision variables $$x_j^{s}$$. The objective function in Eq. (20) was used as the positive term of the fitness function, and the constraints of Eq. (19) were adopted as the negative punishment terms. The results are $$x_1^{s}=0.346$$, $$x_2^{s}=0.171$$, $$f^{s}=2.296$$.
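For reference, the penalized fitness that such an engine evaluates for Eq. (20) can be sketched as follows; the uniform sampling, penalty weight and random seed are illustrative assumptions rather than GASGOT's actual settings.

```python
import numpy as np

rng = np.random.default_rng(1)

# One random realization of the inexact coefficients of Eq. (19)/(20).
c1, c2 = rng.uniform(16, 18), rng.uniform(12, 14)
d1, d2 = rng.uniform(4, 5), rng.uniform(14, 15)
a11, a12, b1 = rng.uniform(4.5, 5.5), rng.uniform(1.8, 2.2), rng.uniform(1.8, 2.1)
a2, b2 = rng.uniform(1.8, 2.2), rng.uniform(0.9, 1.1)

def fitness(x, penalty=1e3):
    """Eq. (20) objective as the positive term, constraint violations as punishment."""
    x1, x2 = x
    if x1 < 0 or x2 < 0:
        return -np.inf
    f = c1 * x1 - c2 * x1 ** 0.3 - d1 * x2 + d2 * x1 * x2
    g1 = max(a11 * x1 ** 0.5 + a12 * x2 - b1, 0.0)
    g2 = max(x1 + a2 * x2 - b2, 0.0)
    return f - penalty * (g1 + g2)
```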
In stage two, by substituting $$x_1^{s}, x_2^{s}$$ into Eq. (19), the inexact coefficients $$a_{ij}^{\pm}, b_i^{\pm}, c_j^{\pm}, d_j^{\pm}$$ are determined. The $$x_1^{s}, x_2^{s}$$ obtained in stage one are used to construct two optimization problems that determine the coefficients $$a_{ij}^{\pm+}, b_i^{\pm+}, c_j^{\pm+}, d_j^{\pm+}$$ and $$a_{ij}^{\pm-}, b_i^{\pm-}, c_j^{\pm-}, d_j^{\pm-}$$, respectively. The first group of coefficients is taken to correspond to the optimistic scheme $$f^{+}$$, while the second group corresponds to the conservative scheme $$f^{-}$$. Treating $$c_j^{\pm}, d_j^{\pm}$$ as variables, the following two objective functions can be constructed:
$$\mathrm{Max}\ f^{+}=c_1^{\pm+}x_1^{s}-c_2^{\pm+}(x_1^{s})^{0.3}-d_1^{\pm+}x_2^{s}+d_2^{\pm+}(x_1^{s}x_2^{s}) \qquad (21)$$

and

$$\mathrm{Min}\ f^{-}=c_1^{\pm-}x_1^{s}-c_2^{\pm-}(x_1^{s})^{0.3}-d_1^{\pm-}x_2^{s}+d_2^{\pm-}(x_1^{s}x_2^{s}) \qquad (22)$$

$$\mathrm{s.t.}\ c_1^{\pm+},c_1^{\pm-}\in[16,18]$$

$$c_2^{\pm+},c_2^{\pm-}\in[12,14]$$

$$d_1^{\pm+},d_1^{\pm-}\in[4,5]$$

$$d_2^{\pm+},d_2^{\pm-}\in[14,15]$$
To determine the $$a_{ij}^{\pm+}, b_i^{\pm+}$$ of the optimistic scheme, corresponding to the upper limit $$f^{+}$$ of the objective value, the objective functions can be constructed as follows:

$$\mathrm{Max}\ \left|a_{11}^{\pm}(x_1^{s})^{0.5}+a_{12}^{\pm}x_2^{s}-b_1^{\pm}\right| \qquad (23)$$

$$\mathrm{s.t.}\ a_{11}^{\pm}(x_1^{s})^{0.5}+a_{12}^{\pm}x_2^{s}\le b_1^{\pm}$$

and

$$\mathrm{Max}\ \left|x_1^{s}+a_2^{\pm}x_2^{s}-b_2^{\pm}\right|$$

$$\mathrm{s.t.}\ x_1^{s}+a_2^{\pm}x_2^{s}\le b_2^{\pm}$$
The objective functions to obtain the $$a_{ij}^{\pm-}, b_i^{\pm-}$$ of the conservative scheme are

$$\mathrm{Min}\ \left|a_{11}^{\pm}(x_1^{s})^{0.5}+a_{12}^{\pm}x_2^{s}-b_1^{\pm}\right| \qquad (24)$$

$$\mathrm{s.t.}\ a_{11}^{\pm}(x_1^{s})^{0.5}+a_{12}^{\pm}x_2^{s}\le b_1^{\pm}$$

and

$$\mathrm{Min}\ \left|x_1^{s}+a_2^{\pm}x_2^{s}-b_2^{\pm}\right|$$

$$\mathrm{s.t.}\ x_1^{s}+a_2^{\pm}x_2^{s}\le b_2^{\pm}$$
By solving Eqs. (21)–(24), the values of all the inexact coefficients are obtained: $$a_{11}^{\pm+}=4.5$$, $$a_{12}^{\pm+}=1.8$$, $$b_1^{\pm+}=2.1$$, $$a_2^{\pm+}=1.8$$, $$b_2^{\pm+}=1.1$$; $$a_{11}^{\pm-}=5.5$$, $$a_{12}^{\pm-}=2.2$$, $$b_1^{\pm-}=1.8$$, $$a_2^{\pm-}=2.2$$, $$b_2^{\pm-}=0.9$$; $$c_1^{\pm+}=18$$, $$c_2^{\pm+}=12$$, $$d_1^{\pm+}=4$$, $$d_2^{\pm+}=15$$; $$c_1^{\pm-}=16$$, $$c_2^{\pm-}=14$$, $$d_1^{\pm-}=5$$, $$d_2^{\pm-}=14$$.
In stage three, the problem presented in Eq. (19) is converted into the following two subproblems:
$$\mathrm{Max}\ f^{+}=18x_1^{\pm}-12(x_1^{\pm})^{0.3}-4x_2^{\pm}+15(x_1^{\pm}x_2^{\pm})$$

$$\mathrm{s.t.}\ 4.5(x_1^{\pm})^{0.5}+1.8x_2^{\pm}\le 2.1,$$

$$x_1^{\pm}+1.8x_2^{\pm}\le 1.1,$$

$$x_1^{\pm}\ge 0,\quad x_2^{\pm}\ge 0.$$

and

$$\mathrm{Max}\ f^{-}=16x_1^{\pm}-14(x_1^{\pm})^{0.3}-5x_2^{\pm}+14(x_1^{\pm}x_2^{\pm})$$

$$\mathrm{s.t.}\ 5.5(x_1^{\pm})^{0.5}+2.2x_2^{\pm}\le 1.8,$$

$$x_1^{\pm}+2.2x_2^{\pm}\le 0.9,$$

$$x_1^{\pm}\ge 0,\quad x_2^{\pm}\ge 0.$$
The inexact parameters in Eq. (19) have been eliminated, and two typical nonlinear optimization problems have been generated instead. The solution of the example (Eq. (19)) is $$f^{\pm}=[1.72, 5.5575]$$, $$x_1^{\pm}=[0.24727, 0.38496]$$, and $$x_2^{\pm}=[0.1989, 0.2053]$$.
As demonstrated above, the GAINLP method can generate the optimal result without any simplification or assumption, and it can be adapted for optimization applications with uncertainty. The next section demonstrates the application of this method to a real-world regional waste management problem.
## 4. Case study
Solid waste management is the process of removing waste materials from the surrounding environment, which involves the collection, separation, storage, processing, treatment, transport, recovery and disposal of solid waste. Landfill and incineration are two of the most commonly used solid waste disposal methods. The objective of a solid waste management process is to dispose of discarded materials in a timely manner so as to prevent the spread of disease, minimize the likelihood of contamination and reduce their effects on human health and the environment.
The economy of scale (ES) is a microeconomics term, and it refers to the advantages that enterprises obtain due to their size or scale of operation, with the cost per unit of output generally decreasing as the scale increases and fixed costs are distributed over more units of output. In a solid waste management system, ES exists within the transportation process [23] and it can be expressed as a sizing model with a power law [11].
$$C_t=C_{re}\left(X_t/X_{re}\right)^{1+m} \qquad (25)$$

where $$X_t$$ (t/d) is a waste flow decision variable; $$X_{re}$$ (t/d) is a reference waste flow; $$C_t$$ ($/t) is the transportation unit cost due to the ES of waste flow $$X_t$$; $$C_{re}$$ ($/t) is a coefficient reflecting the significance of the ES to the unit cost of waste transported for the reference waste flow $$X_{re}$$, with $$C_{re}<0$$; and $$m$$ is an ES exponent reflecting how the unit cost declines with the waste flow, with $$-1<m<0$$.
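As a one-line illustration of Eq. (25), assuming only the definitions above, the unit-cost adjustment can be computed as follows; the numerical arguments in the example are hypothetical mid-interval values.

```python
def es_unit_cost(x_t, x_re, c_re, m):
    """Eq. (25): C_t = C_re * (X_t / X_re)**(1 + m), with C_re < 0 and -1 < m < 0,
    so a larger flow X_t yields a larger (more negative) unit-cost reduction."""
    return c_re * (x_t / x_re) ** (1 + m)

print(es_unit_cost(x_t=250.0, x_re=235.0, c_re=-3.4, m=-0.3))  # ~ -3.55 $/t
```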
#### Figure 5.
Case study of municipalities and waste management facilities.
The study region includes three municipalities, a waste-to-energy (WTE) facility and a landfill, as shown in Figure 5. Three time periods are considered, each with an interval of five years. Over the 15-year planning horizon, an existing landfill and an existing WTE facility are available to serve the municipal solid waste (MSW) disposal needs in the region. The landfill has an existing capacity of $$[2.05, 2.30]\times 10^{6}$$ t, and the WTE facility has a capacity of [500, 600] t/d. The WTE facility generates residues of approximately 30% (on a mass basis) of the incoming waste streams, and its revenue from energy sale is [15, 25] $/t combusted. Table 2 shows the waste generation rates of the three municipalities and the operating costs of the two facilities in the three periods.

| | k = 1 | k = 2 | k = 3 |
|---|---|---|---|
| **Waste generation $$WG_{jk}^{\pm}$$ (t/d)** | | | |
| Municipality 1 (j = 1) | [260, 340] | [310, 390] | [360, 440] |
| Municipality 2 (j = 2) | [160, 240] | [185, 265] | [210, 290] |
| Municipality 3 (j = 3) | [260, 340] | [260, 340] | [310, 390] |
| **Operation cost $$OP_{ik}^{\pm}$$ ($/t)** | | | |
| Landfill (i = 1) | [30, 45] | [40, 60] | [50, 80] |
| WTE facility (i = 2) | [55, 75] | [60, 85] | [65, 95] |
#### Table 2.
Data for the waste generation and treatment/disposal.
Taking into consideration the effects of the ES, the INLP model can be formulated as follows:
$$\begin{aligned}\mathrm{Min}\ f^{\pm}=&\sum_{i=1}^{2}\sum_{j=1}^{3}\sum_{k=1}^{3}L_{k}x_{ijk}^{\pm}\left[Are_{ijk}^{\pm}+Cre_{ijk}^{\pm}\left(x_{ijk}^{\pm}/Xre_{ijk}^{\pm}\right)^{1+m}+OP_{ik}^{\pm}\right]\\&+\sum_{k=1}^{3}L_{k}\left(FE\sum_{j=1}^{3}x_{2jk}^{\pm}\right)\left[Are_{WTE\text{-}LF,k}^{\pm}+Cre_{WTE\text{-}LF,k}^{\pm}\left(FE\sum_{j=1}^{3}x_{2jk}^{\pm}/Xre_{WTE\text{-}LF,k}^{\pm}\right)^{1+m}+OP_{1k}^{\pm}\right]\\&-\sum_{k=1}^{3}\sum_{j=1}^{3}x_{2jk}^{\pm}RE_{k}^{\pm}\end{aligned} \qquad (26)$$

$$\mathrm{s.t.}\ \sum_{j=1}^{3}\sum_{k=1}^{3}L_{k}\left[x_{1jk}^{\pm}+x_{2jk}^{\pm}FE\right]\le TL^{\pm}$$

$$\sum_{j=1}^{3}x_{2jk}^{\pm}\le TE^{\pm},\quad \forall k$$

$$\sum_{i=1}^{2}x_{ijk}^{\pm}=WG_{jk}^{\pm},\quad \forall j,k$$

$$x_{ijk}^{\pm}\ge 0,\quad \forall i,j,k$$

where $$i$$ is the type of waste management facility ($$i=1,2$$, with $$i=1$$ for the landfill and $$i=2$$ for the WTE facility); $$j$$ is the city, $$j=1,2,3$$; $$k$$ is the time period, $$k=1,2,3$$; $$L_k$$ is the length of period $$k$$, $$L_1=L_2=L_3=365\times 5$$ days; $$FE$$ is the residue flow ratio from the WTE facility to the landfill (approximately 30% on a mass basis); $$OP_{ik}^{\pm}$$ is the operating cost of facility $$i$$ during period $$k$$ ($/t); $$RE_k^{\pm}$$ is the revenue from the WTE facility during period $$k$$ ($/t), $$RE_1^{\pm}=RE_2^{\pm}=RE_3^{\pm}=[15,25]$$; $$TE^{\pm}$$ is the capacity of the WTE facility (t/d); $$TL^{\pm}$$ is the capacity of the landfill (t); $$WG_{jk}^{\pm}$$ is the waste disposal demand in city $$j$$ during period $$k$$ (t/d); and $$x_{ijk}^{\pm}$$ is the waste flow from city $$j$$ to facility $$i$$ during period $$k$$ (t/d).
In this objective function (Eq. (26)), the first term on the right side reflects the transportation costs in each management period (k=1 to 3) from each city to each waste treatment unit, and the related operation costs. The second term reflects the cost incurred in transporting the products from the WTE facility to the landfill, and the operation cost at the landfill. The third term is the revenue generated from the WTE facility.
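The following sketch evaluates this objective for one deterministic realization of the interval parameters, following the term-by-term structure just described; the array shapes and the residue fraction FE = 0.3 (the 30% mass basis quoted above) are assumptions for illustration.

```python
import numpy as np

def total_cost(x, L, Are, Cre, Xre, OP, Are_wl, Cre_wl, Xre_wl, RE, m, FE=0.3):
    """One deterministic evaluation of the reconstructed Eq. (26).
    x[i, j, k]: flow (t/d) from city j to facility i (0: landfill, 1: WTE)."""
    cost = 0.0
    # Term 1: city-to-facility transport (with ES) plus facility operation costs.
    for i in range(2):
        for j in range(3):
            for k in range(3):
                unit = (Are[i, j, k]
                        + Cre[i, j, k] * (x[i, j, k] / Xre[i, j, k]) ** (1 + m)
                        + OP[i, k])
                cost += L[k] * x[i, j, k] * unit
    for k in range(3):
        resid = FE * x[1, :, k].sum()  # WTE residue hauled to the landfill
        # Term 2: WTE-to-landfill transport (with ES) plus landfill operation.
        unit = Are_wl[k] + Cre_wl[k] * (resid / Xre_wl[k]) ** (1 + m) + OP[0, k]
        cost += L[k] * resid * unit
        # Term 3: revenue from energy sales at the WTE facility.
        cost -= RE[k] * x[1, :, k].sum()
    return cost
```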
The MSW generation rates generally vary between different municipalities and for different periods, and the costs for the waste transportation and treatment also vary temporally and spatially. Furthermore, interactions exist between the waste flows and their transportation costs due to the effects of the ES (Eq. (25)). Table 3 and Table 4 show the parameters related to the ES, which include the fixed unit transportation cost Are , the reference waste flow Xre and the coefficient Cre corresponding to Xre .
| | k = 1 | k = 2 | k = 3 |
|---|---|---|---|
| **City-to-landfill** | | | |
| $$Are_{11k}^{\pm}$$ ($/t) | [14.58, 19.40] | [16.04, 21.34] | [17.64, 23.48] |
| $$Xre_{11k}^{\pm}$$ (t/d) | [220, 250] | [240, 280] | [260, 320] |
| $$Are_{12k}^{\pm}$$ ($/t) | [12.65, 16.87] | [13.92, 18.56] | [15.31, 20.41] |
| $$Xre_{12k}^{\pm}$$ (t/d) | [160, 200] | [180, 220] | [220, 260] |
| $$Are_{13k}^{\pm}$$ ($/t) | [15.30, 20.49] | [16.83, 22.53] | [18.52, 24.79] |
| $$Xre_{13k}^{\pm}$$ (t/d) | [160, 200] | [180, 240] | [200, 240] |
| **City-to-WTE** | | | |
| $$Are_{21k}^{\pm}$$ ($/t) | [11.57, 15.42] | [12.73, 16.97] | [14.00, 18.66] |
| $$Xre_{21k}^{\pm}$$ (t/d) | [200, 240] | [240, 280] | [280, 320] |
| $$Are_{22k}^{\pm}$$ ($/t) | [12.17, 16.15] | [13.39, 17.76] | [14.73, 19.54] |
| $$Xre_{22k}^{\pm}$$ (t/d) | [120, 170] | [150, 190] | [180, 220] |
| $$Are_{23k}^{\pm}$$ ($/t) | [10.60, 14.10] | [11.67, 15.51] | [12.83, 17.06] |
| $$Xre_{23k}^{\pm}$$ (t/d) | [220, 270] | [220, 270] | [240, 270] |
| **WTE-to-landfill** | | | |
| $$Are_{WTE\text{-}LF,k}^{\pm}$$ ($/t) | [5.71, 7.62] | [6.28, 8.38] | [6.91, 9.33] |
| $$Xre_{WTE\text{-}LF,k}^{\pm}$$ (t/d) | [170, 200] | [200, 260] | [240, 270] |

#### Table 3.
Fixed unit transportation costs and reference waste flows.

| | k = 1 | k = 2 | k = 3 |
|---|---|---|---|
| $$Cre_{11k}^{-}$$ | −2.7 | −3.4 | −3.8 |
| $$Cre_{11k}^{+}$$ | −4.1 | −5.0 | −6.3 |
| $$Cre_{12k}^{-}$$ | −1.7 | −2.1 | −2.8 |
| $$Cre_{12k}^{+}$$ | −2.8 | −3.4 | −4.5 |
| $$Cre_{13k}^{-}$$ | −2.1 | −2.5 | −3.1 |
| $$Cre_{13k}^{+}$$ | −3.4 | −4.5 | −5.0 |
| $$Cre_{21k}^{-}$$ | −1.9 | −2.6 | −3.3 |
| $$Cre_{21k}^{+}$$ | −3.1 | −4.0 | −5.0 |
| $$Cre_{22k}^{-}$$ | −1.2 | −1.7 | −2.2 |
| $$Cre_{22k}^{+}$$ | −2.3 | −2.8 | −3.6 |
| $$Cre_{23k}^{-}$$ | −2.0 | −2.2 | −2.6 |
| $$Cre_{23k}^{+}$$ | −3.2 | −3.5 | −3.9 |
| $$Cre_{WTE\text{-}LF,k}^{-}$$ | −0.8 | −1.1 | −1.4 |
| $$Cre_{WTE\text{-}LF,k}^{+}$$ | −1.3 | −1.8 | −2.1 |

#### Table 4.
The economy-of-scale coefficient $$Cre$$ ($/t) corresponding to the reference waste flow $$Xre$$.
[i] - Note: The + and – superscript sign of Cre represents the value of Cre relevant to the upper and lower bound of Xre only.
Hence, it can be observed that the traditional IBA cannot solve this problem without additional assumptions or simplifications. The following discussion will explain how traditional methods solve this problem by simplifying the nonlinear effects of the ES.
• (i) Let $$m=-1$$; the effects of the ES are then totally ignored. This converts the INLP problem into an ILP problem, which the GAILP method can solve.
• (ii) Assuming $$-0.2<m<-0.1$$, the nonlinear relationships in Eq. (26) can be approximated by grey quadratic functions within a predetermined degree of error. Thus, the INLP problem is converted into an IQP problem.
The left two columns of Table 5 list the solutions for $$m=-1$$ and $$-0.2<m<-0.1$$.
Both of the above simplifications introduce inaccuracy and limitations. When the value of $$m$$ deviates from the predetermined value, this inaccuracy increases dramatically.
Applying the GAINLP model to the inexact nonlinear programming problem, the optimization problem can be solved directly, without additional assumptions about the effects of the ES. Three different scenarios ($$m=-0.1$$, $$m=-0.3$$, and $$m=-0.5$$) have been tested, and the solutions given by the GAINLP model are shown in the right three columns of Table 5.
The above three scenarios assume that the ES exponent is universal in the whole region during the entire period. However, this is not always necessarily true for practical engineering problems. More common situations may involve different scale exponents for various combinations of municipalities and facilities in different periods. Thus, Table 6 illustrates the solutions for the 4th scenario, which involves different scale exponents.
| Decision variable (t/d) | ILP solution: m = −1 | IQP solution: −0.2 < m < −0.1 | m = −0.1 | m = −0.3 | m = −0.5 |
|---|---|---|---|---|---|
| $$x_{111}^{\pm}$$ | [210, 290] | [250, 290] | [203, 292] | [100, 221] | [35, 88] |
| $$x_{112}^{\pm}$$ | 0 | [310, 350] | [1, 36] | [1, 44] | [1, 36] |
| $$x_{113}^{\pm}$$ | [0, 30] | [360, 440] | [1, 44] | [126, 190] | [240, 300] |
| $$x_{121}^{\pm}$$ | 0 | [0, 30] | [1, 43] | [60, 141] | [144, 240] |
| $$x_{122}^{\pm}$$ | [0, 65] | [185, 225] | [1, 73] | [20, 103] | [75, 148] |
| $$x_{123}^{\pm}$$ | [210, 290] | [50, 80] | [200, 290] | [200, 259] | [197, 260] |
| $$x_{131}^{\pm}$$ | [0, 30] | 0 | [1, 37] | [90, 190] | [225, 312] |
| $$x_{132}^{\pm}$$ | [260, 330] | 0 | [247, 332] | [189, 270] | [120, 200] |
| $$x_{133}^{\pm}$$ | [170, 200] | 0 | [154, 209] | [139, 210] | [143, 192] |
| $$x_{211}^{\pm}$$ | 50 | [10, 50] | [35, 58] | [120, 167] | [220, 307] |
| $$x_{212}^{\pm}$$ | [310, 390] | [0, 40] | [295, 390] | [299, 385] | [295, 390] |
| $$x_{213}^{\pm}$$ | [360, 410] | 0 | [329, 426] | [202, 323] | [120, 161] |
| $$x_{221}^{\pm}$$ | [160, 240] | [160, 210] | [147, 240] | [55, 145] | [1, 30] |
| $$x_{222}^{\pm}$$ | [185, 200] | [0, 40] | [165, 222] | [142, 200] | [80, 154] |
| $$x_{223}^{\pm}$$ | 0 | [160, 210] | [1, 25] | [1, 40] | [1, 43] |
| $$x_{231}^{\pm}$$ | [260, 310] | [260, 340] | [230, 320] | [122, 164] | [12, 40] |
| $$x_{232}^{\pm}$$ | [0, 10] | [260, 340] | [1, 28] | [30, 100] | [108, 167] |
| $$x_{233}^{\pm}$$ | [140, 190] | [310, 390] | [125, 200] | [125, 194] | [120, 214] |
#### Table 6.
Solutions when m is different for each municipality and each period.
[i] - Note: for transportation from WTE facility to landfill, m=−0.5.
#### Figure 6.
System cost comparisons.
The results also show that as the ES exponent $$m$$ decreases from −0.1 to −0.3 to −0.5, the value of the minimized objective function becomes smaller for both the $$f^{+}$$ and $$f^{-}$$ schemes. At the same time, the width of the interval of the minimized objective function also decreases. This reflects how the ES exponent affects the overall cost for the entire period. A comparison of the results for the four scenarios is given in Figure 6.
## 5. Conclusions
In this chapter, GA-based methods have been proposed and applied for identifying an all-purpose optimization solution for ILP, IQP and INLP problems. These methods are called GAILP, GAIQP and GAINLP. Compared with these GA-based methods, the traditional problem-solving method has limitations due to the complexity involved in selecting the upper or lower bounds of variables and parameters when the subobjective functions are constructed; this complexity arises from the extensive computation and the associated assumptions and simplifications. The solution procedures of the proposed GA-based optimization methods do not involve any such assumption or simplification, and the quality of the result is guaranteed. GAINLP was applied to a solid waste management optimization problem, and the result analysis illustrates the practicality and flexibility of the proposed GAINLP method for solving more complex INLP problems.
GAILP, GAIQP and GAINLP have been implemented in MATLAB, and can be easily extended to include other nonlinear operation programming software packages so as to enhance the flexibility and efficiency of the problem-solving process. The GA-based heuristic optimization approach is flexible and it can be extended to find solutions for various types of operation programming scenarios that involve nonlinear optimization and inexact information. It can also be used as an all-purpose algorithm for economic optimizations.
## Acknowledgements
The authors gratefully acknowledge the support of the Canada Research Chair program and the Natural Sciences and Engineering Research Council of Canada.
## References
1 - Anderson LE, Nigam A. A mathematical model for the optimization of a waste management system. University of California at Berkeley, Sanitary Engineering Research Laboratory, SERL Report. 1968 (68–1).
2 - Christensen HL, Haddix GF. A model for sanitary landfill management and design. Computers & Operations Research. 1974;1(2):275–81.
3 - Fuertes LA, Hudson JF, Marks DH. Solid waste management: Equity trade-off models. Journal of the Urban Planning and Development Division. 1974;100(2):155–71.
4 - Jenkins L. Parametric mixed integer programming: An application to solid waste management. Management Science. 1982;28(11):1270–84.
5 - Jacobs TL, Everett JW. Optimal scheduling of consecutive landfill operations with recycling. Journal of Environmental Engineering. 1992;118(3):420–9.
6 - Badran MF, El-Haggar SM. Optimization of municipal solid waste management in Port Said–Egypt. Waste Management. 2006;26(5):534–45.
7 - Sushi A, Vrat P. Waste management policy analysis and growth monitoring: An integrated approach to perspective planning. International Journal of Systems Science. 1989;20(6):907–26.
8 - Chang NB, Wen CG, Chen YL, Yong YC. A grey fuzzy multiobjective programming approach for the optimal planning of a reservoir watershed. Part A: Theoretical development. Water Research. 1996;30(10):2329–34.
9 - Chang NB, Shoemaker CA, Schuler RE. Solid waste management system analysis with air pollution and leachate impact limitations. Waste Management & Research. 1996;14(5):463–81.
10 - Huang GH, Baetz BW, Patry GG. Grey integer programming: An application to waste management planning under uncertainty. European Journal of Operational Research. 1995;83(3):594–620.
11 - Huang GH, Baetz BW, Patry GG. Grey quadratic programming and its application to municipal solid waste management planning under uncertainty. Engineering Optimization. 1995;23(3):201–23.
12 - Li Y, Huang G. Dynamic analysis for solid waste management systems: An inexact multistage integer programming approach. Journal of the Air & Waste Management Association. 2009;59(3):279–92.
13 - Tan Q, Huang GH, Cai Y. A superiority-inferiority-based inexact fuzzy stochastic programming approach for solid waste management under uncertainty. Environmental Modeling & Assessment. 2010;15(5):381–96.
14 - Ekmekçioğlu M, Kaya T, Kahraman C. Fuzzy multicriteria disposal method and site selection for municipal solid waste. Waste Management. 2010;30(8):1729–36.
15 - Pires A, Martinho G, Chang NB. Solid waste management in European countries: A review of systems analysis techniques. Journal of Environmental Management. 2011;92(4):1033–50.
16 - Beliën J, De Boeck L, Van Ackere J. Municipal solid waste collection and management problems: A literature review. Transportation Science. 2012;48(1):78–102.
17 - Or I, Curi K. Improving the efficiency of the solid waste collection system in Izmir, Turkey, through mathematical programming. Waste Management & Research. 1993;11(4):297–311.
18 - Sun W, Huang GH, Lv Y, Li G. Inexact joint-probabilistic chance-constrained programming with left-hand-side randomness: An application to solid waste management. European Journal of Operational Research. 2013;228(1):217–25.
19 - Huang GH, Baetz BW, Patry GG. A grey fuzzy linear programming approach for municipal solid waste management planning under uncertainty. Civil Engineering Systems. 1993;10(2):123–46.
20 - Chang NB, Schuler RE, Shoemaker CA. Environmental and economic optimization of an integrated solid waste management system. J. Resour. Manage. Technol. 1993;21(2):87–98.
21 - Huang G, Baetz BW, Patry GG. A grey linear programming approach for municipal solid waste management planning under uncertainty. Civil Engineering Systems. 1992;9(4):319–35.
22 - Huang GH, Baetz BW, Patry GG. Grey dynamic programming for waste‐management planning under uncertainty. Journal of Urban Planning and Development. 1994; 120(3):132–56.
23 - Callan SJ, Thomas JM. Economies of scale and scope: A cost analysis of municipal solid waste services. Land Economics. 2001;77(4):548–60.
## Introduction
In recent years, soft robotics has emerged as a candidate for creating novel robotic systems with pre-programmable capabilities that can withstand large deformations. These systems have been shown to be potentially useful in diverse application fields, ranging from bio-inspired robotic systems1,2 and adaptable locomotion in unstructured environments1,3 to grasping/manipulation of objects4, invasive surgical instruments5 and assistive/rehabilitative devices6.
These intrinsically soft robots have advantages over conventional rigid robots: they are low-cost, lightweight, highly compliant, and inherently safe when interacting with an unknown environment and the human body2,6. Therefore, these soft robots can be utilized for rehabilitation, prevention of injuries, or augmentation of the capabilities of healthy individuals6,7.
Soft wearable assistive/rehabilitative robots are generally categorized based on the joints they assist as well as the type of actuators utilized to design them6. Upper-body soft wearable robots have been developed to actively support fingers8,9,10,11,12,13, wrists14, elbows15,16, shoulders17,18,19, necks20, forearms21,22, and spines23,24. Lower-body soft wearable robots have provided assistance to the hips25, knees26,27, and ankles28,29,30,31. Common soft actuation methods for assistive/rehabilitative tasks include cable-driven11,14,25, origami32,33, and soft pneumatic actuators (SPAs)2,6,34.
SPAs form a broad category of soft actuators that require positive or negative pressure to generate pre-programmable motion2,6,34. Pneumatic artificial muscles20,29, elastomeric10,23,35 and inflatable fabric soft pneumatic actuators (FSPAs)9,13,18,19,21,22,26,27,28,36,37 all fall under this category. SPAs can be further classified according to how they are mechanically programmed to move, whether at the macro- or micro-scale2,6,34. Their motion paths can be programmed using combinations of multiple inflatable chambers or actuators, as seen with peano muscles and bellow actuators15,37,38,39,40,41,42,43,44,45,46,47,48,49,50. A form of external/internal flexible mechanical metamaterial51 (for example, reinforcements6,34,52,53,54 and auxetic structures55,56,57,58) or origami structures32,33,59,60,61,62 can also be used to mechanically program motions. The programmable motions include2,6,34,42,54,63: twisting64,65, bending39,50, stiffening26,28, contracting30,39,41,44,46,66, and extending/growing67,68,69,70 in space. Further, by combining multiple actuators in a modular unit, continuum, multi-chambered and multi-DOF actuators can be created38,50,71,72.
The development of wearable technologies has generated considerable interest in the use of textiles or fabrics (both terms are used interchangeably in this work) due to their versatility, repeatable production, and omnipresent nature73. Fabrics have also been shown to be a promising medium for incorporating functionalities such as soft computing, flexible electronics, energy harvesting, sensing and actuation73. Soft fabric actuation has shown the possibility of utilizing fabric to generate movement and provide assistance74. These fabric actuators are constructed through either intrinsic or extrinsic modifications of the materials74.
Wearable assistive devices have seen a growth in the use of extrinsically-modified fabric actuation technology9,13,16,28,36,37. Extrinsically-modified fabric actuators are fabricated by superficially attaching active materials to the surface of the substrate fabric, for example by laminating thermoplastic polyurethane (TPU) material onto the substrate fabric to create FSPAs73,74. This paradigm shift has led to the design of SPAs that are easily integrated with or hidden underneath the users' clothes. Along with ease of fabrication, wearability, pliability, and availability, these actuators also provide enough torque and force assistance to the extremity, making this technology more adoptable for everyday life8,9,12,13,15,16,17,18,21,22,26,28,31,37,75.
FSPAs are further classified based on the types of fabrics used to make them. In this work, we focus on two categories of extrinsically modified FSPAs, woven and knit FSPAs, shown in Fig. 1a,b. Because of how each type of fabric is manufactured, woven fabrics are generally puncture resistant but less deformable, while knit fabrics are easily deformable and have an innate mechanical anisotropy (showing variable stretchability in the two directions)73. Recent research has seen woven fabrics used to create highly robust twisting, contracting and bending actuators12,13,15,18,19,24,27,28,29,30,31,37,43,50,75, as well as knit textiles used to create bending actuators for grippers and wearable robots8,9,76.
There have been various computational and analytical studies on the prediction of fabric properties at the fiber or yarn level, but not for the entire fabric structural hierarchy73. Only recently have models for woven FSPAs been developed to predict their force and motion capabilities8,22,28. Our preliminary work has shown promise in utilizing computational models of woven FSPAs for the elbow15 and also for continuum assistive robots50. On the other side of the spectrum, the modeling of knit FSPAs is still in a nascent phase of development8,9.
In this paper, we further investigate the combination of various textile layers to mechanically program actuators to perform various motion profiles, as highlighted in previous work15,24,26,36,50,75. Specifically, two categories of multi-material and multi-layered woven and knit FSPAs, as shown in Fig. 1, are studied and fabricated. A comprehensive material study of the various woven and knit anisotropic textiles is conducted at large deformations to generate material models. To accurately predict the complex mechanical response of the FSPAs, we opt to create computational finite element method (FEM) models. Computational FEM models can generate detailed models based on the actuator's variable geometrical parameters and non-linear behaviors, and capture the detailed stress-strain distributions of multi-material and multi-layered structures7,23,77. We develop an all-inclusive design tool using the computational models that will benchmark the design criteria for developing a new robust woven or knit fabric actuator based on the desired geometrical parameters and application force/torque requirements. This comprehensive tool will allow for scalability and customizability of diverse FSPAs prior to fabrication.
## Design and Fabrication of the FSPAs
The two main fabrics used in this work are the woven non-stretch thermoplastic polyurethane (TPU)-coated nylon fabric (6607, Rockywoods Fabric, Loveland, CO) and the bi-directional high-stretch knitted fabric (24350, Darlington Fabrics, Westerly, RI). Both fabrics are viewed under a microscope (OMAX A355U, OMAX Microscope, Seattle, WA) with a magnification factor of 40× and a numerical aperture of 0.65, as shown in Fig. 2g,h. The two directions of stretch are the wale (in the $$y$$-direction) and the course (in the $$x$$-direction).
Woven fabrics are generally created with vertical (warp) yarns interlaced with horizontal (weft) yarns in a checkered pattern, as seen in Fig. 2g73. The material properties of woven fabrics depend on the strain properties of the yarns used to create them. The weaving method creates a tight interconnected thread system, resulting in a more stable, rigid, and difficult-to-deform fabric73. On the other hand, knit fabrics are created by the interlocking loops of a single yarn (i.e. weft knits) or multiple yarns (i.e. warp knits)73. The knitted fabric used in this work is created by warp knitting and is made of 83% semi-dull nylon and 17% spandex. Warp knits often have mechanical anisotropy, because one stretch direction is relatively stretchier than the other (the preferential strain direction), as seen in Fig. 2h. Thus, the knit fabrics show high bi-directional stretchability and elastic recovery, comparable to the hyperelastic properties of elastomers.
These woven and knit fabrics are used to create two categories of FSPAs: woven FSPAs, highlighted in Fig. 1a, and knit fabric-reinforced textile actuators (FRTAs), highlighted in Fig. 1b. The woven non-stretch fabric actuators generate motion by combining multiple pouch fabric actuators, which inflate to a set size, in various array formations to contract, straighten, bend, or elongate. In contrast, the knitted FRTAs are developed by combining an internal knit fabric shell with strain-limiting woven fabric reinforcement layers, so that the fabric's overall anisotropic behavior can be augmented during pressurization. Further, the woven fabric reinforcements also reduce the local stresses and strains on the internal shell and minimize surface damage from abrasion, which is commonly seen when Kevlar threads are used as reinforcements, as in previous work6. Finally, by arranging multiple actuators in different orientations we can also create multiple degree-of-freedom (DOF) actuators, as shown in Fig. 1. These various types of actuators generate motion profiles that can serve various target applications in the field of wearable assistive devices, as featured in Fig. 1c and further described in Supplementary Table 1.
### Fabrication of the FSPAs
The machines used in the fabrication procedure are shown in Fig. 2a. The laser-cutter (Glowforge Prof, Glowforge, Seattle, WA) is used to cut all the TPU (Fastelfilm 20093, Fastel Adhesive, Clemente, CA), woven and knit fabrics into the desired geometry, as shown in Fig. 2b. The TPU sheets are used to bond the knit fabric and the woven fabric reinforcements, while coating the knit fabric substrate to make it airtight. However, air leakage through the skin of the fabric is still noticed. Therefore, an additional airtight TPU bladder with a pneumatic connector (5463K361, McMaster-Carr, Elmhurst, IL), is still made using an impulse sealer (751143, Metronic, Seattle, WA) as seen in Fig. 2c.
There are two variations of fabricating the FRTAs: one for FRTAs that perform bending, shown in Fig. 2d, and the other for FRTAs that elongate and/or twist, shown in Fig. 2e. In the first variation, the knit stretch fabric, a single TPU sheet, and woven TPU-coated reinforcements are assembled and bonded all at once using a heat press (FLHP 3802, FancierStudio, Hayward, CA). The TPU bladder is placed in the middle of the prepared multi-layered fabric set, and the structure is folded and sewn, using a super-imposed seam along the center. The sewn portion creates the strain-limiting, inextensible seam that encourages bending toward that particular direction. In the second variation, two sets of knit stretch fabric and woven reinforcements are created. The additional TPU bladder is placed between the two multi-layered fabric sets, and the edges of the layers are heat-sealed or sewn using high-stretch elastic thread (Maxi Lock Stretch, American & Efird, Mount Holly, NC). Different clockwise/counterclockwise twisting and elongating actuators can be developed by varying the angle of the woven reinforcements.
In order to fabricate the woven FSPAs, the TPU-coated nylon fabric is cut into the desired geometries as seen in Fig. 2f. The woven TPU-coated nylon already has a side pre-laminated with a TPU coating to allow bonding. Pneumatic fittings are attached to the cutouts and aligned on the bed of the customized computer numerical control (CNC) router (Shapeoko 3, Carbide Motion, Torrance, CA) with a soldering iron tip set at 230 °C. The CNC router traces and seals the fabric cutouts to seal the individual fabric actuators. This procedure can instantly create the woven straightening or contracting FSPAs. In order to create the woven bending and elongating FSPAs, pouches with the same size as the actuators are created. The pouches are sewn together using a sewing machine (Memory Craft 6500 P, Janome, Hachioji, Tokyo) to create the actuator array structure for the sealed actuators to slot into. If the pouches are sewn one on top of each other, elongation actuators are created. If the pouches are sewn along the base onto a strain-limiting inextensible layer, the bending actuators are created, as seen in Fig. 2f. Finally, the manufacturing procedure for the multi-DOF continuum actuators, as seen in Supplementary Video 4, is discussed in Supplementary Materials.
## Constitutive Material Model Fitting of Fabrics and Textiles
We identify the appropriate material model parameters for the different textiles and fabrics as a precursor to the proposed FEM models. In Supplementary Materials, we further describe the geometrical parameters, shown in Fig. 3a–f, and the experimental procedure for characterizing the different woven non-stretch and knitted stretch fabrics using uniaxial and/or biaxial universal tensile testing machines, as shown in Fig. 3g,h. We note that the material properties of the TPU-coated materials are within the elastic range, while the knit stretch fabric is treated as an anisotropic hyperelastic material.
Previous FEM-based soft robot modeling work has focused on isotropic elastomeric materials7,77. The material properties of these actuators and robots were captured using Arruda-Boyce, Van-der-Waals, Mooney-Rivlin and Neo-Hookean models for smaller strains7, and Ogden, Yeoh, and higher-order polynomial models for larger hyperelastic strains78. However, there has been only one preliminary example of computationally modeling the behavior of knit FSPAs9. In this work, we further model the behavior of multi-layered, multi-material FRTAs (made of woven fabric reinforcements and a knit fabric shell). The constitutive models included in ABAQUS (Simulia, Dassault Systemes) for anisotropic materials include the generalized Fung and Holzapfel-Gasser-Ogden (HGO) models.
The material properties of the TPU-coated nylon are within the elastic range; the Young's modulus and Poisson's ratio are calculated as E = 498 $$MPa$$ and v = 0.35 using a uniaxial tensile test, as seen in Fig. 3g. The inextensible fabric layer used to hold the actuators in the actuator array has the properties E = 305 $$MPa$$ and v = 0.35, and the PLA connector caps have E = 3600 MPa and v = 0.3. All the components are modeled using explicit quadratic tetrahedral elements (C3D10M).
### Anisotropic material model of bi-directional textile materials
The anisotropic hyperelastic properties are evaluated with the HGO continuum model79. A non-linear regression (limited-memory BFGS77) was used to fit the material data against the HGO hyperelastic strain energy function (see Supplementary Materials for more details). The strain energy equation of the HGO model is shown below:
$$U={C}_{10}({\bar{I}}_{1}-3)+\frac{1}{D}\left(\frac{{(J^{el})}^{2}-1}{2}-\ln J^{el}\right)+\frac{k_{1}}{2k_{2}}\sum_{\alpha=1}^{N}\left[e^{k_{2}\bar{E}_{\alpha}^{2}}-1\right],$$
(1)
$$\bar{E}_{\alpha}=\kappa(\bar{I}_{1}-3)+(1-3\kappa)(\bar{I}_{4(\alpha\alpha)}-1),$$
(2)
where $${C}_{10}, D, {k}_{1}, {k}_{2}$$ and $$\kappa$$ are the five temperature-dependent material parameters; $$N$$ is the number of families of fibers ($$N\le 3$$); $${\bar{I}}_{1}$$ is the first invariant of the Cauchy-Green tensor; and $${\bar{I}}_{4,6}$$ are the invariants that represent the preferred directions of the fibers contributing to the strain-energy function. If $$\kappa$$ ($$0\le \kappa \le \frac{1}{3}$$) is close to $$0$$, the fibers are aligned with the $$\theta$$ (course) direction; if $$\kappa$$ is close to $$1/3$$, the fibers are dispersed and the material can be considered isotropic.
The material fitting tool allows the user to set the Poisson's ratio, boundary conditions and initial values of the material parameters ($${C}_{10}, D, {k}_{1}, {k}_{2}$$ and $$\kappa$$), together with the experimental equibiaxial testing data. The Cauchy stresses ($${\sigma }_{\theta \theta }$$, $${\sigma }_{zz}$$) are in the course and wale directions. A least-squares fit of the stress-strain equations in both directions is used:
$$\chi =\mathop{\sum }\limits_{i\mathrm{=1}}^{n}[({\sigma }_{\theta \theta }-{\sigma }_{\theta \theta }^{model}{)}_{i}^{2}+{({\sigma }_{zz}-{\sigma }_{zz}^{model})}_{i}^{2}\mathrm{]}.$$
(3)
The material fitting toolkit also allows the use of multiple optimization algorithms, such as Nelder-Mead, Powell, CG, L-BFGS-B, COBYLA, and SLSQP, given by the SciPy optimization function77. For every iteration, the coefficient of determination $${R}^{2}$$ and the root mean square of the reduced chi-square $$\varepsilon$$ were evaluated against the material testing data for the next optimization loop. For the equibiaxial protocol80, results were considered acceptable for $${R}^{2} > 0.8$$ and $$\varepsilon < 0.25$$.
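A skeleton of this fitting loop, assuming a user-supplied function `hgo_stress` that returns the model Cauchy stresses derived from Eqs. (1)–(2) for an equibiaxial stretch history, might look as follows; the bounds and starting point are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def chi(params, lam, sig_theta, sig_z, model_stress):
    """Eq. (3): summed squared residuals of the model Cauchy stresses against
    the measured course (theta) and wale (z) stresses."""
    s_t, s_z = model_stress(params, lam)
    return np.sum((sig_theta - s_t) ** 2 + (sig_z - s_z) ** 2)

def fit_hgo(lam, sig_theta, sig_z, hgo_stress, x0):
    # params = (C10, D, k1, k2, kappa); kappa is restricted to [0, 1/3].
    res = minimize(chi, x0, args=(lam, sig_theta, sig_z, hgo_stress),
                   method="L-BFGS-B",
                   bounds=[(0, None)] * 4 + [(0, 1 / 3)])
    return res.x, res.fun
```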
After optimization using this scheme, the HGO model is used to fit four tensile testing data sets, two equibiaxial and two uniaxial, as seen in Fig. 3g,h. The same stretch fabric was used for all tests; one set was coated with a TPU layer to aid bonding and air impermeability, and the other set was left uncoated.
For the uncoated uniaxial test, the parameters were identified as C10 = $$1.156$$, $${k}_{1}$$ = $$0.0925$$, $${k}_{2}$$ = $$0.0$$, $$\alpha$$ = $$0.321$$ and $$\kappa$$ = $$0.0$$ (the $${R}^{2}\,=\,0.76$$ and $$\varepsilon \,=\,0.28$$). For the coated uniaxial test, the parameters were identified as C10 = $$1.0$$, $${k}_{1}$$ = $$0.163$$, $${k}_{2}$$ = $$0.0$$, $$\alpha \,=\,1.93\times {10}^{-12}$$ and $$\kappa \,=\,0.133$$ (the $${R}^{2}\,=\,0.97$$ and $$\varepsilon \,=\,0.14$$).
For the uncoated equibiaxial test, the parameters were identified as C10 = $$0.503$$, $${k}_{1}$$ = $$0.138$$, $${k}_{2}$$ = $$0.0$$, $$\alpha$$$$\,=\,0.0$$ and $$\kappa$$ = $$0.0$$, with a resultant $${R}^{2}\,=\,0.88$$ and $$\varepsilon \,=\,0.22$$. For the coated equibiaxial test, the parameters were C10 = $$1.098$$, $${k}_{1}$$ = $$0.225$$, $${k}_{2}$$ = $$4.05e-10$$, $$\alpha$$ = $$0.0$$ and $$\kappa$$ = $$2.087\times {10}^{-10}$$ with a resultant $${R}^{2}\,=\,0.8$$ and $$\varepsilon \,=\,0.22$$.
## Modeling of fabric-based actuators using FEM
In this work, computational FEM models are created to capture the performance of the various fabric-based actuators. The effects of their geometrical parameters, highlighted in Fig. 3a–f and Supplementary Materials, are studied through blocked force and displacement tests using the computational FEM modeling tool written in Python 2.7 for ABAQUS/Explicit (Simulia, Dassault Systemes). The modeling tool automates the process of creating the part, meshing, and applying boundary conditions based on the user-defined parameters. Computational models enable rapid design iterations prior to actual fabrication of the prototypes.
ABAQUS/Explicit is used to capture the short dynamic response times observed among the different types of fabric actuators. ABAQUS/Explicit is also capable of providing both dynamic and quasi-static solutions for the blocked force and displacement tests of the different types of actuators. In order to perform quasi-static simulations, the explicit solution needs to be accelerated while still maintaining dynamic equilibrium81. To maintain dynamic equilibrium, the loading rate of the analysis needs to be 1% of the speed of the stress wave of the material81. To monitor dynamic equilibrium, the total kinetic energy (KE) and internal energy (IE) of the entire system are monitored to ensure that KE does not exceed 5% of the total IE81.
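A minimal check of this energy rule, assuming the ALLKE (kinetic) and ALLIE (internal) whole-model history outputs have already been extracted from the .odb, is sketched below.

```python
import numpy as np

def quasi_static_ok(ke, ie, limit=0.05, tol=1e-9):
    """True if kinetic energy stays below `limit` (5%) of internal energy
    over the loading history, ignoring the very start where IE is ~ 0."""
    ke, ie = np.asarray(ke, float), np.asarray(ie, float)
    mask = ie > tol
    return bool(np.all(ke[mask] <= limit * ie[mask]))
```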
The airflow dynamics within the chambers is disregarded and modeled as pressure equally applied on the actuators’ internal surfaces. The pressure is designed as a smooth ramp step to the desired value. Gravity is not considered in the models due to the lightweight nature of the actuators.
In order to measure the displacement of the actuators, passive reflective markers are attached to the fabric actuators during experiments. For measuring the bending angle, three markers are distributed evenly along the length of the actuator. For measuring displacements along the three axes, markers are placed at the distal and proximal ends of the actuators. A motion capture system (Optitrack Prime 13 W, NaturalPoint Inc., Corvallis, OR) is used for the experiments, and each experiment was repeated three times. For measuring the payload of the actuators, the experimental setup is shown in Supplementary Fig. 3.
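One way to turn the three evenly spaced markers into a bending angle, used here purely as an illustrative post-processing sketch, is the angle between the two marker-to-marker segments:

```python
import numpy as np

def bending_angle(p_prox, p_mid, p_dist):
    """Angle (deg) between the proximal->mid and mid->distal marker segments."""
    u = np.asarray(p_mid, float) - np.asarray(p_prox, float)
    v = np.asarray(p_dist, float) - np.asarray(p_mid, float)
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
```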
### FEM models for woven fabric actuators
Computational models for the different woven non-stretch fabric actuators, including the stiffening, contracting, elongating and bending actuators, are developed. Figure 4a–d and Supplementary Video 1 show the von Mises stress contour plots obtained from the FEM simulations, along with the experimental results of the pressurized actuators at the corresponding input pressure. The force output (payload) and displacement (bending angle, extension, or contraction) are measured at small pressure increments of 0.034 $$MPa$$ up to a safe operating pressure of 0.206 $$MPa$$.
The stiffening actuators are used for applications that require an extension motion, such as assisting the knee, wrist, elbow, and finger joints. A comparison between the FEM model and the experimental prototype is conducted for an actuator with $${w}_{a}$$ = 65 mm and $${L}_{i}$$ = 240 mm. For the blocked force experiments, the actuator was positioned at desired bending angles of 60° and 90°. The simulation shows similar performance for the $${60}^{\circ }$$ angle, with an RMSE of $$1.08N$$, and for the $${90}^{\circ }$$ angle, with an RMSE of $$1.71N$$, as shown in Fig. 4e.
The contracting actuators are used for applications that require pulling or contracting. The geometrical parameters of the actuator used are $${n}_{a}$$ = $$7$$, $${L}_{i}$$ = 200 mm, $${w}_{a}$$ = 60 mm, and $${h}_{a}$$ = 22.86 mm, with a centralized air passage 5 mm wide. For the displacement and blocked force tests, the contracting length ($$d$$) and the pulling contraction force were measured, respectively. For the displacement test, a maximum displacement error of 13.84% and an RMSE of $$2.06mm$$ are observed, as seen in Fig. 4f. The blocked force tests for the modules are modeled with both the top and bottom end-plate faces fixed in all directions (encastre) under the external pressure load. The simulation predicts the force well up to around 0.17 $$MPa$$, after which it shows slightly higher force readings than the experimental results, possibly due to slight air leakage in the prototype as the material is stretched by pulling forces of around 270 $$N$$. An RMSE of $$21.02N$$ and a maximum force error of 11.45% are observed, as seen in Fig. 4g.
The elongating actuators' geometrical parameters are an active width ($${w}_{a}$$) of 62 mm and an active height ($${h}_{a}$$) of $$31mm$$. Experimental data is gathered for a stack of five actuators ($${n}_{a}\,=\,5$$). The free displacement is compared to the simulation results, as seen in Fig. 4h. A maximum displacement error of 8.52% and an RMSE of $$3.46mm$$ are observed. For the blocked force test seen in Fig. 4i, comparing the experiment and simulation, an RMSE of $$1.49N$$ is observed. Both the free displacement and blocked force simulations show a good prediction of the experimental results.
The bending actuators, designed for various flexion applications, are tested for bending angle and blocked force, as shown in Fig. 4d. For the displacement and blocked force tests, actuators with $$n$$ = $$13$$, $${w}_{a}$$ = 41 mm, and $${h}_{a}$$ = $$30mm$$ were used. The results are shown in Fig. 4j,k. For this test, a vertical plate is designed to keep the distal end of the actuator from curling further inwards during inflation, maintaining bending angles at around 200° for ease of monitoring and calculating the bending angles. It is noticed that the bending actuator prototype has an initial bending angle, because the fittings on each actuator create an initial stiffness. However, at around 30–40% of the simulation, the FEM model catches up with the experimental data, after which the simulation and the actual experiments closely match. For the blocked force test, the FEM simulation catches up with the experimental data at around 60–65% of the simulation. Both present similar payload outputs, with an RMSE of $$2.39N$$.
### FEM models for knit bi-directional stretch fabric actuators
Computational models are created to study the effects of the fabric reinforcement on the motion profile of the different knit stretch FRTAs. Figure 5a–c and Supplementary Video 2 show the displacement contour plots obtained from the FEM simulations compared with the experimental images of the pressurized actuators at the corresponding pressure values. The main geometrical parameters studied are the number of reinforcements ($$n$$) and the angle of the fabric reinforcements ($$\alpha$$), as seen in Supplementary Fig. 2. The force output (payload or torque) and displacement (bending angle, twisting angle, or elongation) are measured at small pressure increments for both the FEM simulations and the experiments.
The bending FRTAs were tested for bending angles (Fig. 5d) and blocked forces (Fig. 5e). For both tests, the actuator's geometrical parameters were $${L}_{i}=155\,mm$$, $${w}_{r}=1.5\,mm$$, $$\alpha {=0}^{\circ }$$, $${w}_{i}=40\,mm$$, $${w}_{z}=14\,mm$$, and $${n}_{r}=35$$. For the bending angle test, the results closely match between the simulation and the actual experiments, with an RMSE of $${10.16}^{\circ }$$. The bending FRTA prototype showed an initial bending angle because of a small initial stiffness. For the load test, the experimental results followed the same trend as the simulation, with an RMSE of $$0.4939N$$.
The elongating FRTAs were tested for displacements (Fig. 5f) and blocked forces (Fig. 5g). The actuator's geometrical parameters for both tests were $${L}_{i}=155\,mm$$, $${w}_{r}=6.0\,mm$$, $$\alpha {=0}^{\circ }$$, $${w}_{i}=46\,mm$$, $${w}_{z}=0.0\,mm$$, and $${n}_{r}=15$$. From the displacement graph, Fig. 5f, we notice that the FEM model matches the experimental data with an RMSE of $$1.36\,mm$$. For the blocked force graph, Fig. 5g, the FEM model predicts the payload of the actuator very closely, with an RMSE of $$5.81N$$. For the elongating FRTA, the fabric reinforcements convert the radial expansion into axial extension; therefore, a higher number of reinforcements leads to less radial expansion and more elongation.
The twisting FRTA models were experimentally validated for twisting angles and torque capability, as shown in Fig. 5h,i. The actuator was inflated to 0.11 $$MPa$$ in increments of 0.014 $$MPa$$, selected as a safe maximum input pressure to prevent any prominent radial expansion that might cause actuator failure. The actuator's geometrical parameters were Li = 155 mm, wr = 5.0 mm, α = −30°, wi = 46 mm, wz = 0.0 mm, and nr = 16. The FEM model predicts the twisting angle of the actuator well, with an RMSE of 4.94°. Based on previous work with fiber reinforcements82, the twisting capability of the actuator, clockwise or counterclockwise ($$|\alpha|$$), improves gradually from 0 to 30° and then reduces until $$|\alpha {\mathrm{|=90}}^{\circ }$$, where the reinforcements are symmetric, preventing the actuator from twisting and promoting only radial expansion. For the blocked torque capability, the FEM model predicts lower torque values up until around 50–60% of the simulation, after which the payload of the experimental results matches the simulation results very closely, with an RMSE of 0.0352 $$N\cdot m$$.
## Case study of FSPAs in wearable applications
One of the popular assistive/rehabilitative applications for SPAs has been soft robotic gloves for patients with reduced hand functionality8,9,10. We demonstrate the capabilities of the woven FSPAs and the knit FRTAs in comparison with the existing fiber-reinforced elastomeric actuators10 for finger flexion, as seen in Fig. 6a. According to the literature, the requirements for flexion of the human index finger10 include a bending angle of at least 160° and a distal tip force of approximately 7.3 $$N$$.
Computational models of the FSPAs are built using the same geometrical parameters as the fiber-reinforced elastomeric actuator10, in order to assess the design before fabrication. The common geometrical parameters for the actuators are $${R}_{a}=10\,mm$$, $${w}_{z}=10\,mm$$, $${L}_{i}=155\,mm$$. The woven non-stretch actuator has $${n}_{a}=19$$, $${s}_{p}=9\,mm$$, $${w}_{a}=20\,mm$$ and $${h}_{a}=20\,mm$$. The knitted stretch actuator has $${n}_{r}=35$$, $${w}_{r}=1.35\,mm$$, and $${s}_{p}=1.5\,mm$$. The FEM models are experimentally validated for bending angles and tip force payloads while inflating the specimens up to 0.206 $$MPa$$ in small pressure increments of 0.034 $$MPa$$, as seen in Fig. 6b,c. Both fabric-based actuator FEM models meet the motion and force requirements. The distal tip forces of the fabric actuators obtained through the FEM simulations are experimentally validated, resulting in RMSEs of 0.59 $$N$$ and 0.49 $$N$$ for the woven FSPA and knit FRTA, respectively. Both the experimental and FEM model data demonstrate similar bending behavior, with an RMSE of 26.2° for the woven FSPA and 10.16° for the knit FRTA, as seen in Fig. 6b. The woven FSPA prototype displays an initial bending angle because of the stiffness of the plastic fittings attached to each actuator.
From Fig. 6a and Supplementary Video 3, we compare the bending angles and distal tip forces of the three actuators. The woven FSPA instantly bends and curls when pressurized and reaches its maximum bending angle at 0.069 $$MPa$$, which is approximately 1.7× larger than the silicone and stretch fabric actuators' bending angles; it therefore reaches its maximum bending angle the quickest. On the other hand, the FRTA and fiber-reinforced actuators steadily reach similar maximum bending angles at 0.206 $$MPa$$. The silicone actuator also displays a slight initial bending angle because of the material's initial stiffness (Shore hardness $$28A$$). As seen in Fig. 6c, the fabric-based actuators demonstrate approximately a 1.71× higher payload at $$0.206MPa$$, meeting the distal force requirements for the task at a lower operating pressure. The silicone actuator needs to be pressurized to 0.275 $$MPa$$ to meet the desired tip force. In terms of weight, the silicone, woven FSPA and knit FRTA actuators weigh 37.5 $$g$$, 82.5 $$g$$, and 9.7 $$g$$, respectively (with pneumatic fittings). The additional weight of the woven non-stretch actuator is due to the pneumatic fittings on each actuator in the array. Therefore, the FRTA actuators show the highest force-to-weight ratio in comparison to the other actuators. A prototype of the assistive wearable glove made of the FRTAs is presented in Supplementary Fig. 2.
We further characterized these three actuators for their frequency response and efficiency, as seen in Supplementary Materials. For the frequency test, we noticed that the fiber-reinforced elastomeric actuator, the knit fabric-reinforced textile actuator, and the woven fabric FSPA had frequency responses of 2 $$Hz$$, 0.7 $$Hz$$, and 0.45 $$Hz$$, respectively. This is highlighted in Supplementary Fig. 4 and Supplementary Video 5. We also analyzed the external energy interactions of these actuators, based on83,84, as seen in Supplementary Figs. 5 and 6. From the overall efficiency tests, the elastomeric actuator, woven FSPA, and knit FRTA have maximum efficiencies of 0.785% at 0.05 kg, 0.287% at 0.1 kg, and 0.26% at 0.2 kg, as summarized in Supplementary Table 4.
## Discussion and Conclusion
In this paper, we explored combinations of various textiles to mechanically program actuators to perform different motion profiles while remaining lightweight, compliant, and safe. We introduced two main classes of versatile fabric-based soft pneumatic actuators: the woven non-stretch fabric actuators and the knit fabric-reinforced textile actuators. The woven fabric actuators use the interaction of multiple actuators arranged in different array fashions to create various motion profiles. The FRTAs, on the other hand, perform combinations of motions by utilizing the interaction of the woven fabric reinforcements along the length of the mechanically anisotropic, high-stretch knit fabric body. Both types of FSPAs demonstrated the potential to deliver significant blocked forces and displacements in comparison to conventional fiber-reinforced elastomeric actuators, without introducing any mechanical instability, while remaining highly wearable, lightweight, compliant, and safe. However, preliminary frequency testing showed that, due to the fabrics' pliability and thin-walled material properties, the fabric-based actuators have a lower maximum operable frequency than the fiber-reinforced elastomeric actuators. In the preliminary efficiency tests, the relatively thick-walled fiber-reinforced elastomeric actuators show a higher efficiency when less work is done, but all three actuators show similar efficiency at higher work.
To address the time-consuming manufacturing limitations often seen in SPAs, we presented rapid and low-cost 2D manufacturing methods to develop these FSPAs from commercially available fabrics. The external fabric reinforcements that create a meta-material frame can be designed accurately with varying geometrical parameters and precisely aligned around the anisotropic textile body of the FRTAs. The manufacturing method can be easily scaled and can produce even more complex geometries to benefit assistive and rehabilitative tasks.
We also comprehensively studied and mechanically characterized the fabrics used, generating non-linear constitutive material models for large deformations based on the HGO form79 from bi-directional stress-strain data that captures the mechanical anisotropy of each material. We implemented an extensive library of experimentally validated FEM models for FSPAs (4 woven and 3 knit FSPAs). These models can be used as design tools to vary the actuators' geometrical parameters and materials and to predict the mechanical response of the actuators to internal quasi-static and dynamic pressure, as well as to external contact. This benchmarks the design criteria for developing scalable and customizable FSPAs, based on the required articulation performance and desired payload, prior to fabrication.
We aim to add distributed, embedded fabric sensing technologies to monitor the articulation of the actuators and their interaction with users and the environment. Future work will also investigate the design of the actuators with user ergonomics in mind; key considerations will include the selection of attachment points on the body to distribute the load, along with various feedback/feedforward control strategies. The dynamic and time-dependent responses and the dynamic hysteresis of the actuators will need to be evaluated for various pressurization patterns. Future work will include more in-depth and comprehensive frequency and efficiency testing: for the frequency test, more variations of the duty cycle between pressurization and venting will be tested, and the overall frequency of the FSPAs can be improved by increasing the inlet size of the connectors to improve flow in and out of the actuator; for the efficiency test, the initial volume of the actuators will be accounted for, as will the efficiency of the actuators during dynamic motion. Future models will also allow users to evaluate and optimize the actuators based on efficiency and volume considerations that tie into on-board portability. Finally, future research will include analytical models of the non-linear behavior of the fabrics at large deformations, using the FEM models in this work as the baseline necessary for analytical characterization of these actuators. | |
# That $|a|\leq|b|$ implies existence of complex $z$ satisfying $|z-a|+|z+a|=2|b|$?
I'm looking at the equation $|z-a|+|z+a|=2|b|$. If there are complex values $z$ satisfying this equation, then $$2|b|=|z-a|+|z+a|=|a-z|+|z+a|\geq|(a-z)+(z+a)|=|2a|=2|a|$$ so $|a|\leq |b|$.
However, how is the converse true, that if $|a|\leq |b|$, then there is some complex $z$ such that $|z-a|+|z+a|=2|b|$? If such $z$ exists, is there a way to figure out the maximum and minimum values for $|z|$? Thank you.
Existence is easy. Suppose that $z=ra$, where $1\le r\in\mathbb{R}$; then $$|z-a|+|z+a|=(r-1)|a|+(r+1)|a|=2r|a|\;,$$ so just take $r=|b|/|a|$. (Of course if $a=0$, take $z=b$.) – Brian M. Scott Jan 18 '12 at 18:09
The converse is true. The equation $|z-a|+|z+a|=2|b|$ is the equation of an ellipse with foci at $\pm a$. The major axis will be on the line through the foci. The major radius will be $|b|$. The maximum modulus obtained by $z$ will be $|b|$ and it will occur when $z=\pm a|b|/|a|$.
In more detail: the equation describes the locus of points $z$ in the complex plane such that the distances from $z$ to the two points $a, -a$ add up to a constant, $2|b|$. This is one definition of an ellipse, with foci $a, -a$. As you proved, if $|b|<|a|$ there can't be any points like this, but if $|b|\geq |a|$ there definitely can. Imagine a string of length $2|b|$, with its endpoints at $a,-a$. Pull the string off to the side till it's tight, and you have found a point $z$ satisfying the given equation.
If $|b|=|a|$ exactly, the string is already tight and the locus of points is exactly the segment from $-a$ to $a$.
If $|b|>|a|$, then you get a true ellipse. Since it is centered at the origin (because $-a,a$ are symmetric with respect to the origin), the maximum modulus of a point on the ellipse occurs at the vertices. The vertices occur on the line through $-a,a$, thus they are real multiples of $a$. Also, being on the line with $a,-a$, their distance from the origin is their average distance from $-a,a$, which is always $2|b|/2=|b|$, since the sum of the distances must be $2|b|$. Thus the vertices are at distance $|b|$ from the origin, on the line through $-a,a$. Thus, normalize $\pm a$ in length: $\pm a/|a|$; then multiply by $|b|$: $z=\pm |b| \cdot a/|a|$ are the vertices.
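To also answer the minimum-modulus part of the question (this is just the standard minor-axis computation, added for completeness): the minimum of $|z|$ occurs at the co-vertices, on the minor axis. A co-vertex $z$ is equidistant from the two foci, so $2|b|=|z-a|+|z+a|=2\sqrt{|z|^2+|a|^2}$, which gives

$$\min|z|=\sqrt{|b|^2-|a|^2}\;,$$

attained at $z=\pm i\,\sqrt{|b|^2-|a|^2}\cdot a/|a|$. In the degenerate case $|b|=|a|$ this minimum is $0$, consistent with the locus collapsing to the segment from $-a$ to $a$, which passes through the origin.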
Ah nice, thank you. – goguman Jan 18 '12 at 18:37
No prob! Glad useful. – Ben Blum-Smith Jan 20 '12 at 22:56
Let's restrict $z$ to real numbers, and consider the left side as a function of $z$. Let's call it $f(z)$. $f$ is continuous.
$f(0)=2|a|$.
Applying the triangle inequality to $z-a$ and $z+a$ gives us $|z-a|+|z+a|\ge|2z|$, from which we see that $f(z)$ is unbounded above, so there must exist some $w$ such that $f(w)\ge2|b|$.
Since $f(0)=2|a|\le 2|b|\le f(w)$, by the intermediate value theorem there must be some $z$ in $\left[0,w\right]$ such that $f(z)=2|b|$.
|
# Draw Bezier Curve Python
A Bézier curve (/ˈbɛz.i.eɪ/ BEH-zee-ay) is a parametric curve used in computer graphics and related fields; other uses include the design of computer fonts and animation. It is defined by a set of control points $P_0,\dots,P_n$, where $n$ is called its order ($n = 1$ for linear, $n = 2$ for quadratic, $n = 3$ for cubic, and so on). With $n = 1$ you get a linear Bézier curve with two anchor points and no control points, so it essentially ends up being a straight line. Bézier curves use the Bernstein polynomial basis, and the curve can be represented mathematically as

$$\sum_{k=0}^{n} P_{k}\,B_{k}^{n}(t),$$

where the parameter $t$ runs from 0 to 1. A cubic curve is defined by four points: two endpoints through which the curve passes, and two control points that define the tangents the curve must touch at its endpoints; these four points control the shape of the curve. If you have ever used Photoshop you might have stumbled upon the tool called "Anchor", where you can put anchor points and draw curves with them: those are Bézier curves.

A point on the curve can be computed with de Casteljau's algorithm: take each pair of adjacent control points and draw a line between them, pick the point at parameter $t$ along each line, and repeat on the resulting points until a single point remains; that point lies on the curve (a sketch follows below). Repeated subdivision of the curve shrinks the arc-length interval, up to arbitrarily close precision. For a thorough demonstration of Bézier curves, how to calculate them, and one of the coolest interactive websites around, check out "A Primer on Bézier Curves" by Mike "Pomax" Kamermans.
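The following is a minimal, self-contained sketch of de Casteljau's algorithm in plain Python; it is not taken from any library mentioned on this page, and the function names are illustrative.

```python
def lerp(p, q, t):
    """Linear interpolation between 2D points p and q at parameter t in [0, 1]."""
    return ((1 - t) * p[0] + t * q[0], (1 - t) * p[1] + t * q[1])


def de_casteljau(points, t):
    """Evaluate a Bezier curve of arbitrary order at parameter t.

    Repeatedly interpolates between adjacent control points until a single
    point, which lies on the curve, remains.
    """
    while len(points) > 1:
        points = [lerp(p, q, t) for p, q in zip(points, points[1:])]
    return points[0]


# Cubic example: flatten the curve into 20 straight segments.
ctrl = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
polyline = [de_casteljau(ctrl, i / 20) for i in range(21)]
print(polyline[0], polyline[10], polyline[20])
```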
The quadratic Bézier curve, called with Q in SVG, is actually a simpler curve than the cubic one. It is defined by three control points $P_0$, $P_1$ and $P_2$: it requires one control point, which determines the slope of the curve at both the start point and the end point, and the curve passes through $P_0$ and $P_2$ and lies within the triangle $P_0 P_1 P_2$. The cubic curve adds a second control point, and its standard equation is

$$P_0(1-t)^3 + 3P_1 t(1-t)^2 + 3P_2 t^2(1-t) + P_3 t^3.$$

Using this definition you can draw a decent approximation of any Bézier curve just by plugging in values of $t$ and drawing straight lines between the points you get out (see the plotting sketch below). You need to adjust the step size as you go: select a step size that is too low and the result looks like line segments, while a step size that is too high slows things down; too dense a sampling wastes resources, too sparse a sampling gives a different curve. This threshold is often exposed as a "flatness" parameter: the smaller the flatness, the more line segments are used. Bézier curves may also be flattened to line segments due to the numerical instability of doing Bézier curve intersections directly.

In practice, paths are built from several curves. Each subsequent Bézier segment requires only three more points, because the begin point of the second Bézier curve is the same as the end point of the first, and so on; path-planning algorithms exploit this by generating piecewise-Bézier paths whose segments are joined smoothly with a $C^2$ constraint, which leads to continuous curvature along the path. A common recommendation in drawing fonts is to put Bézier curve points on extrema, not "halfway". Curve editors offer the usual operations on such paths: offset, simplify, delete and keep shape, split curve at a point, harmonize, etc. In GIMP, the Paths tool (replacing the old Bezier Selection tool) can be used in many creative ways, and in Inkscape the tools Flexi Draw, Flexi Edit and Flexi Grease are interactive tools that allow drawing and editing Bézier curves: activate Flexi Draw on the toolbar, click the left mouse button on the starting point of the curve, and as you drag, a curvature control extends from the start point. As a worked project, I developed a customizable picture frame as a testbench for this kind of drawing; you can use it as a stand-alone project to generate a picture frame with text (or without).
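Here is a small sketch of that flattening approach, evaluating the cubic formula above at evenly spaced $t$ values and plotting the resulting polyline with matplotlib (assumed installed); the control points are arbitrary.

```python
import matplotlib.pyplot as plt


def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate the cubic Bernstein form at parameter t."""
    s = 1 - t
    x = s**3 * p0[0] + 3 * s**2 * t * p1[0] + 3 * s * t**2 * p2[0] + t**3 * p3[0]
    y = s**3 * p0[1] + 3 * s**2 * t * p1[1] + 3 * s * t**2 * p2[1] + t**3 * p3[1]
    return x, y


p0, p1, p2, p3 = (0, 0), (1, 3), (3, 3), (4, 0)
steps = 50  # too low looks like line segments, too high wastes resources
pts = [cubic_bezier(p0, p1, p2, p3, i / steps) for i in range(steps + 1)]
plt.plot([x for x, _ in pts], [y for _, y in pts])
plt.plot(*zip(p0, p1, p2, p3), "ro--", label="control polygon")
plt.legend()
plt.show()
```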
On an HTML5 canvas, a Bézier curve is drawn with the path API. You first have to move to the starting (context) point, as you do for a quadratic curve, and then specify the control points and the end point in bezierCurveTo(cp1X, cp1Y, cp2X, cp2Y, epX, epY); here cp1 and cp2 are the control points and ep is the end point, and the x and y parameters are the coordinates of the end point. The quadratic variant, quadraticCurveTo(), takes a single control point. For example, the first control point might be at 100,100 and the end point of the curve at 400,200. In Pycairo, the corresponding call is curve_to(x1, y1, x2, y2, x3, y3), which adds a cubic Bézier spline to the current path. ImageMagick can likewise draw text, lines, polygons, ellipses and Bézier curves, along with translating, flipping, rotating, scaling and shearing images; its functionality is typically utilized from the command line, or from programs written in your favorite language. Before rasterizing, a curve is usually tessellated into line segments; some APIs expose a tessellate() function that takes optional stages of recursion and an angle tolerance.

Two classic constructions come up repeatedly. First, the water wave: if we draw a static water wave, we can use a cubic Bézier curve with two control points, one up and one down, which draws a curve similar to the wave. Under animation this is not good enough: if the curve drawn at 1/4 and 1/2 is not symmetrical, the motion will feel twisted. Second, the circle: a cubic Bézier curve can approximate a circular arc very well up to a quarter circle, so four connected Bézier curves can define a whole circle. Create two horizontal/vertical tangents; the best approximation to a circle is when the distance of the tangent handle to the anchor is radius*kappa, where kappa is the constant 4*(math.sqrt(2) - 1)/3 (a sketch follows below). Related to chaining curves are the SVG S and T path commands for smooth cubic and smooth quadratic Bézier curves; when smoothing is disabled, a smooth "S" curve is simply changed into the equivalent "C" curve.
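To make the kappa construction concrete, here is a small Pycairo sketch (assuming the pycairo package is installed) that strokes a quarter circle as a single cubic Bézier segment; the output file name is arbitrary.

```python
import math
import cairo

KAPPA = 4 * (math.sqrt(2) - 1) / 3  # handle-length factor for a 90-degree arc

surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, 220, 220)
ctx = cairo.Context(surface)

cx, cy, r = 110, 110, 100  # center and radius of the circle being approximated
ctx.move_to(cx + r, cy)    # start at the 3 o'clock point
# One cubic segment from (cx + r, cy) to (cx, cy + r); the handles lie on the
# vertical/horizontal tangents, at distance r * KAPPA from each anchor.
ctx.curve_to(cx + r, cy + r * KAPPA,
             cx + r * KAPPA, cy + r,
             cx, cy + r)
ctx.set_line_width(2)
ctx.stroke()
surface.write_to_png("quarter_circle.png")
```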
For longer shapes there are splines: a spline is a sequence of curve segments, each defined by degree + 1 control points, with each segment defining an overlapping portion along the spline; the overlapping sub-curves are trimmed and tied together at uniform intervals, fittingly called "knots". To make a sequence of individual Bézier curves into a spline, the Bézier control points are calculated so that the spline has two continuous derivatives at the knot points. Spline curves are linear functions of their controls: moving a control point two inches to the right moves $x(t)$ twice as far as moving it by one inch, and $x(t)$, for fixed $t$, is a linear combination (weighted sum) of the controls' $x$ coordinates; this means 1D, 2D, 3D, ... curves are all really the same. With a NURBS curve and a uniform knot vector, you find that the curve doesn't go near the control points you specify at each end; to clamp the curve so that it is tangent to the first and last legs at the first and last control points, respectively, as a Bézier curve does, the first knot and the last knot must be of multiplicity $p+1$.

In Python, the bezier package can be installed with pip: `python -m pip install --upgrade bezier` (or, for a specific interpreter, `python3.8 -m pip install --upgrade bezier`); optional dependencies such as SymPy are installed with `python -m pip install --upgrade bezier[full]`. A curve built from a nodes array can then be rendered to a file, e.g. with `render(filename="bezier-curve3d.pdf", plot=False)`; this functionality strongly depends on the plotting library used. Matplotlib can also draw Bézier paths natively: its PathPatch object can be used to create a Bézier polycurve path patch (a sketch follows below). Skencil is a free interactive vector drawing application developed completely in Python and comparable to the commercial CorelDRAW; known to run on GNU/Linux and other UNIX-compatible systems, it is a flexible and powerful tool for illustrations, diagrams and other purposes, and supports drawing primitives like rectangles, ellipses, Bézier curves, bitmap and Encapsulated PostScript images, and text.

In Blender, press SHIFT + A → Curve → Bezier to create a new curve, then TAB to enter Edit mode; switch to top view with NUM7 for a clearer look. Use RMB to select waypoints or control points, G to move them, and E to 'extrude' from the last selected waypoint; press Enter to complete the curve. Curve ‣ Toggle Cyclic closes the curve, and curve objects that are made of multiple distinct curves can be separated into their own objects by selecting the desired segments and pressing P. Each point has an in tangent and an out tangent associated with it that define how the curve enters and leaves the point. A curve can also control the bevel shape of another curve: add a Bézier curve to your scene and type its name into the BevOb field of the curve to be beveled. Meshes can be created from curves: an edge-loop mesh can be created from a singular curve, and the underlying CurNurb(s) can be accessed with the [] operator, which returns an object of type CurNurb. If the list of vectors has only one Vector, then the curve will use one predefined zero vector (Vector((0,0,0))) and draw a straight line from 0,0,0 to your single Vector.
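The matplotlib gallery example is not reproduced verbatim above; the following is a minimal sketch of the same idea, drawing one cubic segment as a PathPatch (the control points are arbitrary).

```python
import matplotlib.path as mpath
import matplotlib.patches as mpatches
import matplotlib.pyplot as plt

Path = mpath.Path

# One MOVETO followed by three CURVE4 points describes a single cubic segment.
path = Path([(0, 0), (1, 2), (3, 2), (4, 0)],
            [Path.MOVETO, Path.CURVE4, Path.CURVE4, Path.CURVE4])
patch = mpatches.PathPatch(path, facecolor="none", edgecolor="tab:blue", lw=2)

fig, ax = plt.subplots()
ax.add_patch(patch)
ax.set_xlim(-0.5, 4.5)
ax.set_ylim(-0.5, 2.5)
plt.show()
```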
In Processing, the curve() function is an implementation of Catmull-Rom splines: the first and last parameters are control points and the middle parameters specify the start and stop of the curve. Longer curves can be created by putting a series of curve() functions together or by using curveVertex(), and an additional function called curveTightness() provides control over the visual quality of the curve. Bézier curves are drawn with bezier(x1, y1, x2, y2, x3, y3, x4, y4), or in 3D with the twelve-parameter form bezier(x1, y1, z1, x2, y2, z2, x3, y3, z3, x4, y4, z4). Pen-style drawing APIs expose the same primitives as methods: moveTo(xy) moves to a point, lineTo(xy) draws a line to a point, curveTo(xy1, xy2, xy3) draws a cubic Bézier through two control points to an end point (some variants accept an arbitrary number of control points), qCurveTo(*points) draws a quadratic curve with a given set of off-curve points to an on-curve point, and slopeAtPercent(t) returns the slope of the path at the percentage t.

A few more practical notes. The arc length of a Bézier curve is defined by a radical integral, which has a closed form only for 2nd-degree polynomials; for cubics it is not guaranteed to have a closed solution, so it is computed numerically (see the sketch after this section). The curve always lies within the convex hull of its control points, the smallest convex region containing the curve, which gives a quick bounding box, though this may not be the exact bounding box. Fonts use both quadratic and cubic Bézier curves; in FontLab, you can convert between the two types. In ArcGIS, the Bezier interpolation method (BEZIER_INTERPOLATION in Python) smooths lines without using a tolerance by creating Bézier curves to match the input lines, while the Polynomial Approximation with Exponential Kernel method (PAEK in Python) smooths lines based on a smoothing tolerance; if the output is a shapefile, the Bézier curves will be approximated, since true Bézier curves cannot be stored in shapefiles.

In Rhino/Grasshopper Python, a degree-3 Bézier curve can be built from repeated linear interpolation, starting from a helper like the one in the original snippet:

```python
import rhinoscriptsyntax as rs

## Degree 3 Bezier curve: linear interpolation between two points
def linterp(p1, p2, t):
    tx = p1[0] + (p2[0] - p1[0]) * t
    ty = p1[1] + (p2[1] - p1[1]) * t
    return (tx, ty, 0)
```

Bézier curves also make a nice design exercise: design one side (not more!) of a jigsaw piece by combining multiple cubic Bézier curves, then mirror the original side design to give the pieces a variety of "inny" and "outy" sides.
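Since that arc-length integral has no closed form for cubics, a numerical approximation is the usual route. Below is a minimal plain-Python sketch (the function names are my own, not from any library above): flatten the curve with small steps and sum the chord lengths.

```python
import math


def cubic_point(p0, p1, p2, p3, t):
    """Cubic Bernstein evaluation at t, for 2D points."""
    s = 1 - t
    return tuple(s**3 * a + 3 * s**2 * t * b + 3 * s * t**2 * c + t**3 * d
                 for a, b, c, d in zip(p0, p1, p2, p3))


def arc_length(p0, p1, p2, p3, steps=1000):
    """Approximate arc length by summing the lengths of small chords."""
    total = 0.0
    prev = cubic_point(p0, p1, p2, p3, 0.0)
    for i in range(1, steps + 1):
        cur = cubic_point(p0, p1, p2, p3, i / steps)
        total += math.dist(prev, cur)
        prev = cur
    return total


print(arc_length((0, 0), (1, 2), (3, 2), (4, 0)))
```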
| |
# The Book Thief
Why did Hans risk frightening Liesel terribly?
##### Answers 1
I'm unsure exactly which part of the book you are referring to. I think it might be where Hans tells Liesel to be quiet so as not to betray Max hiding in their basement. | |
# Papers' abstracts for Robert Krauthgamer
## Stochastic Selection Problems with Testing
Chen Attias, Robert Krauthgamer, Retsef Levi and Yaron Shaposhnik.
We study the problem of a decision-maker having to select one of many competing alternatives (e.g., choosing between projects, designs, or suppliers) whose future revenues are a priori unknown and modeled as random variables with known probability distributions. The decision-maker can pay to test each alternative to reveal its specific revenue realization (e.g., by conducting market research), and her goal is to maximize the expected revenue of the selected alternative minus the testing costs. This model captures an interesting trade-off between gaining the revenue of a high-yield alternative and spending resources to reduce the uncertainty in selecting it. The combinatorial nature of the problem leads to a dynamic programming (DP) formulation with a high-dimensional state space that is computationally intractable. By characterizing the structure of the optimal policy, we derive efficient optimal and near-optimal policies that are simple and easy to compute. In fact, these policies are also myopic -- they only consider a limited horizon of one test. Moreover, our policies can be described using intuitive 'testing intervals' around the expected revenue of each alternative, and in many cases, the dynamics of an optimal policy can be explained by the interaction between the testing intervals of various alternatives.
## Revisiting the Set Cover Conjecture
In the Set Cover problem, the input is a ground set of n elements and a collection of m sets, and the goal is to find the smallest sub-collection of sets whose union is the entire ground set. In spite of extensive effort, the fastest algorithm known for the general case runs in time $O(mn 2^n)$ [Fomin et al., WG 2004]. In 2012, as progress seemed to halt, Cygan et al. [TALG 2016] put forth the Set Cover Conjecture (SeCoCo), which asserts that for every fixed $\epsilon>0$, no algorithm with runtime $2^{(1-\epsilon)n} poly(m)$ can solve Set Cover, even if the input sets are of arbitrarily large constant size. We propose a weaker conjecture, which we call Log-SeCoCo, that is similar to SeCoCo but allows input sets of size O(log n).
To support Log-SeCoCo, we show that its failure implies an algorithm that is faster than currently known for the famous Directed Hamiltonicity problem. Even though Directed Hamiltonicity has been studied extensively for over half a century, no algorithm significantly faster than $2^n poly(n)$ is known for it. In fact, we show a fine-grained reduction to Log-SeCoCo from a generalization of Directed Hamiltonicity, known as the nTree problem, which too can be solved in time $2^n poly(n)$ [Koutis and Williams, TALG 2016]. We further show an equivalence between solving the parameterized versions of Set Cover and of nTree significantly faster than their current known runtime. Finally, we show that even moderate runtime improvements for Set Cover with bounded-size sets would imply new algorithms for nTree and for Directed Hamiltonicity.
Our technical contribution is to reinforce Log-SeCoCo (and arguably SeCoCo) by reductions from other famous problems with known algorithmic barriers, and we hope it will lead to more results in this vein, particularly reinforcing the Strong Exponential-Time Hypothesis (SETH) by reductions from other well-known problems.
# Totally bounded set
Let X be a metric space. A set ${\displaystyle A\subset X}$ is totally bounded if for any real number r>0 there exists a finite number n(r) (that depends on the value of r) of open balls of radius r, ${\displaystyle B_{r}(x_{1}),\ldots ,B_{r}(x_{n(r)})\,}$, with ${\displaystyle x_{1},\ldots ,x_{n(r)}\in X}$, such that ${\displaystyle A\subseteq \cup _{k=1}^{n(r)}B_{r}(x_{k})}$. | |
# Is it OK to answer a question with a higher level of mathematics than I expect the OP to know?
I saw this post, and I am pretty sure that it comes from a first-year student's setting.
I wanted to answer this question using Galois theory (though it is overkill, I think it is more elegant since there are few calculations), but I guess the OP doesn't know it.
Can I still answer the question using the more advanced Galois theory?
• Yes! It may be very helpful to the future reader. It might be nice to add a short explanation to the OP, though. – Lord_Farin Oct 25 '13 at 14:55
• The great thing about stackexchange is that the community will decide if the answer is a good one or not using the voting system (or if it's inappropriate using the flagging system). – Dan Rust Oct 25 '13 at 15:26
• Dear @DanielRust: I am rather skeptical on this assertion... It depends on what you call a good answer and on much random reasons. – Cantlog Oct 25 '13 at 16:55
• Always possible that someone else will provide an answer at a lower level before or after you post. – Will Jagy Oct 25 '13 at 19:41
• How about two answers. One gets accepted, one gets up-votes, and you get lots of reputation. – PyRulez Oct 27 '13 at 20:04
• An answer is an answer. – copper.hat Oct 27 '13 at 23:19
• There are 4 answers, all offering some degree of support, so I'm surprised you haven't posted your Galois theory answer yet. – Peter Taylor Nov 2 '13 at 8:54
• @PeterTaylor - I started working full time and I come home only during the weekends so I had little time to write a full answer – Belgi Nov 2 '13 at 9:21
Unless there's something explicitly in the question prohibiting higher math, I don't see anything wrong with answering a question using more advanced mathematics.
That being said, if you do, understand that your answer is probably not going to be the most useful one for the question asker. It could, however, be useful for someone further down the road who stumbles across the page.
Caveat: I could see some people downvoting such an answer if it comes across too much like "look at all this math I know" and not enough like "look at all this math that I know will help you." Thus, if you do post something that might be a little too high-level, bear that in mind.
• People are going to arbitrarily vote anyways, so I wouldn't worry too much about the reception. One of my favorite high-profile users used to post the most opaque answers, often with little-to-no English explanation, which I have only begun to appreciate as my knowledge has increased. – The Chaz 2.0 Oct 25 '13 at 20:04
• @TheChaz2.0: If we’re thinking of the same ones (with liberal use of \rm), those answers used to drive me nuts, because he almost never gave any indication that they were on the sophisticated side. – Brian M. Scott Oct 26 '13 at 8:36
• Can I ask who we're talking about? – Bennett Gardiner Oct 26 '13 at 14:05
• The user known as "Gone" (@Bennett). If you've seen color-coded number theory answers, or anything about "telescopy" in an induction answer, that's Gone! – The Chaz 2.0 Oct 26 '13 at 14:19
• Gone used to go by the handle of Bill something. After a moderator spat he was hit with a long ban and changed his name to Gone since he says he has no intention of returning. – R R Nov 3 '13 at 9:21
A Q&A website has essentially two purposes: A) Help the OP, by answering the OP's question in a way that will be useful to the OP and B) Offer a service to the current and future community, by answering the question in a way that may be of value to the community in general.
Purpose B permits any level of mathematics while purpose A requires a level of mathematics that can be handled by the OP.
If purpose A is not satisfied, then the OP and their question are turned into just an excuse for posting knowledge on the web (and I don't believe that .SE communities view OPs in this way).
If purpose B is not satisfied, then the answer has not reached the "ideal".
From the above it seems clear that the answer should always include a part at a mathematics level that seems understandable to the OP (except of course for cases where it appears evident that the OP really knows nothing of the concepts included in the question, in which case the most helpful answer would perhaps be to point out the concepts that the OP should study first): Purpose A is satisfied.
If the same answer can be provided in a more general, elegant, insightful etc way using higher-level mathematics, then the answerer could/should also include this alternative answer, in order to also serve Purpose B above.
If the question posed cannot be answered at a mathematics level that suits the OP, then at least Purpose B should be served, and the answerer should go ahead with the only available alternative, i.e. an answer at a higher math level, so at least the community can benefit from it (Purpose B). But here the situation should be explained to the OP, so that the OP understands that it is not out of a desire to show off that (s)he gets an answer that (s)he probably cannot use.
I suppose that for the question at hand, an answer using Galois theory is welcome and helpful (to the general audience), but should (and by experience quickly will) be accompanied by an alternative, fully elementary answer. The OP will surely pick the elementary answer for their concrete situation, but maybe also get a glimpse into higher levels and maybe even get an idea of how the two answers are in fact related, thus encouraging their mathematical interest.
I think it is even more ok to add "theory-laden" answers in many cases (though maybe with not as much of an overkill factor as in Galois theory vs. indirect proof with fraction and prime factor manipulations), such as not using bases, matrices, assumptions about finite dimension, assumptions about ground field characteristic when answering questions about vector spaces and linear maps that (unnecessarily) make use of them.
I think this is a delicate point.
My view on the topic is that the answers should first address the OP, then address the rest of the world. So one can do one of two things, really.
1. Write an elementary answer, and add a second answer/second part with an advanced approach using better tools.
2. Wait until one (extensive) or more answers are posted to the satisfaction of the OP, and then write an answer with the disclaimer "Now that an elementary answer has been given ..."
There is a problem with that: the concept of "dangerous knowledge". When studying mathematics properly, you don't get all the information dumped on you, for you to sort out. You sit in classes, or read books, and the information is structured so you can understand it better. Some topics require more maturity than others. That much is a fact of life.
When someone who has only had one semester of mathematics is introduced to Gödel's incompleteness theorems, they are unlikely to fully grasp the theorems properly. The result would be a "But I heard there are no complete and consistent theories" sort of reply later on. Similarly, introducing cardinals at the wrong time and place would end up with people thinking that they can apply calculus-based tools to compute things about infinite cardinals.
These mistakes result from someone reading material which is meant for readers with a stronger base. I know because I'd done a whole lot of these things when I was a freshman, and I was just lucky that this site (and MO) weren't around during my undergrad (I joined MO on the very last day of my undergrad). Otherwise I would have never learned set theory properly.
So while advanced answers can be useful, it might be good to try and gauge the OP first and see if the answer won't be harmful to them. With time, one gains the experience for better estimating when an advanced answer is appropriate, and when it's not. (For example, if the question is asked by a high school student, an advanced answer should be carefully constructed and include some exposition of the topic; otherwise it is going to be either completely useless and frustrating to the student, or become "dangerous knowledge".)
• Speaking of such, Asaf, I want your recommended reading list for someone (myself) who feels about ready to jump from elementary-set-theory to set-theory. I tried Smullyan and Fitting, but their approach is unorthodox (making it hard to discuss with others) and their book, even revised, has a somewhat significant number of substantial errors. The approach they choose for the "elementary" topic of ordinals is a bit bizarre. – dfeuer Nov 3 '13 at 23:20
• In what sort of context do you mean that? Are you looking for a read for learning modern set theory? The canonical reference book is Jech. If you want a specific topic, like large cardinals then Kanamori should be about right. – Asaf Karagila Nov 3 '13 at 23:54
• Which Jech? And I was looking for the book for learning. Is Jech that, or just a reference? – dfeuer Nov 4 '13 at 0:47
• Set Theory, the 3rd Millennium edition. It's excellent for both. I am not a huge fan of his approach to forcing, though. I haven't read any other book about forcing, though, so I can't give a better reference. – Asaf Karagila Nov 4 '13 at 0:51
I think it is important to offer both. Give the advanced theory with hopes that they can figure it out (or if anyone else stumbles upon it), but offer the less advanced one too in case they cannot figure it out.
I appreciate getting more advanced information that will perhaps put me ahead of the competition at school, but at the same time I don't want to be left hanging if I cannot figure out a question.
Either way, the voting system will take care of itself. | |
2109.11311 | \section{Experiments}
\label{sec:exp}
\begin{table*}[h]
\begin{center}
\resizebox{1\textwidth}{!}{%
\begin{tabular}{c|cc|cccccccccccc}
\hline
& OA & mIoU & Ground & Wall & Ceiling & Sign & Barrier & Box & Fence & Platform & Door & Mur. Light & Elec. Box & Extin.\\
\hline
SPG \cite{landrieu2018largescale} & 96.43 & 67.99 & 96.62 & 84.78 & 96.73 & \textbf{87.46} & 79.10 & 72.17 & \textbf{94.32} & 80.28 & 19.70 & 64.79 & 15.94 & 24.02\\
Ours & \textbf{97.22} & \textbf{76.01} & \textbf{97.88} & \textbf{85.72} & \textbf{96.93} & 84.19 & \textbf{93.68} & \textbf{75.95} & 94.00 & \textbf{88.17} & \textbf{40.73} & \textbf{65.97} & \textbf{40.66} & \textbf{48.26}\\
\hline
\hline
Ours\ (init) & 97.09 & 87.14 & 98.09 & 85.51 & 96.04 & 81.55 & 90.42 & 65.78 & 93.58 & 86.18 & $n/a$ & $n/a$ & $n/a$ & $n/a$\\
\hline
\end{tabular}
}
\end{center}
\caption{Quantitative comparison on the LPA dataset between SPG \cite{landrieu2018largescale} and our deep learning pipeline in combination with SPG. OA is the overall accuracy, the intersection over union is split per class, and mIoU refers to the average of the latter. $init$ refers to the initial segmentation in our pipeline.}
\label{tab:res}
\end{table*}
\subsection{Presentation of the LPA dataset}
The LPA dataset is a set of point clouds of an underground car park, which contains 23 clouds from 4 parking floors for a total of 127Mi points.
This dataset is extremely dense and precise; however, it contains a lot of noise and some minor misalignment issues.
These properties are explained by its direct industrial origin, with only minimal cleaning preprocessing.
All available classes, as well as their association with the high or low resolution type, are presented in Figure~\ref{fig:results}.
As the dataset contains 4 floors, the evaluation will be a 4-fold cross-validation.
\subsection{Evaluation}
A comparison of results between the original SPG method and our deep learning pipeline in combination with SPG is presented in Table~\ref{tab:res}.
Qualitative results of our approach are shown in Figure~\ref{fig:results}.
Our method has been evaluated on the full resolution LPA dataset.
For the original SPG, evaluation at full resolution requires heavy preprocessing that can take several dozen hours per cloud, together with huge memory consumption.
Therefore, we sub-sampled the clouds before classification, the same strategy used by the original SPG method \cite{landrieu2018largescale}.
The results are then projected using a voxel based projection on the full resolution cloud to obtain comparable results.
We can see that small scale and/or detailed objects like doors, electrical boxes or extinguishers are the hardest classes to retrieve.
This is explained by their particularly detailed geometry and the severe point-count imbalance they suffer from.
However, fine-grained details brought by our approach lead to substantial improvements in the segmentation of these classes.
The segmentation of large scale objects like ground or walls demonstrates good results for the original SPG as well as for our approach.
This is the expected behaviour, as the segmentation resolution for these classes is identical for both approaches.
We can still observe some minor improvements, especially for classes like barriers or platforms.
They are made possible by the merging of high resolution classes into a single low resolution class.
As the most difficult classes are merged into walls, it limits potential segmentation errors that can confuse contextual information, and thus mislead the method even more.
In Table~\ref{tab:res}, $init$ denotes the initial segmentation of our pipeline, using only low resolution classes including the concatenated class.
We can see differences between our final results and the initial segmentation results, even for low resolution classes.
These differences are induced by the projection of the initial segmentation results into a high resolution cloud, in order to compute the final full resolution results.
Indeed, as the point cloud density is irregular, this operation affects the per-class scores.
This is especially true for dense classes, in which a single point correctly classified in the low resolution cloud can represent many more points in the high resolution one.
The exact same projection is used to compute the original SPG results.
We perform a segmentation at low resolution using raw SPG, and then project those results on the high resolution cloud using a voxel based projection, to obtain comparable results.
\begin{figure*}[htb]
\centering
\includegraphics[width=1\linewidth]{figures/result_schema_no_alpha.png}
\caption{\label{fig:results} Qualitative results on 3 different parts of the LPA dataset, from left to right. From top to bottom: original point cloud, ground truth prediction, predicted classes using our approach.}
\end{figure*}
\section{Introduction}
Recent development of 3D acquisition technologies presents several new challenges to the semantic segmentation of 3D point clouds.
In addition to being unstructured, unordered and irregularly sampled, point clouds can now contain very large scale scenes which induce higher computational and memory cost.
Moreover, 3D point cloud semantic segmentation requires the understanding of both large scale geometric structure and detailed geometry of the scene.
Obtaining these two elements is even harder with the substantial scale difference brought by large scale point clouds.
Few works have succeeded in processing these massive point clouds.
Among them, RandLA-Net \cite{randlanet2020hu} can process up to 1 million points in a single pass with a smart use of random sampling.
SPG \cite{landrieu2018largescale} uses a superpoint graph as an intermediate structure to learn from clouds with several million points.
Flex-Convolution \cite{flexconvolution2020groh} proposes a new convolution kernel designed to benefit from GPU acceleration, which allows very large point cloud processing.
All of these approaches have been tested on publicly available datasets like semantic3D \cite{semantic3D2017hackel} or S3DIS \cite{S3DIS2017armeni}.
These datasets contain scenes with approximately 0.1 and 0.2 million points per m$^2$, respectively.
Our work will focus on extremely dense point clouds with up to 1 million points per m$^2$, see figure \ref{fig:res}.
These clouds are provided by the LPA dataset, see section \ref{sec:exp}, a set of 23 labeled 3D point clouds from an underground car park.
It contains large scale objects, like ground or walls, as well as small scale objects, like electric boxes or extinguishers.
High density point clouds make it possible to capture much finer details.
However, to exploit the latter, a segmentation needs to operate at full resolution, which induces high memory and computational costs.
Although details are useful to segment detailed objects, they can become problematic for large scale objects due to noise or small geometric artefacts that can alter their local geometry.
To tackle these issues we propose a new generic deep learning pipeline which adapts the cloud resolution according to the suitable level of details for the segmentation of each object.
This approach exploits the full cloud precision but only for objects that require details, which allows its usage even on large scale point clouds.
To do so, we split up the segmentation into multiple sub-networks which operate on different resolutions, each with its own specific objects to segment.
Although this approach can be used with any deep learning framework, we used it in combination with the Super-Point graph framework \cite{landrieu2018largescale} for our experiment.
\begin{figure}[htb]
\centering
\includegraphics[width=.49\linewidth]{figures/full_res.png}
\includegraphics[width=.49\linewidth]{figures/low_res.png}
\caption{\label{fig:res} Comparison of different densities of a cloud from LPA dataset. Left: original LPA density, 1 million points per m$^2$. Right: density of the S3DIS dataset, 0.2 million points per m$^2$.}
\end{figure}
\section{Method}
Our method proposes a generic deep learning pipeline to exploit the full cloud precision only when details are useful to the segmentation.
To do so, we split up the process into multiple sub-networks which operate on different cloud resolutions, each with its own optimized learning parameters.
\subsection{High and low resolution classes}
Low resolution classes are associated with objects that do not need fine details analysis to be segmented.
They generally include large-scale objects like walls, ground or ceiling.
For such classes, details can even bring noise and small unwanted geometry artefacts that can alter their local geometry and thus mislead the network.
On the other hand, high resolution classes are associated with detailed objects that can benefit from the precision of a full resolution point cloud.
They generally include small-scale objects such as electrical boxes or mural lights from the LPA dataset.
However, it is important to point out that the size alone is not sufficient to determine the class resolution.
The details of the local geometry should always be considered.
For example, objects like doors can be seen as large scale objects.
However, they contain fine geometry like handles or frames that help a lot to dissociate them from a wall.
Thus, as they benefit from details, doors are considered as a high resolution class.
Another exception is signs from the LPA dataset: they do not benefit from details because their circular shape is sufficient to dissociate them from the ceiling, see Figure~\ref{fig:results}.
They are therefore considered as a low resolution class despite their small scale.
\subsection{Multi resolution segmentation}
\label{sec:merge}
To ensure the most discriminating geometry possible, we propose to classify each class at its suitable resolution.
However, in order to adapt the resolution we need to know whether each point belongs to a high or low resolution class, and an unlabelled cloud does not contain such information.
To overcome this issue, we propose to perform a first segmentation with a different set of classes.
This new set of classes is constructed such that all high resolution classes are merged into existing low resolution classes, referred to as the concatenated classes; see Figure~\ref{fig:pipeline}.
Thus we obtain a low resolution cloud populated with low resolution classes only, which are suitable conditions for a segmentation of a low resolution cloud.
Low resolution classes to be merged are chosen according to their adjacency in the scene with the high resolution classes.
As an example, doors and mural lights can be merged into walls, because of their close positioning.
This first low resolution segmentation, with only low resolution classes and the concatenated classes, will then be referred to as the initial segmentation.
To retrieve high resolution classes, we simply perform a second segmentation on all points classified as concatenated classes.
As high resolution classes benefit from details, this step is performed at full resolution.
The memory cost is greatly reduced because all large-scale structural objects belonging to low resolution classes are excluded from this step.
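For illustration only, the class merging that defines the initial segmentation can be sketched as a simple label remapping (a minimal Python sketch; the class names and the \texttt{merge\_map} dictionary are hypothetical and do not correspond to our actual implementation):
\begin{verbatim}
# Hypothetical remapping for the initial (low resolution) segmentation:
# every high resolution class is merged into an adjacent low resolution
# class, yielding the concatenated classes.
merge_map = {
    "door": "wall",         # doors are adjacent to walls
    "mural_light": "wall",  # so are mural lights
    "wall": "wall",         # low resolution classes map to themselves
    "ground": "ground",
}

def to_initial_labels(labels):
    # Project the full label set onto the reduced set used by the first network.
    return [merge_map[c] for c in labels]
\end{verbatim}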
\subsection{Final results computation}
The final result clouds are computed by projecting both low and high resolution segmentation results on the original full resolution clouds.
The segmented low resolution clouds are projected on the high resolution clouds using a voxel based projection.
For each voxel in the high resolution cloud, we label all its points with the label of the unique point contained in the corresponding voxel of the low resolution cloud.
The voxel size is the same as the one used to subsample the low resolution cloud.
Finally the segmented high resolution clouds are directly projected on the original high resolution clouds using a closest point projection.
This operation is necessary as the segmented high resolution clouds have missing parts, since they do not contain points associated with low resolution classes; see Figure~\ref{fig:pipeline}.
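For concreteness, the voxel based projection described above can be sketched as follows (Python with numpy; variable names are illustrative and this is not our production code, and it assumes every populated voxel of the high resolution cloud has a counterpart in the low resolution cloud):
\begin{verbatim}
import numpy as np

def voxel_project(low_xyz, low_labels, high_xyz, voxel_size):
    # One point per voxel in the low resolution cloud: key its label
    # by the integer voxel index.
    low_keys = {tuple(k): l for k, l in
                zip(np.floor(low_xyz / voxel_size).astype(int), low_labels)}
    # Label every high resolution point with the label of the unique
    # low resolution point in the corresponding voxel.
    high_keys = np.floor(high_xyz / voxel_size).astype(int)
    return np.array([low_keys[tuple(k)] for k in high_keys])
\end{verbatim}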
\begin{figure}[htb]
\centering
\includegraphics[width=1.\linewidth]{figures/schemaV6.pdf}
\caption{\label{fig:pipeline}The proposed multi resolution deep learning pipeline applied to a made-up dataset with 4 classes. In this example $wall'$ is the concatenated class of $wall$ and $board$.}
\end{figure}
\section{Superpoint graph}
\label{sec:SPG}
Although our proposed pipeline can be used with any deep learning framework, we used it in combination with the Super-Point graph framework \cite{landrieu2018largescale}.
This section is a brief reminder of that work.
\subsection{Geometric partition}
The first step is a weakly supervised over-segmentation of the input cloud into geometrically simple point clusters.
These clusters are called superpoints.
Points of each superpoint have homogeneous geometric features; it is therefore assumed that they belong to the same object, without yet making any assumption about its classification.
To better describe the local geometry of each point, 4 features proposed by \cite{guinard_weakly_2017} are chosen: linearity, planarity, scattering and verticality.
These features are computed for each point from eigenvalues of the covariance matrix of their respective neighbors.
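For concreteness, a minimal sketch of these pointwise features is given below (Python with numpy; we assume the standard eigenvalue based definitions with $\lambda_1 \geq \lambda_2 \geq \lambda_3$ and a common proxy for verticality, so the exact formulas of \cite{guinard_weakly_2017} may differ in detail):
\begin{verbatim}
import numpy as np

def geometric_features(neighbors):
    # neighbors: (k, 3) array containing a point's k nearest neighbors.
    w, v = np.linalg.eigh(np.cov(neighbors.T))  # ascending eigenvalues
    l3, l2, l1 = w                              # so that l1 >= l2 >= l3
    linearity  = (l1 - l2) / l1
    planarity  = (l2 - l3) / l1
    scattering = l3 / l1
    # Proxy: a vertical wall has a horizontal normal (smallest eigenvector).
    verticality = 1.0 - abs(v[2, 0])
    return linearity, planarity, scattering, verticality
\end{verbatim}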
Superpoints are then modeled using an adjacency graph, as the piecewise constant approximation of a global energy problem.
An approximation of this problem solution is computed using the $l_0$-cut pursuit algorithm proposed by \cite{landrieu:hal-01306779}.
To retrieve entire objects, the relationship between superpoints is modeled by a superpoint graph, in which each node is a superpoint, and edges represent their adjacency relationship.
Each edge has a set of features to bring more information about the relationship, like the centroid offset or surface ratio.
\subsection{Classification}
First, a set of descriptors is computed for each superpoint according to its global shape, by a PointNet \cite{qi2017pointnet} network.
Points are rescaled to the unit sphere before their embedding, in order to learn from the superpoint shape and not from its spatial distribution.
However, to stay covariant with the superpoint size, the original metric diameter is concatenated to the final descriptors.
Then to take adjacency between superpoints into account, a contextual classification is performed.
It uses both descriptors previously computed and information from the superpoint graph in a Gated Graph Neural Network (GGNN) \cite{li2017gated}.
Each superpoint is embedded in a GRU initialized with the previously computed PointNet descriptors.
To take edge features into account, the idea of the convolution-like ECC operation \cite{simonovsky2017dynamic} is used over the superpoint graph.
\section{Conclusion}
In this paper, we presented a new deep learning pipeline to exploit fine-grained details from dense large scale 3D point clouds.
We showed that these details are important to segment certain objects, and introduced new ideas like adaptive density or class merging to process such details in large scale scene scenarios.
This approach leads to better semantic segmentation results of our dataset, composed of dense large scale 3D point clouds.
\section{Acknowledgements}
This work was performed using HPC resources from GENCI-IDRIS (Grant 2020-AD011012170).
The LPA dataset was provided thanks to the collaboration of the Lyon Parc Auto and Arskan.
This work was supported by Auvergne-Rhône-Alpes region under the R\&D booster grant "CAJUN".
\bibliographystyle{eg-alpha-doi}
\section{Related work}
In this section we will briefly present different deep learning frameworks designed to tackle the problem of the semantic segmentation of 3D point cloud scenes.
\subsection{Grid based}
As point clouds are unstructured, a natural way to process them is to project them into a structured representation.
Thus, early approaches propose to embed point clouds into 3D voxel structure and operate convolution using 3D kernels \cite{octnet2017riegler,segcloud2017tchapmi,volumetric2016qi}.
Other methods use advances of matured 2D CNNs by rendering 3D point clouds into sets of 2D images from different points of view \cite{unstructured2017boulch,multiview2017chen,volumetric2016qi}.
\subsection{MLP based}
The pioneering work PointNet \cite{qi2017pointnet} directly consumes point clouds by learning pointwise features independently with several shared Multi-Layer Perceptrons (MLPs).
However, this type of architecture cannot capture the relations between points and therefore the local geometry.
To process a wider context, several approaches propose to use information from local neighborhood \cite{pointsift2018jiang,pointweb2019zhao,shellnet2019zhang,randlanet2020hu}.
\subsection{Convolution based}
Many recent works introduced various designs of convolution kernels for points, which operate directly on point clouds without any intermediate representation \cite{kpconv2019thomas,pointcnn2018li,convpoint2020boulch}.
These approaches rely on the fact that multiple points are needed to form a meaningful shape, and thus perform convolution between points in a local area.
\subsection{Graph based}
Some approaches design new convolution operators to learn from point clouds represented as a graph structure, in which each point is a node \cite{local2018wang, dynamic2019wang}.
ECC-MV \cite{simonovsky2017dynamic} generalizes the convolution operator to arbitrary graphs of varying size and connectivity.
GAC \cite{attention2019wang} proposes a Graph Attention Convolution to learn features from a local neighborhood by assigning attention weights.
\subsection{Large scale based}
Few works focus on segmentation of large scale 3D point clouds.
FCPN \cite{fullyconvolutional2018rethage} uses both voxel and MLP based networks in a fully-convolutional point network able to process clouds with up to 200k points.
Instead of a more complex point sampling strategy, RandLA-Net \cite{randlanet2020hu} uses a simple but efficient random point sampling, which can process up to 1 million points in a single pass.
To avoid the potential discard of key features, they introduce a local feature aggregation module to preserve details.
Flex-Convolution \cite{flexconvolution2020groh} manages to speed up the computation and decrease the memory consumption of convolution based methods with a new convolution kernel defined as a simple scalar product allowing massive GPU acceleration.
The vast majority of the previously presented methods have been designed and evaluated on publicly available datasets like semantic3D \cite{semantic3D2017hackel} or S3DIS \cite{S3DIS2017armeni}.
In contrast, our work focuses on the segmentation of the large-scale point clouds provided by the LPA dataset, which are much denser.
These are a new type of data to study, which opens up new possibilities, especially in the use of fine-grained details that are rarely available.
\end{document} |
1711.04083 | \section{Introduction}
Hard optimization problems are ubiquitous throughout the sciences and
engineering, and have consequently been the subject of intense
theoretical and practical study. While it is generally considered
highly unlikely that problems classified as NP-hard can be solved
efficiently for all members of their class, numerous algorithms have
been devised that either strive for an approximate solution (e.g.,
simulated annealing or stochastic local searches) or solve the
problems exactly in exponential time, but through mathematical insight
(e.g., branch-and-bound or branch-and-cut), the algorithms increase
the practically feasible sizes over what can be achieved using brute
force. Evaluation of such algorithms often requires access to
benchmarking problems of various types; ideally their difficulty
should also be a controllable (or tunable) property. Ideas of
generating test instances with planted solutions, that is, whose
optimizing values are known to the problem constructor, have been
explored in various fields for decades
\cite{bach:83,pilcher:87,pilcher:92}. A method of planting solutions
to hard random Boolean satisfiability (SAT) problems based on
statistical mechanics was first proposed in Ref.~\cite{barthel:02}.
In contrast, doing so for topologically structured problems is
considerably less charted territory, as the correspondence between
random SAT and the diluted spin glass disappears; thus replica
symmetry breaking analysis \cite{monasson:96} no longer applies. As
such, generating hard problems with a known solution for
nearest-neighbor Ising models on a hypercube is a relatively new field
of study.
\begin{figure}[tb]
\includegraphics[width=\columnwidth]{BCCPartition.pdf}
\caption{ Cutaway diagram showing a decomposition of the cubic
lattice into edge-disjoint induced subgraphs, the colored unit
cube cells. A cell in the right foreground is omitted for
clarity. The collection of vertices specifying each colored
cube is defined as $\mathcal{C}$. Under a periodic boundary, each spin
appears in exactly two subgraphs. Ising subproblems with
$J_{ij} \in \{-1,1\}$, i.e., requiring a single bit of
precision, are specified for the cells such that one of each
subproblem's ground states is consistent with its
neighbors'. Over the full problem, the concatenation of these
partial states forms the planted solution. As discussed in the
text, the unit cell Hamiltonians' lack of shared edges is a
crucial property enabling large, tunable changes in complexity
while guaranteeing a known ground state.}
\label{fig:BCCPartition}
\end{figure}
While the task of creating such problems is of theoretical interest due
to their potential assistance in answering open questions about the
nature of spin glasses, the pace of research has been hastened in recent
years by practical motivations, in particular, the availability of
quantum annealing \cite{johnson:11,kadowaki:98} and related analog
devices (e.g., optical \cite{wang:13b}) physically implementing Ising
Hamiltonian minimization heuristics. Such Hamiltonians are usually
constrained by device manufacturing considerations to having
short-ranged interaction terms \cite{choi:08}; while more general
objectives can typically be encoded onto their native graphs, this may
require many system variables for each objective function variable,
limiting the problem sizes that can be studied. Hence, a way of encoding
topology-native problems is desirable.
In this work, we present a methodology for generating short-range Ising
problems with a known ground state. Our primary focus is on the
three-dimensional lattice with periodic boundary conditions, primarily
because the three-dimensional spin glass is a prototypical complex
system and of tremendous interest to condensed matter physicists, but
also because the regular structure over the lattice allows a description
of our key contributions in a natural and transparent manner. We stress,
however, that the method is adaptable to different graph topologies.
By carefully decomposing the underlying graphical structure and
selecting Ising subproblems over the resultant components, whose
spin-spin interactions are restricted to being bimodal, from classes
designed to have specific features influencing hardness, we obtain a
factor graph representation having the topology of a body-centered
cubic lattice (see Fig.~\ref{fig:BCCPartition}). Following a Voronoi
tessellation of this lattice, we note that the resultant problem is
equivalent to a specific type of three-dimensional tile matching
puzzle, a generalization of the two-dimensional edge-matching puzzle
\cite{demaine:07}. In their two-dimensional form, these puzzles are
constraint-satisfaction problems requiring placement of tiles from a
given set in permissible spatial locations so that patterns on the
tile edges match those of their neighbors. A well-known example is
Eternity II \cite{eternityII}, a to date unsolved problem for whose
solution a \$2 million prize was once offered; the prize deadline
expired in 2010. Our construction yields polyhedral rather than
square puzzle pieces; in particular the units are truncated octahedra
whose facets are of various color patterns (see
Fig.~\ref{fig:octaPuzzle} below). By varying the pattern properties
via manipulation of the underlying Ising subproblems, we quickly and
efficiently induce a tremendous variation in problem difficulty
observed over all examined heuristic algorithms; hundreds of problems
at any hardness level within our attainable range can be obtained in
seconds with modest computing resources. Hard problems within our
class are observed to be many orders of magnitude more difficult than
randomly generated spin-glass problems with bimodal disorder, i.e.,
where the spin-spin interactions are randomly drawn from $\{\pm 1\}$.
Despite our restriction to using problem parameters in $\{\pm 1\}$, we
can directly generate problems with a known solution that are vastly
more difficult than the Gaussian spin glass, which is itself known to
be considerably harder than the random bimodal class. We thus consider
our construction technique as giving rise to as yet unencountered
types of disordered system and discuss our results within the context
of hardness phase transitions for combinatorial problems.
The paper is structured as follows. In Sec.~\ref{sec:relWork} we present
an overview of recently-introduced planting techniques, followed by an
outline of the planting approach presented in this work in
Sec.~\ref{sec:probConst}. Sections \ref{sec:spinGlassPuzzles} and
\ref{sec:phasesSat} discuss the relationship of the presented problems
to constraint satisfaction problems. Section \ref{sec:experiments}
presents numerical experiments using different algorithms highlighting
the tunability of typical computational complexity for the problems with
planted solutions, followed by discussions, generalizations to other
graphs, and concluding remarks.
\section{Related Work}
\label{sec:relWork}
Our methodology addresses several shortcomings of recently proposed
techniques for solution planting in short-range Ising models. In
Ref.~\cite{hen:15a}, a construction method was presented in which
frustrated loops with known solution were employed as subproblem
units. While the idea is appealing, the resultant problems typically
have coupler strengths spanning a large range, posing serious issues
for physical devices with precision limitations. An \emph{ad hoc}
limited-precision variant was introduced in Ref.~\cite{king:15}. Both
approaches, however, bear the more serious and systematic disadvantage
of tending to generate sets of problems whose members are not of
sufficient hardness; we contend that this is a consequence of adding
subproblem couplers. It is not hard to see that when partial
Hamiltonians having consistent ground states are added, couplers in
common which in the individual minima are either both satisfied or
both violated will increase in magnitude. In a ground state, however,
satisfied couplers are more common than violated ones, hence for any
given shared coupler, it is relatively likely that the resultant
(added) coupler tends to more strongly prefer alignment with the
shared ground state. When this occurs over many edges, the problem's
degrees of freedom are ``softly'' constrained. The effect is the
introduction of ``hints'' facilitating an optimization
algorithm's task.
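As a minimal illustration of this effect: if two overlapping subproblem Hamiltonians both assign $J^{(1)}_{ij} = J^{(2)}_{ij} = -1$ to a shared edge, and the shared ground state satisfies both (i.e., $s^*_i s^*_j = +1$), then the summed coupler $J_{ij} = -2$ rewards alignment with $\vc{s}^*$ twice as strongly as an ordinary coupler, leaking information about the planted state; the construction presented below avoids this entirely by never letting subproblems share edges.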
In Ref.~\cite{marshall:16}, a method was presented for generating
difficult instances with bimodal couplers by iteratively changing the
couplers' signs to maximize the time required by a given solver to
reach the (hypothesized) ground state. This is a promising approach as
it both avoids the coupler addition issue and makes modest precision
demands; however, it generally does not allow the ground state to be
known with certainty and hence is difficult to scale to larger
systems. Furthermore, the approach requires a Monte Carlo-like bond
moving algorithm that will likely not easily scale up to much larger
problem sizes. The authors do discuss a variant enabling solution
planting, but as it again relies on adding couplers, the precision
issue returns and, they report, the gains in hardness are modest
compared to the mode not controlling the ground state. While the
authors provide a useful analysis of \emph{post hoc} empirical
correlates of hardness, for example the parallel tempering mixing
time, first proposed in Ref.~\cite{yucesoy:13}, the generation
procedure is ultimately a local search technique with respect to
solver computational time, and hence does not yield or use physical or
algorithmic insight at the problem level about what tunes
difficulty. In other words, one could not, merely by reference to the
resultant Hamiltonian, predict whether the problem is easy or
hard. Finally, designing problems based on the time to solution of a
given solver, while assuming this will carry over to other optimization
techniques, requires theoretical backing that is missing to date.
A recent paper \cite{wang:17} introduced the method of patch planting,
in which subgraphs with known solution are coupled to each other,
satisfying all interactions and thus planting an overall ground
state. An advantage is that low-precision and tunable problems can be
readily created for a given application domain. The authors report,
however, that the resultant problems are observed to be less difficult
than random problems, simply because frustrated loops typically do not
extend beyond the subgraph building blocks, making the ground state
less frustrated than in the fully random case. This problem can be
slightly alleviated by post processing mining of the data
\cite{wang:17}.
\section{Problem Construction}
\label{sec:probConst}
Consider a cubic lattice of linear size $L$ with periodic boundary
conditions. Let the underlying graph be $G = (V,E)$ with $V$ and $E$ its
vertex and edge sets, respectively. Define $U$ to be the vertices of $V$
whose coordinates $(i_x, i_y, i_z)$ consist of integers of the same
parity, for example, $(1,3,1)$ or $(0,2,4)$. For each $u \in U$, define
$C$ to be the vertices of the cubic unit cell implied when $u$ is
opposite to the vertex with coordinates $(i_x+1,i_y+1,i_z+1)$, where the
addition is modulo $L$ to account for the periodic boundary. Let $\mathcal{C}$
be the collection of all such cell vertex sets. This construction is
partially illustrated in Fig. \ref{fig:BCCPartition}, where each
colored unit cell represents a subgraph $G[C]$ induced by a vertex set
$C \in \mathcal{C}$ as defined. It should be clear that first, each vertex in
$V$ appears in two and only two unit cells and second, that the unit
cell subgraphs do not share any edges. In graph terminology, the family
$\mathcal{C}$ partitions $G$ into edge-disjoint induced subgraphs.
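For concreteness, the following minimal Python sketch (an illustrative
snippet added for this presentation, not part of any published code;
all names are ours) enumerates the cell vertex sets in $\mathcal{C}$
for even $L$ and verifies that every vertex appears in exactly two
cells:
\begin{verbatim}
import itertools
from collections import Counter

def unit_cell_partition(L):
    # Anchor set U: vertices whose three coordinates share the same
    # parity.  Each anchor u implies the unit cell spanned by u and
    # u + (1,1,1) mod L.
    cells = []
    for ux, uy, uz in itertools.product(range(L), repeat=3):
        if ux % 2 == uy % 2 == uz % 2:
            cells.append([((ux + dx) % L, (uy + dy) % L, (uz + dz) % L)
                          for dx, dy, dz
                          in itertools.product((0, 1), repeat=3)])
    return cells

cells = unit_cell_partition(8)
# Every vertex of V appears in two and only two cells of C.
assert set(Counter(v for c in cells for v in c).values()) == {2}
\end{verbatim}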
We are concerned here with constructing Ising problems on the lattice
with zero field, i.e., whose Hamiltonians are of the form
\begin{equation}
H(\vc{s}) = \sum_{ij \in E} J_{ij}s_is_j .
\end{equation}
Due to the disjointness of the cell edge sets, we can regroup the
Hamiltonian into terms each dependent only on the couplings within a
unit cell in $\mathcal{C}$,
\begin{equation}
H(\vc{s}) = \sum_{C \in \mathcal{C}} H_C(\vc{s}) ,
\end{equation}
where
\begin{equation}
H_C(\vc{s}) \triangleq \sum_{ij \in E[C]} J_{ij}s_i s_j .
\end{equation}
The terms $\{ H_C(\vc{s}) \}$ are called the unit cell subproblem
Hamiltonians. Specifying each subproblem's Hamiltonian is sufficient to
imply a full Hamiltonian over the lattice. A straightforward consequence
of the relation between summation and minimization, exploited in
Ref.~\cite{hen:15a}, is that if the subproblems share the same
minimizing configuration $\vc{s}^*$, then $H$ is also minimized at $\vc{s}^*$.
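For completeness, the argument is the elementary chain
\begin{equation}
\min_{\vc{s}} H(\vc{s}) \ge \sum_{C \in \mathcal{C}} \min_{\vc{s}} H_C(\vc{s})
= \sum_{C \in \mathcal{C}} H_C(\vc{s}^*) = H(\vc{s}^*) ,
\end{equation}
showing that the lower bound obtained by minimizing each term
separately is attained at the shared minimizer $\vc{s}^*$.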
This in turn suggests a natural construction procedure for a problem
$H$ with a known ground state. We note a seemingly small but in fact
deep and far-reaching difference between prior methodologies and what we
propose is that subproblem couplers are never added, avoiding the issue
discussed in Sec. \ref{sec:relWork}.
It may appear surprising at first that any interesting behavior can
result from linking such apparently simple unit-cell subproblems, but
this impression turns out to be quite false. Indeed, by restricting our
attention to simple classes of subproblems whose couplers belong to the
set $\{\pm 1\}$, in other words, representable with a single bit of
precision, we can effect dramatic changes in problem complexity, from
trivial to many orders of magnitude harder than, e.g., Gaussian spin
glasses.
Without loss of generality, we focus on planting the ferromagnetic
ground state, i.e., $\vc{s}^* = (+1,+1,\ldots,+1)$ and its
${\mathbb Z}_2$ image, because once such a problem has been generated,
the structure of the construction procedure can be concealed by gauge
randomization from a would-be adversary seeking the solution.
Specifically, to translate to an arbitrarily chosen ground state
$\vc{t}^*$, one would transform the initially determined couplings
$\{ J_{ij} \}$ via
\begin{equation}
J'_{ij} \gets J_{ij} t_i^*t_j^* .
\end{equation}
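In code, the gauge randomization is a one-line map; the following
Python sketch (the data layout is our assumption) applies it to
couplings stored as a dictionary over edges:
\begin{verbatim}
def gauge_transform(J, t):
    # J: dict mapping edges (i, j) to couplers planted for the
    # ferromagnetic state; t: dict mapping vertices to the +/-1
    # values of the desired ground state t*.
    return {(i, j): Jij * t[i] * t[j] for (i, j), Jij in J.items()}
\end{verbatim}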
We thus turn our attention to the set of unit-cell Hamiltonians having
couplers of magnitude $1$ and ground state $\vc{s}^*$. Here we disregard
the trivial subproblem consisting solely of ferromagnetic bonds;
those remaining can be naturally partitioned into three types
according to their number of frustrated facets. We call these types
$F_2$, $F_4$, and $F_6$, as they contain unit cells with two, four,
and six frustrated facets, respectively. These types can be further
partitioned into members equivalent under action of the cube graph
automorphism group. More precisely, problems within a given
equivalence class can be arrived at from one another via a
transformation from $O_h$, the group of $48$ octahedral symmetries
composed of rotations and reflections leaving a (generic) cube
invariant. It turns
out that $F_2$ and $F_4$ contain two such classes (the orbits
of $O_h$ acting on $F_2$ and $F_4$, called $F_{21}, F_{22}$ and
$F_{41}, F_{42}$) while all members of $F_6$ are equivalent under
$O_h$. Figure \ref{fig:FPProblems} illustrates an arbitrarily chosen
set of equivalence-class representatives for the three problem types,
while in Figure \ref{fig:F6Examples} a few members of type $F_6$ are
shown.
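Since a facet (plaquette) with $\pm 1$ couplers is frustrated exactly
when the product of its four couplers is negative, classifying a unit
cell by its number of frustrated facets can be sketched as follows
(Python; the enumeration of the six facets as $4$-tuples of edges is
assumed supplied by the caller):
\begin{verbatim}
from math import prod

def count_frustrated_facets(J_cell, facets):
    # J_cell: dict mapping the 12 cell edges to +/-1 couplers.
    # facets: the 6 cube faces, each given as a tuple of its 4 edges.
    # A facet is frustrated iff the product of its couplers is negative.
    return sum(prod(J_cell[e] for e in f) < 0 for f in facets)
\end{verbatim}
A return value of $2$, $4$, or $6$ then places the cell in $F_2$,
$F_4$, or $F_6$, respectively.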
\begin{figure}[tb]
\includegraphics[width=\columnwidth]{FPProblems.pdf}
\caption{Class representatives of the three unit cell problem
types grouped by the number of frustrated facets
(shaded). Straight and wavy edges denote ferromagnetic and
antiferromagnetic bonds, respectively. All problems have,
generally among others, the ferromagnetic state
$\vc{s}^* = (+1,+1,\ldots,+1)$ as a Hamiltonian minimizer. The
total number of ground states $|\Gamma|$ (up to ${\mathbb Z}_2$
symmetry) for problems within each class is indicated beside
each representative.}
\label{fig:FPProblems}
\end{figure}
\begin{figure}[tb]
\includegraphics[width=\columnwidth]{F6Examples.pdf}
\caption{Selected members of class $F_6$, equivalent under
octahedral symmetry, that is, one of the $48$ compositions of
rotation and reflection leaving an unmarked cube invariant.}
\label{fig:F6Examples}
\end{figure}
\begin{figure}[tb]
\includegraphics[width=\columnwidth]{F22GSExamples.pdf}
\caption{All subproblem ground states for a member of class
$F_{22}$ (up to ${\mathbb Z}_2$ symmetry) where black (dark) and red
(light) represent spins of opposite sign.}
\label{fig:F22GSExamples}
\end{figure}
While the subproblems can all readily be verified to have $\vc{s}^*$ as a
ground state, they clearly differ in the number of other minimizers
they possess. Up to ${\mathbb Z}_2$ symmetry, members of $F_{21}$
have a unique ground state while those of $F_{22}$ have four. All
problems in $F_4$ and $F_6$ have two and eight global minima,
respectively. Figure \ref{fig:F22GSExamples} shows the four ground
states of a specific member of $F_{22}$. As we will see, this
variation in subproblem ground-state degeneracy turns out to exert a
crucial effect on the computational difficulty of the final planted
problem.
We are now ready to straightforwardly specify our problem construction
procedure. First, a probability distribution is chosen over the problem
classes $\{ F_{ij}\}$. A naive example, which in fact turns out to be
a poor choice for generating hard problems, is to sample them uniformly.
Next, a distribution over members of the classes is specified. In our
examples this was always uniform. The randomness in generating the
problems is thus decomposed into what we call subproblem disorder
(over the classes) and $O_h$ disorder (over the members), the
latter so called because it arises equivalently by sampling symmetries
of the octahedral group. Subproblems over the unit-cell set $\mathcal{C}$ are
then sampled from the defined distributions.
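A minimal sketch of this two-level sampling, with hypothetical class
labels and data structures, might read:
\begin{verbatim}
import random

def sample_subproblems(cells, class_probs, class_members):
    # class_probs: subproblem disorder, e.g. {'F22': 0.1, 'F6': 0.9}.
    # class_members: O_h disorder; for each label, the list of coupler
    # assignments in that class (its orbit under the octahedral
    # group), sampled uniformly.
    labels, weights = zip(*class_probs.items())
    return [random.choice(class_members[random.choices(labels,
                                                       weights)[0]])
            for _ in cells]
\end{verbatim}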
Note that $O_h$ disorder is sufficient to induce considerable richness
in the types of final problems, even when the cells in $\mathcal{C}$ are all
assigned from the same subproblem class. For example, it may appear
at first that restricting all subproblems to the class $F_6$ (a regime
which turns out to be highly interesting) will result in a so-called
fully frustrated model \cite{villain:77}, i.e., where all chordless
cycles (or plaquettes) are frustrated in the final Hamiltonian,
because problems in $F_6$ are themselves fully frustrated. That,
however, would be incorrect; it is easy to check that cycles forming
the facets of unit cubes not in $\mathcal{C}$ may or may not be
frustrated depending on the random orientation of the bounding
$\mathcal{C}$-cell problems.
It should be pointed out that while the method does ensure that the
planted state is in fact a ground state, as the couplers are selected
from the $\{\pm 1\}$ class, it is in principle possible for other
ground states to exist \cite{comment:free}. Our experimental
validation in Sec.~\ref{sec:experiments} is thus concerned with the
difficulty of finding any ground state. Methods of removing the
degeneracy are certainly conceivable and are discussed briefly in
Sec.~\ref{sec:discussion}.
We proceed next to a further exploration of the problem structure,
discussing its connection to tiling puzzles.
\section{The puzzle of spin glasses}
\label{sec:spinGlassPuzzles}
The proposed problem generation formalism enables translation to a
specific type of geometrically appealing constraint-satisfaction
problem (CSP) called a tiling puzzle. While these problems are usually
defined over two-dimensional domains, we will see that our
three-dimensional technique yields a polyhedral generalization of an
edge-matching puzzle, an example of which is {\rm Eternity II}
\cite{eternityII}. The idea is to abstract the Ising problem into a
generic CSP representation known as a factor graph
\cite{kschischang:01}. While the Ising Hamiltonian itself can always
be interpreted as a factor graph, under our planting technique
additional structure is present, namely, the restriction to certain
subsets of states. Following construction of the CSP exploiting this
structure, the natural connection to a tiling puzzle is observed.
Figure \ref{fig:tilingCSPSteps} illustrates the necessary steps for the
mapping. For clarity, a direct two-dimensional analog of the
three-dimensional case is shown, in which the unit cells induced by the
decomposition into $\mathcal{C}$ are the cycles forming a checkerboard pattern
rather than unit cubes as displayed in Fig. \ref{fig:BCCPartition}.
We first consider the construction of a graph with respect to $\mathcal{C}$,
which we call the subproblem lattice, denoted by $\widetilde{G}$.
Suppose a vertex is placed at the centroid of each unit cell. The
subproblem lattice is defined by considering two such vertices to be
adjacent if and only if their corresponding unit cells share a corner
vertex. Clearly, this lattice may be $2$-colored, that is, partitioned
into two subsets, none of whose members are neighbors. This in turn
implies a partitioning of the cells in $\mathcal{C}$ into two disjoint sets
$\mathcal{W}$ and $\mathcal{B}$, whose elements are labeled with colors $W$ and $B$,
respectively. Figure \ref{fig:tilingCSPSteps}(a) demonstrates
the two-dimensional construction of $\widetilde{G}$ and its vertex coloring,
where yellow (light shading) and blue (dark shading) may represent
labels $W$ and $B$ or vice versa. In three space dimensions, each yellow
(light shading) cell is surrounded by eight blue (dark shading) cells
instead of four.
\begin{figure*}[tb]
\includegraphics[width=0.62\columnwidth]{subproblemLattice.pdf}
\hspace*{1em}
\includegraphics[width=0.62\columnwidth]{factorGraph.pdf}
\hspace*{1em}
\includegraphics[width=0.62\columnwidth]{2DTilingPuzzle.pdf}
\caption{Steps in the derivation of a tile-matching puzzle from
the unit-cell planting methodology, shown in two dimensions with
a free boundary for clarity. Note that the two-dimensional cells
induced by the decomposition are the unit cycle (plaquette)
subgraphs arranged in a checkerboard pattern rather than the
unit cubes shown in Fig.~\ref{fig:BCCPartition} in three
dimensions. (a) The subproblem lattice $\widetilde{G}$ is constructed
by placing a vertex at each subproblem cell's centroid. As
discussed in the text, the vertices are 2-colored to yield a
convenient representation of the puzzle. In three space
dimensions, each yellow (light shading) unit cell contacts eight
rather than four blue (dark shading) neighbors; the
corresponding $\widetilde{G}$ is the body-centered cubic lattice. (b)
Factor graph associated with $\widetilde{G}$, with factors shown as
squares. Each vertex is involved in eight factors in three space
dimensions. A Voronoi tessellation of $\widetilde{G}$ yields the set
of tiling locations. In two space dimensions, these are the
areas within the thick lines in panel (c). The factor graph
domains, determined by the Ising subproblems, specify a set of
allowable tile patterns at each location. The figure shows a
hypothetical tile placed at each site. The factor constraints
require adjacent colors to agree; hence, the central tile shown
has one violation among its four neighbors. The puzzle for the
three-dimensional tessellation is shown in
Fig.~\ref{fig:octaPuzzle}.}
\label{fig:tilingCSPSteps}
\end{figure*}
We next construct the factor graph CSP representation equivalent to the
problem of minimizing $H$. The idea is to derive an objective
consisting of independent variables subject to equality constraints
along the edges of $\widetilde{G}$. First, each vertex in $\widetilde{G}$, mapping
to cell $C$ in the original lattice, is identified with a composite
variable $\vc{s}_{C}^W$ or $\vc{s}_{C}^B$, comprising eight Ising
variables (four in two space dimensions), the superscript determined
by its label in the $2$-coloring. Because each
vertex of the initial lattice is a shared corner of two unit cells in
$\mathcal{C}$ of opposite color, in the concatenation of the states
$\prod_{C\in \mathcal{W}} \vc{s}_C^W \prod_{C'\in \mathcal{B}} \vc{s}_{C'}^B$ variables
$s_i^W$ and $s_i^B$ will occur exactly once for each vertex $i \in V$.
The domain of each subproblem lattice variable is simply the ensemble of
its subproblem ground states, the ground-state set $\Gamma_C$, so that
the full configuration space of the new problem is the Cartesian product
\begin{equation}
\Gamma^{W,B}\triangleq \prod_{C\in \mathcal{W}} \Gamma_C \prod_{C'\in
\mathcal{B}} \Gamma_{C'} .
\end{equation}
For a configuration to be a feasible solution to the original
problem, however, equality of neighboring variables must be enforced,
that is, $s_i^W = s_i^B$ for all $i$. If we define the problem
\begin{equation}
\max_{\vc{s}^{W,B} \in \Gamma^{W,B}} \prod_{i \in V}
\delta[ s_i^W,s_i^B ] ,
\label{eq:tilingCSP}
\end{equation}
where $\delta[x,y] = 1$ if $x=y$ and zero otherwise, we readily see
that the solution is obtained for the overall planted ground state
$\vc{s}^B = \vc{s}^W= \vc{s}^*$. To make the graphical connection, we note that
each $\delta$ function in Eq.~(\ref{eq:tilingCSP}) can be interpreted
as a factor over neighboring subproblem lattice variables (identified
with) $C, C'$, i.e.,
$\psi_{C,C'} ( \vc{s}_C^W, \vc{s}_{C'}^B ) \triangleq \delta[ s_i^W,s_i^B] $
with $i = C \cap C'$. The final maximization objective is then the
product of all factors over subproblem lattice variables
\begin{equation}
f(\vc{s}^{W,B}) = \prod_{(C,C') \in \widetilde{E}} \psi_{C,C'}( \vc{s}_C^W,\vc{s}_{C'}^B) ,
\end{equation}
where the product index refers to edges of $\widetilde{G}$ by the cells in
$\mathcal{C}$ mapping to their endpoint vertices. The factor graph
corresponding to the two-dimensional lattice in
Fig.~\ref{fig:tilingCSPSteps} (a) appears in
Fig.~\ref{fig:tilingCSPSteps} (b), with variables represented by
circular vertices (whose colors identify their labels as $W$ or $B$) and
factors by the squares along the edges. In three dimensions, each
variable is involved in eight factors rather than four.
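Under the (assumed) representation in which each cell variable is a
map from lattice vertices to spins, the objective of
Eq.~(\ref{eq:tilingCSP}) can be evaluated as in the following sketch:
\begin{verbatim}
def csp_objective(assignment, factors):
    # assignment: dict mapping each cell to its chosen ground state,
    # itself a dict from lattice vertices to +/-1 spins.
    # factors: one triple (C, Cp, i) per factor, with i the corner
    # vertex shared by the adjacent cells C and Cp.
    # Returns 1 iff every equality factor is satisfied.
    return int(all(assignment[C][i] == assignment[Cp][i]
                   for C, Cp, i in factors))
\end{verbatim}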
Consider now the subproblem lattice with its vertices lying at their
``natural'' points in Euclidean space, i.e., at the centroids of the
unit cells, rather than as a generic graph. A Voronoi tessellation
\cite{torquato:13} is a partitioning of the space surrounding the
vertices into convex polytopes such that all points within a polytope
are closest (typically in the $L_2$-norm sense) to the enclosed
vertex; the polytopes are usually called tiles but, for reasons that
will soon be clear, we refer to them as tiling locations. It is easy to see in
Fig.~\ref{fig:tilingCSPSteps}(a) that in two dimensions, the
subproblem lattice forms a periodic pattern, sometimes called the
quincunx, of squares of length $2$, each with a central vertex. The
appropriate tessellation of $\widetilde{G}$ is a partitioning into tilted
square regions centered at each vertex. Neglecting for an instant the
significance of the colors, Fig.~\ref{fig:tilingCSPSteps}(c) shows
five full and eight partial tiling squares in the section of lattice
drawn. Returning to the CSP in which the task was to assign the
variables from their respective domains (ground-state sets) so that
factor constraints are satisfied, we now observe the equivalence to a
tile-matching puzzle. The puzzle comprises a board (the space
embedding $\widetilde{G}$) segmented into tiling locations (the cells
of the Voronoi tessellation), each of which is endowed with a
repertoire of tiles (the ground-state set associated with the
location). Each tile face carries one of two colors (the spin values,
not to be confused with the $2$-coloring introduced to construct the
CSP). The puzzle task is to select the tiles so that no two adjacent
tile faces differ in color. If orange (light) and purple (dark) represent
spin values, say, $1$ and $-1$, respectively, then
Fig.~\ref{fig:tilingCSPSteps} (c) shows a hypothetical tiling
assignment. Here a violation occurs where the tile colors on both
sides of a factor differ.
\begin{figure}[tb]
\includegraphics[width=\columnwidth]{octaPuzzle.pdf}
\caption{Tile-matching puzzle associated with three-dimensional
unit-cell planting. The tile locations are truncated octahedra,
the elements of a bitruncated cubic honeycomb, and in turn the
Voronoi tessellation of the body-centered-cubic lattice
$\widetilde{G}$. Color constraints are defined among neighboring
hexagonal facets of the octahedra; the square facets are
uninvolved. Four of the eight factors associated with a location
are shown (the other four point into the page). In the example
configuration shown, one constraint satisfaction and two
violations occur.}
\label{fig:octaPuzzle}
\end{figure}
In three dimensions, the puzzle is analogous, but with a few
twists. In a generalization of the two-dimensional case, the
subproblem lattice $\widetilde{G}$ will, when respecting Euclidean
coordinates, form a pattern of cubes of length $2$, each with a vertex
at its centroid. Such a formation is known as the body-centered-cubic
lattice. The corresponding Voronoi tessellation is the space-filling
bitruncated cubic honeycomb whose base unit is the truncated
octahedron \cite{conway:16,torquato:13}. Each of these polyhedral
cells, which now define the three-dimensional tiling locations,
consists of eight hexagonal facets, through which the edges
of our CSP factor graph pass, and six square facets, defining the
boundaries between $\widetilde{G}$ vertices at distance $2$, but through
which no CSP factors pass. The ground-state sets associated with each
vertex of $\widetilde{G}$ thus map to coloring configurations on the
hexagonal facets of the enclosing Voronoi cell, defining a set of
tiles, and again the task is to place the tiles while meeting all
coloring constraints. The construction is illustrated in
Fig.~\ref{fig:octaPuzzle}.
While technically fitting the definition of a tile-matching puzzle, the
specific examples proposed here differ in a key way from conventional
puzzles such as Eternity II. In the latter, each tile from the
given set may be placed at more than one board location (in an allowable
orientation) while presently the ground-state set associated with each
site constrains the choices. This fact, along with the polynomial-time
solvability \cite{barahona:82} of the original ground-state problem in
two dimensions, could lead one to suspect that the puzzles we have
contrived may not be overly interesting. In three space dimensions,
however, solution by graph matching is no longer applicable, and as we
will see, the problems exhibit a rich range of behavior intimately tied
to the known properties of generic constraint satisfaction problems.
\section{The phases of satisfaction}
\label{sec:phasesSat}
Hardness phase transitions in combinatorial problems have been an
active area of study for several decades now \cite{hogg:96}. Of
particular interest is the emergence of difficulty divergences in
random SAT problems as parameters guiding instance generation are
varied \cite{mezard:02}. While the tiling CSPs we have proposed here
can certainly be expressed as SAT problems \cite{comment:sub}, the
resultant formulas do not, however, follow the typical
assumptions of random SAT. Most notably, the variables are not free to
appear in any clause with some probability, but follow the
highly structured topological constraints of the puzzle. Of course,
the subproblem and $O_h$ disorder impose stochasticity over the factor
graph variable domains $\{\Gamma_C\}$, but this is a localized
randomness, violating key assumptions in the analysis of random SAT.
Nonetheless, we expect that commonly known facts and intuitions
regarding the relation between how constrained a problem is and its
empirical hardness will apply to our situation.
Because the tiling puzzle color factors are invariant to the
distribution over cell problems, constrainedness depends exclusively
on the features of the ground-state sets. As we alluded to earlier, a
dominant correlate of problem hardness is the average size of the
ground-state sets $\Expect{} |\Gamma|$. A highly constrained regime is
one in which the subproblems tend to have relatively small degeneracy
and vice versa for a loosely constrained one. A subproblem
demonstrating a maximal constraint level is any member of class
$F_{21}$ (with a unique ground state up to ${\mathbb Z}_2$), while one
constrained minimally, which was not considered in the experiments, is
the zero $J$ subproblem, in which all $256$ states are ground states.
Within the class of subproblems introduced in
Sec.~\ref{sec:probConst}, including members of $F_{21}$ and $F_{6}$
(with degeneracy $8$) tends to bias the final problem towards the
maximal and minimal extremes, respectively.
\begin{figure*}[tb]
\includegraphics[width=\columnwidth]{ICMTTSsScatter26.pdf}
\includegraphics[width=\columnwidth]{ICMTTSsScatter46.pdf}
\caption{Simulated annealing $\langle \log_{10} {\rm TTS}\rangle$
(right $y$-axes, red triangles) and parallel tempering with
isoenergetic cluster moves
$\langle \log_{10} {\rm TTS}_{\textrm{opt}}\rangle$ (left
$y$-axes, blue circles) as a function of population annealing
Monte Carlo $\langle \log_{10} \rho_{\rm S}\rangle$ for
three-dimensional problem classes (a) \texttt{gallus\_26}
and (b) \texttt{gallus\_46}. The SA and ICM ${\rm TTS}$
are measured in Monte Carlo sweeps and seconds,
respectively. Within each problem class, every point in a given
color (symbol) jointly shows the two corresponding measures for
one of the $30$ subclasses described in the text (averages are
computed over $200$ instances from each subclass). The data show
that PAMC $\langle \log \rho_{\rm S}\rangle$ exhibits a strong
linear correlation with the other two algorithms'
$\log {\rm TTS}$ metrics, confirming its power as a measure of
hardness and landscape roughness. Furthermore, as relative
hardness measures for these problems, the three quantities
studied here can be used more or less interchangeably. Error
bars have been omitted for clarity.}
\label{fig:ICMTTSsScatter}
\end{figure*}
By construction, all our problems are satisfiable, i.e., there is no
unsatisfiable (or overconstrained) regime as usually
understood. Clearly, the minimally constrained full problem is trivial
as each state is a solution to the original problem. In the tiling
puzzle representation, the factor constraints can be met independently
because the resultant states are always in the ground-state sets. At
the other extreme, highly constrained problems are also typically
easy: Because there are few local choices from the ground-state sets,
a tree search method will encounter relatively small branching factors
when traversing the state space, while for local search, the energy
landscape will be such that the algorithm can reliably locate the
ground state using simple strategies to escape from local
minima. These extremes strongly suggest that for some intermediate
level of mean ground-state set size, a peak in problem hardness will
occur. While it is not possible to increase $|\Gamma_C|$ past $8$ for
unit cells without introducing more complex Ising interactions (or
without trivially removing all $J_{ij}$), an interesting question,
probed experimentally in the following, is whether, given the
proposed classes of unit-cell subproblems ($F_2$, $F_4$, and $F_6$)
and all possible mixture distributions over the classes, the
hypothesized peak occurs at some interior point of the distribution
set. An affirmative
answer would suggest that the corresponding CSP class is among the
hardest of all tile-matching puzzles of the type we have introduced,
i.e., for which the locations' tile sets may be chosen arbitrarily,
not by constraining them to map to subproblem ground
states. Conversely, if the hardness maximum within the set of mixtures
occurs at the set boundary, it would point to the existence of more
difficult tiling puzzles not representable within our framework.
\section{Experiments}
\label{sec:experiments}
We now numerically study the typical complexity of certain subproblem
classes to illustrate their tunability. This section focuses on
three-dimensional cubic lattices, followed by a demonstration that the
approach works well also in two space dimensions
(Sec.~\ref{sec:chimera}). We conclude with a discussion of
generalizations to other graph types.
While we would ideally have performed a rigorous numerical study for a
discrete ``grid'' representing all subproblem types we have proposed,
this is computationally prohibitive. Consequently, we focus on two
illustrative regimes demonstrating interesting variations in problem
difficulty over three algorithms, and the role played by subproblem
degeneracy. The simulation results show that highly complex problems
with known ground state, and more difficult than conventional spin
glasses with both Gaussian and random bimodal couplings, are
accessible via our methodology, but they also strongly suggest that
the hardness maximum over the distribution set occurs at a boundary,
namely, when the distribution is a point mass concentrated on class
$F_6$. Perhaps unsurprisingly then, we conclude that tiling puzzles
equivalent to Ising problems constructed using the classes
$\{F_2,F_4,F_6\}$ are likely not the hardest among those where the
tile sets are arbitrarily specifiable.
All problems are defined on three-dimensional lattices of size $8\times
8\times 8$ with periodic boundary conditions. We consider two classes of
problems corresponding to one-dimensional slices of the problem mixture
parameter space. The first class, \texttt{gallus\_26}, allows
subproblems to belong solely to classes $F_6$ and $F_{22}$, while in the
second, \texttt{gallus\_46}, they are constrained to $F_6$ and
$F_{42}$. Both instance classes are parametrized by $p_6$, the
probability of selecting class $F_6$ instead of the alternative
\cite{comment:p6}. The classes each contained $200$ instances generated
at $30$ uniformly spaced values of $p_6 \in [0.8,1]$, for a total of
$6000$ instances per class. All subproblems are subject to uniform
$O_h$ disorder. The selected range of $p_6$ values is interesting as
within it, problems are more difficult than conventional spin glasses
with Gaussian or random bimodal couplings. We note that the problems
continue to become easier as $p_6$ decreases. Using the values of
$|\Gamma|$ shown in Fig.~\ref{fig:FPProblems}, the expected ground-state
set sizes in terms of $p_6$ are $\Expect{} |\Gamma|= 8p_6 + 4(1-p_6)$
for \texttt{gallus\_26} and $\Expect{} |\Gamma| = 8p_6 + 2(1-p_6)$
for \texttt{gallus\_46}.
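These linear expressions are captured by a trivial helper
(illustrative only):
\begin{verbatim}
def expected_degeneracy(p6, alt):
    # alt: degeneracy of the alternative class, 4 for gallus_26 (F_22)
    # and 2 for gallus_46 (F_42); F_6 cells have degeneracy 8.
    return 8.0 * p6 + alt * (1.0 - p6)
\end{verbatim}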
\begin{figure*}[tb]
\includegraphics[width=\columnwidth]{PAlogRhoP6.pdf}
\includegraphics[width=\columnwidth]{PAlogRhoESD.pdf}
\caption{ Average log-entropic family size
$\langle\log_{10} \rho_{\rm S}\rangle$ (population annealing
Monte Carlo landscape roughness measure) for $L=8$ cubic lattice
classes \texttt{gallus\_26} and
\texttt{gallus\_46}. Each point corresponds to a subclass
in which unit-cell subproblems are selected from $F_6$ with
probability $p_6$, or else from $F_{22}$ for
\texttt{gallus\_26} and from $F_{42}$ for
\texttt{gallus\_46}. Averages are computed over $200$
instances from each subclass. Plots of (a)
$\langle \log_{10} \rho_{\rm S} \rangle$ against $p_6$ and (b)
$\langle \log_{10} \rho_{\rm S} \rangle$ against
$\Expect{}|\Gamma|$, the expected subproblem degeneracy within
their range of overlap. For a given mixing probability $p_6$,
(a) shows that \texttt{gallus\_26} is consistently more
difficult than \texttt{gallus\_46} despite being less
frustrated. On the other hand, (b) shows that instances
from either class with a given subproblem degeneracy have very
similar difficulty, suggesting that $\Expect{}|\Gamma|$
considerably explains the variation in hardness irrespective of
the specific underlying subproblem mixture or frustration level.
In light of the proposed tiling puzzle interpretation, this is
consistent with knowledge of CSP hardness. For reference, note
the $\langle \log_{10} \rho_{\rm S}\rangle$ values of
equal-sized spin glasses with random $\pm 1$ and Gaussian
interactions. For certain subclasses, the problems proposed here
are far more difficult than the traditionally used cases.}
\label{fig:PAlogRho}
\end{figure*}
Problem difficulty is assessed through performance of three different
algorithms designed for problems with rough energy landscapes. The
measures of difficulty are in strong agreement across the methods,
providing corroboration that the observed difficulty trends should
persist robustly across a fairly large class of heuristic algorithms,
including backtrack-based search \cite{hogg:96}.
Simulated annealing (SA) \cite{kirkpatrick:83} is the most basic
algorithm considered. We use the optimized implementation developed by
Isakov {\em et al}.~\cite{isakov:15} with $\beta_{\min} = 0.01$ and
$\beta_{\max} = 1$. While we would rather have used some form of
optimized time to solution (TTS) measure, which considers the best
tradeoff between simulation length and number of simulations, SA runs
were too time consuming on the hard problems to generate the requisite
runtime histograms reliably. Consequently, we selected a fixed run
length of $N_S = 8192$ sweeps for all problems with a single sweep per
temperature. Each problem is simulated $R = 10^6$ times independently.
The SA TTS is defined as the computational time required to find a
ground state with at least $99$\% probability, i.e., for each
instance, ${\rm TTS} = N_S \log(0.01)/\log(1 - p)$, where $p$ is the
fraction of successful runs out of the $R$ repetitions.
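In code, and assuming $0 < p < 1$, this reads:
\begin{verbatim}
import math

def sa_tts(n_sweeps, p, target=0.99):
    # Sweeps needed to observe the ground state at least once with
    # probability `target`, given the per-run success fraction p:
    # TTS = N_S log(1 - target) / log(1 - p).
    return n_sweeps * math.log(1.0 - target) / math.log(1.0 - p)
\end{verbatim}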
Furthermore, we have used a highly optimized implementation of
parallel tempering Monte Carlo with isoenergetic cluster moves (ICMs)
\cite{zhu:15b}, an adaptive hybrid parallel tempering (PT)
\cite{hukushima:96,geyer:91,moreno:03} cluster algorithm
\cite{houdayer:01}. Because the ICM is considerably more efficient
than SA, it allowed us to gather runtime statistics for each instance,
allowing optimized ${\rm TTS}$s to be computed. In contrast to SA,
total ICM computational time is not merely a function of overall Monte
Carlo sweeps, but includes the additional cost of constructing
random-sized clusters. Consequently, to track aggregated ICM
computational effort we record run times in seconds on hardware
dedicated entirely to the simulations. If $P(\tau)$ is the empirically
observed probability of finding the ground state in time $\tau$ or
less, the optimized time to solution is defined as
${\rm TTS}_{\textrm{opt}} = \min_{\tau} \tau
\log(0.01)/\log[1-P(\tau)]$.
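A sketch of this optimization over an empirical success curve (the
data layout is our assumption):
\begin{verbatim}
import math

def tts_opt(success_curve, target=0.99):
    # success_curve: dict mapping run time tau (in seconds) to the
    # empirical probability P(tau) of having found the ground state.
    return min(tau * math.log(1.0 - target) / math.log(1.0 - P)
               for tau, P in success_curve.items() if 0.0 < P < 1.0)
\end{verbatim}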
We note, however, that despite the efficiency of the ICM, it commonly
failed to find the solution for the harder problems within the maximum
allowed $2^{24}$ total Monte Carlo sweeps, requiring $60$--$75$
min of real time, when computing the runtime histograms. For
difficult classes, these ``timeouts'' occurred at least half of the
time within the $100$ ICM repetitions used to tally the
histograms. Fortunately, we were nonetheless able to infer an
optimized ${\rm TTS}$ from the conditional histogram in which
solutions were found. We used $N_T=30$ temperatures spaced within
$T_{\max} = 3$ and $T_{\min} = 0.01$; see Ref.~\cite{zhu:15b} for
further details of the ICM.
The final set of tests used the sequential Monte Carlo
\cite{delmoral:06} method known as population annealing Monte Carlo
(PAMC) \cite{hukushima:03,machta:10}. This algorithm is related to SA
but differs crucially in its usage of weight-based resampling, which
multiplies or eliminates members of a population according to the
ratios of their Boltzmann factors at adjacent temperatures,
maintaining thermal equilibrium at each step. Our simulations used
$N_T=201$ temperatures with $1/T = \beta\in[0.0,5]$ and $N_s=10$
sweeps per temperature, and a population size of $R=5\times10^5$
replicas. In Ref.~\cite{wang:15e}, a PAMC-derived index of landscape
roughness called the entropic family size $\rho_{\rm S}$ was
proposed. If $q_i$ is the fraction of replicas in the final population
descended from initial replica $i$, then
$\rho_{\rm S} = \lim_{R\to\infty}R/e^{h[q]}$, where
$h[q] = -\sum_{i}q_i\log q_i$. An energy landscape is thus deemed
rough if $h[q]$ is small, that is, if relatively few initial replicas
survive to the final distribution, yielding a large value of
$\rho_{\rm S}$. Note that $\rho_{\rm S}$ converges quickly in $R$ and
is easily estimated with finite populations. Thermal equilibration is
ensured by requiring $h[q] > \log 100 $; when this is not satisfied
for an instance, it is rerun with a larger population size. The
entropic family size $\rho_{\rm S}$ is known to covary strongly with
the PT autocorrelation time \cite{wang:15e} and, as can be seen from
Fig.~\ref{fig:ICMTTSsScatter}, does so for the ${\rm TTS}$-based
hardness metrics considered here as well. Figure
\ref{fig:ICMTTSsScatter} shows, respectively, the dependence of both
SA and ICM $\langle \log_{10} {\rm TTS}\rangle$ measures on PAMC
$\langle \log_{10} \rho_{\rm S}\rangle$ for the $30$ subclasses of
\texttt{gallus\_26} [Fig.~\ref{fig:ICMTTSsScatter}(a)] and
\texttt{gallus\_46} [Fig.~\ref{fig:ICMTTSsScatter}(b)] studied,
where the averages $\langle \cdots \rangle$ at each point representing
a subclass are computed over its $200$ generated instances. The plots
clearly show a distinct near-linear dependence of both time-based
measures on the PAMC-defined value. A larger $\rho_{\rm S}$ on
average implies a longer time to solution with respect to both the SA
and ICM algorithms, corroborating the power of $\rho_{\rm S}$ as an
objective measure of landscape roughness.
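A finite-population estimate of $\rho_{\rm S}$ from the family counts
can be sketched as follows (Python with NumPy; names are ours):
\begin{verbatim}
import numpy as np

def entropic_family_size(family_counts):
    # family_counts[i]: number of replicas in the final population
    # descended from initial replica i; q_i are the fractions.
    counts = np.asarray(family_counts, dtype=float)
    R = counts.sum()
    q = counts[counts > 0] / R
    h = -(q * np.log(q)).sum()      # family entropy h[q]
    return R / np.exp(h)            # rho_S estimate
\end{verbatim}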
\begin{figure*}[tb]
\includegraphics[width=\columnwidth]{SAlogTTS.pdf}
\includegraphics[width=\columnwidth]{ICMlogTTS.pdf}
\caption{(a) SA $\langle \log_{10} {\rm TTS}\rangle$ and (b) ICM
    $\langle \log_{10} {\rm TTS}_{\textrm{opt}}\rangle$ plotted
against the parameter $p_6$ for $30$ subclasses of
\texttt{gallus\_26} and \texttt{gallus\_46} defined
in the text. The problems show the same relative difficulty
trends with respect to both algorithms and in accordance with
the PAMC results shown in Fig.~\ref{fig:PAlogRho} (a).}
\label{fig:logTTS}
\end{figure*}
Results of the PAMC simulations are shown in Fig.~\ref{fig:PAlogRho}. We
observe a trend of increasing $\rho_{\rm S}$ (and hence complexity) for
both problem classes as the fraction of subproblems from $F_6$ increases
towards unity. For comparison, we display the mean $\log \rho_{\rm S}$
value of two prototypical problems with rough landscapes on the same
lattice, the random $J_{ij} \in \{\pm 1\}$ and Gaussian $J_{ij} \sim
N(0,1)$ [$N(0,1)$ a normal distribution with zero mean and variance one]
spin glass, computed using $1000$ and $5099$ instances of each type,
respectively. It is clear that problems in most of the examined
subclasses of \texttt{gallus\_26} and \texttt{gallus\_46}
are more difficult than both of these widely studied problem classes.
Indeed, for subclasses of \texttt{gallus\_26} corresponding to
$p_6 \in [0.91,1]$, problems are $\sim\!2$--$3$ orders of magnitude
harder than bimodal spin glasses. Perhaps more surprisingly, they are
also $\sim\!2$--$3$ orders of magnitude more difficult than Gaussian
spin glasses, despite the latter possessing continuous-valued
couplings, whereas our instances restrict couplers to $\{\pm 1\}$, a
class generally believed to be easier to minimize.
Setting $p_6=1$ yields the most complex problems of those considered. In
fact, we conjecture, based on less comprehensive simulations, that these
instances are the hardest among all those constructed with subproblem
classes in $\{F_2,F_4,F_6\}$. We plan to perform a comprehensive
analysis in the future. For this hard class, the unit-cell subproblems,
deriving exclusively from $F_6$, have eight ground states each. Because
this hardness peak occurs at the boundary of the problem parameter
space, it seems plausible that one could instantiate yet more complex
three-dimensional tiling puzzles, where the locations are allowed more
than eight (times two) tile possibilities but not so many that the
problem becomes underconstrained and easy to solve.
At first, the greater pointwise difficulty shown in
Fig.~\ref{fig:PAlogRho} of an $\{F_6, F_{22}\}$ mixture over one made
of $\{F_6,F_{42}\}$ appears to contradict intuition that greater
frustration implies higher difficulty. After all, with four bounding
frustrated facets, a member of $F_{42}$ can be interpreted as more
frustrated than one of $F_{22}$, which has two. The story is somewhat
more subtle though: While frustration certainly plays a role in tuning
hardness in our problems, it appears to do so through its effect on
constraint level, namely, on the sizes of $\Gamma_C$, with $F_{22}$
inducing higher complexity than $F_{42}$ because its ground-state set
size is larger. This fact is displayed in Fig.~\ref{fig:PAlogRho}(b),
where $\log\rho_{\rm S}$ for the two classes is plotted against
$\Expect{}|\Gamma|$ instead of $p_6$. The graph shows that for a given
value of $\Expect{}|\Gamma|$, \texttt{gallus\_26} and
\texttt{gallus\_46} have a very similar roughness index,
implying that mean subproblem degeneracy accurately predicts
difficulty regardless of the underlying subproblem mixture.
For completeness, analogous plots displaying similar difficulty trends
for SA's $\log {\rm TTS}$ and ICM's $\log {\rm TTS}_{\textrm{opt}}$ are
shown in Fig.~\ref{fig:logTTS}, where they are again plotted against
$p_6$.
\begin{figure*}[tb]
\includegraphics[width=\columnwidth]{ICMlogTTSOptHistograms26.pdf}
\includegraphics[width=\columnwidth]{ICMlogTTSOptHistograms46.pdf}
\caption{Histograms of (optimized) $\log_{10} {\rm
TTS}_{\textrm{opt}}$ relative to isoenergetic cluster moves for
classes (a) \texttt{gallus\_26} and
(b) \texttt{gallus\_46} for three subclasses
characterized here by mean subproblem degeneracy
$\Expect{}|\Gamma|$; $200$ problems within each subclass were used
to obtain the histograms. The leftward trend in both sets of
histograms shows clearly that the problems become easier with
decreasing $\Expect{}|\Gamma|$ and their shapes suggest that ${\rm
TTS}_{\textrm{opt}}$ is log-normally distributed.}
\label{fig:ICMlogTTSOptHistograms}
\end{figure*}
In contrast to our results so far, which have considered
instance-averaged difficulty measures,
Fig.~\ref{fig:ICMlogTTSOptHistograms} shows histograms of optimized
ICM $\log_{10} {\rm TTS}$ values for three subclasses of
\texttt{gallus\_26} [Fig.~\ref{fig:ICMlogTTSOptHistograms}(a)]
and of \texttt{gallus\_46}
[Fig.~\ref{fig:ICMlogTTSOptHistograms}(b)] indexed by
$\Expect{}|\Gamma|$. The leftward shift in histogram support shows
clearly that problems tend to become easier with decreasing
$\Expect{}|\Gamma|$. Given the histogram shapes, one may naturally
suspect that ${\rm TTS}_{\textrm{opt}}$ is log-normally distributed.
As $\log {\rm TTS}$ is unlikely to be precisely normal across the
entire data range, we visualize correspondence with a Gaussian via
normal probability plots \cite{chambers:83}, which relate the sample
order statistics, obtained by sorting the data, with the theoretical
means of the corresponding normal order statistics. Deviations from a
linear relation signal lack of Gaussianity. Figure
\ref{fig:ICMlogTTSOptQQPlots} shows the probability plots for the
three subclasses used to generate the preceding histograms for the
\texttt{gallus\_26} [Fig.~\ref{fig:ICMlogTTSOptQQPlots}(a)] and
\texttt{gallus\_46} [Fig.~\ref{fig:ICMlogTTSOptQQPlots}(b)]
classes, respectively. The relation is close to linear over the
majority of the histogram support, implying in turn that the
${\rm TTS}_{\textrm{opt}}$ is approximately log-normally distributed.
This is consistent with quantities such as the parallel tempering
Monte Carlo autocorrelation time and other roughness measures
\cite{katzgraber:06a,yucesoy:13} having the same property.
While we have argued that $\Expect{}|\Gamma|$ is a good predictor of
mean difficulty, the histogram results show that this value by no means
provides a complete picture. Indeed, when $\Expect{}|\Gamma| = 8$ (i.e.,
$p_6=1$) there is no variance in $|\Gamma_C|$ as all subproblems have
eight-fold degeneracy, yet the $\log {\rm TTS}$ distributions in
Fig.~\ref{fig:ICMlogTTSOptHistograms} nonetheless show considerable
intraclass spread in difficulty. Therefore, other factors are
certainly at play in determining the hardness.
\begin{figure*}[tb]
\includegraphics[width=\columnwidth]{ICMlogTTSOptQQPlots26.pdf}
\includegraphics[width=\columnwidth]{ICMlogTTSOptQQPlots46.pdf}
\caption{Normal probability plots of ICMs
$\log_{10} {\rm TTS}_{\textrm{opt}}$ for the instance subclasses
used to generate the histograms in
Fig.~\ref{fig:ICMlogTTSOptHistograms}. A linear relation
between the Gaussian theoretical and data quantiles implies that
the data follow a normal distribution. The plots show clearly
that for the three representative subclasses of
(a) \texttt{gallus\_26} and (b) \texttt{gallus\_46}
parametrized by $\Expect{}|\Gamma|$, the histograms are close to
normal over the majority of their support. In other words,
${\rm TTS}_{\textrm{opt}}$ approximately follows a log-normal
distribution.}
\label{fig:ICMlogTTSOptQQPlots}
\end{figure*}
\section{Discussion}
\label{sec:discussion}
In this section we discuss generalizations of the planting approach
using lattice animals. Furthermore, we present a case study on how the
approach generalizes to non-hypercubic lattices. Finally, we
discuss the use of the planted problems for fundamental studies of spin
glasses and related statistical systems.
\subsection{Generalization via lattice animals}
\begin{figure}[tb]
\includegraphics[width=0.85\columnwidth]{LatticeAnimals.pdf}
\caption{Generalization of unit-cell planting using lattice
animals, also known as polyominoes. Illustrations are in two
dimensions for clarity. The analogous procedure in three space
dimensions is similar. With lattice animal planting, the
edge-disjoint subgraphs underlying the subproblems are no longer
restricted to the unit cells in $\mathcal{C}$ (the checkerboard in two
space dimensions and the decomposition in
Fig.~\ref{fig:BCCPartition} for three space dimensions) but are now
permitted to be connected subgraphs comprised of unions of cells
from $\mathcal{C}$. As before, subproblem couplers are not added. Above,
six such subgraphs in non-gray colors are shown, of which only the
pink one (top right) consists of a single cell. If the tree widths of the
lattice animals are small, their ground states can be computed
exactly and gauged to the overall target ground state. This
extension considerably expands types of subproblems that can be
employed and also introduces an additional mechanism of solution
hiding via randomization of the employed lattice animals.}
\label{fig:LatticeAnimals}
\end{figure}
The proposed unit-cell planting technique shows encouraging results,
and one may ask how it can be generalized. In this
section, we present a natural extension of the idea, still assuming
lattice-structured problems, where instead of defining subproblems on
the unit cells of $\mathcal{C}$, they are specified on subgraphs consisting
of their unions. One can verify that such subgraphs, called lattice
animals or polyominoes, also partition the lattice into edge-disjoint
subgraphs, meaning that subproblem couplers are still not added.
A two-dimensional example of decomposition into lattice animals is shown
in Fig.~\ref{fig:LatticeAnimals}. Generalization to three-dimensional
polyominoes obtained by grouping cells from the decomposition shown in
Fig.~\ref{fig:BCCPartition} is straightforward. Shown in non-gray colors
are six lattice animals. Only the pink (top right) one is made of a
single cell. The key difference from the basic method is that unit cells
of a given color are no longer constrained to have their individual
ground states agree; only the complete animal ground state is relevant.
This extension thus considerably broadens the types of subproblems that
can be employed and also introduces an additional mechanism of solution
hiding, namely, via randomization of the employed lattice animals, which
would of course be unknown to the would-be adversary. While the tiling
puzzle and CSP interpretation of the problem, suitably modified,
continues to hold in this generalization, under lattice animal
randomization, the adversary would in essence no longer be certain what
puzzle they are even solving.
In this work, we have considered carefully chosen families of
subproblems on three-dimensional unit cells. An exploration of
extensions to more general lattice animals is outside the scope of the
present work. We note, however, that if subproblem couplers are sampled
from a given distribution, then provided the subgraph tree widths
are small, their ground states can be computed exactly \cite{koller:09}
and gauged to the desired overall ground state.
Finally, we note that this lattice animal generalization is also key
when attempting to reduce degeneracy in the planted problems. For
example, by selecting the coupler values from a Sidon set
\cite{sidon:32,katzgraber:15,karimi:17a} of the form
$\{\pm(n-2)/n,\pm(n-1)/n,\pm 1\}$ with, e.g., $n = 50$, the degeneracy is
drastically reduced. Similarly, one could select the couplers from a
distribution of the form
\begin{equation}
|J_{ij}| = a + (1-a)u ,
\end{equation}
where $u \in [0,1)$ is a uniform random number and $a$ is close to $1$
\cite{katzgraber:10a}. However, instead of using basic tiles, more complex
lattice animals must be used to construct the problems.
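Both degeneracy-breaking coupler distributions can be sampled as in
the following sketch ($n = 50$ and $a = 0.9$ are illustrative values):
\begin{verbatim}
import random

def sidon_coupler(n=50):
    # Magnitude from the set {(n-2)/n, (n-1)/n, 1}, random sign.
    return random.choice((-1, 1)) * random.choice(
        ((n - 2) / n, (n - 1) / n, 1.0))

def near_unit_coupler(a=0.9):
    # |J| = a + (1 - a) u with u uniform in [0, 1), random sign.
    return random.choice((-1, 1)) * (a + (1.0 - a) * random.random())
\end{verbatim}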
\subsection{Generalization to arbitrary graphs}
\label{sec:chimera}
The need to benchmark both classical and quantum optimization heuristics
has hastened the development of advanced planting techniques for
solutions of spin-glass-like optimization problems. We have so far
focused our attention on hypercubic lattice problems. However, quantum
annealing machines typically have hardwired hardware graphs that are
close to planar. A prototypical example is the chimera graph
\cite{bunyk:14} used by current versions of the D-wave quantum annealing
machines. When presented with such a situation, one has two possible
options to apply our planting framework. The first is to impose a
lattice structure onto the available graph and the second is to specify
subproblems on altogether different ``unit cells'' than cubes, which of
course, must be graph dependent. We now discuss the first option applied
to the special case of chimera.
\begin{figure}
\includegraphics[width=0.95\columnwidth]{C32D.pdf}
\caption{Implementation of a $6\times 6$ square lattice with
periodic (toroidal) boundary from a $3\times 3$ unit-cell
chimera graph. (a) Instantiation of two-dimensional logical
spins from the chimera. For concreteness, $36$ lattice spins are
assumed to be labeled in row-fastest order. First, each spin in
the bipartite cells is paired with the directly opposing spin of
the same cell. Next, the resultant spin pairs, called dimers,
are forced to behave as one spin via strong ferromagnetic
coupling. Each dimer is labeled with the corresponding lattice
logical spin. (b) In the first and last columns of cells,
lattice problem edges between dimers $\{1,2\}$ and $\{3,4\}$ of
each cell are added. Note that only one of the two existing cell
edges is shown, but if both are used, the coupler strength must
be suitably divided. (c) In the top and bottom rows of cells,
lattice edges between dimers $\{1,3\}$ and $\{2,4\}$ of each
cell are added. Again, only one of the two available edges is
displayed. (d) All intercell couplers are specified
according to the lattice problem. The lattice variables are
labeled beside each dimer again, from which one can verify that
the resultant construction does indeed implement a $6\times 6$
two-dimensional torus.}
\label{fig:Chimera2DEmbed}
\end{figure}
Although in principle three-dimensional lattices can be implemented on
the chimera graph \cite{harris:17x}, the required overhead may limit
linear (planted) problem sizes that can be practically studied. On the
other hand, relatively large two-dimensional lattices with periodic
boundaries can be straightforwardly embedded on chimera, with a
relatively modest constant ratio of two chimera variables per lattice
spin. More precisely, a chimera graph of $L\times L$ bipartite unit
cells, comprised of $8L^2$ variables in total, can produce a toroidal
two-dimensional lattice of size $2L\times 2L$. Rather than formally
describe the rather natural procedure, we illustrate it in
Fig.~\ref{fig:Chimera2DEmbed}, where a $6\times 6$ periodic lattice is
created from a $3\times 3$ $K_{4,4}$ cell chimera graph.
So far, we have focused primarily on planting subproblems on
three-dimensional unit cells, in part because planar problems without a
field are solvable in polynomial time \cite{barahona:82}, but as
illustrated in Figs.~\ref{fig:tilingCSPSteps} and
\ref{fig:LatticeAnimals}, the two-dimensional analog is readily
obtained. The analytical tractability of the planar lattice enables a
deep exploration of physical and computational properties, a work which
will be reported elsewhere \cite{perera:18x}. Here we outline the idea
and demonstrate that it does indeed perform well on planar lattices and
nonplanar quasi-two-dimensional chimera graphs using population
annealing simulations. The two-dimensional subproblems, i.e., the
analogs of the cells shown in Fig.~\ref{fig:FPProblems}, are partitioned
into classes $\{C_i\}$ for $i \in \{1,\ldots,4\}$, within which cells
have $i$ ground-state configurations each. To achieve the
construction, first define two magnitudes $J_s, J_l > 0$ with
$J_l > J_s$; presently we take $J_l = 2$ and $J_s = 1$. A cell in class $C_i$ is
constructed by setting a random edge to be antiferromagnetic with
magnitude $J_s$, $i-1$ of the remaining edges to be ferromagnetic with
magnitude $J_s$, and the leftover edges to be ferromagnetic with
magnitude $J_l$ \cite{comment:precision}. It is readily verified that
the subproblems do indeed have the specified number of local ground
states, which always include the ferromagnetic state.
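A sketch of this construction (the indexing of the four plaquette
edges is our assumption):
\begin{verbatim}
import random

def make_2d_cell(i, J_s=1.0, J_l=2.0):
    # Class C_i plaquette, i in {1,...,4}: one random antiferromagnetic
    # edge of magnitude J_s, i-1 further ferromagnetic edges of
    # magnitude J_s, and the remaining edges ferromagnetic with J_l.
    J = [J_l] * 4
    special = random.sample(range(4), i)
    J[special[0]] = -J_s
    for k in special[1:]:
        J[k] = J_s
    return J
\end{verbatim}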
\begin{figure}
\includegraphics[width=0.95\columnwidth]{PAlogRhoP2_2D.pdf}
\caption{Average log-entropic family size
$\langle\log_{10} \rho_{\rm S}\rangle$ for $L=24$ periodic planar
lattice classes \texttt{C\_23} and \texttt{C\_24}.
Each point corresponds to a subclass in which unit cell
subproblems are selected from $C_2$ with probability $p_2$, or
else from $C_3$ for \texttt{C\_23} and from $C_4$ for
\texttt{C\_24}. Averages are computed over $200$ instances
from each subclass. As for the study in three dimensions, we see a
large range of landscape roughness, again with a range of
subclasses demonstrating greater difficulty than equal-sized
random bimodal spin-glass problems. The most difficult problems in
our ensemble are those in which the subproblems are exclusively
selected from $C_2$, in which the local GS degeneracy is $2$.}
\label{fig:PAlogRhoP2_2D}
\end{figure}
As was done in three space dimensions, we consider two classes of
problems, each comprising mixtures of two subproblem classes. Problem
class \texttt{C\_24} consists of mixtures of $C_2$ and $C_4$
cells, while in \texttt{C\_23} the cells may belong to $C_2$ or
$C_3$. Both problem classes are parametrized by $p_2$, the probability
of choosing a cell from $C_2$. The results on $24 \times 24$ periodic
lattices are shown in Fig.~\ref{fig:PAlogRhoP2_2D}. Again, a wide
range of landscape roughness is seen to be attainable, including a
regime in which the problems are more difficult than random bimodal
($\pm 1$ couplers) spin glasses. An interesting distinction from
three-dimensional results is that the most difficult problems are not
those in which cells exclusively belong to the maximally degenerate
class $C_4$, but rather the moderately constrained $C_2$ class. In
fact, class $C_4$ is a highly underconstrained regime for this
topology and gives rise to very easy problems. The fascinating
connections between dimension, complexity, and phase behavior are the
subject of ongoing study that goes beyond the scope of this paper.
If one wished to move beyond a regular lattice structure, the natural
objective is a decomposition of the problem graph $G$ into edge-disjoint
subgraphs satisfying some constraint allowing tractable minimization. An
example of a constrained decomposition is into subgraphs with given
minimum vertex degree \cite{yuster:13}. More directly applicable to the
planting context would be a constraint on maximum subgraph tree width,
thereby allowing determination of the planted subproblem ground states.
The edge-disjointness property is the key common aspect with the ideas
presented in this paper, as it continues to circumvent the need for
adding subproblem couplers. This idea is clearly a generalization of the
lattice animal methodology discussed previously. While we have not
presently considered planting using such generic subgraphs, one can
readily envision a heuristic decomposition algorithm that greedily grows
partitioning subgraphs until their tree widths exceed some criterion.
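Such a heuristic might be sketched as follows, using a standard
tree-width upper-bound routine from the \texttt{networkx} library (a
hypothetical, untuned illustration, not an algorithm we have
validated):
\begin{verbatim}
import networkx as nx
from networkx.algorithms.approximation import treewidth_min_degree

def greedy_edge_disjoint_decomposition(G, max_width):
    # Grow edge-disjoint subgraphs, accepting an edge as long as a
    # heuristic tree-width bound on the grown subgraph is not exceeded.
    remaining = set(G.edges())
    parts = []
    while remaining:
        part = [remaining.pop()]
        for e in sorted(remaining):
            width, _ = treewidth_min_degree(nx.Graph(part + [e]))
            if width <= max_width:
                part.append(e)
                remaining.remove(e)
        parts.append(part)
    return parts
\end{verbatim}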
\subsection{Spin-glass physics}
We hope researchers in the field embrace these planted problems to study
physical properties of glassy systems beyond the benchmarking of
optimization heuristics.
Having arbitrarily large planted solutions for hypercubic lattices
allows one to address different problems in the physics of spin glasses.
For example, the computation of defect energies, intimately related to
fundamental properties of these paradigmatic disordered
systems, strongly depends on the knowledge of ground states (see, for
example,
Refs.~\cite{hartmann:97,hartmann:99,palassini:99,palassini:01,hartmann:01b,katzgraber:01}).
Being able to plant problems would drastically reduce the computational
effort in answering these fundamental questions.
Furthermore, by carefully tuning the different instance classes,
problems with different disorder and frustration can be generated. A
systematic study of the interplay between disorder and frustration is
therefore possible for nontrivial lattices beyond hierarchical ones.
Similarly, being able to tune the fraction of
frustrated plaquettes allows one to carefully study the emergence of
chaotic effects in spin glasses (see, for example,
Refs.~\cite{katzgraber:07,thomas:11e,wang:15a} and references therein).
\subsection{Application-based benchmark problems}
It is well established that random benchmark problems
\cite{ronnow:14a,katzgraber:14} for classical and quantum solvers using
spin-glass-like problems are typically computationally easy.
Furthermore, control over the hardness of the benchmark problems has
been rather limited, either because (post)processing is expensive
\cite{katzgraber:15,marshall:16} or because the benchmark generation
approach does not give the user enough control over the problems to
match, e.g., hardware restrictions \cite{hen:15a}.
Because application-based problems from industrial settings are highly
structured, they pose additional challenges for the vast pool of
optimization techniques designed, in general, for random unstructured
problems. This has sparked the use of problems from industry to generate
hard (and sometimes tunable) benchmark problems. Most notably, the use
of circuit fault diagnosis \cite{perdomo:15,perdomo:17x} has produced
superbly hard benchmarks with a small number of variables. However, the
use of application-based problems for benchmarking is in its infancy and,
while circuit fault diagnosis shows the most promise \cite{perdomo:17x},
many applications have produced benchmark problems that lack the
richness needed to perform systematic studies; see, for example,
Refs.~\cite{perdomo:12,santra:14,rieffel:15,azinovic:17}.
The problems presented in this work are highly tunable and
computationally easy to generate. Furthermore, they can be embedded in
more complex graphs, as is, for example, commonly done on the D-wave
hardware for application benchmarks. Thus, this tunability should not
only allow for the generation of problems that might elucidate
quantum speedup in analog annealers, but might also help gain a
deeper understanding of quantum annealing for spin glasses in
general. In parallel, having these tunable problems might elucidate
the application scope of specific optimization techniques, both
classical and quantum.
\section{Conclusions}
\label{sec:conc}
We have presented an approach for generating Ising Hamiltonians with
planted ground-state solutions and a tunable complexity based on a
decomposition of the model graph into edge-disjoint subgraphs. Although
we have performed the construction for three-dimensional cubic lattices
and illustrated the approach with the two-dimensional pendant, the
approach can be generalized to other lattice structures, as shown in
Sec.~\ref{sec:chimera} for the chimera lattice. The construction allows
for a wide range in computational complexity depending on the mix of the
elementary building blocks used. We corroborated these results with
experiments using different optimization heuristics. Subsequent studies
should discuss constructions with controllable ground-state degeneracy, as
well as the mapping of the complete complexity phase space.
\section*{Acknowledgements}
We acknowledge helpful discussions with Jon Machta, Bill Macready,
Catherine McGeoch, Humberto Munoz-Bauza, and Martin Weigel, and are
grateful to Fiona Hanington for thorough suggestions on the
manuscript. F.~H.~would like to thank the Santa Fe Institute for its
hospitality and invitation to a stimulating Working Group on solution
planting, and is indebted to those who introduced him to the original
Eternity puzzle. H.~G.~K.~acknowledges support from the NSF (Grant
No.~DMR-1151387). The Texas A\&M team's research is based upon work
supported by the Office of the Director of National Intelligence
(ODNI), Intelligence Advanced Research Projects Activity (IARPA), via
Interagency Umbrella Agreement No. IA1-1198. The views and
conclusions contained herein are those of the authors and should not
be interpreted as necessarily representing the official policies or
endorsements, either expressed or implied, of the ODNI, IARPA, or the
U.S.~Government. The U.S.~Government is authorized to reproduce and
distribute reprints for Governmental purposes notwithstanding any
copyright annotation thereon. We thank Texas A\&M University and the
Texas Advanced Computing Center at The University of Texas at Austin for
providing HPC resources.
\section{Introduction}\label{sec:Introduction-shell}
Globular clusters (GCs) are found in the halo of the Milky Way, orbiting around the Galactic core. They are generally composed of old, low-mass stars bound together by gravity. The composition of these stars may vary between clusters, but on average, GCs have subsolar metallicity \citep[Z,][]{Gratton:2004,Harris:2010}. GCs are under intensive investigation for many reasons.
Their stars are so old that they place a lower limit on the age of the universe.
Additionally, their stars are both coeval and equidistant, thereby providing natural laboratories for stellar evolution.
One of the most intriguing open questions concerning GCs is that of the so-called abundance anomalies \citep{Yong:2003,DaCosta:2013}. Light-element abundances such as those of O and Na anticorrelate with each other: if O is depleted in a star, then Na is enhanced. The same is observed for the proton-capture isotopes of Mg and Al: if Mg is depleted in a star, then Al is enhanced.
Moreover, as the Al abundance increases in the observed GC stars, the ratio of the $^{24}$Mg isotope to the total Mg decreases, $^{25}$Mg decreases slightly, and $^{26}$Mg increases considerably. This is consistent with the interpretation that one generation of stars has been polluted by nuclear-burning products produced at very high temperatures \citep[>6$\cdot$10$^7$~K, ][]{Ventura:2011}.
The nucleosynthetic processes that can increase Na and Al while destroying O and Mg (as well as creating the Mg-isotopic ratios observed) are the Ne-Na chain and the Mg-Al chain \citep{Burbidge:1957}, respectively. These burning chains are side-reactions of the CNO-cycle, the main hydrogen-burning process in \textsl{massive} stars. Consequently, there must have been at least one population of massive (and/or intermediate-mass) stars born in the early epochs of the GC's life. These massive stars are already dead, but their nuclear imprint is what we observe today as anomalous abundance patterns in the second generation of low-mass stars. The question is then: how did the pollution happen, i.e.\ how did massive stars lose such large amounts of nuclear-processed material, and how did this material end up in some of the low-mass stars?
According to the most commonly accepted explanation, the interstellar medium (ISM) had been polluted by hydrogen-burning products from massive stars, and the second generation of stars was born from the polluted ISM \citep{Decressin:2007,DErcole:2008}. Alternatively, low-mass stars could accrete the ISM during a long pre-main-sequence phase \citep{Bastian:2013}. In both cases, an astrophysical source -- a polluter -- is needed. This source, a population of massive or intermediate-mass stars, should only produce hydrogen-burning products (including helium), since no traces of helium-burning products or supernova ejecta are observed. Additionally, the polluter should eject the material slowly enough for it to stay inside the gravitational potential well of the GC. This condition excludes fast winds of massive OB stars or Wolf--Rayet stars unless the fast winds are shocked and can cool efficiently before leaving the cluster \citep[cf.][]{Wunsch:2016}.
Several astrophysical scenarios have been proposed that fulfill the conditions above. Asymptotic giant branch stars could eject their hot-bottom-burning products \citep{Ventura:2001,DErcole:2008}. Fast rotating massive stars that are close to the breakup rotation could eject core-burning products \citep{Decressin:2007,Tailo:2015}. Supermassive (10\,000~M$_{\odot}$) stars could pollute through continuum-driven stellar winds \citep{Denissenkov:2014}. In addition, massive binary systems could pollute via non-conservative mass transfer \citep{deMink:2009}.
\begin{figure}
\centering
\resizebox{1.1\hsize}{!}{\includegraphics[page=1]{shell.png}}
\caption{Photoionization-confined shell around a cool supergiant star. The second generation of low-mass stars are formed in the shell. This scenario could be common in the first few million years of the early globular clusters, explaining the pollution of the second generation. This simple drawing serves to present the original idea; as for the nominal values of our model, the shell forms at $r\approx 0.02$~pc from the central star (cf.\ our simulation of a shell in Fig.~\ref{fig:shelldens}). The central supergiant itself has a stellar radius of $\sim$5000~R$_{\odot}$; that is, the supergiant is 170 times smaller in radial dimension than the sphere of the shell. (This figure is derived from fig.~1 of \citet{Mackey:2014}).
}
\label{fig:shell}
\end{figure}
Here we propose a new scenario: low-mass stars could be born in photoionization-confined shells around cool supergiant (SG) stars in the young globular clusters, as shown in Fig.~\ref{fig:shell}.
\citet{Szecsi:2015} simulated very massive (80$-$300~M$_{\odot}$) and long-lived SGs. These long-lived SGs are predicted to exist only at low Z, because at solar composition the strong mass loss removes their envelopes and turns them into Wolf--Rayet stars before they reach the SG branch. Moreover, the very massive, metal-poor SGs form \textsl{before} the hydrogen is exhausted in the core \citep[this is due to envelope inflation, cf.][]{Sanyal:2015}. Core-hydrogen-burning cool supergiants spend 0.1-0.3~Myr on the SG branch. During this time, they lose a large amount of mass (up to several hundred M$_{\odot}$ in the case of a 600~M$_{\odot}$ star, as we show below). The mass lost in the SG wind has undergone nuclear burning and shows abundance variations similar to those observed in GC stars.
Photoionization-confined shells can be present around cool supergiants at the interface of ionized and neutral material, as shown by \citet{Mackey:2014}. The shell can contain as much as 35\% of the mass lost in the stellar wind.
The main condition for forming a photoionization-confined shell is that the SG has a cool and slow wind and is surrounded by strong sources of Lyman-continuum radiation.
These conditions may have been fulfilled at the time when Galactic globular clusters were born. Evolutionary simulations of low-Z massive stars by \citet{Szecsi:2015} predict that both supergiant stars and compact hot stars develop at the same time. The latter are fast rotating, hot and luminous massive stars that
emit a huge number of Lyman-continuum photons. The slowly rotating stars, on the other hand, evolve to be cool red or yellow SGs. Thus, the condition required by \citet{Mackey:2014} about SGs and ionizing sources close to each other may have been common in the first few million years of a GC's life.
Consequently, photoionization-confined shells could form there, too.
This work is organized as follows. In Sect.~\ref{sec:SGinyoungGC} we present the evolution of the models that become core-hydrogen-burning cool SG stars, and discuss the composition of their winds. In Sect.~\ref{sec:SGshells} we introduce the star-forming supergiant shell scenario, and show that in the environment of the young globular clusters, it is possible to form low-mass stars in a supergiant shell from the material ejected by the SG's wind. In Sect.~\ref{sec:discuu} we discuss the mass budget of our scenario, as well as the amount of helium predicted in the second generation. In Sect.~\ref{sec:conclusionshell} we summarize the work.
\section{Supergiants in young GCs}\label{sec:SGinyoungGC}
\subsection{The evolution of core-hydrogen-burning cool SGs}\label{sec:evolution}
The first generation of stars in the young GCs almost certainly contained massive stars. We see massive stars forming in young massive clusters (YMCs) today \citep{Longmore:2014}. YMCs are theoretically similar to the young GCs and are thought to become GC-like objects eventually
\citep[e.g.][]{Brodie:2006,Mucciarelli:2014,Andersen:2016}.
The massive stars of this first generation must have had the same metallicity that we observe today in the low-mass GC stars. The metallicity distribution of GCs in the Galaxy is shown in Fig.~\ref{fig:histo}. It is a broad and bi-modal distribution with a large peak at [Fe/H]~$\sim -$1.4 and a smaller peak at $\sim -$0.6 \citep[cf.][]{Gratton:2004,Brodie:2006,Harris:2006,Harris:2010,Forbes:2010}. While there is recent evidence that a few of the high-metallicity GCs seem to harbor multiple generations too \citep[][]{Schiavon:2017}, here we only consider low-metallicity GCs that are in the large peak, that is, between [Fe/H]~=~$-$1.0 and $-$2.0, because the abundance anomalies seem to be consistently present in almost all of them \citep{Gratton:2004}.
We use the low-metallicity ([Fe/H]=$-$1.7, corresponding to 0.02~Z$_{\odot}$) massive star simulations of \citet{Szecsi:2015} to model the young GC environment and the first generation of massive and very massive stars. However, \citet{Szecsi:2015} do not use an $\alpha$-enhanced mixture \citep[as suggested for GC stars by][see their Table~3]{Decressin:2007}, but a mixture suitable for dwarf galaxies. Therefore, when comparing to observations (in Figs.~\ref{fig:obsNaO}--\ref{fig:obsMgiso}), the initial O, Na, Mg and Al abundances of our models are scaled to the following abundance ratios:
[O/Fe]$_{\rm first}$=0.4,
[Na/Fe]$_{\rm first}$=$-$0.4,
[Mg/Fe]$_{\rm first}$=0.6,
[Al/Fe]$_{\rm first}$=0.2,
approximately matching the observed composition of the first generation of GC stars.
\begin{figure}
\centering
\resizebox{0.5\hsize}{!}{\includegraphics[width=0.2\columnwidth,angle=0]{Harris}}
\caption{Number of GCs at a given metallicity. The figure is taken from \citet[][]{Harris:2010}, and shows the distribution of 157 GCs with measured [Fe/H] value. We apply a metallicity of [Fe/H]=$-$1.7 (marked in the figure) to model the first generation of massive stars in GCs.
}
\label{fig:histo}
\end{figure}
Massive stars at low Z evolve differently from those at Z$_{\odot}$.
Simulations of \citet{Szecsi:2015} predict different evolutionary paths and, consequently, new types of objects present in low-Z environments. One of the predictions at low Z are the core-hydrogen-burning cool supergiant stars.
These objects start their evolution as O-type stars but, during their main-sequence phase, they expand due to envelope inflation \citep{Sanyal:2015} and become cool SG stars while still burning hydrogen in their cores.
The cool supergiants in general have a convective envelope because of their low (<10$^4$~K) surface temperature. Envelope convection mixes nuclear products from the burning regions (core or shell) to the surface. Thus, the wind of the cool SG stars contains the products of the nuclear burning that happens in the deeper regions of these stars. In the case of core-hydrogen-burning cool supergiants, the nuclear-burning products in the wind are, necessarily, hot-hydrogen-burning products.
Core-hydrogen-burning cool SGs with low metallicity (0.02~Z$_{\odot}$) are predicted at masses higher than M$_{\rm ini}\gtrsim$~80~M$_{\odot}$.
They stay on the SG branch and burn hydrogen for a relatively long time (in some cases, as long as 0.3~Myr, which corresponds to 15\% of their main-sequence lifetimes). These objects therefore contribute to the chemical evolution of their environments. Such a star could eject several tens, or even hundreds, of M$_{\odot}$ through stellar-wind mass loss, and the composition of this material differs from that of the circumstellar gas.
We simulate the cool supergiant phase by applying the mass-loss rate prescription of \citet{Nieuwenhuijzen:1990}, which is a parametrized version of that of \citet{deJager:1988}. The latter has been shown by \citet{Mauron:2011} to still be applicable in the light of new observations of red supergiants. A metallicity dependence of the wind is implemented as $\dot{M}\sim Z^{0.85}$ according to \citet{Vink:2001}. Thus, the mass-loss recipe we use is
\begin{equation}
\label{eq:nieu}
\begin{split}
\log\frac{\dot{M}}{M_{\odot}\,{\rm yr}^{-1}} = {}& 1.42\log (L/L_{\odot}) + 0.16\log (M/M_{\odot}) \\
&+ 0.81\log (R/R_{\odot}) + \log\left(9.631\cdot 10^{-15}\right) \\
&+ 0.85\log (Z_{\rm ini}/Z_{\odot}).
\end{split}
\end{equation}
This formula is in accordance with the results of \citet{Mauron:2011}, who find that the metallicity exponent should be between 0.5 and 1. However, it is important to note that this prescription is based on red SG stars with masses between 8 and 25~M$_{\odot}$. Since there are no observed mass-loss rates for SG stars with masses of 150-600~M$_{\odot}$, we extrapolate Eq.~\ref{eq:nieu} up to these masses, noting that this extrapolation involves large uncertainties.
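Note that the metallicity term alone already sets the scale of the wind reduction: at our metallicity of $Z_{\rm ini}=0.02$~Z$_{\odot}$,
\begin{equation*}
0.85\log\left(0.02\right)\approx-1.44,
\end{equation*}
so the adopted mass-loss rates are a factor of $\sim$28 below those of an otherwise identical star at solar metallicity.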
Fig.~\ref{fig:HR} shows the Hertzsprung--Russell diagram of three evolutionary models that become core-hydrogen-burning SG stars
towards the end of their main-sequence evolution.
The models were taken from \citet{Szecsi:2015}, except for the most massive one (M$_{\rm ini}$=575~M$_{\odot}$) which was computed for this work.
Our simulation of the model with M$_{\rm ini}$=575~M$_{\odot}$ was carried out until the central helium mass-fraction was 0.81, that is, before the end of core hydrogen-burning.
We estimate that until core-hydrogen exhaustion, this model needs about 0.28~Myr of further evolution; thus, the total time it spends as a core-hydrogen-burning cool SG is 0.37~Myr. Based on its main-sequence lifetime of 1.56~Myr and the general trend that massive stars spend 90\% of their total life on the main sequence and 10\% on the post-main sequence, we expect a post-main-sequence lifetime of $\sim$0.17~Myr. The mass loss in the SG phase can be as high as 10$^{-3}$~M$_{\odot}$~yr$^{-1}$.
It is expected that with this high mass loss, the model loses its whole envelope during its post-main-sequence lifetime. But even if all its hydrogen-rich layers are lost, it will stay cool. According to \citet[][their fig.~19]{Koehler:2015}, the zero-age main sequence (ZAMS) of pure helium stars bends toward that of hydrogen-rich stars, crossing it at $\sim$300~M$_{\odot}$ in the case of models with subsolar (SMC and LMC) composition. Although the exact mass at which the crossover of the two ZAMS lines happens at our sub-SMC metallicity needs to be investigated in the future, the model with M$_{\rm ini}$=575~M$_{\odot}$ (and a total mass of 491~M$_{\odot}$ at the end of our simulation) is most probably above it. Therefore, we do not expect this model to become a hot Wolf--Rayet star after its envelope is lost, but instead to stay cool and become a helium-rich SG during the remaining evolution.
The model with M$_{\rm ini}$=257~M$_{\odot}$ from \citet{Szecsi:2015} was followed during its post-main-sequence evolution. Our simulation stops when the central helium mass fraction has decreased to 0.73 during core \textsl{helium}-burning. The model spends 0.26~Myr as a core-hydrogen-burning cool SG (with a radius of $\sim$5000~R$_{\odot}$~$\sim$3.5$\cdot$10$^{14}$~cm), and is expected to spend a total of $\sim$0.25~Myr as a core-helium-burning object. The mass-loss rate is 2.9$\cdot$10$^{-4}$~M$_{\odot}$~yr$^{-1}$ (i.e.~$-$3.5 on a logarithmic scale) in the last computed model. Supposing that this mass-loss rate stays constant until the end of its post-main-sequence lifetime, this model will end up having only 140~M$_{\odot}$. It remains an open question whether this model, having lost its hydrogen-rich envelope, would stay cool or would become a hot Wolf--Rayet star. To decide, one would need either to follow the rest of its evolution, or to establish a mass limit at which the helium-ZAMS and the hydrogen-ZAMS cross.
Since these tasks would require improvements to the code and the creation of a dense grid of high-mass models, they fall outside the scope of the current work.
However, given all the uncertainties concerning the mass-loss rates of actual supergiant stars at this mass, it may be that the model never even loses its envelope because the real mass-loss rate is lower than assumed here.
The model with M$_{\rm ini}$=150~M$_{\odot}$ has finished core-helium-burning in our simulation. It spends 0.07~Myr as a core-hydrogen-burning cool SG (during which time its surface does not become cooler than 19~000~K; its largest radius is 182~R$_{\odot}$) and another 0.30~Myr as a core-\textsl{helium}-burning red supergiant (with a surface temperature of $\sim$4250~K and a radius of $\sim$4000~R$_{\odot}$). It has a final mass of 118~M$_{\odot}$, and the mass-loss rate in the last computed model is 8.0$\cdot$10$^{-5}$~M$_{\odot}$~yr$^{-1}$.
Since core-helium-burning is finished in this model, we know its final surface temperature, as well as its envelope composition: it is a red supergiant at the end of its life, and it has an envelope of about 25~M$_{\odot}$, which is composed of 49.02\% hydrogen, 50.96\% helium and 0.02\% metals. Thus, we know for sure that it stays cool until the end of its life, whereas we could not be sure for the two more massive models discussed above. Moreover, we find no helium-burning side-products at its surface. The reason for this is that the size of the convective core during helium-burning is smaller than that during hydrogen-burning, and the convective envelope of the red supergiant never reaches the layers of helium-burning. It only mixes the ashes from core-hydrogen-burning and, during the post-main-sequence phase, shell-hydrogen-burning to the surface. As the observed composition of GC stars shows no traces of helium-burning products either, we suggest that this SG model, having finished its post-main-sequence evolution while ejecting about 30~M$_{\odot}$ of material polluted with hot-hydrogen-burning products, is a potential source of the pollution in the young GCs.
\begin{figure}
\centering
\resizebox{\hsize}{!}{\includegraphics[width=0.5\columnwidth,angle=270]{HR}}
\caption{Hertzsprung--Russell diagram of three low-Z evolutionary models that become core-hydrogen-burning SG stars with initial masses of 150, 257 and 575~M$_{\odot}$ and initial rotational velocity of 100~km~s$^{-1}$. Dots in the tracks mark every 10$^5$~years of evolution. Crosses mark the end of the core-hydrogen-burning phase; in case of the model with 575~M$_{\odot}$, the end of the computation.
Theoretical mass-loss rates are colour coded, and dashed lines indicate the radial size of the stars on the diagram.
}
\label{fig:HR}
\end{figure}
\subsection{Composition of the SG wind}
Core-hydrogen-burning cool SGs have a convective envelope that mixes the hydrogen-burning products from the interior to the surface. The strong stellar wind then removes the surface layers. To calculate the composition of the ejecta, we need to sum over the surface composition of the evolutionary models. Fig.~\ref{fig:obsNaO} shows the surface Na abundance as a function of the surface O abundance of the three models presented above (in Fig.~\ref{fig:HR}). During their SG phase, the surface composition of our models covers the area where the most extremely polluted population of GC stars is found. This means that if low-mass stars form from the material lost by the SG directly (i.e.\ without mixing the ejecta with pristine gas), this second generation of low-mass stars would be observed as part of the extremely polluted population (cf.\ Sect.~\ref{sec:comp}). If, however, the material lost via the slow SG wind is mixed with non-polluted gas, the second generation of low-mass stars could possibly reflect the composition of the so-called intermediate population \citep[i.e.\ those stars that show some traces of pollution, compared to a not-polluted, primordial population, as explained by][]{DaCosta:2013}.
\begin{figure}
\centering
\resizebox{\fs\hsize}{!}{\includegraphics[page=1]{observed}}
\caption{
Theoretical predictions of the wind composition (surface Na abundance as a function of the surface O abundance, in solar Fe units) of three stellar models that become core-hydrogen-burning SGs are plotted with lines. The grey part of the lines corresponds to surface compositions at T$_{\rm eff}$>10$^4$~K (i.e.\ the evolution before reaching the SG branch), while the coloured part of the lines shows the surface composition at T$_{\rm eff}$<10$^4$~K (i.e.\ on the SG branch). When the lines become dashed, they represent the composition of the envelope in the last computed model (i.e.\ deeper layers that could still be lost if the mass-loss rate were higher than assumed here). The evolutionary calculations ended at the core temperatures, T$_{\rm c8}$, given in the legend (units in 10$^8$~K).
The black-yellow star-symbol corresponds to the composition for the simulation presented in Sect.~\ref{sec:comp}.
Observational data of the surface composition of GC stars ($\omega$~Cen red, NGC~6752 black and M~4 blue) are plotted with dots of different colours and shapes, following \citet{Yong:2003}, \citet{DaCosta:2013} and \citet{Denissenkov:2014}.
Open symbols mark the `primordial' population of stars, that is, those without pollution. Filled symbols mark the `extremely' polluted population of stars. Crosses mark the `intermediate' population stars, that is, those with some but not extreme pollution. For details of the observations and the properties of these categories, we refer to \citet{Yong:2003} and \citet{DaCosta:2013}.
}
\label{fig:obsNaO}
\end{figure}
Since the mass-loss rates of our models are uncertain, it is worth investigating how a higher mass-loss rate would influence the ejecta composition. Therefore, we also plotted the composition of the envelope in the last model in Fig.~\ref{fig:obsNaO}. With a higher mass-loss rate (or, in the case of the two most massive models, during the remaining evolutionary time), deeper layers could be lost in the wind, contributing to the extremely polluted generation with very low [O/Fe] (<$-$1) and very high [Na/Fe] ($\sim$0.7).
Deep inside the envelope, the Na abundance drops suddenly because the high temperature ($\gtrsim$0.8$\cdot$10$^{8}$~K) destroys the Na.
\begin{figure}
\centering
\resizebox{\fs\hsize}{!}{\includegraphics[page=2]{observed}}
\caption{The same as Fig.~\ref{fig:obsNaO} but for Mg and Al.}
\label{fig:obsMgAl}
\end{figure}
The Mg-Al surface abundances of our models are shown in Fig.~\ref{fig:obsMgAl}. The surface Mg and Al abundances cover only a small fraction of all the observed variations in these elements. However, losing deeper layers of the envelope could explain the whole observed ranges of Mg and Al variations. When it comes to Mg, it is not only the sum of all three Mg-isotopes that is measured, but the ratios of them as well \citep{Yong:2003,Yong:2006,DaCosta:2013}. Fig.~\ref{fig:obsMgiso} shows the observed isotopic ratios of Mg as a function of the Al-abundance. As mentioned above, our models can reproduce the most extreme Al-abundance values observed in the case where deeper layers of the models are lost. In these deep layers, the Mg-isotopes also follow the observed trend: $^{24}$Mg is decreasing, $^{25}$Mg is slightly decreasing and $^{26}$Mg is considerably increasing compared to their values at the surface.
Due to the high core temperatures, the Mg-Al chain is very effective in our cool SG models. This is a clear advantage of our scenario: for example, neither the fast rotating star scenario nor the massive binary scenario can reach the required spread in Al and Mg, or reproduce the extreme ratios of the Mg-isotopes, unless the reaction rate of the Mg-Al chain is artificially increased \citep{Decressin:2007,deMink:2009}.
\begin{figure}
\centering
\resizebox{0.8\hsize}{!}{\includegraphics[width=0.8\columnwidth,page=3]{observed}}
\resizebox{0.8\hsize}{!}{\includegraphics[width=0.8\columnwidth,page=4]{observed}}
\resizebox{0.8\hsize}{!}{\includegraphics[width=0.8\columnwidth,page=5]{observed}}
\caption{The same as Fig.~\ref{fig:obsMgAl} but for the isotopes of Mg.}
\label{fig:obsMgiso}
\end{figure}
From the comparison of our models' composition with the observed light-element abundances, we conclude that cool SG stars are promising candidates for the astrophysical source that pollutes the second generation of GC stars.
Their strong, slow winds can enrich the interstellar material of the cluster with hot-hydrogen-burning products; the light-element abundances in their envelopes correspond to the most extreme pollution observed. If the stellar wind mixes with the pristine gas of the cluster \citep[as assumed for all other scenarios, such as the asymptotic giant branch star, the fast rotating star and the massive binary scenarios,][]{Bastian:2015}, this mixture can form stars with all of the observed abundance spreads. Thus, cool SGs should be considered as potential contributors of the general pollution of GCs.
However, here we discuss our cool SG models' role not in the general pollution of the interstellar medium of GCs, but in the context of another star-forming process: low-mass star formation in a photoionization-confined shell around the cool SGs. To predict the composition of the SG-ejecta and thus the composition of the second generation of low-mass stars, we need to sum over the surface composition of the SG evolutionary models. We come back to this issue in Sect.~\ref{sec:comp}. In the following, we introduce the concept of the star-forming SG shell.
\section{Star formation in the shell}\label{sec:SGshells}
\subsection{Conditions in young GCs}\label{sec:Conditions}
Apart from the core-hydrogen-burning cool SGs, another important prediction by \citet{Szecsi:2015} is that the fast rotating massive stars become hot, compact and bright for their whole lifetime. These objects, called Transparent Wind UV-Intense (TWUIN) stars, have similar surface properties to those of Wolf--Rayet stars, but differ in that their stellar winds are optically thin \citep[see also][for further discussions of these objects]{Szecsi:2015b,Szecsi:2017h}. TWUIN stars produce a huge amount of ionizing radiation during their lifetimes. According to \citet{Szecsi:2015}, TWUIN stars have a Lyman-continuum luminosity of $Q_0\approx 10^{50}-10^{51}$~s$^{-1}$. A SG located 0.5~pc from such a star is therefore exposed to an ionizing photon flux, $F_{\gamma}$, between $3.3\times 10^{12}$~cm$^{-2}$~s$^{-1}$ and $3.3\times 10^{13}$~cm$^{-2}$~s$^{-1}$. In a dense cluster it is possible for the separation to be even smaller, leading to potentially even more extreme irradiating fluxes.
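The quoted fluxes follow from simple inverse-square dilution of the Lyman-continuum photons; for example, for $Q_0=10^{50}$~s$^{-1}$,
\begin{equation*}
F_{\gamma}=\frac{Q_0}{4\pi d^2}=\frac{10^{50}\;\mathrm{s}^{-1}}{4\pi\,(0.5\;\mathrm{pc})^{2}}\approx3.3\times 10^{12}\;\mathrm{cm}^{-2}\,\mathrm{s}^{-1},
\end{equation*}
using $1$~pc~$=3.086\times 10^{18}$~cm; $Q_0=10^{51}$~s$^{-1}$ gives the upper value.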
Following \citet{Szecsi:2015}, we suppose that $\sim$20\% of all massive stars rotate faster than the threshold required for quasi-homogeneous evolution, i.e.\ for TWUIN-star formation \citep[this ratio is supported by the rotational velocity distribution of massive stars in the Small Magellanic Cloud observed by][]{Mokiem:2006}. Thus, we have a population of massive stars in a young globular cluster where $\sim$80\% of the stars evolve towards the supergiant branch while $\sim$20\% stay hot and emit ionizing radiation.
Supposing that the ionizing-radiation field of the TWUIN stars is isotropic, the wind structure of the SG stars changes significantly: their winds are photoionized from the outside in. At the interface between ionized and neutral material, a dense, spherical shell develops if the wind is sufficiently slow. This region is called the photoionization-confined shell.
\subsection{Photoionization-confined shells around cool SGs}\label{sec:pico}
\citet{Mackey:2014} developed the photoionization-confined shell model to explain the static shell observed around Betelgeuse, a nearby red SG star. According to their calculations, pressure from the photoionized wind generates a standing shock in the neutral part of the wind and forms an almost static, photoionization-confined shell. The shell traps up to 35\% of all mass lost during the red SG phase, confining this gas close to the central object until its final supernova explosion.
We carried out simulations of a shell around a low-Z very massive SG star that undergoes core hydrogen burning. We use the \textsc{PION} code with spherical symmetry \citep{Mackey:2012} to simulate an evolving stellar wind that is photoionized by external radiation. The sources of the ionizing radiation are the fast-rotating TWUIN stars, creating an isotropic radiation field that surrounds the SG star. The simulations follow \citet{Mackey:2014} except that we include stellar evolution and we use non-equilibrium heating and cooling rates for the gas thermal physics \citep[as in][]{Mackey:2015}. The stellar wind flows through the inner boundary of the grid with properties taken from the model with M$_{\rm ini}$=257~M$_{\odot}$ of \citet[][also see Sect.~\ref{sec:evolution}]{Szecsi:2015}. This evolutionary model has an initial rotational velocity of 100~km~s$^{-1}$ and mass loss in the SG phase of about 10$^{-3.5}$~M$_{\odot}$~yr$^{-1}$.
The wind is initially cold (200~K; this has no effect on the results because the wind is highly supersonic). The wind velocity is calculated from the escape velocity following \citet{Eldridge:2006}, except that we set the SG wind velocity to be $v_\infty=0.1v_\mathrm{esc}$ for $T_\mathrm{eff}<4500$ K.
The above modification gives a minimum value of $v_\infty\approx12$~km\,s$^{-1}$. The simulations are run with a total metallicity of 0.0002 and surface abundance mass fractions X=0.5 and Y=0.4998, similar to the surface abundances in the low-Z stellar model \citep{Szecsi:2015}.
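As an order-of-magnitude check (with illustrative round numbers of our choosing, $M\approx250$~M$_{\odot}$ and $R\approx5000$~R$_{\odot}$ for the SG phase of the 257~M$_{\odot}$ model, and ignoring any correction factors in the \citet{Eldridge:2006} prescription),
\begin{equation*}
v_\mathrm{esc}=\sqrt{2GM/R}\approx140\;\mathrm{km\,s}^{-1}
\quad\Longrightarrow\quad
v_\infty=0.1\,v_\mathrm{esc}\approx14\;\mathrm{km\,s}^{-1},
\end{equation*}
of the same order as the quoted floor.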
The wind is exposed to an ionizing photon flux of $F_{\gamma} = 10^{13}$~cm$^{-2}$~s$^{-1}$ (cf.\ Sect.~\ref{sec:Conditions}) in the calculations presented here.
The formation of the shell in the simulation depends on the thermal physics of the shocked wind (which must be able to cool into a dense and cold layer); this is rather uncertain because we have no constraints on dust formation in such low-metallicity SGs. We use atomic line cooling \citep{Wolfire:2003} as implemented in \citet{Mackey:2013}, scaled to the metallicity of the stellar wind.
\begin{figure}
\centering
\resizebox{\fs\hsize}{!}{\includegraphics{Evo257_v12_End_radial}}
\caption{
Density, temperature, velocity, and ionization fraction for the simulation of the photoionization-confined shell around a core-hydrogen-burning supergiant with an initial mass of 257~M$_{\odot}$. The snapshot is taken at the end of the stellar-evolution calculation, when the star has an age of 1.88~Myr, at which time the shell mass is 14~M$_{\odot}$.
}
\label{fig:shelldens}
\end{figure}
\begin{figure}
\centering
\resizebox{\fs\hsize}{!}{\includegraphics{Evo257_v12_MP3_results}}
\caption{
Shell mass, $M_\mathrm{sh}$, as a function of time since the star's birth (solid blue line), compared to the Bonnor-Ebert mass $M_\mathrm{BE}$ at the densest point in the shell (dot-dashed blue line). The dashed black line shows the minimum unstable wavelength in units of the shell radius.
}
\label{fig:shellmass}
\end{figure}
Fig.~\ref{fig:shelldens} shows the structure of the shell. The shell formed at a radius $r\approx0.02$~pc ($6\cdot10^{16}$ cm) from the supergiant (recall that the radius of the stellar model itself is 3.4$\cdot$10$^{14}$~cm, see Sect.~\ref{sec:evolution})
and shows the classic structure of a radiative shock:
(i) an initial density jump at the shock of a factor of $\approx4$ with associated jumps in temperature and velocity according to the Rankine-Hugoniot jump conditions;
(ii) a cooling region where the temperature decreases with $r$, the density increases, and the velocity decreases; and
(iii) a cold dense layer.
The cold layer is bounded on the outside by the ionization front, at which radius the stellar wind is heated to $\approx12\,000$ K.
A thermally driven wind accelerates outwards from the ionization front.
We find that at the metallicity of the SG, the atomic cooling simulation produces a shell with density $\rho\approx2\times10^{-16}$~g\,cm$^{-3}$ and temperature $T\approx50$ K.
The shell mass, $M_{\rm shell}$, is plotted as a function of time in Fig.~\ref{fig:shellmass}.
It grows to M$_{\rm shell}\approx$~14~M$_{\odot}$ by the end of the simulation.
The Bonnor-Ebert mass (i.e.\ the mass limit of the overdense region, above which the material collapses into a proto-star), $M_\mathrm{BE}$, and the minimum unstable wavelength $\lambda_\mathrm{min}$ are also plotted in Fig.~\ref{fig:shellmass}. They are discussed in the next section.
\subsection{Gravitational instability in the shell}\label{sec:grav}
For the second generation of low mass stars to form in the photoionization-confined shell, the shell should be gravitationally unstable. To show that the shell in our simulation is indeed gravitationally unstable against perturbations, we follow \citet[][see their eqs.~2.12-2.14]{Elmegreen:1998} who describes the stability of a shocked sheet of gas \citep[see also][]{Doroshkevich:1980,Vishniac:1983}. The dispersion relation (eq.~2.13) gives the condition that perturbations with wavelength~$\lambda$ are unstable~($\omega^2>0$)~if
\begin{equation}
\lambda \geq \frac{c^2}{G\sigma} = \frac{P}{G\sigma\rho},
\label{eq:lambda}
\end{equation}
where $c$ is the isothermal sound speed defined by $c^2\equiv P/\rho$ ($P$ being the thermal pressure and $\rho$ the density), and $\sigma$ is the column density through the shell. This condition needs to be fulfilled by the shell in order to become gravitationally unstable. We define $\lambda_\mathrm{min}$ to be the wavelength at which this inequality is an equality.
In our simulation, the shell thickness is \mbox{$l=0.36\cdot10^{16}$~cm}, density is \mbox{$\rho=2.65\cdot10^{-16}$~g~cm$^{-3}$}, and pressure is \mbox{$P=5.89\cdot10^{-7}$~dyne~cm$^{-2}$}. For this shell, the above condition gives a perturbation wavelength \mbox{$\lambda_\mathrm{min} = 3.4\cdot10^{16}$~cm}.
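Explicitly, with the column density $\sigma=\rho l\approx0.95$~g~cm$^{-2}$, Eq.~(\ref{eq:lambda}) evaluates to
\begin{equation*}
\lambda_\mathrm{min}=\frac{P}{G\sigma\rho}\approx\frac{5.89\cdot10^{-7}}{6.67\cdot10^{-8}\times0.95\times2.65\cdot10^{-16}}\;\mathrm{cm}\approx3.5\cdot10^{16}\;\mathrm{cm},
\end{equation*}
which reproduces the quoted value to within rounding.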
An overdense region should have a diameter of $\lambda$/2. For our spherical shells, we should restrict $\lambda$/2 to be significantly less than the radius of curvature, so that the unstable part of the shell looks more like a flat sheet than a sphere. The shell is at radius \mbox{$\sim$6.2$\cdot10^{16}$~cm} (0.02~pc). The angular size of the overdense region is thus
\mbox{$\lambda_\mathrm{min}/2R_\mathrm{sh}\approx1.7/6.2\approx0.3$}, which is much less than one radian (about 16\degr), so curvature effects are relatively small.
Fig.~\ref{fig:shellmass} shows that \mbox{$\lambda_\mathrm{min}/2R_\mathrm{sh}\approx0.33$} at the end of the simulation, similar to the estimate above.
The Bonnor-Ebert mass for this dense region is
\begin{equation}
M_{\rm BE}=1.18\frac{c^4}{P^{1/2}G^{3/2}}=0.2~ {\rm M}_{\odot},
\label{eq:BEmass}
\end{equation}
meaning that if the dense region contains more mass than this, it would collapse to a protostar. The mass of the dense region depends on its geometry, but with a density of \mbox{$\rho=2.65\cdot10^{-16}$~g~cm$^{-3}$} and a length scale of \mbox{$\lambda/2 \approx 1.7\cdot10^{16}$~cm}, it is around 2-3~M$_{\odot}$.
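Evaluating Eq.~(\ref{eq:BEmass}) with the shell values quoted above ($c^2=P/\rho\approx2.2\cdot10^{9}$~cm$^{2}$~s$^{-2}$, i.e.\ $c\approx0.47$~km~s$^{-1}$) indeed gives
\begin{equation*}
M_{\rm BE}\approx1.18\,\frac{\left(2.2\cdot10^{9}\right)^{2}}{\left(5.89\cdot10^{-7}\right)^{1/2}\left(6.67\cdot10^{-8}\right)^{3/2}}\;\mathrm{g}\approx4\cdot10^{32}\;\mathrm{g}\approx0.2\;\mathrm{M}_{\odot}.
\end{equation*}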
We see from Fig.~\ref{fig:shellmass} that the shell contains a mass $M_\mathrm{sh}\approx50M_\mathrm{BE}$ at the end of the simulation.
The stability analysis shows that the shell does not become unstable until it contains $\geq20M_\mathrm{BE}$, because the mass is distributed in a shell and not in a spherical cloud. We therefore conclude that the thermodynamic conditions in the shell allow for gravitational instability, and that potentially many low-mass stars may form from a single shell.
\subsection{Forming the second generation of stars in the shell}
Even if gravitational instabilities develop in the shell, the protostars must form before the shell evaporates. This means that the growth timescale of the perturbation should be less than a few times 10$^5$ years (cf.\ the lifetimes of SG stars in our simulation, Sect.~\ref{sec:evolution}). Using eqs.~2.12 and 2.14 of \citet{Elmegreen:1998}, we get 3100 and 2.2$\cdot$10$^4$ years, respectively. These timescales are indeed significantly shorter than the life of the SG star with its shell.
Once gravitational instability sets in, the collapse timescale is very short because the shell already has a very high density, much larger than dense cores in molecular clouds. Three-dimensional simulations are required to follow the gravitational collapse, so we cannot predict the final masses of the stars that will form. They may be larger than $M_\mathrm{BE}$ because the shell is constantly replenished from the cool SG's mass-loss, and this could accrete onto collapsing cores.
It is highly unlikely, however, that this star-formation channel would have a typical initial mass function. Rather, it will be dominated by stars with less than one solar mass, and the probability of forming massive stars is expected to be extremely small. On the other hand, we also do not expect very low-mass stars, since our simulation predicts a typical mass of 0.2~M$_{\odot}$ for proto-stars, and they are probably still accreting.
Star formation could be a bursty process if gravitational instability sets in at the same time everywhere in the shell (i.e.~if the shell is homogeneous), or more continuous if the shell is asymmetric and/or clumpy.
In either case, star formation does not destroy the shell, but rather makes space for further gas accumulation and subsequent collapse to form more stars.
After the shell begins to collapse, its gaseous mass (excluding protostars) is determined by the addition of new material from the stellar wind of the cool SG, balanced by the collapse of shell material to form new stars, plus accretion of shell material onto existing protostars.
The addition of new material is about 35\% of the cool SG's mass-loss rate, so $\sim10^{-4}$~M$_\odot$~yr$^{-1}$.
Accretion rates onto low-mass protostars are typically $10^{-7}$~M$_\odot$~yr$^{-1}$ \citep{Hartmann:1996}, and so this is unlikely to affect the shell mass because the shell can only form $\approx 10-50$ protostars at any one time (recall, it becomes unstable when its mass is $\gtrsim 10$~M$_\odot$). The reservoir of gas available to form new stars is therefore determined by the mass-loss rate of the cool SG and the rate at which new protostars are condensing out of the shell.
This means that star formation in the shell is expected to be a more or less continuous (but stochastic) process. After the shell has formed and grown to become unstable, some bits of it collapse at different times. But in the meantime, the shell-material is constantly replenished by the SG wind. Thus, an equilibrium develops between mass added to the shell and mass lost through star formation.
\subsection{Composition of the stars in the shell}\label{sec:comp}
The low-mass stars formed in the shell necessarily reflect the composition of the SG wind which is polluted by hot-hydrogen-burning products.
To compute the composition of the shell-stars, we assume that the wind that leaves the SG star goes directly into the shell, and that the material inside the shell is homogeneously mixed. We take into account that the shell only traps a certain amount of the wind-mass (as follows from the hydrodynamical simulations of its structure presented in Fig.~\ref{fig:shellmass}), and thus sum over the wind composition.
Figs.~\ref{fig:obsNaO} and \ref{fig:obsMgAl} show the composition of a star formed inside the shell simulated around the M$_{\rm ini}$=257~M$_{\odot}$ supergiant. The abundances of Na and O of the shell-stars are compatible with the surface composition observed in the extremely polluted population.
The abundances of Mg and Al of our shell stars are compatible with the intermediate population. To fit more extreme abundances of Mg and Al, deeper layers of the SG star should be lost (represented by the dashed lines in Fig.~\ref{fig:obsMgAl}). This could still happen during the post-main-sequence evolution of the SG model which would last for an additional 0.17~Myr (not simulated).
The shell stars have a helium mass fraction of Y$_{\rm sh}$=0.48. We discuss the issue of the observed helium abundance of GC stars in Sect.~\ref{sec:helium}.
\section{Discussion}\label{sec:discuu}
\subsection{Mass budget}\label{sec:massb}
Any scenarios that aim to explain the abundance anomalies observed in GCs need to account for the mass that is contained in the first as well as in the second generation of stars. The three most popular of the polluter sources (asymptotic giant branch stars, fast rotating stars, massive binaries), when only one of them is taken into consideration, fail to explain the amount of stellar mass that we observe with polluted composition. These scenarios suppose that the polluted material stays inside the gravitational potential well of the cluster, preferably accumulating near the center. There the polluted material mixes with the pristine material and forms the second stellar generation. This would explain why we observe not just the primordial and extreme abundances but everything in between (see the observations in Figs.~\ref{fig:obsNaO}~and~\ref{fig:obsMgAl}). But for a second generation to be as numerous as the first generation, one needs much more polluted material than one of these sources can provide \citep{deMink:2009}. Therefore, it is possible that more than one pollution source is present, or even that all the suggested sources contribute \citep{Bastian:2013}.
The mass budget constraint in its simplest form is the following: the second generation that is born inside the shell should contain as much (50:50) mass as the first generation of low-mass stars born normally. (The ratio 50:50 is applicable for the GCs with average mass, but there is evidence that higher-mass clusters have a higher fraction of second generation stars, see Sect.~\ref{sec:verymassive}.)
\subsubsection{Classical IMF}\label{sec:Salpeter}
To investigate the mass budget of our star-forming shell scenario, we follow the discussion of \citet{deMink:2009}. Namely, we apply an initial mass function (IMF) between 0.1-1000~M$_{\odot}$ to represent the first generation of stars, as follows \citep{Salpeter:1955,Kroupa:2001}:
\begin{equation}
\begin{split}
N(m)=
\begin{cases}
0.29\cdot m^{-1.3}, & \text{if}\ \ 0.1<m<0.5 \\
0.14\cdot m^{-2.3}, & \text{if}\ \ 0.5<m<1000
\end{cases}
\end{split}\label{eq:imf}
\end{equation}
We take the low-mass stars in the first, unpolluted generation to be between 0.1-0.8~M$_{\odot}$, that is, the mass of stars observed in GCs today \citep[see][]{deMink:2009}.
As for the shell-forming SGs in the first generation, we argue that our models are representative for them in the mass range of 80-1000~M$_{\odot}$. This argument is justified because (1) mass-loss rates in this mass range are high enough for massive shells to form (cf. Sect.~\ref{sec:lowmass}) and because (2) models in this mass range are expected to become core-hydrogen-burning SG stars (cf. Sect.~\ref{sec:evolution}). Additionally, we assume here that the second generation of shell-stars also form between 0.1$-$0.8~M$_{\odot}$, following the mass-distribution of the unpolluted first generation of low-mass stars. We discuss the consequences of \textit{not} assuming this in Sect.~\ref{sec:2GIMF}.
Eq.~\ref{eq:imf} predicts that the first generation of low-mass stars represents 35\% of the total stellar mass initially present in the cluster. Thus, to fulfil the mass budget constraint, the second generation should also account for the same 35\% of the total mass.
Unfortunately, the mass of the SG stars represents only 10\% of the total. Even if it were all lost through the wind and incorporated into the second generation in the shell with an efficiency of $\xi=$100\% (which is clearly a very weak constraint, not only because it would require an unreasonably high mass-loss rate but also because we expect $\sim$20\% of all massive stars to be hot TWUIN stars, see Sect.~\ref{sec:Conditions}), this is still far from the 35\% we aim to account for.
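These fractions follow from mass-weighting Eq.~(\ref{eq:imf}); our (rounded) evaluation of the integrals is
\begin{equation*}
f_{[0.1,0.8]}=\frac{\int_{0.1}^{0.8} m\,N(m)\,\mathrm{d}m}{\int_{0.1}^{1000} m\,N(m)\,\mathrm{d}m}\approx\frac{0.25}{0.69}\approx0.36,
\qquad
f_{[80,1000]}\approx\frac{0.07}{0.69}\approx0.10,
\end{equation*}
consistent with the quoted 35\% and 10\%.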
\subsubsection{Top-heavy IMF}\label{sec:topheavy}
One simple way around this issue is to assume a top-heavy IMF, which has indeed been favoured for massive clusters recently \citep{Ciardi:2003,Dabringhausen:2009}.
For example, \citet{Decressin:2010} suggests a flat IMF with index $-$1.55 (instead of $-$2.3 as in~Eq.~(\ref{eq:imf})) to make their fast rotating star scenario work. Our SG shell scenario, however, can work with less extreme values.
Assuming that the massive component of the IMF has an index of $-$2.07 (instead of $-$2.3), the first generation low-mass stars (0.1-0.8~M$_{\odot}$) represent 23\% of the stellar mass initially present in the cluster, while the SG stars (80-1000~M$_{\odot}$) also represent 23\%, satisfying the weak constraint mentioned at the end of Sect.~\ref{sec:Salpeter}.
A strong constraint should take into account: (1) that only $\sim$40\% of the SG mass is lost in the wind; (2) that the shell contains only $\sim$35\% of the wind mass; and (3) that only $\sim$80\% of massive stars evolve towards the supergiant branch (the rest are the TWUIN stars responsible for the ionization). Thus, the mass contained in SG stars will be converted into low-mass stars with an efficiency of $\xi$~$\approx$~40\%~$\times$~35\%~$\times$~80\%~$\approx$~12\%. With this efficiency, an IMF index of $-$1.71 is needed, which translates to 7\% of the total mass in first generation low-mass stars (i.e.\ 0.1-0.8~M$_{\odot}$), and 55\% of the total mass in massive stars (i.e.\ 80-1000~M$_{\odot}$). The mass budget problem is then solved because from this 55\%, only 55\%~$\times$~$\xi$~$\approx$~7\% will be converted into the second generation of low-mass stars.
However, we may not need this strong constraint, since the fraction of the material trapped in the SG shell should be higher than 35\%, which is the nominal value in our simulation. Thus the efficiency, $\xi$, of converting SG mass into shell-stars may be significantly higher than 12\%. The reason for this is that, according to the speculation at the end of Sect.~\ref{sec:grav}, the shell may retain more wind material than the nominal value since the proto-stars are constantly accreting. Since accretion is not included in our shell simulation, we cannot properly quantify that at this point. Nonetheless, the weak and the strong constraints presented above correspond to IMF indices of $-$2.07 and $-$1.71, respectively, so we conclude that the index required for our scenario to work should be somewhere between these two values.
\subsubsection{On the number of stars in the cluster and in the shell}\label{sec:number}
We give an order of magnitude estimate of the number of stars present in a typical GC where SG shells are forming the second generation. To do this, we assume an average GC with total mass of 10$^5$~M$_{\odot}$ and with an IMF index $-$1.71. This IMF allocates 7\% of the total mass into first generation stars between 0.1-0.8~M$_{\odot}$, and 55\% into SG stars between 80-1000~M$_{\odot}$ (while the rest has no mass-contribution to this particular scenario). The mass of stars in the second generation (i.e.\ formed from shells around SGs) also represents 7\%.
We take 257~M$_{\odot}$ to be the representative mass for the massive regime (that is, the initial mass of the SG model around which our simulation was carried out); and we take
0.2~M$_{\odot}$ to be the representative average mass for both the first generation low-mass stars and the second generation of shell-stars. This value, 0.2~M$_{\odot}$, is the Bonnor-Ebert mass of the objects in our simulation presented in Sect.~\ref{sec:grav}, so it may depend on the mass and geometry of the shell and, therefore, on the mass of the SG.
With these assumptions, the first generation of low-mass stars consists of 35\,000 stars, and so does the second generation. In addition, the first generation must have contained 214 stars in the massive regime. Of these, 171 should evolve to become supergiants with shells, and 43 should be hot TWUIN stars. Note, however, that there are many more ionizing sources than this, since fast rotating models in the mass range of 9$-$80~M$_{\odot}$ also predict TWUIN stars \citep{Szecsi:2015}.
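These counts follow directly from the assumed representative masses:
\begin{equation*}
N_{\rm 1G}=\frac{0.07\times10^{5}\;\mathrm{M}_{\odot}}{0.2\;\mathrm{M}_{\odot}}=35\,000,
\qquad
N_{\rm massive}=\frac{0.55\times10^{5}\;\mathrm{M}_{\odot}}{257\;\mathrm{M}_{\odot}}\approx214,
\end{equation*}
of which 80\% ($\approx$171) become SGs and 20\% ($\approx$43) TWUIN stars.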
To form 35\,000 second-generation stars, each of the 171 supergiants has to form $\sim$200 low-mass stars of 0.2~M$_{\odot}$ out of its wind material. One may recall from Sect.~\ref{sec:grav} that the structure of our simulated shell facilitates the formation of only $\sim$50 protostars of this mass at any given time, and that the protostars condensing out of the shell make space for further gas accumulation and subsequent collapses. Thus, from the mass-budget constraints it follows that the shell in our simulation should undergo $\sim$3-4 subsequent events of gravitational collapse.
We say subsequent collapses, but we are not suggesting that the shell will form, then everywhere collapse into stars, then re-form and re-collapse, and repeat again. What we suggest is that the shell will form, grow to become unstable, and then there will be stars forming out of cloud material \textit{all the time}. We do not expect it to be an episodic process, but rather a continuous one, resulting in $\sim$3-4 times 50 protostars at the end.
\subsection{On very massive stars and very massive globular clusters}\label{sec:verymassive}
A crucial assumption of the star-forming-shell scenario is the presence of very massive stars in the young cluster. Very massive (>100~M$_{\odot}$) stars are theorized to form either via accretion (i.e.\ the same process that creates lower-mass stars) or via collision \citep[in extremely dense regions,][]{Krumholz:2014}. Therefore, it is not unreasonable to hypothesize that stars as massive as this were born in the young GCs. For example, \citet{Denissenkov:2014} assumed stars of 10$^4$~M$_{\odot}$ to give a possible explanation for the GC abundance anomalies.
Statistically, to find very massive stars in a star-forming region in significant number, either the mass of the region has to be large or the IMF has to be very top-heavy---or both. In Sect.~\ref{sec:number} we apply a top-heavy IMF of index $-$1.71 (coming from the strong constraint presented in Sect.~\ref{sec:massb}) and an average GC mass of 10$^5$~M$_{\odot}$ (which results in 171 SGs of the nominal mass 257~M$_{\odot}$). However, some GCs are significantly more massive than that. For example, the mass of $\omega$~Cen is 4$\cdot$10$^6$~M$_{\odot}$.
It has been suggested that the fraction of enriched stars (and in general, the complexity of the multiple population phenomenon) correlates with cluster mass \citep{Carretta:2010,Piotto:2015,Milone:2017}. To account for this, we computed the IMF index not only for a 50:50 ratio of second vs. first generation, but also for a 70:30 ratio (as in some high-mass clusters) and for a 90:10 ratio (as in the highest-mass clusters such as e.g. NGC~2808). In the case of a 70:30 ratio, an IMF index of $-$1.6 is needed to fulfill the strong constraint in our star-forming shell scenario; while in the case of a 90:10 ratio, $-$1.4 is needed. So we conclude that if---for some reason---the IMF gets more top-heavy with cluster mass, our scenario may work to explain even the most massive clusters. But this argument also applies to all other self-enrichment scenarios involving massive stars, so it is not a distinguishing feature of our scenario.
It is so far unclear if the same mechanism forms all galactic GCs. There is evidence that the low-metallicity GCs in the outer halo have been accreted from neighbouring dwarf galaxies, while the high-metallicity GCs in the inner halo have been formed in situ \citep{Brodie:2006,Forbes:2010}. Some of the most massive GCs, $\omega$~Cen amongst them, possibly used to be dwarf galaxies \citep{Schiavon:2017}. In short, the formation of globular clusters is a complex problem that may require several theoretical scenarios to work together; our scenario may be one of them.
\subsection{Supergiants at lower masses}\label{sec:lowmass}
We presented SG models with initial masses between 150 and 575~M$_{\odot}$, and considered them representative of the mass range 80$-$1000~M$_{\odot}$ when discussing the mass budget in Sect.~\ref{sec:massb}. The reasons for not including SG models with lower masses (9$-$80~M$_{\odot}$) in our analysis are the following.
First, their mass loss is too low to form shells around them. We recall from Sect.~\ref{sec:evolution} that the model around which we simulated the shell has a mass-loss rate of $-$3.5~[log~M$_{\odot}$~yr$^{-1}$]. Our computations of SG models with 70, 43 and 26~M$_{\odot}$ show that they have mass-loss rates of $-$4.6, $-$5.2 and $-$5.9, respectively. The shells around them will not be massive enough for the second generation of stars to form: it takes a long time to build up a solar mass in the shell, let alone tens of solar masses, if log($\dot{\mathrm{M}}$)~$\sim$~$-$5.
The second problem is geometric. The shell will be closer to the star, and so have smaller volume and less physical space in which to grow.
We cannot exclude, however, that the wind material of these lower-mass SG stars will be expelled into the cluster. There, it might be able to cool later on and -- possibly diluted with some pristine gas -- form new stars. Since these lower-mass SG stars are more likely to form, and would thus dominate over the very massive stars even with a top-heavy IMF, it is an important question to investigate their contribution to the cluster's chemical evolution. A detailed analysis of this scenario will be performed in another work. Our preliminary results nonetheless show that models below 80~M$_{\odot}$ evolve to the SG branch only during their core-helium-burning phase. Their surface Na\&O composition reflects the primordial or intermediate population (as defined in Fig.~\ref{fig:obsNaO}), but not the extreme one. As for the Mg\&Al anticorrelation, they show some minor variation only in Al, but no variation in Mg.
Recently, \citet{Schiavon:2017} suggested that, at a fixed metallicity, some GCs show variation in Mg and some do not. In particular, they detected 23 giant stars in some high-Z and low-Z GCs (situated in the inner Galaxy), and found no clear anticorrelation between Al and Mg. Instead, they report a substantial spread in the abundance of Al and a smaller spread in Mg, while also admitting that their sample is too small for this to be statistically significant. Nonetheless, this is an interesting finding from our point of view, especially when we consider lower-mass SGs with <~80~M$_{\odot}$. As we see only minor variation in Al and no variation in Mg, we speculate that---without quantifying their contribution at this point---the presence of SG stars with <~80~M$_{\odot}$ in young clusters may help explain why some GCs show variation in Mg and some do not.
\subsection{Helium spread in different clusters}\label{sec:helium}
In some globular clusters, there are extremely helium-rich stars. For example, $\sim$15\% of the stars in NGC~2808 show a helium abundance of Y$\sim$0.4, as inferred from their multiple main sequences \citep{Piotto:2007,DAntona:2007}, as well as from spectroscopic measurements \citep{Marino:2014}. Other GCs, however, have less extreme helium variations \citep[][]{Bastian:2015,Dotter:2015}.
The most extreme values cannot be reproduced by asymptotic giant branch stars \citep{Karakas:2006}. All the other polluter sources (massive binaries, fast rotating stars, supermassive stars) have a general problem reproducing the required light element variations when the helium spread is a constraint, as shown by \citet{Bastian:2015}.
The reason for this is that the Ne-Na and Mg-Al chains are side-processes of the CNO-cycle -- therefore, a significant amount of helium must be produced together with their burning products.
Our simulated shell-stars behave the same way as other massive polluters. Their surface composition (represented by the black-yellow symbol in Figs.~\ref{fig:obsNaO} and \ref{fig:obsMgAl}) contains helium: Y$_{\rm sh}$=0.48. Therefore, they can also only explain the pollution in Na-O and Mg-Al together with a high helium abundance, similar to other scenarios that involve massive stars.
This issue is generic, as both the Ne-Na chain and the Mg-Al chain are side reactions of hot hydrogen-burning \citep{Bastian:2015,Lochhaas:2017}. Hydrogen burns into helium; therefore, whatever nuclear change occurs in the Na/O/Mg/Al abundances due to these chains, it will be accompanied by a change in helium abundance, unless we find a mechanism that separates Na/O/Mg/Al from helium either inside the star or in the interstellar material.
\subsection{Dynamical interactions}\label{sec:coll}
\subsubsection{Collisions of cluster members and shell}
Here we discuss collisions of a random cluster star with a shell around a SG: how often such collisions may happen, and what consequences they may have.
Globular clusters have central densities $\gtrsim 10^3$~M$_\odot$~pc$^{-3}$ and a typical stellar mass of $0.8$~M$_\odot$ \citep{PortegiesZwart:2010}, corresponding to a number density $n_\star \gtrsim 10^3$~pc$^{-3}$.
They also have internal velocity dispersion, $\sigma_\mathrm{v}\approx 1-10$~km~s$^{-1}$ \citep{Harris:1996}.
The collision time, $t_\mathrm{coll}$, of one of their stars with a shell around a SG, with shell radius $R_\mathrm{sh}\approx0.02$~pc,
can be derived from Eq.~(26) of \citet{PortegiesZwart:2010} as follows:
\begin{equation}
t_\mathrm{coll} \approx 0.16\;\mathrm{Myr}
\left(\frac{n_\star}{10^3\;\mathrm{pc}^{-3}}\right)^{-1}
\left(\frac{\sigma_\mathrm{v}}{5\;\mathrm{km~s}^{-1}}\right)^{-1}
\left(\frac{R_\mathrm{sh}}{0.02\;\mathrm{pc}}\right)^{-2}\label{eq:coll}
\end{equation}
According to this simple, order-of-magnitude estimate, on the order of one star will pass through the cool SG shell during its $\sim$10$^5$~yr existence.
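The scaling of Eq.~(\ref{eq:coll}) is straightforward to evaluate for other cluster conditions; the following minimal sketch (our own illustration, not code from this work) does so for the fiducial values above and for an Arches-like central density:
\begin{verbatim}
# Minimal sketch: evaluate the collision-time scaling of Eq. (coll).
def t_coll_myr(n_star=1e3, sigma_v=5.0, r_shell=0.02):
    """Collision time in Myr for a stellar number density n_star [pc^-3],
    velocity dispersion sigma_v [km/s] and shell radius r_shell [pc]."""
    return (0.16 * (n_star / 1e3) ** -1
                 * (sigma_v / 5.0) ** -1
                 * (r_shell / 0.02) ** -2)

print(t_coll_myr())            # fiducial GC values: 0.16 Myr
print(t_coll_myr(n_star=1e5))  # Arches-like density: ~100x shorter
\end{verbatim}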
However, central densities of \textsl{young} GCs might have been higher than assumed in Eq.~(\ref{eq:coll}). One argument for this is that YMCs, thought to be analogous to young GCs, have central densities much higher than observed today in GCs. A well-known example of a resolved YMC is the Arches cluster with central density of $10^5$~M$_\odot$~pc$^{-3}$ \citep{PortegiesZwart:2010}. Another argument is that the mass may be segregated (i.e.\ stars with masses greater than a given value are found to be more centrally concentrated than the average stellar mass) leading to a higher central density. Additionally, gravitational focusing (i.e.\ enhanced probability that two stars will collide due to their mutual gravitational attraction) may play a role.
If the central stellar density is higher than assumed in Eq.~(\ref{eq:coll}), this means two things. First, this would lead to more (potentially destructive) collisions: in an environment as dense as the Arches cluster, the estimated collision time is two orders of magnitude shorter than in Eq.~(\ref{eq:coll}). Second, the ionizing sources would be closer to the SGs if the central density is higher. Thus, the ionizing flux would be larger and the shells would be more compact. This would decrease the probability of a collision, balancing the first effect.
Whether the interaction with a star of low-mass would enhance or inhibit star formation in the SG shell is not clear, and would require complex simulations to model accurately.
If, on the other hand, the star were massive with a strong wind and large Lyman-continuum luminosity, then it would have a strong disruptive effect on the shell.
This may be happening to the wind of the red supergiant W26 in Westerlund~1
\citep{Mackey:2015}.
The probability of a massive star passing through a cool SG shell is small, however, because even a top-heavy mass function favours low-mass objects (cf.\ the discussion on the number of stars in Sect.~\ref{sec:massb}).
Finally, we point out that even if the shells are destroyed by collision, their material may sink into the cluster core. It is possible that, independently of the formation of SG shells, the material in the cluster core is constantly forming stars, as supposed by many other scenarios \citep[cf.][]{Bastian:2015}. Our supergiants are therefore expected, even with their shells destroyed, to contribute to the chemical evolution of the young cluster by expelling polluted gas into the intracluster medium.
\subsubsection{The probability of falling into the SG}
Once the second generation of stars forms in the shell, they are no longer subject to the radiation pressure from the central SG. The radial velocity of the shell-stars is therefore quite small, but it is not zero. In Galactic star formation, the clouds and the dense cores have velocity dispersions larger than the sound speed, attributed to supersonic turbulence \citep{MacLow:2004}. The shell around the SG will be no different, and so we expect that the dense cores that collapse to form stars will have non-radial velocities that are at least comparable to the local sound speed, and probably larger.
While detailed star-formation simulations and N-body dynamics calculations would be required to address this problem, we can present a simple estimate here to demonstrate our point. For T~=~70~K, the sound speed is about 0.6~km~s$^{-1}$. For a 250~M$_{\odot}$ supergiant and a shell at 6.0$\times$10$^{16}$~cm from the star, the escape velocity is 3.3~km~s$^{-1}$ and the circular velocity is 2.4~km~s$^{-1}$. This means that the random non-radial motions are, on average, $>$~25\% of the circular orbital velocity, and so the shell stars will be on elliptical orbits.
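A minimal sketch of the sound-speed part of this estimate (assuming an isothermal sound speed with a mean molecular weight $\mu\approx1.3$, a value we adopt here for illustration only):
\begin{verbatim}
import numpy as np

k_B = 1.380649e-16   # Boltzmann constant [erg/K]
m_H = 1.6735575e-24  # hydrogen mass [g]

def sound_speed_kms(T, mu=1.3):
    """Isothermal sound speed in km/s; mu is an assumed mean mol. weight."""
    return np.sqrt(k_B * T / (mu * m_H)) / 1e5

c_s = sound_speed_kms(70.0)  # ~0.6-0.7 km/s for T = 70 K
v_circ = 2.4                 # km/s, circular velocity quoted in the text
print(c_s, c_s / v_circ)     # ratio ~0.25-0.3, i.e. >~25% of v_circ
\end{verbatim}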
The probability of actually falling into the supergiant is thus very small when simply considering orbits. It is not obvious whether N-body interactions between the many protostars in the shell would eject stars into the cluster and/or increase the likelihood of collision with the central supergiant, and we cannot make predictions at this stage.
\subsection{On high-metallicity clusters and future plans}
Our work focuses on low metallicity, since the majority of GCs with abundance anomalies lie between [Fe/H]=$-$2.0 and $-$1.0. We suspect that our model of star formation in shells will hardly work at high metallicity. As shown by the models of \citet{Brott:2011} and \citet{Koehler:2015}, massive stars with LMC metallicity do indeed experience envelope inflation above 40~M$_{\odot}$. However, the very massive ones ($\gtrsim$150~M$_{\odot}$) do not become cool supergiants because their mass-loss is very high, and so they become hot Wolf--Rayet stars instead. We do not expect shells to form around these hot stars. As for the LMC models between 40$-$150~M$_{\odot}$, they do evolve to the supergiant branch. They may thus form shells, although it is beyond the scope of the present work to simulate such a shell and analyse its stability.
The fact that multiple populations have not been found in nearby super star clusters to date \citep{Mucciarelli:2014} may mean, in the context of our scenario, that either (1) SG shells are not stable at high metallicity, or (2) they do not create (too many) new stars, or even (3) that the composition of the new stars is indistinguishable from that of the old ones. Indeed, our preliminary investigation of the LMC models shows that they have lower core temperatures than their low-Z counterparts, and so the Mg-Al~chain is not effective in them. Thus, even if a second generation is formed in a high-metallicity cluster, we do not expect it to show significant Mg/Al variations. As for the other elements, variations of Na/O in the winds of the LMC models are present, but more moderate than in our low-metallicity models.
On the other hand, some of the higher-metallicity GCs also have multiple populations \citep[as observed by e.g.][]{Schiavon:2017}. This, however, does not mean that the same scenario produces the multiple populations at all metallicities. As mentioned in Sect.~\ref{sec:verymassive}, we do not expect the complex problem of GC formation to be solved by one simple scenario. Indeed, both our low-Z models and the LMC models can be applied in another scenario, in which the mass lost in winds from massive stars can later cool in the cluster core and form new stars (cf.~Sect.~\ref{sec:lowmass}).
A detailed investigation of both sets of models and their wind composition, as well as of the possible ways their strong winds may influence the chemical and hydrodynamical evolution of their clusters, is planned for the future.
Indeed, the metallicity dependence of our scenario, along with that of other scenarios in the literature, should be investigated. Some observations \citep[such as the compilation of photometric results from the
HST~UV~survey by][which mainly traces N-abundance variations]{Milone:2017} imply that there is no clear relation between the fraction of stars in each population and the metallicity
of the host cluster. From the modelling point of view, we can say the following about metallicities intermediate between those of our models and the LMC models. \citet{Sanyal:2017} showed that core-hydrogen-burning SG stars with the composition of the Small Magellanic Cloud (SMC) can be expected at luminosities above 10$^6$~L$_{\odot}$. Thus, those GCs that have well-studied multiple populations near SMC
metallicity (e.g.~47~Tuc or M71 with [Fe/H]~$\sim$~$-$0.7) may be explained with our scenario too. The fact that we currently do not see any luminous SG stars in the SMC is not surprising, given the IMF, the short lifetime of these stars, and the low star-formation rate in the SMC.
\subsection{Proposing another solution for the mass budget: a non-classical IMF for the second generation}\label{sec:2GIMF}
When discussing the mass budget in Sect.~\ref{sec:massb} and thereafter, we assumed that the mass distribution of the shell-stars is the same as that of the first generation of low-mass stars between 0.1$-$0.8~M$_{\odot}$, and showed that we need a top-heavy IMF for our shell scenario to work under this assumption. We did this because it helps to compare our scenario to others, such as the fast rotating stars or the massive binary polluters. However, there is another way around the mass budget problem---one that is unique to our scenario.
Observationally, it is not excluded that all GC stars with M~$<$~0.6~M$_{\odot}$ are first generation stars (the abundances are always determined near the turn-off, i.e.~at 0.8~M$_{\odot}$). Other scenarios usually do not account for this, as this would make their mass budget solution even more speculative. Indeed, if star formation happens out of the interstellar material in the cluster center, it is already hard to justify why the second generation only harbours stars below 0.8~M$_{\odot}$ and nothing above \citep[as done, for example, in][]{deMink:2009}. It would be even more difficult to explain why the IMF would be truncated at both the high and the low ends. Or why, for that matter, the form of the distribution would not follow the classical power-law observed everywhere in the Universe.
In our shell scenario, however, the mode of star formation is so unusual that the IMF must be quite irregular. Apart from massive stars, which are excluded on quite robust grounds, it is not clear whether the minimum mass could be even larger than the Bonnor-Ebert mass (0.2~M$_{\odot}$, as quoted in Eq.~\ref{eq:BEmass}). After all, the proto-stars may still be accreting mass from the shell.
For us it is therefore conceivable that the second generation consists only of stars well above 0.2~M$_{\odot}$, with the lower limit depending on the accretion rates of the proto-stars. As an example, if the range to account for were only between 0.6$-$0.8~M$_{\odot}$, then these stars represent 7\% of the total cluster mass (following the classical IMF in Eq.~(\ref{eq:imf})). SGs represent 10\%, but their material is inserted into the second generation of shell-stars with an ill-constrained efficiency $\xi$. This efficiency was taken to be 100\% in the weak case in Sect.~\ref{sec:Salpeter} and 12\% in the strong case in Sect.~\ref{sec:topheavy}, but we expect its realistic value to lie somewhere in between. Supposing, for example, that $\xi$~$=$~70\%, the mass budget is solved with a first generation as numerous (50:50) as the second generation (10\%~$\times$~$\xi$~$=$~7\%). (With $\xi$~$=$~100\%, we get a 60:40 ratio of first vs. second generation, cf.~Sect.~\ref{sec:verymassive}.)
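The 7\% figure above can be checked with a short sketch that integrates a single power-law IMF $\mathrm{d}N/\mathrm{d}m \propto m^{\alpha}$ (a simplifying assumption on our part; the exact form and limits of Eq.~(\ref{eq:imf}) are given earlier, and with plain Salpeter parameters the sketch returns $\sim$5\% rather than 7\%):
\begin{verbatim}
# Minimal sketch, assuming dN/dm ~ m**alpha between m_min and m_max
# (alpha = -2 would need a logarithmic integral; not handled here).
def mass_fraction(m1, m2, alpha=-2.35, m_min=0.1, m_max=1000.0):
    """Fraction of the total stellar mass contained in [m1, m2]."""
    def mass_integral(a, b):
        p = alpha + 2.0          # integrand is m * m**alpha = m**(alpha+1)
        return (b**p - a**p) / p
    return mass_integral(m1, m2) / mass_integral(m_min, m_max)

# mass fraction in stars between 0.6 and 0.8 Msun
print(mass_fraction(0.6, 0.8))   # ~0.05 for these assumed parameters
\end{verbatim}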
Furthermore, we have no reason to suppose that the form of the mass distribution of the second generation is identical to that of the first generation. Further investigation (possibly three-dimensional simulations of star formation in a spherical shell) is certainly needed to constrain its mathematical form; but in the most optimistic case, where all second-generation stars form with 0.6~M$_{\odot}$, the efficiency of inserting SG mass into shell-stars can be as low as $\xi$~$=$~40\% to solve the mass budget with a second generation as numerous as the first (50:50). We can also explain very massive clusters where the ratio is more extreme (cf.~Sect.~\ref{sec:verymassive}) if we suppose larger $\xi$ values.
We recall from Sect.~\ref{sec:topheavy} that $\xi$ depends on three astrophysical effects: the mass loss rate of very massive SGs, the amount of material captured in the shell, and the ratio of TWUIN stars vs. SG stars. All three are poorly constrained at this point, so it is quite conceivable that their interplay adds up to $\xi$~$\gtrsim$~40\%.
Note that in these considerations, the mass distribution of the \textit{first} generation of stars (both massive and low-mass) follows the classical (not top-heavy) IMF given in Eq.~(\ref{eq:imf}). Solving the mass budget problem this way---having a justifiably irregular IMF for the \textit{second} generation---is a unique feature of our star-forming supergiant shell scenario.
\subsection{Uncertainties of the star-forming shell scenario}
From the point of view of observations, there is some uncertainty as to whether these massive, cool supergiants at low metallicity actually exist in nature. This will be addressed in the near future by infrared observations of a larger sample of low-metallicity galaxies by the James Webb Space Telescope. From a theoretical point of view, the physics of these stars with inflated envelopes is quite uncertain, and is undergoing intensive investigation at the moment \citep{Sanyal:2015,Sanyal:2017}.
Additionally, as mentioned in Sect.~\ref{sec:evolution}, the mass-loss prescription we use involves an extrapolation beyond the mass range where it has been measured.
The process of star formation in a shell is rather delicate. It requires several astrophysical effects to combine: sufficiently dense and long-lived photoionization-confined shells must form isotropically around very massive SG stars, so that gravitational instability can occur and lead to the formation of a second generation of stars. As for the mass budget, either the IMF of the cluster should have an index between $-$1.71 and $-$2.07 (as explained in Sect.~\ref{sec:massb}---note also that the upper limit for the first generation, 0.8~M$_{\odot}$, is rather arbitrary), or the second generation should have a non-classical IMF, truncated at both the high and the low end. Additionally, massive stars in this cluster should have a broad rotational velocity distribution, because the TWUIN stars that produce the ionizing radiation are fast rotators.
Under these conditions, the star-forming shell scenario could potentially produce a second population of stars with the observed abundance variations, and with a similar total mass to that of the first generation of low-mass stars.
\subsection{Supergiants may end up as massive black holes in globular clusters}
With the direct detection of merging black holes via their gravitational wave radiation \citep{Abbott:2016b,Abbott:2016a,Bagoly:2016,Abbott:2017,Szecsi:2017}, many authors have suggested globular clusters as hosts of these black holes \citep{Rodriguez:2015,Antonini:2016,Belczynski:2016,Askar:2017}. In this section, we discuss the final fate and remnants of our supergiant models.
The cores of very massive stellar models at low-Z undergo pair-instability \citep{Burbidge:1957,Langer:1991,Heger:2003,Langer:2007,Yoon:2012,Kozyreva:2014}. This instability makes the core collapse during oxygen burning, that is, before an iron core can form. Above a helium-core mass of $\sim$133~M$_{\odot}$, the collapse leads directly to black hole formation. Below this mass, however, it leads to a pair-instability supernova \citep{Heger:2002}.
Of the three supergiant models presented in the context of the star-forming supergiant shell scenario, the two most massive (with M$_{\rm ini}$=575~M$_{\odot}$ and M$_{\rm ini}$=257~M$_{\odot}$) are predicted to form black holes \textsl{without} a supernova explosion \citep{Szecsi:2016}.
The masses of these black holes are expected to be above 140~M$_{\odot}$, depending on the strength of the mass-loss (discussed in Sect.~\ref{sec:evolution}). They will thus contribute to the black hole population of their globular clusters.
The model with M$_{\rm ini}$=150~M$_{\odot}$, on the other hand, which has a final mass of 118~M$_{\odot}$, is predicted to explode as a pair-instability supernova \citep{Szecsi:2016}. The explosion of the SG star may disrupt the shell, but leave the majority of the proto-stars intact. The supernova ejecta are probably too energetic to stay in the cluster's potential well \citep{Lee:2009}, so they may not pollute the second generation of stars \citep[cf.~however][]{Wunsch:2016}.
\section{Conclusions}\label{sec:conclusionshell}
We propose star-forming shells around cool supergiants as a possible site to form the second generation of low-mass stars in Galactic globular clusters at low metallicity. Photoionization-confined shells around core-hydrogen-burning cool supergiant stars may have been common in young GCs. We simulate such a shell and find that it is dense enough to become gravitationally unstable. The new generation of low-mass stars formed in the shells should have an initial composition reflecting that of the supergiant's stellar wind, i.e.\ polluted by hot-hydrogen-burning products.
We summarize the most important ingredients of our star-forming shell scenario below.
\begin{enumerate}
\item \textbf{Low-metallicity supergiant models.} We present state-of-the-art stellar models of low-metallicity supergiants. At this low metallicity (comparable to that of globular clusters), our models spend several hundred thousand years on the supergiant branch already during their core-hydrogen-burning phase. They also stay on the supergiant branch during their remaining evolution.
\item \textbf{Slow, but strong stellar wind.} The supergiant models lose a significant amount of their material in their winds. Since the winds are slow, the material likely stays inside the young globular cluster.
\item \textbf{Hot-hydrogen burning.} In our models of very massive supergiants, the two nuclear burning cycles (Ne-Na chain and Mg-Al chain) that are responsible for the anticorrelations (of O~vs.~Na and Mg~vs.~Al, respectively) are effective.
\item \textbf{Convective envelope even during hydrogen-burning.} Although the burning processes take place in the core during the core-hydrogen-burning phase, the ashes are mixed between the core and the surface due to the large convective envelope of the supergiant. The composition of the stellar wind is, therefore, enhanced in Na and Al, while depleted of O and Mg.
\item \textbf{Presence of ionizing sources (TWUIN stars).} We point out that in a population of low-Z massive stars with a broad rotational velocity distribution, the fastest rotating stars will evolve quasi-homogeneously. This chemically homogeneous evolution is responsible for the creation of hot, luminous objects with intense ionizing radiation, the so-called Transparent Wind UV-Intense stars. We suppose that the radiation field of TWUIN stars is approximately isotropic in the young globular cluster.
\item \textbf{Photoionization-confined shells.} Where the neutral, cool stellar wind of the supergiant meets the ionized, hot region of the cluster environment, a photoionization-confined shell may form. We simulate such a photoionization-confined shell around one of our supergiant models. The shell has a density of 2$\times$10$^{-16}$~g~cm$^{-3}$ and a temperature of $\sim$50~K.
\end{enumerate}
We analyse the stability of the photoionization-confined shell in our simulation, and find that it is gravitationally unstable on a timescale much shorter than the lifetime of the supergiant. The Bonnor-Ebert mass of the overdense regions is low enough to allow star formation. The mass distribution of the new stars is unknown, but we certainly expect the majority of them to be above 0.2~M$_{\odot}$ and below 1~M$_{\odot}$. It is unlikely that massive stars would form because of the geometry of this particular star-forming region.
We show that the composition of a star formed in the photoionization-confined shell is comparable to the observed composition of old, low-mass stars in the most extremely polluted population in globular clusters. We match the abundances of O, Na, Al and Mg, as well as the isotopes $^{24}$Mg, $^{25}$Mg and $^{26}$Mg. We emphasize that the very high masses of our supergiant models naturally explain the Mg isotope observations, with which some of the alternative scenarios (the fast rotating star scenario and the massive binary scenario) clearly struggle. Our scenario, however, only works in metal-poor environments and cannot apply to the most metal-rich clusters.
Our simulated shell-stars have a high surface helium mass fraction of Y$_\mathrm{sh}$=0.48. We find that low-metallicity supergiants behave the same way as other massive polluters when it comes to helium: they can also only explain the spread in Na\&O and Mg\&Al together with a high helium abundance. But this issue is generic, as both the Ne-Na chain and the Mg-Al chain are side reactions of hot hydrogen-burning \citep{Bastian:2015,Lochhaas:2017}.
To fulfill the mass-budget constraint, we offer two possibilities. One is to apply a top-heavy initial mass function with an index somewhere between $-$1.71 and $-$2.07. These values are less restrictive than those required by some of the other scenarios, e.g.\ the supermassive stars with 10$^4$~M$_{\odot}$ of \citet{Denissenkov:2014} or the fast rotating stars of \citet{Decressin:2007}. The other is to use a non-classical IMF for the second generation of stars in the shell. We argued that both massive stars and very low-mass stars are justifiably excluded from this second generation, making it possible to solve the mass budget by accounting for only a fraction of the first-generation low-mass stars.
We emphasize that even if the shells are destroyed e.g. by collision, the corresponding gas may sink into the cluster core and lead to star formation there. Thus, supergiant shells should be considered possible contributors to the chemical evolution of globular clusters.
If the conditions do not facilitate the formation of a photoionization-confined shell (e.g.~because the ionizing radiation field is too weak), the supergiant stellar models presented here should still be considered. Their winds are slow, strong and enhanced by ashes of hot-hydrogen burning. Therefore, our low-Z supergiant models should be taken into account when one is assessing all the possible sources of pollution in young globular clusters.
Although there are some uncertainties necessarily associated with our proposed scenario of star-forming shells around cool supergiant stars, it shows strong potential for explaining at least some of the second generation of stars with anomalous abundances in GCs -- especially the more extreme cases. Our calculations show that the cool supergiant scenario, both with and without a photoionization-confined shell, deserves serious consideration alongside other, more established scenarios, and should be investigated in more detail in the future.
\begin{acknowledgements}
We thank S.E. de~Mink for her useful comments on the issues of helium spread and the initial composition of the clusters. We also thank R. Wünsch for the careful reading and commenting of our draft, and for his contribution to the discussion of collision times. For the original version of our Fig.~\ref{fig:shell}, we acknowledge its creator, S. Mohamed.
D.Sz.\ was supported by the Czech Grant nr.\ 13-10589S GA \v{C}R.
JM acknowledges funding from a Royal Society--Science Foundation Ireland University Research Fellowship. This research was partially supported by STFC.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}
\label{sec:intro}
In this work we focus on the dynamics and interactions of a system of
$N$ ($N\geqslant 2$) immiscible incompressible fluids
in an unbounded flow domain.
In order to numerically simulate such problems it is necessary
to truncate the domain to a finite size. Consequently,
part of the boundary in the computational domain will be open, in the sense that
the fluids can freely leave (or even enter) the domain through
such boundaries, and appropriate
boundary conditions will be required on the open (or outflow)
portions of the domain boundary.
We are particularly concerned with situations in which the multitude
of fluid interfaces formed in the system will pass through the open
domain boundaries.
Following the notation of our previous works~\cite{Dong2014,Dong2015,Dong2017},
we refer to such problems as N-phase outflows. Here
$N$ denotes the number of different fluid components in the system, not
necessarily the number of material phases.
N-phase outflows and open boundaries pose a number of
issues for numerical simulations.
First, the problem involves multiple fluid interfaces at
the open/outflow boundary, which are associated with
multiple surface tensions and the contrasts in densities and
viscosities of these fluids. How to deal with
the surface tensions, and the density and viscosity contrasts
in the N-phase open/outflow boundary conditions (OBC) poses
the foremost issue.
Second, backflow instability is another crucial issue confronting
N-phase outflow simulations. Backflow instability refers to
the numerical instability associated with strong vortices or
backflows at the open/outflow boundary, which can cause
computations to blow up instantly when such events occur.
The backflow instability issue is not unique to multiphase flows.
This issue is well-known in single-phase
outflow problems~\cite{DongKC2014,DongS2015,Dong2015clesobc}, but it becomes much
worse for two-phase~\cite{Dong2014obc,DongW2016} and multiphase
outflows because of the density contrasts and viscosity
contrasts at the outflow boundary.
Third, N-phase problems with $N\geqslant 3$ pose the so-called
reduction consistency issue in the design of outflow/open boundary
conditions~\cite{Dong2017}. Reduction consistency refers to
the property that, if only $M$ ($2\leqslant M\leqslant N-1$)
fluid components are present in the N-phase system (while the
other fluid components are absent),
the governing equations and the boundary conditions for
the N-phase system should reduce to those for the corresponding
smaller M-phase system~\cite{Dong2017}.
The reduction consistency of N-phase outflow/open
boundary conditions is an issue unique to multiphase outflow
and open-boundary problems.
The development of effective outflow/open boundary conditions
is an important problem in
computational fluid dynamics. For single-phase
problems, this has been under intensive investigation
for decades and a large volume of literature exists;
see e.g.~\cite{Gresho1991,SaniG1994}
for a comprehensive review of related literature
and~\cite{DongKC2014,Dong2015clesobc} and the
references therein for a sample of more recent works.
On the other hand,
for two-phase ($N=2$)
outflows and open boundaries
the existing work in the literature is very limited,
and for multiphase outflow and open-boundary problems involving
three or more ($N\geqslant 3$) fluid components,
there is no existing work available in the literature
to the best of our knowledge.
The zero-flux (Neumann) and extrapolation boundary conditions
from single-phase flows have been used
for the two-phase Lattice-Boltzmann equation
in \cite{LouGS2013}. The zero-flux condition
has also been employed for the outflow boundary
with a level-set type method
in \cite{AlbadawiDRMD2013,Son2001}.
The outflow condition for two immiscible fluids is considered
for a porous medium in \cite{LenzingerS2010},
and for one-dimensional two-phase compressible
flows in \cite{Munkejord2006,DesmaraisK2014}.
In \cite{Dong2014obc,DongW2016} we have developed
a set of two-phase open boundary conditions with
the attractive property that they ensure
the energy stability of the two-phase system; they are
therefore effective for dealing with two-phase open boundaries.
In the current paper we consider the multiphase outflow
and open-boundary problem with $N$ ($N\geqslant 3$)
immiscible incompressible fluid components in the system,
and present a set of effective outflow/open boundary
conditions and an associated numerical algorithm for such
problems within the phase field framework. The proposed open boundary
conditions are designed based on considerations of two properties:
energy stability and reduction consistency.
By looking into the energy balance of the N-phase system,
we design the open boundary conditions in such a way to ensure
that their contributions shall not cause the total energy of
the N-phase system to increase over time,
regardless of the flow state at the outflow/open boundary.
This energy-stable property holds
even in situations where strong vortices or backflows occur
at the open boundary.
As a result, these boundary conditions are very effective
in overcoming the backflow instability.
We then look into the reduction consistency of these
boundary conditions, and study how these
conditions transform if some fluid components
are absent from the N-phase system.
The reduction consistency property limits the choice
and the form of those boundary conditions that
ensure the energy stability.
The N-phase outflow/open boundary conditions and also the inflow
boundary conditions proposed herein satisfy both
the energy stability and the reduction consistency.
The outflow/open boundary conditions proposed herein
are developed in the context of an N-phase physical formulation
we developed recently in \cite{Dong2017}.
This formulation is based on a phase field model for the N-fluid
mixture that is more general than a previous model \cite{Dong2014}.
The thermodynamic consistency and the reduction
consistency of this formulation
have been extensively studied in \cite{Dong2017}.
The formulation rigorously satisfies mass conservation,
momentum conservation, the second law of thermodynamics,
and the Galilean invariance principle.
This formulation is fully reduction consistent, provided that
an appropriate potential free energy density function
satisfying certain properties is employed
for the N-phase system~\cite{Dong2017}.
The reduction consistency of a set of Cahn-Hilliard type
equations for a three-component and multi-component
system (without hydrodynamic interactions) has previously been
considered in \cite{BoyerL2006,BoyerM2014}.
The thermodynamic consistency of two-phase and multiphase
systems has also been considered
in~\cite{LowengrubT1998,KimL2005,AbelsGG2012,HeidaMR2012,Dong2014,LiW2014,Dong2015,WuX2017}.
We refer the reader
to e.g.~\cite{AndersonMW1998,LiuS2003,YueFLS2004,BoyerLMPQ2010,Kim2012,ZhangW2016,BanasN2017,YangZWS2017,ZhaoLWY2017}
for other contributions to two-phase and multiphase flow problems.
We further present an efficient numerical algorithm
for the proposed outflow and inflow boundary conditions
together with the N-phase governing equations.
This is a semi-implicit splitting type scheme.
Special care is taken in the numerical treatment of
the open/outflow boundary conditions such that
the computations for different flow variables
and the computations for the ($N-1$) phase field
functions have all been de-coupled.
The algorithm involves only the solution of
a set of individual de-coupled Helmholtz-type equations (including Poisson) within
each time step. The resultant linear algebraic
systems after discretization involve only constant
and time-independent coefficient matrices,
which can be pre-computed during pre-processing,
even when large density contrasts and large viscosity
contrasts are involved in the N-phase system.
The novelties of this paper lie in two aspects:
(i) the set of N-phase energy-stable and reduction-consistent
outflow/open boundary conditions and inflow boundary conditions,
and (ii) the numerical algorithm for treating the proposed set
of outflow and inflow boundary conditions.
The rest of this paper is structured as follows.
In the rest of this section we provide a summary of
the general phase field model developed in \cite{Dong2017}
for the N-fluid mixture.
This model provides the basis for the N-phase energy balance
relation and the development of energy-stable boundary conditions.
In Section \ref{sec:method} we propose a set of outflow and
inflow boundary conditions based on considerations of energy
stability and reduction consistency of the N-phase system, and present an efficient
algorithm for numerically treating these boundary conditions
together with the N-phase governing equations.
In Section \ref{sec:tests} we present several representative
numerical examples involving multiple fluid components
and inflow/outflow boundaries to demonstrate the effectiveness of
the proposed outflow/open boundary conditions and
the performance of the numerical algorithm herein.
Section \ref{sec:summary} then concludes the discussion
with some closing remarks.
\input Model
\section*{Acknowledgement}
This work was partially supported by
NSF (DMS-1318820, DMS-1522537).
\bibliographystyle{plain}
\section{N-Phase Energy-Stable Open Boundary Conditions}
\label{sec:method}
In this section
we propose a set of N-phase outflow/open (and also inflow) boundary conditions
based on considerations of energy stability and reduction consistency,
and develop an algorithm for numerically treating the proposed
boundary conditions together with the N-phase governing equations.
\subsection{N-phase Energy Balance and Energy-Stable Boundary Conditions}
\label{sec:energy_balance}
We first derive the energy balance relation for
the N-phase model represented by \eqref{equ:nse_original}--\eqref{equ:CH_original},
and then based on this relation
look into possible forms for the boundary conditions
to ensure the energy stability of the N-phase
system.
It is straightforward to verify that
the $\rho(\vec{\phi})$ given by \eqref{equ:density_expr}
and $\tilde{\mathbf{J}}(\vec{\phi},\nabla\vec{\phi})$
given by \eqref{equ:J_expr}
satisfy the following relation
\begin{equation}
\frac{\partial\rho}{\partial t} + \mathbf{u}\cdot\nabla\rho
= -\nabla\cdot\tilde{\mathbf{J}}
\label{equ:mass_balance}
\end{equation}
where we have used equations \eqref{equ:CH_original}
and \eqref{equ:varphi_expr}.
Let $\mathbf{T} = -p\mathbf{I} + \mu\mathbf{D}(\mathbf{u})$ denote
the stress tensor,
where $\mathbf{I}$ is the identity tensor. Then
equation \eqref{equ:nse_original} can be written as
\begin{equation}
\rho\frac{D\mathbf{u}}{Dt}
+ \tilde{\mathbf{J}}\cdot\nabla\mathbf{u}
= \nabla\cdot\mathbf{T}
- \sum_{i=1}^{N-1} \nabla\cdot\left(
\nabla\phi_i \otimes \frac{\partial W}{\partial(\nabla\phi_i)}
\right),
\label{equ:nse_trans_1}
\end{equation}
where $\frac{D}{Dt}=\frac{\partial}{\partial t}+\mathbf{u}\cdot\nabla$
denotes the material derivative.
Taking the $L^2$ inner product between equation \eqref{equ:nse_trans_1}
and $\mathbf{u}$ leads to
\begin{equation}
\begin{split}
\frac{\partial}{\partial t}\int_{\Omega} \frac{1}{2}\rho|\mathbf{u}|^2 =&
-\int_{\Omega}\frac{\mu}{2}\|\mathbf{D}(\mathbf{u}) \|^2
-\int_{\Omega}\sum_{i=1}^{N-1}\left[\nabla\cdot\left(
\nabla\phi_i\otimes\frac{\partial W}{\partial\nabla\phi_i}
\right)\right]\cdot\mathbf{u} \\
& +\int_{\partial\Omega}\left[
\mathbf{n}\cdot\mathbf{T}\cdot\mathbf{u}
-\frac{1}{2}(\mathbf{n}\cdot\tilde{\mathbf{J}})|\mathbf{u}|^2
-\frac{1}{2}\rho|\mathbf{u}|^2\mathbf{n}\cdot\mathbf{u}
\right]
\end{split}
\label{equ:kinetic_energy_balance}
\end{equation}
where $\mathbf{n}$ is the outward-pointing unit vector
normal to $\partial\Omega$, and we have used the divergence theorem, the equations
\eqref{equ:continuity_original} and \eqref{equ:mass_balance},
and the following relations
\begin{equation}
\left\{
\begin{split}
&
(\nabla\cdot\mathbf{T})\cdot\mathbf{u}=\nabla\cdot(\mathbf{T}\cdot\mathbf{u})
-\mathbf{T}:(\nabla\mathbf{u})^T
=\nabla\cdot(\mathbf{T}\cdot\mathbf{u})+p\nabla\cdot\mathbf{u}
-\frac{\mu}{2}\|\mathbf{D}(\mathbf{u}) \|^2, \\
&
\rho\frac{D\mathbf{u}}{Dt}\cdot\mathbf{u} = \frac{D}{Dt}\left(\frac{1}{2}\rho|\mathbf{u}|^2 \right)
-\frac{D\rho}{Dt}\left(\frac{1}{2}|\mathbf{u}|^2 \right), \\
&
\left(\tilde{\mathbf{J}}\cdot\nabla\mathbf{u} \right)\cdot\mathbf{u}
=\nabla\cdot\left(\frac{1}{2}|\mathbf{u}|^2\tilde{\mathbf{J}} \right)
-\nabla\cdot\tilde{\mathbf{J}}\left(\frac{1}{2}|\mathbf{u}|^2 \right).
\end{split}
\right.
\end{equation}
Taking the $L^2$ inner product between equation \eqref{equ:CH_original}
and $\mathcal{C}_i$ and summing over $i$ from $1$ to $(N-1)$, we arrive at
\begin{equation}
\int_{\Omega}\sum_{j=1}^{N-1}\left(
\frac{\partial W}{\partial \phi_j}
- \nabla\cdot\frac{\partial W}{\partial\nabla\phi_j}
\right) \frac{D\phi_j}{Dt}
= -\int_{\Omega}\sum_{i,j=1}^{N-1}\tilde{m}_{ij}\nabla\mathcal{C}_i\cdot\nabla\mathcal{C}_j
+ \int_{\partial\Omega}\sum_{i,j=1}^{N-1}\tilde{m}_{ij}(\mathbf{n}\cdot\nabla\mathcal{C}_j)\mathcal{C}_i
\label{equ:free_eng_1}
\end{equation}
where we have used integration by parts, the divergence theorem,
and the equation \eqref{equ:chem_potential}.
By noting the relations
\begin{equation}
\left\{
\begin{split}
&
\frac{\partial W}{\partial t} = \sum_{i=1}^{N-1}\frac{\partial W}{\partial\phi_i}\frac{\partial\phi_i}{\partial t}
+ \sum_{i=1}^{N-1}\frac{\partial W}{\partial\nabla\phi_i}\cdot\nabla\frac{\partial\phi_i}{\partial t} \\
&
\nabla\cdot\frac{\partial W}{\partial\nabla\phi_i}\frac{\partial\phi_i}{\partial t}
=\nabla\cdot\left(\frac{\partial W}{\partial\nabla\phi_i} \frac{\partial\phi_i}{\partial t} \right)
-\frac{\partial W}{\partial\nabla\phi_i}\cdot\nabla \frac{\partial\phi_i}{\partial t},
\end{split}
\right.
\end{equation}
equation \eqref{equ:free_eng_1} can be transformed into
\begin{multline}
\int_{\Omega}\frac{\partial W}{\partial t}
+ \int_{\Omega}\sum_{i=1}^{N-1}\left(
\frac{\partial W}{\partial\phi_i} - \nabla\cdot\frac{\partial W}{\partial\nabla\phi_i}
\right)\mathbf{u}\cdot\nabla\phi_i
-\int_{\partial\Omega}\sum_{i=1}^{N-1}\mathbf{n}\cdot\frac{\partial W}{\partial\nabla\phi_i}
\frac{\partial\phi_i}{\partial t} \\
= -\int_{\Omega}\sum_{i,j=1}^{N-1}\tilde{m}_{ij}\nabla\mathcal{C}_i\cdot\nabla\mathcal{C}_j
+ \int_{\partial\Omega}\sum_{i,j=1}^{N-1}\tilde{m}_{ij}(\mathbf{n}\cdot\nabla\mathcal{C}_j)\mathcal{C}_i
\label{equ:free_eng_2}
\end{multline}
where we have used the divergence theorem.
With the help of the relations
\begin{equation}
\left\{
\begin{split}
&
\nabla\cdot\left(\nabla\phi_i\otimes\frac{\partial W}{\partial\nabla\phi_i} \right)\cdot\mathbf{u}
= \nabla\cdot\left(\frac{\partial W}{\partial\nabla\phi_i}\otimes\nabla\phi_i \right)\cdot\mathbf{u}
= \nabla\cdot\frac{\partial W}{\partial\nabla\phi_i}(\mathbf{u}\cdot\nabla\phi_i)
+\frac{\partial W}{\partial\nabla\phi_i}\cdot\nabla\nabla\phi_i\cdot\mathbf{u} \\
&
\mathbf{u}\cdot\nabla W = \sum_{i=1}^{N-1}\frac{\partial W}{\partial\phi_i}\mathbf{u}\cdot\nabla\phi_i
+ \sum_{i=1}^{N-1}\frac{\partial W}{\partial\nabla\phi_i}\cdot\nabla\nabla\phi_i\cdot\mathbf{u} \\
&
\mathbf{u}\cdot\nabla W = \nabla\cdot(\mathbf{u} W)
\end{split}
\right.
\end{equation}
we can further transform \eqref{equ:free_eng_2} into
\begin{equation}
\begin{split}
\frac{\partial}{\partial t}\int_{\Omega} W =&
-\int_{\Omega}\sum_{i,j=1}^{N-1}\tilde{m}_{ij}\nabla\mathcal{C}_i\cdot\nabla\mathcal{C}_j
+ \int_{\Omega} \sum_{i=1}^{N-1} \nabla\cdot\left(\nabla\phi_i\otimes\frac{\partial W}{\partial\nabla\phi_i} \right)\cdot\mathbf{u} \\
&
+ \int_{\partial\Omega}\sum_{i,j=1}^{N-1}\tilde{m}_{ij}(\mathbf{n}\cdot\nabla\mathcal{C}_j)\mathcal{C}_i
+ \int_{\partial\Omega}\sum_{i=1}^{N-1}\mathbf{n}\cdot\frac{\partial W}{\partial\nabla\phi_i}\frac{\partial\phi_i}{\partial t}
-\int_{\partial\Omega}W (\mathbf{n}\cdot\mathbf{u})
\end{split}
\label{equ:free_eng_3}
\end{equation}
where we have used the divergence theorem and equation \eqref{equ:continuity_original}.
Summing up equations \eqref{equ:kinetic_energy_balance}
and \eqref{equ:free_eng_3}, we obtain the energy balance
equation for the N-phase system described
by \eqref{equ:nse_original}--\eqref{equ:CH_original}:
\begin{equation}
\begin{split}
\frac{\partial}{\partial t}\int_{\Omega}\left[\frac{1}{2}\rho|\mathbf{u}|^2 + W \right]
=& -\int_{\Omega}\frac{\mu}{2}\|\mathbf{D}(\mathbf{u}) \|^2
- \int_{\Omega}\sum_{i,j=1}^{N-1}\tilde{m}_{ij}\nabla\mathcal{C}_i\cdot\nabla\mathcal{C}_j \\
& + \int_{\partial\Omega} \underbrace{ \left[
\mathbf{n}\cdot\mathbf{T}\cdot\mathbf{u}
-\frac{1}{2}(\mathbf{n}\cdot\tilde{\mathbf{J}})|\mathbf{u}|^2
-\frac{1}{2}\rho|\mathbf{u}|^2\mathbf{n}\cdot\mathbf{u}
-W \mathbf{n}\cdot\mathbf{u}
\right] }_{\text{boundary term (I)}} \\
& + \int_{\partial\Omega} \underbrace{
\sum_{i,j=1}^{N-1}\tilde{m}_{ij}(\mathbf{n}\cdot\nabla\mathcal{C}_j)\mathcal{C}_i
}_{\text{boundary term (II)}}
+ \int_{\partial\Omega}\underbrace{
\sum_{i=1}^{N-1}\mathbf{n}\cdot\frac{\partial W}{\partial\nabla\phi_i}\frac{\partial\phi_i}{\partial t} }_{\text{boundary term (III)}}.
\end{split}
\label{equ:energy_balance}
\end{equation}
Since the free energy form $W(\vec{\phi},\nabla\vec{\phi})$ and
the order parameters $\phi_i$ ($1\leqslant i\leqslant N-1$) are
unspecified,
the above energy balance holds
for any specific form of $W(\vec{\phi},\nabla\vec{\phi})$
and any specific choice of the order parameters.
In the above energy balance equation, the left hand side (LHS) is
the time derivative of the total energy of the N-phase system.
On the right hand side (RHS), the volume-integral terms
are always dissipative by noting the symmetric positive definiteness
of the matrix formed by $\tilde{m}_{ij}$ ($1\leqslant i,j\leqslant N-1$).
The boundary-integral terms, on the other hand, can be
positive or negative, depending on the boundary conditions.
We are interested in boundary conditions for the flow
and phase field variables which ensure that
the boundary-integral terms (I), (II) and (III) in the energy balance equation
\eqref{equ:energy_balance} are non-positive.
In other words, the contributions of the boundary terms will
be dissipative under these conditions.
As such, the total energy of the system will not
increase over time, and this ensures the energy stability
of the N-phase system.
We refer to such boundary conditions
as energy-stable boundary conditions.
We look into the following
choices that ensure the dissipativeness
of the boundary term (I) in equation \eqref{equ:energy_balance}:
\begin{subequations}
\begin{equation}
\mathbf{u}=0, \quad \text{on} \ \partial\Omega;
\label{equ:bc_vel_1}
\end{equation}
\begin{equation}
\mathbf{n}\cdot\mathbf{T} - W\mathbf{n}
-\frac{1}{2}(\mathbf{n}\cdot\tilde{\mathbf{J}})\mathbf{u}
-\frac{1}{2}\rho|\mathbf{u}|^2\mathbf{n} = 0, \quad \text{on} \ \partial\Omega;
\label{equ:bc_vel_2}
\end{equation}
\begin{multline}
\mathbf{n}\cdot\mathbf{T} - W\mathbf{n}
-\frac{1}{2}(\mathbf{n}\cdot\tilde{\mathbf{J}})\mathbf{u} \\
-\rho\left[\theta \frac{1}{2}(\mathbf{u}\cdot\mathbf{u})\mathbf{n}
+ (1-\theta)\frac{1}{2}(\mathbf{n}\cdot\mathbf{u})\mathbf{u}
-C_1(\mathbf{n},\mathbf{u})\mathbf{u} + C_2(\mathbf{n},\mathbf{u})\mathbf{n}
\right]\Theta_0(\mathbf{n},\mathbf{u}) = 0, \quad \text{on} \ \partial\Omega;
\label{equ:bc_vel_3}
\end{multline}
\end{subequations}
where $\theta$ is a constant parameter
satisfying $0\leqslant\theta\leqslant 1$,
and $C_1(\mathbf{n},\mathbf{u})\geqslant 0$
and $C_2(\mathbf{n},\mathbf{u})\geqslant 0$ are
two non-negative constants or functions.
$\Theta_0(\mathbf{n},\mathbf{u})$ is a smoothed
step function given in \cite{DongS2015}, expressed as follows,
\begin{equation}
\Theta_0(\mathbf{n},\mathbf{u}) = \frac{1}{2}\left(
1-\tanh\frac{\mathbf{n}\cdot\mathbf{u}}{U_0\delta}
\right),
\quad \lim_{\delta\rightarrow 0}\Theta_0(\mathbf{n},\mathbf{u})
= \Theta_{s0}(\mathbf{n},\mathbf{u})
=\left\{
\begin{array}{ll}
1, & \text{if} \ \mathbf{n}\cdot\mathbf{u}<0 \\
0, & \text{otherwise}
\end{array}
\right.
\end{equation}
where $U_0$ is a velocity scale, and
$\delta>0$ is a small positive parameter that controls the
sharpness of the smoothed step function.
As $\delta\rightarrow 0$, $\Theta_0$ approaches
the step function $\Theta_{s0}$, taking unit value when
$\mathbf{n}\cdot\mathbf{u}<0$ and zero otherwise.
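For reference, a minimal numerical sketch of $\Theta_0$ (our own illustration with assumed $U_0$ and $\delta$; not code from \cite{DongS2015}):
\begin{verbatim}
import numpy as np

def theta0(n_dot_u, U0=1.0, delta=0.05):
    """Smoothed step function Theta_0; n_dot_u is the normal velocity n.u."""
    return 0.5 * (1.0 - np.tanh(n_dot_u / (U0 * delta)))

# Theta_0 -> 1 where n.u < 0 (backflow) and -> 0 where n.u > 0 (outflow),
# approaching the sharp step Theta_s0 as delta -> 0.
for ndu in (-0.5, -0.01, 0.0, 0.01, 0.5):
    print(ndu, theta0(ndu))
\end{verbatim}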
The boundary condition \eqref{equ:bc_vel_2}
ensures the energy stability. But it prohibits
the kinetic energy from being convected out of the domain
in the presence of inflow/outflows, leading to unphysical results.
The form of the $\Theta_0$ term in
condition \eqref{equ:bc_vel_3}
is inspired by the boundary condition developed in \cite{DongS2015}
for the single-phase incompressible Navier-Stokes equations;
see also \cite{Dong2014obc,DongW2016} for two-phase flows.
The condition \eqref{equ:bc_vel_3}
ensures the energy dissipation
of the boundary term (I) as $\delta\rightarrow 0$,
i.e.~when $\delta$ is sufficiently small, because
with this condition
\begin{equation}
\begin{split}
\mathbf{n}\cdot\mathbf{T}\cdot\mathbf{u}
-\frac{1}{2}(\mathbf{n}\cdot\tilde{\mathbf{J}})|\mathbf{u}|^2 &
-\frac{1}{2}\rho|\mathbf{u}|^2\mathbf{n}\cdot\mathbf{u}
-W \mathbf{n}\cdot\mathbf{u} \\
&=\left\{
\begin{array}{ll}
-C_1\rho|\mathbf{u}|^2 + C_2\rho\mathbf{n}\cdot\mathbf{u}\leqslant 0, &
\text{where} \ \mathbf{n}\cdot\mathbf{u}<0, \\
-\frac{1}{2}\rho|\mathbf{u}|^2\mathbf{n}\cdot\mathbf{u}\leqslant 0, &
\text{where} \ \mathbf{n}\cdot\mathbf{u}\geqslant 0,
\end{array}
\right.
\quad \text{on} \ \partial\Omega, \ \text{as} \ \delta\rightarrow 0.
\end{split}
\end{equation}
We look into the following
choices that ensure the energy dissipation of the
boundary term (II) in equation \eqref{equ:energy_balance}:
\begin{subequations}
\begin{equation}
\sum_{i=1}^{N-1}\tilde{m}_{ij}\mathcal{C}_i = 0, \quad 1\leqslant j\leqslant N-1, \quad \text{on} \ \partial\Omega;
\label{equ:bc_phi_A_1}
\end{equation}
\begin{equation}
\sum_{j=1}^{N-1}\tilde{m}_{ij}\mathbf{n}\cdot\nabla\mathcal{C}_j=0, \quad 1\leqslant i\leqslant N-1, \quad \text{on} \ \partial\Omega;
\label{equ:bc_phi_A_2}
\end{equation}
\begin{equation}
\sum_{j=1}^{N-1}\tilde{m}_{ij}\mathbf{n}\cdot\nabla\mathcal{C}_j = -\sum_{j=1}^{N-1}d_{ij}\mathcal{C}_j,
\quad 1\leqslant i\leqslant N-1, \quad \text{on} \ \partial\Omega;
\label{equ:bc_phi_A_3}
\end{equation}
\end{subequations}
In \eqref{equ:bc_phi_A_3} $d_{ij}$ ($1\leqslant i,j\leqslant N-1$) are
chosen coefficients, and the $(N-1)\times(N-1)$ matrix formed by $d_{ij}$
is required to be symmetric semi-positive definite.
Because the matrix $\tilde{\mathbf{m}}$ formed by
$\tilde{m}_{ij}$ is symmetric positive definite, the boundary conditions \eqref{equ:bc_phi_A_1}
and \eqref{equ:bc_phi_A_2} are equivalent
to $\mathcal{C}_i=0$ and $\mathbf{n}\cdot\nabla\mathcal{C}_i=0$
($1\leqslant i\leqslant N-1$) on $\partial\Omega$,
respectively.
We look into the following
choices to ensure the dissipativeness of the
boundary term (III) in equation \eqref{equ:energy_balance}:
\begin{subequations}
\begin{equation}
\frac{\partial\phi_i}{\partial t} = 0, \quad 1\leqslant i\leqslant N-1,
\quad \text{on} \ \partial\Omega;
\label{equ:bc_phi_B_1}
\end{equation}
\begin{equation}
\mathbf{n}\cdot\frac{\partial W}{\partial\nabla\phi_i} = 0, \quad 1\leqslant i\leqslant N-1,
\quad \text{on}\ \partial\Omega;
\label{equ:bc_phi_B_2}
\end{equation}
\begin{equation}
\mathbf{n}\cdot\frac{\partial W}{\partial\nabla\phi_i}
= -\sum_{j=1}^{N-1}q_{ij}\frac{\partial\phi_j}{\partial t},
\quad 1\leqslant i\leqslant N-1, \quad \text{on}\ \partial\Omega;
\label{equ:bc_phi_B_3}
\end{equation}
\end{subequations}
In \eqref{equ:bc_phi_B_3} $q_{ij}$ ($1\leqslant i,j\leqslant N-1$)
are chosen coefficients, and the matrix formed by $q_{ij}$ is
required to be symmetric semi-positive definite.
The boundary conditions \eqref{equ:bc_vel_1}--\eqref{equ:bc_vel_3},
\eqref{equ:bc_phi_A_1}--\eqref{equ:bc_phi_A_3},
and \eqref{equ:bc_phi_B_1}--\eqref{equ:bc_phi_B_3} are
favorable from the energy stability standpoint.
Additionally, the boundary conditions should satisfy
the reduction consistency property for the N-phase systems,
as pointed out in \cite{Dong2017}.
The reduction consistency
consideration can place restrictions on the form of these boundary
conditions. In the subsequent section
we look into
the implications of the reduction consistency
property on these boundary conditions, and
in particular we suggest conditions for the inflow and
outflow boundaries taking account of both reduction consistency and
energy stability.
\subsection{Reduction Consistency and Inflow/Outflow Boundary Conditions}
\label{sec:reduction_consistency}
The reduction consistency of N-phase formulations has been investigated
extensively in \cite{Dong2017}.
Let us first define reduction consistency according to \cite{Dong2017},
and then apply this requirement to the energy-stable
boundary conditions from the previous subsection.
A physical entity (e.g.~variable, equation, or condition)
for the N-phase system
is said to be reduction consistent if it has the following property:
If only a set of $M$ ($2\leqslant M\leqslant N-1$) fluid components are
present in the N-phase system, then the physical entity
for the N-phase system reduces to that for the corresponding
equivalent M-phase system.
We insist that the formulation for the N-phase system should honor
this property, i.e.~the N-phase formulation should be
reduction consistent.
Issues of reduction consistency have been considered recently in
\cite{Dong2017} for the N-phase governing equations (coupled
system of momentum and phase-field equations);
see also \cite{BoyerL2006,BoyerM2014} for an investigation of
the consistency issues of a system of Cahn-Hilliard type equations
(without hydrodynamic interaction).
The consistency properties
explored in \cite{Dong2017} can be summarized as
the following three:
\begin{enumerate}[($\mathscr{C}$1):]
\item
The N-phase free energy
density function should be reduction consistent;
\item
The N-phase governing equations should be reduction consistent;
\item
The boundary conditions for the N-phase system should
be reduction consistent.
\end{enumerate}
The goal of this subsection is to investigate
the implications of the consistency property ($\mathscr{C}$3)
on the energy-stable boundary conditions
from the previous subsection.
To make the presentation more concrete, hereafter
we will specifically employ the volume fractions of the first
($N-1$) fluids as the set of order parameters,
namely,
\begin{equation}
\phi_i \equiv c_i, \ \phi_i\in[0,1], \quad 1\leqslant i\leqslant N-1; \quad
\vec{\phi} = \vec{c} = (c_1,c_2,\dots,c_{N-1})^T.
\end{equation}
Then with this choice,
equation \eqref{equ:varphi_expr} is given by (see \cite{Dong2015}
for details)
\begin{equation}
\varphi_i(\vec{c}) = \sum_{j=1}^{N-1}a_{ij}c_j - \tilde{\rho}_N, \ \
1\leqslant i\leqslant N-1; \quad
a_{ij} = \tilde{\rho}_{i}\delta_{ij}+\tilde{\rho}_N, \ \
1\leqslant i,j\leqslant N-1
\label{equ:varphi_volfrac_expr}
\end{equation}
where $\delta_{ij}$ is the Kronecker delta.
Let $\mathbf{A}_1=[a_{ij}]_{(N-1)\times(N-1)}$.
It is straightforward to verify that $\mathbf{A}_1$ is
symmetric positive definite and thus non-singular.
It should be noted that
the boundary conditions and numerical algorithms
presented below can be formulated similarly
in terms of the class of general order parameters
introduced in \cite{Dong2015}.
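As a concrete illustration (a minimal sketch with assumed densities $\tilde{\rho}_i$, not code from this work), the matrix $\mathbf{A}_1$ of \eqref{equ:varphi_volfrac_expr} can be assembled and its symmetric positive definiteness verified numerically:
\begin{verbatim}
import numpy as np

# assumed densities of the N = 4 fluids (illustrative values only)
rho_tilde = np.array([1.0, 0.8, 0.5, 1.2])
N = len(rho_tilde)

# a_ij = rho_i * delta_ij + rho_N,  1 <= i,j <= N-1
A1 = np.diag(rho_tilde[:-1]) + rho_tilde[-1] * np.ones((N - 1, N - 1))

assert np.allclose(A1, A1.T)                 # symmetric by construction
assert np.all(np.linalg.eigvalsh(A1) > 0.0)  # positive definite
\end{verbatim}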
Following \cite{Dong2017}, we employ the following general
form for the free energy density function
\begin{equation}
W(\vec{c},\nabla\vec{c}) =
\sum_{i,j=1}^{N-1}\frac{\lambda_{ij}}{2}\nabla c_i\cdot\nabla c_j
+ H(\vec{c})
\label{equ:free_energy}
\end{equation}
where the constants $\lambda_{ij}$ ($1\leqslant i,j\leqslant N-1$) are
referred to as the mixing energy density coefficients,
and the matrix
$\mathbf{A}=[\lambda_{ij}]_{(N-1)\times(N-1)}$ is required
to be symmetric positive definite.
$H(\vec{c})$ is referred to as the potential energy density
function, and is to be specified later.
In this work we assume that the coefficients
$\tilde{m}_{ij}$ ($1\leqslant i,j\leqslant N-1$) in
\eqref{equ:CH_original} are constants.
The following are the conditions obtained in \cite{Dong2017}
about $\lambda_{ij}$, $H(\vec{c})$,
and $\tilde{m}_{ij}$ based on
the reduction consistency properties
($\mathscr{C}$1) and ($\mathscr{C}$2):
\begin{enumerate}[(DC-1):]
\item
$\lambda_{ij}$ are given by
\begin{equation}
\lambda_{ij} = \frac{3}{\sqrt{2}}\eta(\sigma_{iN} + \sigma_{jN} - \sigma_{ij}),
\quad 1\leqslant i,j\leqslant N-1
\label{equ:lambda_ij_expr}
\end{equation}
where $\eta$ is the characteristic interfacial thickness,
$\sigma_{ij}$ ($1\leqslant i\neq j\leqslant N$) is the surface
tension between fluids $i$ and $j$, and $\sigma_{ii}=0$ ($1\leqslant i\leqslant N$).
\item
$\tilde{m}_{ij}$ are given by
\begin{equation}
[\tilde{m}_{ij}]_{(N-1)\times(N-1)} = \tilde{\mathbf{m}}
= m_0\mathbf{A}_1\mathbf{A}^{-1}\mathbf{A}_1^T
\label{equ:mij_expr}
\end{equation}
where the constant $m_0>0$ is the mobility coefficient,
$\mathbf{A}_1$ is the matrix formed by $a_{ij}$ as given
in \eqref{equ:varphi_volfrac_expr}, and $\mathbf{A}$ is
the matrix formed by $\lambda_{ij}$.
\item
$H(\vec{c})$ is reduction consistent.
\item
If any one fluid $k$ ($1\leqslant k\leqslant N$) is absent
from the N-phase system, i.e.~$c_k\equiv 0$, then $H(\vec{c})$ is chosen such that
\begin{equation}
\left\{
\begin{split}
&
L_k^{(N)} = 0 \\
&
L_i^{(N-1)} = L_i^{(N)}, \quad 1\leqslant i\leqslant k-1, \\
&
L_i^{(N-1)} = L_{i+1}^{(N)}, \quad k\leqslant i\leqslant N-1,
\end{split}
\right.
\label{equ:L_cond}
\end{equation}
where
$L_i^{(N)}$ ($1\leqslant i\leqslant N$) is defined by
\begin{equation}
\left\{
\begin{split}
&
L_i^{(N)} = \sum_{j=1}^{N-1}\zeta_{ij}^{(N)}\frac{\partial H^{(N)}}{\partial c_j^{(N)}},
\quad 1\leqslant i\leqslant N-1,
\quad \text{where} \ \left[\zeta_{ij}^{(N)}\right]_{(N-1)\times(N-1)} = \mathbf{A}^{-1};
\\
&
L_N^{(N)} = -\sum_{i=1}^{N-1}L_i^{(N)}.
\end{split}
\right.
\label{equ:L_def}
\end{equation}
In the above equations the superscript $N$ in $(\cdot)^{(N)}$ accentuates
the point that the variable is with respect to the N-phase system.
\end{enumerate}
It is shown in \cite{Dong2017} that, with $\lambda_{ij}$ and $\tilde{m}_{ij}$
given by \eqref{equ:lambda_ij_expr} and
\eqref{equ:mij_expr} respectively, and $H(\vec{c})$ satisfying
(DC-3) and (DC-4), the N-phase governing equations represented by
\eqref{equ:nse_original}--\eqref{equ:CH_original} and
the free energy density function given by \eqref{equ:free_energy}
satisfy the reduction consistency properties ($\mathscr{C}$1)
and ($\mathscr{C}$2).
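To make (DC-1) and (DC-2) concrete, the following minimal sketch (with assumed surface tensions, densities, interfacial thickness and mobility; our own illustration) evaluates \eqref{equ:lambda_ij_expr} and \eqref{equ:mij_expr} and verifies that $\tilde{\mathbf{m}}$ is symmetric positive definite:
\begin{verbatim}
import numpy as np

N, eta, m0 = 4, 0.01, 1e-3   # assumed N, interfacial thickness, mobility
sigma = np.array([[0.0, 1.0, 1.2, 0.9],    # assumed sigma_ij, sigma_ii = 0
                  [1.0, 0.0, 0.8, 1.1],
                  [1.2, 0.8, 0.0, 1.0],
                  [0.9, 1.1, 1.0, 0.0]])
rho_tilde = np.array([1.0, 0.8, 0.5, 1.2])  # assumed densities

# lambda_ij = (3/sqrt(2)) * eta * (sigma_iN + sigma_jN - sigma_ij)
i, j = np.meshgrid(range(N - 1), range(N - 1), indexing="ij")
A = 3.0 / np.sqrt(2.0) * eta * (sigma[i, N - 1] + sigma[j, N - 1]
                                - sigma[i, j])

A1 = np.diag(rho_tilde[:-1]) + rho_tilde[-1] * np.ones((N - 1, N - 1))
m_tilde = m0 * A1 @ np.linalg.inv(A) @ A1.T  # Eq. (mij_expr)

assert np.all(np.linalg.eigvalsh(0.5 * (m_tilde + m_tilde.T)) > 0.0)
\end{verbatim}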
In subsequent discussions, whenever necessary,
we will use the superscript notation
$(\cdot)^{(N)}$ to signify that the variable is with respect to
the N-phase system,
but will drop the superscript
where no confusion arises.
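To make the constructions in (DC-1) and (DC-2) concrete, the following minimal
sketch (in Python/numpy) assembles $\mathbf{A}=[\lambda_{ij}]$, verifies that it is SPD,
and forms $\tilde{\mathbf{m}}$ and $[\zeta_{ij}]=\mathbf{A}^{-1}$. The surface tension
values and the matrix $\mathbf{A}_1$ here are illustrative placeholders only, since
$\mathbf{A}_1$ depends on the choice of order parameters in \eqref{equ:varphi_volfrac_expr}:
\begin{verbatim}
import numpy as np

def mixing_energy_matrix(sigma, eta):
    # lambda_ij = 3/sqrt(2)*eta*(sigma_iN + sigma_jN - sigma_ij), per (DC-1)
    N = sigma.shape[0]
    lam = np.empty((N - 1, N - 1))
    for i in range(N - 1):
        for j in range(N - 1):
            lam[i, j] = 3.0 / np.sqrt(2.0) * eta * (
                sigma[i, N - 1] + sigma[j, N - 1] - sigma[i, j])
    return lam

# Illustrative three-phase example (N = 3); sigma values are placeholders.
sigma = np.array([[0.0,   0.01,  0.02],
                  [0.01,  0.0,   0.015],
                  [0.02,  0.015, 0.0]])
eta = 0.01
A = mixing_energy_matrix(sigma, eta)
np.linalg.cholesky(A)                  # raises LinAlgError unless A is SPD
m0 = 1.0e-4
A1 = np.eye(2)                         # placeholder for [a_ij]
m_tilde = m0 * A1 @ np.linalg.inv(A) @ A1.T   # mobility matrix, per (DC-2)
zeta = np.linalg.inv(A)                # [zeta_ij] = A^{-1}, as in (DC-4)
\end{verbatim}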
\subsubsection{Reduction Consistency of Boundary Conditions}
We employ the $\lambda_{ij}$ and $\tilde{m}_{ij}$ values
given by \eqref{equ:lambda_ij_expr} and \eqref{equ:mij_expr}, and
assume that the potential energy density function
$H(\vec{c})$ satisfies the conditions (DC-3) and (DC-4).
Let us now look into
the energy-stable boundary conditions
from Section \ref{sec:energy_balance}
in light of the reduction consistency requirement ($\mathscr{C}$3).
We insist that the boundary conditions \eqref{equ:bc_vel_1}--\eqref{equ:bc_vel_3},
\eqref{equ:bc_phi_A_1}--\eqref{equ:bc_phi_A_3}
and \eqref{equ:bc_phi_B_1}--\eqref{equ:bc_phi_B_3}
should satisfy the consistency property ($\mathscr{C}$3).
To ensure the reduction consistency between
the N-phase and M-phase ($2\leqslant M\leqslant N-1$) systems,
it suffices to consider only the reduction between
N-phase and ($N-1$)-phase systems, i.e.~the case in which only one fluid
component is absent from the system.
Consider first the conditions \eqref{equ:bc_vel_1}--\eqref{equ:bc_vel_3}.
The condition \eqref{equ:bc_vel_1}
is evidently reduction consistent because no phase field variable is
involved.
The conditions \eqref{equ:bc_vel_2} and \eqref{equ:bc_vel_3}
are reduction consistent because, as shown in
\cite{Dong2017},
the variables
$\rho(\vec{c})$ and $\mu(\vec{c})$ given by \eqref{equ:density_expr}
are reduction consistent, and the $\tilde{\mathbf{J}}$ given by \eqref{equ:J_expr}
is also reduction consistent under
the condition (DC-4).
Note also that the free energy density function
$W(\vec{c},\nabla\vec{c})$ given by \eqref{equ:free_energy}
satisfies the consistency property ($\mathscr{C}$1) under
the condition (DC-3), as mentioned earlier.
We next consider the boundary
conditions \eqref{equ:bc_phi_A_1}--\eqref{equ:bc_phi_A_3}.
Define
\begin{equation}
\vec{\mathcal{C}}=\left[\mathcal{C}_i \right]_{(N-1)\times 1}, \ \
\frac{\partial H}{\partial \vec{c}} = \left[\frac{\partial H}{\partial c_i} \right]_{(N-1)\times 1}, \ \
\mathbf{D} = \left[d_{ij} \right]_{(N-1)\times(N-1)}.
\end{equation}
In light of the equations \eqref{equ:varphi_volfrac_expr},
\eqref{equ:free_energy} and \eqref{equ:lambda_ij_expr},
the chemical potentials $\mathcal{C}_i$ can be obtained from
equation \eqref{equ:chem_potential}
in a matrix form,
\begin{equation}
\vec{\mathcal{C}} = \mathbf{A}_1^{-T}\left( \frac{\partial H}{\partial\vec{c}}
- \mathbf{A}\nabla^2\vec{c} \right).
\label{equ:chempot_expr}
\end{equation}
So boundary condition \eqref{equ:bc_phi_A_1} is transformed into
\begin{equation}
\left\{
\begin{split}
&
-\nabla^2\vec{c} + \mathbf{A}^{-1}\frac{\partial H}{\partial \vec{c}}=0,
\ \ \text{on} \ \partial\Omega, \ \text{or equivalently}
\\
&
-\nabla^2 c_i + \sum_{j=1}^{N-1}\zeta_{ij}\frac{\partial H}{\partial c_j} = 0,
\ \ 1\leqslant i\leqslant N-1, \ \ \text{on} \ \partial\Omega
\end{split}
\right.
\label{equ:bc_phi_A_1_trans}
\end{equation}
where we have used \eqref{equ:mij_expr}.
Boundary conditions \eqref{equ:bc_phi_A_2} can be transformed into
\begin{equation}
\left\{
\begin{split}
&
\mathbf{n}\cdot\nabla\left(-\nabla^2\vec{c} + \mathbf{A}^{-1}\frac{\partial H}{\partial \vec{c}}\right)=0,
\ \ \text{on} \ \partial\Omega, \ \text{or equivalently}
\\
&
\mathbf{n}\cdot\nabla\left(-\nabla^2 c_i + \sum_{j=1}^{N-1}\zeta_{ij}\frac{\partial H}{\partial c_j}\right) = 0,
\ \ 1\leqslant i\leqslant N-1, \ \ \text{on} \ \partial\Omega.
\end{split}
\right.
\label{equ:bc_phi_A_2_trans}
\end{equation}
Equations \eqref{equ:bc_phi_A_1_trans} and \eqref{equ:bc_phi_A_2_trans}
are reduction consistent under
the condition (DC-4). It suffices to consider only \eqref{equ:bc_phi_A_1_trans}.
Suppose the fluid $k$ (for any $1\leqslant k\leqslant N$) is absent
from the N-phase system, i.e.~$c_k^{(N)}\equiv 0$.
Let
$\chi_i$ denote a generic variable from the set of variables
$\{ c_i, \rho_i, \tilde{\rho}_i, \tilde{\mu}_i, \tilde{\gamma}_i \}$; then
the following correspondence relations hold between the N-phase system and
the ($N-1$)-phase system without fluid $k$:
\begin{equation}
\chi_i^{(N-1)} = \left\{
\begin{array}{ll}
\chi_i^{(N)}, & 1\leqslant i< k \\
\chi_{i+1}^{(N)}, & k\leqslant i\leqslant N-1.
\end{array}
\right.
\label{equ:correspond_relation}
\end{equation}
Therefore, for $i=k$ (when $k\leqslant N-1$),
the equation \eqref{equ:bc_phi_A_1_trans} becomes
an identity,
\begin{equation}
-\nabla^2 c_k^{(N)} + \sum_{j=1}^{N-1}\zeta_{kj}^{(N)}\frac{\partial H^{(N)}}{\partial c_j^{(N)}} = -\nabla^2 c_k^{(N)} + L_k^{(N)} = 0
\end{equation}
in light of the equation \eqref{equ:L_cond} under
the condition (DC-4).
For $1\leqslant i\leqslant k-1$, the equation \eqref{equ:bc_phi_A_1_trans} becomes
\begin{equation}
\begin{split}
0 &= -\nabla^2c_i^{(N)}
+ \sum_{j=1}^{N-1}\zeta_{ij}^{(N)}\frac{\partial H^{(N)}}{\partial c_j^{(N)}}
= -\nabla^2c_i^{(N)} + L_i^{(N)}
= -\nabla^2c_i^{(N-1)} + L_i^{(N-1)} \\
&= -\nabla^2c_i^{(N-1)} + \sum_{j=1}^{N-2}\zeta_{ij}^{(N-1)}\frac{\partial H^{(N-1)}}{\partial c_j^{(N-1)}}, \quad
1\leqslant i\leqslant k-1
\end{split}
\end{equation}
where we have used the correspondence relation \eqref{equ:correspond_relation}
and the equation \eqref{equ:L_cond} under the condition (DC-4).
Therefore,
\begin{equation}
-\nabla^2c_i^{(N)}
+ \sum_{j=1}^{N-1}\zeta_{ij}^{(N)}\frac{\partial H^{(N)}}{\partial c_j^{(N)}}
=0 \ \ \Longrightarrow \ \
-\nabla^2c_i^{(N-1)} + \sum_{j=1}^{N-2}\zeta_{ij}^{(N-1)}\frac{\partial H^{(N-1)}}{\partial c_j^{(N-1)}} = 0, \quad
1\leqslant i\leqslant k-1.
\end{equation}
For $k\leqslant i\leqslant N-1$ (i.e.~$k+1\leqslant i+1\leqslant N$), equation \eqref{equ:bc_phi_A_1_trans} becomes
\begin{equation}
\begin{split}
0 &= -\nabla^2c_{i+1}^{(N)}
+ \sum_{j=1}^{N-1}\zeta_{i+1,j}^{(N)}\frac{\partial H^{(N)}}{\partial c_j^{(N)}}
= -\nabla^2c_{i+1}^{(N)} + L_{i+1}^{(N)}
= -\nabla^2c_i^{(N-1)} + L_i^{(N-1)} \\
&= -\nabla^2c_i^{(N-1)} + \sum_{j=1}^{N-2}\zeta_{ij}^{(N-1)}\frac{\partial H^{(N-1)}}{\partial c_j^{(N-1)}}, \quad
k\leqslant i\leqslant N-1
\end{split}
\end{equation}
where we have used \eqref{equ:correspond_relation}
and \eqref{equ:L_cond}. Therefore,
\begin{equation}
-\nabla^2c_{i+1}^{(N)}
+ \sum_{j=1}^{N-1}\zeta_{i+1,j}^{(N)}\frac{\partial H^{(N)}}{\partial c_j^{(N)}}
=0 \ \ \Longrightarrow \ \
-\nabla^2c_i^{(N-1)} + \sum_{j=1}^{N-2}\zeta_{ij}^{(N-1)}\frac{\partial H^{(N-1)}}{\partial c_j^{(N-1)}} = 0, \quad
k\leqslant i\leqslant N-1.
\end{equation}
Combining the above results, we conclude that
if any fluid is absent then the boundary condition
\eqref{equ:bc_phi_A_1_trans} for the N-phase system
will reduce to that for the corresponding $(N-1)$-phase
system. So it is reduction consistent. It follows that
the boundary condition \eqref{equ:bc_phi_A_2_trans}
is also reduction consistent
under the condition (DC-4).
The boundary condition \eqref{equ:bc_phi_A_3}
can be written in matrix form as
\begin{equation}
\tilde{\mathbf{m}}(\mathbf{n}\cdot\nabla\vec{\mathcal{C}})
=-\mathbf{D}\vec{\mathcal{C}} \ \Longrightarrow \
m_0\mathbf{n}\cdot \nabla \left(-\nabla^2\vec{c}+\mathbf{A}^{-1}\frac{\partial H}{\partial\vec{c}} \right)
=-\mathbf{A}_1^{-1}\mathbf{DA}_1^{-T}\mathbf{A}\left(
-\nabla^2\vec{c}+\mathbf{A}^{-1}\frac{\partial H}{\partial\vec{c}} \right)
\end{equation}
where we have used \eqref{equ:chempot_expr} and \eqref{equ:mij_expr}.
Let
$\mathbf{A}_1^{-1}\mathbf{DA}_1^{-T}\mathbf{A} = \left[b_{ij} \right]_{(N-1)\times(N-1)}$.
Then the above equation can be written in component form as
\begin{equation}
m_0\mathbf{n}\cdot\nabla\left(-\nabla^2c_i + \sum_{j=1}^{N-1}\zeta_{ij}\frac{\partial H}{\partial c_j} \right)
= - \sum_{j=1}^{N-1}b_{ij}\left(-\nabla^2c_j
+ \sum_{k=1}^{N-1}\zeta_{jk}\frac{\partial H}{\partial c_k} \right),
\quad 1\leqslant i\leqslant N-1.
\label{equ:bc_phi_A_3_trans}
\end{equation}
Note that the terms
$
\left( -\nabla^2c_i + \sum_{j=1}^{N-1}\zeta_{ij}\frac{\partial H}{\partial c_j} \right)
$
for $1\leqslant i\leqslant N-1$
are reduction consistent, as shown in the above discussions.
Therefore, a sufficient condition for
the equation \eqref{equ:bc_phi_A_3_trans} to
be reduction consistent is that
the matrix formed by $b_{ij}$ be diagonal,
\begin{equation}
\mathbf{A}_1^{-1}\mathbf{DA}_1^{-T}\mathbf{A}
= \text{diag}(\hat{e}_1, \dots,\hat{e}_{N-1}) = \mathbf{G}
\end{equation}
for some $\hat{e}_i$ ($1\leqslant i\leqslant N-1$).
It then follows that
\begin{equation}
\mathbf{A}_1^{-1}\mathbf{DA}_1^{-T} = \mathbf{G} \mathbf{A}^{-1}
\end{equation}
The left hand side of this equation is a symmetric semi-positive
definite matrix,
because $\mathbf{A}_1$ is non-singular and
$\mathbf{D}$ is required to be symmetric semi-positive definite.
Note that on the right hand side $\mathbf{G}$ is diagonal
and $\mathbf{A}$ is a general SPD
matrix. Symmetry of $\mathbf{G}\mathbf{A}^{-1}$ requires
$\mathbf{G}\mathbf{A}^{-1}=\mathbf{A}^{-1}\mathbf{G}$, i.e.~$\mathbf{G}$
must commute with $\mathbf{A}^{-1}$; since $\mathbf{A}$ is a general SPD matrix,
this forces the diagonal matrix $\mathbf{G}$ to be a multiple of the identity.
We therefore conclude that
\begin{equation}
\mathbf{G} = e_0\mathbf{I}
\end{equation}
where $\mathbf{I}$ is the identity matrix and $e_0\geqslant 0$ is
a constant, the non-negativity following from the semi-positive
definiteness of $\mathbf{D}$. Consequently
\begin{equation}
\mathbf{D} = \mathbf{A}_1\mathbf{GA}^{-1}\mathbf{A}_1^T
= e_0\mathbf{A}_1\mathbf{A}^{-1}\mathbf{A}_1^T
= \frac{e_0}{m_0}\tilde{\mathbf{m}}.
\end{equation}
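As a quick algebraic sanity check of this conclusion, one can verify numerically
that any $\mathbf{D}$ of the above form indeed yields
$\mathbf{A}_1^{-1}\mathbf{DA}_1^{-T}\mathbf{A}=e_0\mathbf{I}$. In the sketch below
(Python/numpy) the matrices are random placeholders, not data from the paper:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))
A = M @ M.T + 3.0 * np.eye(3)        # generic SPD stand-in for [lambda_ij]
A1 = rng.standard_normal((3, 3))     # any non-singular A1
e0 = 0.5
D = e0 * A1 @ np.linalg.inv(A) @ A1.T     # D = (e0/m0) * m_tilde
lhs = np.linalg.inv(A1) @ D @ np.linalg.inv(A1.T) @ A
assert np.allclose(lhs, e0 * np.eye(3))   # recovers G = e0 * I
\end{verbatim}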
So the boundary condition \eqref{equ:bc_phi_A_3}
is transformed into
\begin{equation}
\mathbf{n}\cdot\nabla\left(-\nabla^2c_i + \sum_{j=1}^{N-1}\zeta_{ij}\frac{\partial H}{\partial c_j} \right)
= -\frac{e_0}{m_0}\left(-\nabla^2c_i
+ \sum_{k=1}^{N-1}\zeta_{ik}\frac{\partial H}{\partial c_k} \right),
\quad 1\leqslant i\leqslant N-1,
\label{equ:bc_phi_A_3_trans_1}
\end{equation}
and these conditions are reduction consistent.
Let us now consider the boundary conditions
\eqref{equ:bc_phi_B_1}--\eqref{equ:bc_phi_B_3}.
The condition \eqref{equ:bc_phi_B_1} implies
that
\begin{equation}
c_i(\mathbf{x},t) = c_{bi}(\mathbf{x}), \ \ 1\leqslant i\leqslant N-1; \ \
c_N(\mathbf{x},t) = 1-\sum_{i=1}^{N-1} c_{bi}(\mathbf{x}) = c_{bN}(\mathbf{x}),
\ \ \text{on} \ \partial\Omega.
\label{equ:bc_phi_B_1_trans}
\end{equation}
If a fluid $k$ is absent from the N-phase system throughout time,
then the reduction consistency requires that $c_{bk}(\mathbf{x})\equiv 0$.
Indeed, if $c_{bi}(\mathbf{x})$ is non-zero on the boundary
for any fluid $i$, that fluid cannot be absent from
the system.
In light of \eqref{equ:free_energy},
the boundary condition \eqref{equ:bc_phi_B_2}
is transformed into
\begin{equation}
\sum_{j=1}^{N-1}\lambda_{ij}\mathbf{n}\cdot\nabla c_j = 0, \ \
1\leqslant i\leqslant N-1 \ \
\Longrightarrow \ \
\mathbf{n}\cdot\nabla c_i = 0, \ \
1\leqslant i\leqslant N-1, \ \ \text{on} \ \partial\Omega
\label{equ:bc_phi_B_2_trans}
\end{equation}
by noting that
the matrix $\mathbf{A}$ formed by $\lambda_{ij}$
($1\leqslant i,j\leqslant N-1$) is non-singular.
The boundary condition \eqref{equ:bc_phi_B_2_trans} is reduction consistent.
Note that this boundary condition implies
$
\mathbf{n}\cdot\nabla c_N = -\sum_{i=1}^{N-1}\mathbf{n}\cdot\nabla c_i=0.
$
Let us suppose a fluid $k$ ($1\leqslant k\leqslant N$)
is absent from the N-phase system, i.e.~$c_k^{(N)}\equiv 0$.
Then $\mathbf{n}\cdot\nabla c_k^{(N)} = 0$ becomes an
identity. Based on the correspondence relation \eqref{equ:correspond_relation}, for $1\leqslant i\leqslant k-1$,
\begin{equation}
\mathbf{n}\cdot\nabla c_i^{(N)} =0 \ \
\Longrightarrow \ \
\mathbf{n}\cdot\nabla c_i^{(N-1)} =0;
\end{equation}
for $k\leqslant i\leqslant N-1$,
\begin{equation}
\mathbf{n}\cdot\nabla c_{i+1}^{(N)} = 0 \ \ \Longrightarrow \ \
\mathbf{n}\cdot\nabla c_{i}^{(N-1)} = 0.
\end{equation}
Therefore, if any one fluid is absent,
the boundary condition \eqref{equ:bc_phi_B_2_trans} (together with
$\mathbf{n}\cdot\nabla c_N=0$)
is reduced to
$
\mathbf{n}\cdot\nabla c_i^{(N-1)} = 0
$
for $1\leqslant i\leqslant N-1$.
The boundary condition \eqref{equ:bc_phi_B_3} can be
transformed into
\begin{equation}
\sum_{j=1}^{N-1}\lambda_{ij}\mathbf{n}\cdot\nabla c_j
= -\sum_{j=1}^{N-1} q_{ij}\frac{\partial c_j}{\partial t},
\quad \text{or} \quad
\mathbf{A}(\mathbf{n}\cdot\nabla\vec{c})
=-\mathbf{Q}\frac{\partial \vec{c}}{\partial t}
\label{equ:bc_phi_B_3_trans}
\end{equation}
where the matrix $\mathbf{Q} = [ q_{ij} ]_{(N-1)\times(N-1)}$
is required to be symmetric semi-positive definite.
Let $\mathbf{A}^{-1}\mathbf{Q} = [r_{ij} ]_{(N-1)\times(N-1)}$.
The above condition can be further transformed into
\begin{equation}
\mathbf{n}\cdot\nabla c_i
= - \sum_{j=1}^{N-1} r_{ij}\frac{\partial c_j}{\partial t},
\quad 1\leqslant i\leqslant N-1.
\label{equ:bc_phi_B_3_trans_1}
\end{equation}
Noting that both $\mathbf{n}\cdot\nabla c_i=0$ ($1\leqslant i\leqslant N$)
and $\frac{\partial c_i}{\partial t}=0$ ($1\leqslant i\leqslant N$)
are reduction consistent,
we impose the condition
that the matrix $\mathbf{A}^{-1}\mathbf{Q}$ be diagonal
in order to facilitate
the reduction consistency of
equation \eqref{equ:bc_phi_B_3_trans_1}, i.e.
\begin{equation}
\mathbf{A}^{-1}\mathbf{Q} = \text{diag}(\hat{r}_1,\dots,\hat{r}_{N-1})
= \mathbf{E},
\quad \text{or} \quad
\mathbf{Q} = \mathbf{AE}
\end{equation}
for some $\hat{r}_i$ ($1\leqslant i\leqslant N-1$).
Note that $\mathbf{Q}$ is required to be symmetric semi-positive definite,
$\mathbf{A}$ is a general SPD
matrix, and $\mathbf{E}$ is diagonal. Symmetry of $\mathbf{Q}=\mathbf{AE}$
requires $\mathbf{AE}=\mathbf{EA}$, and for a general SPD $\mathbf{A}$
this forces the diagonal matrix $\mathbf{E}$ to be a multiple of the identity.
We then conclude that
\begin{equation}
\mathbf{E} = d_0\mathbf{I}
\end{equation}
where $d_0\geqslant 0$ is a constant, the non-negativity following from the semi-positive definiteness of $\mathbf{Q}$.
Therefore, the boundary condition \eqref{equ:bc_phi_B_3}
is reduced to
\begin{equation}
\mathbf{n}\cdot\nabla c_i = -d_0 \frac{\partial c_i}{\partial t},
\quad 1\leqslant i\leqslant N-1, \quad
\text{on} \ \partial\Omega.
\label{equ:bc_phi_B_3_trans_2}
\end{equation}
This implies that
\begin{equation}
\mathbf{n}\cdot\nabla c_N = -\sum_{i=1}^{N-1}\mathbf{n}\cdot\nabla c_i
= d_0\sum_{i=1}^{N-1}\frac{\partial c_i}{\partial t}
= -d_0\frac{\partial c_N}{\partial t}, \quad \text{on} \ \partial\Omega.
\label{equ:bc_phi_B_3_trans_2A}
\end{equation}
The condition \eqref{equ:bc_phi_B_3_trans_2}, together
with \eqref{equ:bc_phi_B_3_trans_2A}, is reduction consistent.
To demonstrate this point, let us assume that
fluid $k$ ($1\leqslant k\leqslant N$) is absent from
the system, i.e.~$c_k^{(N)}\equiv 0$.
Then for $1\leqslant i\leqslant k-1$,
\begin{equation}
\mathbf{n}\cdot\nabla c_i^{(N)} + d_0\frac{\partial c_i^{(N)}}{\partial t} = 0 \ \ \Longrightarrow \ \
\mathbf{n}\cdot\nabla c_i^{(N-1)} + d_0\frac{\partial c_i^{(N-1)}}{\partial t} = 0
\end{equation}
where we have used the correspondence relation \eqref{equ:correspond_relation}.
For $k\leqslant i\leqslant N-1$,
\begin{equation}
\mathbf{n}\cdot\nabla c_{i+1}^{(N)} + d_0\frac{\partial c_{i+1}^{(N)}}{\partial t} = 0 \ \ \Longrightarrow \ \
\mathbf{n}\cdot\nabla c_i^{(N-1)} + d_0\frac{\partial c_i^{(N-1)}}{\partial t} = 0
\end{equation}
where the correspondence relation \eqref{equ:correspond_relation}
is again used.
One also notes that the condition
$\mathbf{n}\cdot\nabla c_k^{(N)} + d_0\frac{\partial c_k^{(N)}}{\partial t}=0$ becomes
an identity.
\subsubsection{Outflow and Inflow Boundary Conditions}
The discussions above concern the energy stability and
reduction consistency properties of
the N-phase system in general, and the implications of these properties
for the boundary conditions.
The resultant boundary conditions are applicable to any type of boundary.
We next focus on the outflow and inflow boundaries, and use
these results to formulate specific outflow and inflow boundary conditions.
With $\lambda_{ij}$ given by \eqref{equ:lambda_ij_expr},
$\tilde{m}_{ij}$ given by \eqref{equ:mij_expr}
and the free energy density given by \eqref{equ:free_energy},
the governing equations \eqref{equ:nse_original} and
\eqref{equ:CH_original} reduce to the following,
with the volume fractions $c_i$ ($1\leqslant i\leqslant N-1$)
as the order parameters,
\begin{equation}
\rho\left(
\frac{\partial\mathbf{u}}{\partial t}
+ \mathbf{u}\cdot\nabla\mathbf{u}
\right)
+ \tilde{\mathbf{J}}\cdot\nabla\mathbf{u}
=
-\nabla p
+ \nabla\cdot\left[
\mu \mathbf{D}(\mathbf{u})
\right]
- \sum_{i,j=1}^{N-1} \nabla\cdot\left(\lambda_{ij}
\nabla c_i \otimes \nabla c_j
\right)
+ \mathbf{f}(\mathbf{x},t),
\label{equ:nse}
\end{equation}
\begin{equation}
\frac{\partial c_i}{\partial t}
+ \mathbf{u}\cdot\nabla c_i = m_0\nabla^2\left(
-\nabla^2 c_i + \sum_{j=1}^{N-1}\zeta_{ij}\frac{\partial H}{\partial c_j}
\right) + g_i(\mathbf{x},t),
\quad 1\leqslant i\leqslant N-1
\label{equ:CH}
\end{equation}
where we have added an external body
force $\mathbf{f}$ to the momentum equation,
and a source term $g_i$ to each of the $N-1$
phase field equations.
$g_i$ ($1\leqslant i\leqslant N-1$) are
for the purpose of numerical testing only,
and will be set to $g_i=0$ in actual simulations.
$\tilde{\mathbf{J}}$ is given by (simplified from equation \eqref{equ:J_expr})
\begin{equation}
\tilde{\mathbf{J}} = -m_0\sum_{i=1}^{N-1}\left(
\tilde{\rho}_i - \tilde{\rho}_N
\right)\nabla\left(
-\nabla^2 c_i + \sum_{j=1}^{N-1}\zeta_{ij}\frac{\partial H}{\partial c_j}
\right).
\label{equ:J_expr_1}
\end{equation}
We assume that the domain boundary consists of three
mutually non-overlapping types:
$\partial\Omega = \partial\Omega_i \cup \partial\Omega_w \cup \partial\Omega_o$, where
\begin{itemize}
\item
$\partial\Omega_i$ is the inflow boundary, on which the velocity distribution
and the fluid-material distributions are known.
\item
$\partial\Omega_w$ is the wall boundary with certain
wetting properties, on which the velocity distribution (e.g.~zero velocity)
and the contact angles are known.
\item
$\partial\Omega_o$ is the outflow (or open) boundary, on which
none of the flow variables (velocity, pressure, phase field variables) is
known.
\end{itemize}
Since the phase field equations \eqref{equ:CH} are of fourth spatial order,
two independent boundary conditions will be needed on
each type of boundary for the phase field
variables $c_i$.
On the outflow/open boundary $\partial\Omega_o$
we propose the boundary conditions \eqref{equ:bc_phi_A_2_trans}
and \eqref{equ:bc_phi_B_3_trans_2} for the phase field equations, i.e.
\begin{subequations}
\begin{equation}
\mathbf{n}\cdot\nabla\left(-\nabla^2 c_i + \sum_{j=1}^{N-1}\zeta_{ij}\frac{\partial H}{\partial c_j}\right) = 0,
\ \ 1\leqslant i\leqslant N-1, \ \ \text{on} \ \partial\Omega_o,
\label{equ:obc_phi_1}
\end{equation}
\begin{equation}
\mathbf{n}\cdot\nabla c_i = -d_0 \frac{\partial c_i}{\partial t},
\quad 1\leqslant i\leqslant N-1,
\quad \text{on} \ \partial\Omega_o.
\label{equ:obc_phi_2}
\end{equation}
\end{subequations}
For the momentum equation we propose
the boundary condition \eqref{equ:bc_vel_3} on $\partial\Omega_o$.
Note that the combination of equations \eqref{equ:J_expr_1} and \eqref{equ:obc_phi_1}
leads to $\mathbf{n}\cdot\tilde{\mathbf{J}} = 0$ on $\partial\Omega_o$.
We will consider the following choice for $C_1(\mathbf{n},\mathbf{u})$ and
$C_2(\mathbf{n},\mathbf{u})$ in \eqref{equ:bc_vel_3} in
the present work,
analogous to the outflow condition for single-phase Navier-Stokes
equations in \cite{DongS2015},
\begin{equation*}
C_1(\mathbf{n},\mathbf{u}) = -\frac{\alpha_1}{2}\mathbf{n}\cdot\mathbf{u}, \quad
C_2(\mathbf{n},\mathbf{u}) = \frac{\alpha_2}{2} \mathbf{u}\cdot\mathbf{u}
\end{equation*}
where $\alpha_1\geqslant 0$ and $\alpha_2\geqslant 0$ are constants.
Therefore, the boundary condition \eqref{equ:bc_vel_3} is reduced to
\begin{multline}
-p\mathbf{n} + \mu\mathbf{n}\cdot\mathbf{D}(\mathbf{u})
- \left[\sum_{i,j=1}^{N-1}\frac{\lambda_{ij}}{2}\nabla c_i\cdot\nabla c_j + H(\vec{c}) \right]\mathbf{n} \\
- \rho\left[ \frac{1}{2}
(\theta + \alpha_2)(\mathbf{u}\cdot\mathbf{u})\mathbf{n}
+ \frac{1}{2}(1-\theta+\alpha_1)(\mathbf{n}\cdot\mathbf{u})\mathbf{u}
\right]\Theta_0(\mathbf{n},\mathbf{u}) = 0, \quad
\text{on} \ \partial\Omega_o
\label{equ:obc_vel}
\end{multline}
where $0\leqslant \theta\leqslant 1$, $\alpha_1\geqslant 0$ and $\alpha_2\geqslant 0$
are constant parameters.
The open boundary conditions \eqref{equ:obc_phi_1}--\eqref{equ:obc_vel}
are reduction consistent, and they
ensure the energy dissipativity on the open/outflow boundary
$\partial\Omega_o$ even when strong vortices or backflows occur on $\partial\Omega_o$.
Equation \eqref{equ:obc_vel} represents a family of boundary conditions
for $\partial\Omega_o$ with $(\theta,\alpha_1,\alpha_2)$ as
the parameters.
The term involving $\Theta_0$ in \eqref{equ:obc_vel}
is critical to the energy stability when strong vortices or backflows occur
at the open boundary. This term is similar in form to that of
the open boundary conditions developed in \cite{DongS2015} for single-phase
flows. It is observed from single-phase flow simulations of \cite{DongS2015} that,
among the family represented by $(\theta,\alpha_1,\alpha_2)$,
the condition corresponding to $(\theta,\alpha_1,\alpha_2)=(1,1,0)$
produces overall the best results in terms of the smoothness
of the velocity field at the outflow boundary and the distortion of
flow structures when they exit the domain.
We specifically list below this particular
open boundary condition, corresponding to $(\theta,\alpha_1,\alpha_2)=(1,1,0)$
in \eqref{equ:obc_vel},
\begin{multline}
-p\mathbf{n} + \mu\mathbf{n}\cdot\mathbf{D}(\mathbf{u})
- \left[\sum_{i,j=1}^{N-1}\frac{\lambda_{ij}}{2}\nabla c_i\cdot\nabla c_j + H(\vec{c}) \right]\mathbf{n} \\
- \frac{1}{2}\rho\left[
(\mathbf{u}\cdot\mathbf{u})\mathbf{n}
+ (\mathbf{n}\cdot\mathbf{u})\mathbf{u}
\right]\Theta_0(\mathbf{n},\mathbf{u}) = 0, \quad
\text{on} \ \partial\Omega_o.
\label{equ:obc_vel_best}
\end{multline}
The majority of numerical simulations presented in
Section \ref{sec:tests} will be performed with
this boundary condition for $\partial\Omega_o$.
Let us make a comment on the boundary condition \eqref{equ:obc_phi_2}.
This condition is analogous to
a convective type condition on the outflow boundary
if $d_0>0$,
\begin{equation}
\frac{\partial c_i}{\partial t} + U_c \mathbf{n}\cdot\nabla c_i = 0,
\quad 1\leqslant i\leqslant N-1, \quad \text{on} \ \partial\Omega_o,
\quad \text{where} \ U_c = \frac{1}{d_0}.
\label{equ:obc_convective}
\end{equation}
Therefore, $\frac{1}{d_0}$ plays the role of a convection velocity
at the open/outflow boundary. In practical simulations, one could first
estimate a convection velocity scale $U_c>0$ at the outflow
boundary based on physical considerations (e.g.~mass conservation)
or by preliminary simulations using e.g.~$d_0=0$.
Then one can determine $d_0$ based on $d_0 = \frac{1}{U_c}$.
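For instance, if mass conservation in a channel-type geometry suggests a convection
velocity scale $U_c=2$ at the outflow (in the chosen non-dimensional units),
one would set $d_0=\frac{1}{U_c}=0.5$; the numbers here are purely illustrative.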
On the inflow boundary $\partial\Omega_i$
the material distribution is known, implying a Dirichlet type
condition
\begin{equation}
c_i = c_{bi}(\mathbf{x},t), \quad 1\leqslant i\leqslant N-1,
\quad \text{on} \ \partial\Omega_i
\label{equ:ibc_phi_1}
\end{equation}
where $c_{bi}$ is the boundary volume-fraction distribution.
For the other boundary condition on $\partial\Omega_i$ for
the phase field equations,
we propose the condition \eqref{equ:bc_phi_A_1_trans}, i.e.
\begin{equation}
-\nabla^2 c_i + \sum_{j=1}^{N-1}\zeta_{ij}\frac{\partial H}{\partial c_j} = 0,
\ \ 1\leqslant i\leqslant N-1, \ \ \text{on} \ \partial\Omega_i.
\label{equ:ibc_phi_2}
\end{equation}
When a solid-wall boundary $\partial\Omega_w$ is present,
in the current paper we will assume that
the wall is of neutral wettability to all fluids,
that is, the contact angles for all fluid interfaces
are $90^{\circ}$. This corresponds to the condition \eqref{equ:bc_phi_B_2_trans},
namely,
\begin{equation}
\mathbf{n}\cdot\nabla c_i = 0,
\quad 1\leqslant i\leqslant N-1, \quad \text{on} \ \partial\Omega_w.
\label{equ:wbc_phi_1}
\end{equation}
For N-phase flows bounded by solid walls
with more general wetting properties we refer the
reader to \cite{Dong2017} for a method
to deal with general contact angles.
We employ the condition \eqref{equ:bc_phi_A_2_trans} as
the other boundary condition for the phase field functions
on $\partial\Omega_w$, i.e.
\begin{equation}
\mathbf{n}\cdot\nabla\left(-\nabla^2 c_i + \sum_{j=1}^{N-1}\zeta_{ij}\frac{\partial H}{\partial c_j}\right) = 0,
\ \ 1\leqslant i\leqslant N-1, \ \ \text{on} \ \partial\Omega_w.
\label{equ:wbc_phi_2}
\end{equation}
In addition, the velocity distributions on the inflow and wall boundaries
are assumed to be known, leading to a Dirichlet type condition
\begin{equation}
\mathbf{u} = \mathbf{w}(\mathbf{x},t), \quad
\text{on} \ \partial\Omega_i\cup\partial\Omega_w,
\label{equ:dbc_vel}
\end{equation}
where $\mathbf{w}$ is the boundary velocity.
Finally, the initial distributions for the velocity ($\mathbf{u}^{in}$) and
the phase field functions ($c_i^{in}$) are assumed to be known,
\begin{subequations}
\begin{align}
&
\mathbf{u}(\mathbf{x},0) = \mathbf{u}^{in}(\mathbf{x}), \label{equ:ic_vel} \\
&
c_i(\mathbf{x},0) = c_i^{in}(\mathbf{x}), \quad 1\leqslant i\leqslant N-1.
\label{equ:ic_phi}
\end{align}
\end{subequations}
\subsection{Algorithm Formulation}
\label{sec:algorithm}
The equations \eqref{equ:nse}--\eqref{equ:CH} and \eqref{equ:continuity_original},
supplemented by the boundary conditions
\eqref{equ:dbc_vel}, \eqref{equ:ibc_phi_1}--\eqref{equ:ibc_phi_2},
\eqref{equ:wbc_phi_1}, \eqref{equ:wbc_phi_2}, \eqref{equ:obc_vel},
\eqref{equ:obc_phi_1}--\eqref{equ:obc_phi_2},
together with the initial conditions
\eqref{equ:ic_vel}--\eqref{equ:ic_phi},
constitute the system to be solved in numerical
simulations.
In the current paper, we employ the same potential
energy density function $H(\vec{c})$ as in \cite{Dong2017}
(originally suggested by \cite{BoyerM2014}),
given by
\begin{equation}
H(\vec{c}) = \frac{3}{\sqrt{2}\eta} \sum_{i,j=1}^N
\frac{\sigma_{ij}}{2}\left[
f(c_i) + f(c_j) - f(c_i+c_j)
\right], \quad
\text{with} \ f(c) = c^2(1-c)^2
\label{equ:potential_energy}
\end{equation}
where $\eta$ is the characteristic interfacial thickness
of the diffuse interfaces.
As pointed out in \cite{Dong2017}, this function is reduction
consistent, but satisfies only a subset of the (DC-4) property.
It ensures the reduction consistency between N-phase
systems and M-phase systems for $M=2$.
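For concreteness, a minimal sketch of evaluating this potential energy density is
given below (Python/numpy; the function names and inputs are our notation). Note
that the derivatives $h_i=\partial H/\partial c_i$ used later must additionally
account for $c_N=1-\sum_{i=1}^{N-1}c_i$ through the chain rule:
\begin{verbatim}
import numpy as np

def f(c):
    # double-well function f(c) = c^2 (1-c)^2
    return c**2 * (1.0 - c)**2

def potential_H(c, sigma, eta):
    # c: volume fractions of all N fluids, with c[-1] = 1 - sum(c[:-1]);
    # sigma: (N, N) symmetric surface-tension matrix with zero diagonal.
    N = len(c)
    H = 0.0
    for i in range(N):
        for j in range(N):
            H += 0.5 * sigma[i, j] * (f(c[i]) + f(c[j]) - f(c[i] + c[j]))
    return 3.0 / (np.sqrt(2.0) * eta) * H
\end{verbatim}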
To enable numerical tests with manufactured analytic solutions,
we will modify several boundary conditions by adding
certain prescribed source terms. Define $h_i={\partial H}/{\partial c_i},$ $1\leqslant i \leqslant N-1.$
We modify \eqref{equ:ibc_phi_2} as
\begin{equation}
-\nabla^2 c_i + \sum_{j=1}^{N-1}\zeta_{ij}h_j = g_{ai}(\mathbf{x},t),
\ \ 1\leqslant i\leqslant N-1, \ \ \text{on} \ \partial\Omega_i,
\label{equ:ibc_phi_2_mod}
\end{equation}
where $g_{ai}$ ($1\leqslant i\leqslant N-1$) are prescribed functions.
We combine \eqref{equ:obc_phi_1} and \eqref{equ:wbc_phi_2}
and re-write them as
\begin{equation}
\mathbf{n}\cdot\nabla\Big(-\nabla^2 c_i + \sum_{j=1}^{N-1}\zeta_{ij}h_j\Big) = g_{bi}(\mathbf{x},t),
\ \ 1\leqslant i\leqslant N-1, \ \ \text{on} \ \partial\Omega_w\cup\partial\Omega_o
\label{equ:bc_chempot}
\end{equation}
where $g_{bi}$ ($1\leqslant i\leqslant N-1$) are prescribed functions.
We modify \eqref{equ:wbc_phi_1} as
\begin{equation}
\mathbf{n}\cdot\nabla c_i = g_{ci}(\mathbf{x},t),
\quad 1\leqslant i\leqslant N-1, \quad \text{on} \ \partial\Omega_w
\label{equ:wbc_phi_1_mod}
\end{equation}
where $g_{ci}$ ($1\leqslant i\leqslant N-1$) are prescribed functions.
The boundary condition \eqref{equ:obc_phi_2} is modified
as
\begin{equation}
\mathbf{n}\cdot\nabla c_i = -d_0 \frac{\partial c_i}{\partial t} + g_{ei},
\quad 1\leqslant i\leqslant N-1,
\quad \text{on} \ \partial\Omega_o
\label{equ:obc_phi_2_mod}
\end{equation}
where $g_{ei}$ ($1\leqslant i\leqslant N-1$) are prescribed functions.
The prescribed source terms $g_{ai}$, $g_{bi}$, $g_{ci}$ and
$g_{ei}$ in the above equations \eqref{equ:ibc_phi_2_mod}--\eqref{equ:obc_phi_2_mod}
are for numerical testing only and will be set to zero
in actual simulations.
We re-write the momentum equation \eqref{equ:nse} as
\begin{equation}
\frac{\partial\mathbf{u}}{\partial t}
+ \mathbf{u}\cdot\nabla\mathbf{u}
+ \frac{1}{\rho}\tilde{\mathbf{J}}\cdot\nabla\mathbf{u}
=
-\frac{1}{\rho}\nabla P
+ \frac{\mu}{\rho}\nabla^2\mathbf{u}
+ \frac{1}{\rho}\nabla\mu\cdot
\mathbf{D}(\mathbf{u})
- \frac{1}{\rho}\sum_{i,j=1}^{N-1} \lambda_{ij}(\nabla^2 c_j)
\nabla c_i
+ \frac{\mathbf{f}}{\rho}
\label{equ:nse_trans}
\end{equation}
where $P = p+\frac{1}{2}\sum_{i,j=1}^{N-1}\lambda_{ij}\nabla c_i\cdot\nabla c_j$,
which will also be loosely referred to as the pressure.
The boundary condition \eqref{equ:obc_vel} is re-written as
\begin{equation}
-P\mathbf{n} + \mu\mathbf{n}\cdot\mathbf{D}(\mathbf{u})
- H(\vec{c}) \mathbf{n} - \mathbf{E}(\mathbf{n},\mathbf{u},\rho)
= \mathbf{f}_{b}(\mathbf{x},t), \quad
\text{on} \ \partial\Omega_o
\label{equ:obc_vel_mod}
\end{equation}
where
$
\mathbf{E}(\mathbf{n},\mathbf{u},\rho)
= \frac{1}{2}\rho\left[
(\theta + \alpha_2)(\mathbf{u}\cdot\mathbf{u})\mathbf{n}
+ (1-\theta+\alpha_1)(\mathbf{n}\cdot\mathbf{u})\mathbf{u}
\right]\Theta_0(\mathbf{n},\mathbf{u}),
$
and $\mathbf{f}_b$ is a prescribed function for numerical testing only
and will be set to $\mathbf{f}_b=0$ in actual simulations.
We next present an algorithm for solving
the equations consisting of \eqref{equ:nse_trans},
\eqref{equ:continuity_original} and \eqref{equ:CH},
the boundary conditions consisting of \eqref{equ:dbc_vel},
\eqref{equ:obc_vel_mod}, \eqref{equ:ibc_phi_1},
\eqref{equ:ibc_phi_2_mod}, \eqref{equ:wbc_phi_1_mod},
\eqref{equ:bc_chempot} and \eqref{equ:obc_phi_2_mod},
together with the initial conditions
\eqref{equ:ic_vel} and \eqref{equ:ic_phi}.
The treatment of the governing equations follows a scheme similar to that
of \cite{Dong2017}. Our emphasis below is on the numerical treatment and implementation of
the various outflow and inflow boundary conditions.
Let $J$ ($J=1$ or $2$) denote the temporal order of accuracy,
$\Delta t$ denote the time step size,
and $n$ ($n\geqslant 0$) denote the time step index.
Let $\chi$ denote a generic variable. Then
$\chi^n$ represents the variable at time step $n$ in
the following, and we define
\begin{equation}
\chi^{*,n+1} = \left\{
\begin{array}{ll}
\chi^n, & J=1, \\
2\chi^n - \chi^{n-1}, & J=2;
\end{array}
\right.
\ \
\hat{\chi} = \left\{
\begin{array}{ll}
\chi^n, & J=1, \\
2\chi^n-\frac{1}{2}\chi^{n-1}, & J=2;
\end{array}
\right.
\ \
\gamma_0 = \left\{
\begin{array}{ll}
1, & J=1, \\
3/2, & J=2.
\end{array}
\right.
\label{equ:param_def}
\end{equation}
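In implementation terms, these definitions amount to the following small helper
(a Python sketch; the function and variable names are ours):
\begin{verbatim}
def extrapolants(J, chi_n, chi_nm1=None):
    # Returns (chi_star, chi_hat, gamma0) per equation (param_def).
    if J == 1:
        return chi_n, chi_n, 1.0
    chi_star = 2.0 * chi_n - chi_nm1        # J-th order explicit extrapolation
    chi_hat = 2.0 * chi_n - 0.5 * chi_nm1   # BDF2 history term
    return chi_star, chi_hat, 1.5
\end{verbatim}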
Given $(\mathbf{u}^n, P^n, c_i^n)$, we compute
$c_i^{n+1}$, $P^{n+1}$ and $\mathbf{u}^{n+1}$
successively in a de-coupled fashion as follows. \\
\underline{For $c_i^{n+1}$:}
\begin{subequations}
\begin{multline}
\frac{\gamma_0 c_i^{n+1}-\hat{c}_i}{\Delta t}
+ \mathbf{u}^{*,n+1}\cdot\nabla c_i^{*,n+1} \\
= m_0\nabla^2\left[
-\nabla^2 c_i^{n+1}
+ \frac{S}{\eta^2}\left(c_i^{n+1} - c_i^{*,n+1} \right)
+ \sum_{j=1}^{N-1} \zeta_{ij} h_j(\vec{c}^{*,n+1})
\right] + g_i^{n+1},
\quad 1\leqslant i\leqslant N-1
\label{equ:phi_1}
\end{multline}
%
\begin{equation}
-\nabla^2 c_i^{n+1} + \sum_{j=1}^{N-1}\zeta_{ij}h_j(\vec{c}^{n+1})=g_{ai}^{n+1},
\quad 1\leqslant i\leqslant N-1, \ \text{on} \ \partial\Omega_i
\label{equ:phi_2}
\end{equation}
%
\begin{equation}
c_i^{n+1} = c_{bi}^{n+1}, \quad 1\leqslant i\leqslant N-1,
\ \text{on} \ \partial\Omega_i
\label{equ:phi_3}
\end{equation}
%
\begin{multline}
\mathbf{n}\cdot\nabla\left[
-\nabla^2c_i^{n+1} + \frac{S}{\eta^2}(c_i^{n+1}-c_i^{*,n+1})
+ \sum_{j=1}^{N-1} \zeta_{ij} h_j(\vec{c}^{*,n+1})
\right] = g_{bi}^{n+1}, \\
\ 1\leqslant i\leqslant N-1,
\ \text{on} \ \partial\Omega_w\cup\partial\Omega_o
\label{equ:phi_4}
\end{multline}
%
\begin{equation}
\mathbf{n}\cdot\nabla c_i^{n+1} = g_{ci}^{n+1},
\quad 1\leqslant i\leqslant N-1,
\ \text{on} \ \partial\Omega_w
\label{equ:phi_5}
\end{equation}
%
\begin{equation}
\mathbf{n}\cdot\nabla c_i^{n+1} = -d_0\left.\frac{\partial c_i}{\partial t}\right|^{n+1}_{exp} + g_{ei}^{n+1},
\quad 1\leqslant i\leqslant N-1, \ \text{on} \ \partial\Omega_o
\label{equ:phi_6}
\end{equation}
%
\begin{equation}
\mathbf{n}\cdot\nabla c_i^{n+1} = -d_0\frac{\gamma_0 c_i^{n+1}-\hat{c}_i}{\Delta t}
+ g_{ei}^{n+1}, \quad
1\leqslant i\leqslant N-1, \ \text{on} \ \partial\Omega_o
\label{equ:phi_7}
\end{equation}
%
\end{subequations}
\underline{For $P^{n+1}$:}
\begin{subequations}
%
\begin{equation}
\begin{split}
\frac{\gamma_0\tilde{\mathbf{u}}^{n+1}-\hat{\mathbf{u}}}{\Delta t}
&+ \mathbf{u}^{*,n+1}\cdot\nabla\mathbf{u}^{*,n+1}
+ \frac{1}{\rho^{n+1}}\tilde{\mathbf{J}}^{n+1}\cdot\nabla\mathbf{u}^{*,n+1}
+ \frac{1}{\rho_0}\nabla P^{n+1} = \\
&
\left(\frac{1}{\rho_0} - \frac{1}{\rho^{n+1}} \right)\nabla P^{*,n+1}
- \frac{\mu^{n+1}}{\rho^{n+1}}\nabla\times\nabla\times\mathbf{u}^{*,n+1}
+ \frac{1}{\rho^{n+1}}\nabla\mu^{n+1}\cdot\mathbf{D}(\mathbf{u}^{*,n+1}) \\
&
- \frac{1}{\rho^{n+1}}\sum_{i,j=1}^{N-1}\lambda_{ij}\left(\nabla^2 c_j^{n+1}\right)\nabla c_i^{n+1}
+ \frac{1}{\rho^{n+1}}\mathbf{f}^{n+1}
\end{split}
\label{equ:pressure_1}
\end{equation}
%
\begin{equation}
\nabla\cdot\tilde{\mathbf{u}}^{n+1} = 0
\label{equ:pressure_2}
\end{equation}
%
\begin{equation}
\mathbf{n}\cdot\tilde{\mathbf{u}}^{n+1} = \mathbf{n}\cdot\mathbf{w}^{n+1},
\quad \text{on} \ \partial\Omega_i\cup\partial\Omega_w
\label{equ:pressure_3}
\end{equation}
%
\begin{equation}
P^{n+1} = \mu^{n+1}\mathbf{n}\cdot\mathbf{D}(\mathbf{u}^{*,n+1})\cdot\mathbf{n}
-H(\vec{c}^{n+1}) - \mathbf{n}\cdot\mathbf{E}(\mathbf{n},\mathbf{u}^{*,n+1},\rho^{n+1})
-\mathbf{f}_b^{n+1}\cdot\mathbf{n},
\quad \text{on} \ \partial\Omega_o
\label{equ:pressure_4}
\end{equation}
%
\end{subequations}
\underline{For $\mathbf{u}^{n+1}$:}
\begin{subequations}
%
\begin{equation}
\frac{\gamma_0\mathbf{u}^{n+1}-\gamma_0\tilde{\mathbf{u}}^{n+1}}{\Delta t}
- \nu_m\nabla^2\mathbf{u}^{n+1}
= \nu_m \nabla\times\nabla\times \mathbf{u}^{*,n+1}
\label{equ:vel_1}
\end{equation}
%
\begin{equation}
\mathbf{u}^{n+1} = \mathbf{w}^{n+1},
\quad \text{on} \ \partial\Omega_i\cup\partial\Omega_w
\label{equ:vel_2}
\end{equation}
%
\begin{equation}
\begin{split}
\mathbf{n}\cdot\nabla\mathbf{u}^{n+1} =&
\left(1 - \frac{\mu^{n+1}}{\mu_0} \right)\mathbf{n}\cdot\mathbf{D}(\mathbf{u}^{*,n+1})
+ \frac{1}{\mu_0}\left[
P^{n+1}\mathbf{n} + H(\vec{c}^{n+1})\mathbf{n}
+ \mathbf{E}(\mathbf{n},\mathbf{u}^{*,n+1},\rho^{n+1}) \right. \\
&
\left.
+ \mathbf{f}_b^{n+1} - \mu_0(\nabla\cdot\mathbf{u}^{*,n+1})\mathbf{n}
\right]
-\mathbf{n}\cdot(\nabla\mathbf{u}^{*,n+1})^T,
\quad \text{on} \ \partial\Omega_o.
\end{split}
\label{equ:vel_3}
\end{equation}
%
\end{subequations}
In the above equations, $\tilde{\mathbf{u}}^{n+1}$ is an auxiliary
velocity approximating $\mathbf{u}^{n+1}$, and
$S$ is a chosen positive constant
that satisfies a condition to be specified later.
$\rho_0$ is a chosen constant that satisfies the condition
$0 < \rho_0 \leqslant \min(\tilde{\rho}_1,\dots,\tilde{\rho}_N)$.
$\nu_m$ is a chosen constant that is sufficiently large,
and we employ
$\nu_m\geqslant \max\left(\frac{\tilde{\mu}_1}{\tilde{\rho}_1},\dots,\frac{\tilde{\mu}_N}{\tilde{\rho}_N} \right)$
in the current paper.
$\mu_0$ is a chosen constant satisfying the condition
that $\mu_0=\tilde{\mu}_1$ if
$\tilde{\mu}_1=\tilde{\mu}_2=\dots=\tilde{\mu}_N$, and otherwise
$\mu_0>\min(\tilde{\mu}_1,\dots,\tilde{\mu}_N)$.
In \eqref{equ:phi_6} $\left.\frac{\partial c_i}{\partial t}\right|^{n+1}_{exp}$ is
an explicit approximation of the time derivative given by
\begin{equation}
\left.\frac{\partial c_i}{\partial t} \right|^{n+1}_{exp}=\left\{
\begin{array}{ll}
\frac{1}{\Delta t}(c_i^n-c_i^{n-1}), & J=1, \\
\frac{1}{\Delta t}(\frac{5}{2}c_i^n - 4c_i^{n-1}+\frac{3}{2}c_i^{n-2}), & J=2.
\end{array}
\right.
\end{equation}
For each $J$, this is the time derivative, evaluated at $t^{n+1}$, of the degree-$J$
polynomial extrapolation of $c_i$ from the $J+1$ preceding time steps, and it
approximates $\left.\frac{\partial c_i}{\partial t}\right|^{n+1}$ to $J$-th order accuracy.
Several comments on the above algorithm are in order at this point:
\begin{itemize}
\item
To enable the solution of the coupled phase field variables,
an extra term $\frac{S}{\eta^2}(c_i^{n+1}-c_i^{*,n+1})$ has been added to
the semi-discretized phase field equations \eqref{equ:phi_1}.
This term vanishes to $J$-th order accuracy in time,
since $c_i^{n+1}-c_i^{*,n+1}=O(\Delta t^{J})$, and hence does not degrade
the temporal accuracy of the scheme.
This term enables the reformulation of the $(N-1)$ semi-discretized
4th-order phase field equations into $2(N-1)$ de-coupled Helmholtz-type equations \cite{Dong2017}. This is an often-used strategy for two-phase flow
simulations (see e.g.~\cite{DongS2012,Dong2012}). It is crucial for spatial
discretizations with $C^0$ continuous spectral elements
employed in the current paper.
\item
In the discrete boundary condition \eqref{equ:phi_4} the same
extra zero term is added. This term is crucial, and without it
significant loss of mass for some fluid phases can be observed.
\item
The discrete conditions \eqref{equ:phi_6} and \eqref{equ:phi_7}
result from an explicit and an implicit treatment
of the inertial term $\frac{\partial c_i}{\partial t}$ in
the boundary condition \eqref{equ:obc_phi_2_mod}.
These two approximations will be employed
at different stages of the implementation
of the outflow condition, as will become clear
from later discussions.
\item
The equations \eqref{equ:pressure_1} and \eqref{equ:vel_1}
constitute a rotational velocity correction scheme
for the momentum equation \eqref{equ:nse_trans}.
The scheme adopts a reformulation of the
pressure term and the viscous term in the same fashion
as in \cite{DongS2012},
\begin{equation*}
\frac{1}{\rho}\nabla P \approx \frac{1}{\rho_0}\nabla P +
\left(\frac{1}{\rho} - \frac{1}{\rho_0} \right)\nabla P^*,
\quad
\frac{\mu}{\rho}\nabla^2\mathbf{u} \approx \nu_m\nabla^2\mathbf{u}
+ \left(\nu_m - \frac{\mu}{\rho} \right)\nabla\times\nabla\times\mathbf{u}^*
\end{equation*}
where $P^*$ and $\mathbf{u}^*$ are explicit approximations of
$P$ and $\mathbf{u}$ respectively.
These reformulations lead to time-independent coefficient matrices
for the pressure and velocity linear algebraic systems
after discretization, which is crucial for numerical efficiency.
\item
Equation \eqref{equ:pressure_4} is a discrete Dirichlet type
condition for the pressure on the outflow boundary $\partial\Omega_o$.
It results from taking the inner product between the boundary condition
\eqref{equ:obc_vel_mod} and the unit vector $\mathbf{n}$ normal
to $\partial\Omega_o$ and treating the velocity in
an explicit fashion.
\item
The discrete condition \eqref{equ:vel_3} is essentially a combination of
the following two approximations:
\begin{equation*}
\left\{
\begin{split}
&
\mathbf{n}\cdot\nabla\mathbf{u}^{n+1} = \mathbf{n}\cdot\mathbf{D}(\mathbf{u}^{n+1})
- \mathbf{n}\cdot(\nabla\mathbf{u}^{*,n+1})^T \\
&
\mu_0\mathbf{n}\cdot\mathbf{D}(\mathbf{u}^{n+1}) =
(\mu_0-\mu)\mathbf{n}\cdot\mathbf{D}(\mathbf{u}^{*,n+1}) \\
&
\qquad\qquad\qquad\quad
+ \left[
P^{n+1}\mathbf{n} + H(\vec{c}^{n+1})\mathbf{n}
+ \mathbf{E}(\mathbf{n},\mathbf{u}^{*,n+1},\rho^{n+1})
+ \mathbf{f}_b^{n+1}
\right], \quad \text{on} \ \partial\Omega_o.
\end{split}
\right.
\end{equation*}
The second approximation above stems from the
outflow boundary condition \eqref{equ:obc_vel_mod},
but with the terms involving $\mu_0$ incorporated.
The construction with the $\mu_0$ terms was first
introduced in \cite{Dong2014obc} for two-phase outflows.
This construction is crucial for the stability of the scheme when large
viscosity ratios among the fluids occur at the outflow/open boundary.
Note also that an extra term involving $(\nabla\cdot\mathbf{u})\mathbf{n}$
is incorporated into the discrete condition \eqref{equ:vel_3}.
\end{itemize}
\subsection{Implementation with Spectral Elements}
\label{sect: SEM}
We next implement the algorithm given by \eqref{equ:phi_1}--\eqref{equ:vel_3}
using $C^0$ continuous high-order spectral
elements~\cite{SherwinK1995,KarniadakisS2005,ZhengD2011}.
We first derive the weak forms for different flow variables
in the spatially continuous sense. Then we will specify the approximation
spaces and provide the fully discrete formulation.
Thanks to the term involving $\frac{S}{\eta^2}$,
each of the ($N-1$)
equations in \eqref{equ:phi_1} can be equivalently reformulated into
two de-coupled Helmholtz-type equations (see \cite{Dong2017} for details):
\begin{subequations}
\begin{align}
&
\nabla^2\psi_i^{n+1} - \left(\alpha + \frac{S}{\eta^2} \right)\psi_i^{n+1} = Q_i + \nabla^2 R_i,
\quad 1\leqslant i\leqslant N-1, \label{equ:CH_psi} \\
&
\nabla^2 c_i^{n+1} + \alpha c_i^{n+1} = \psi_i^{n+1},
\quad 1\leqslant i\leqslant N-1, \label{equ:CH_phi}
\end{align}
\end{subequations}
where $\psi_i^{n+1}$ is an auxiliary variable defined by \eqref{equ:CH_phi},
and
\begin{equation}
\left\{
\begin{split}
&
Q_i = \frac{1}{m_0}\left(
g_i^{n+1} - \mathbf{u}^{*,n+1}\cdot\nabla c_i^{*,n+1} + \frac{\hat{c}_i}{\Delta t}
\right), \quad 1\leqslant i\leqslant N-1, \\
&
R_i = -\frac{S}{\eta^2}c_i^{*,n+1} + \sum_{j=1}^{N-1}\zeta_{ij}h_j(\vec{c}^{*,n+1}),
\quad 1\leqslant i\leqslant N-1, \\
&
\alpha = \frac{S}{2\eta^2}\left[
-1 + \sqrt{1 - \frac{4\gamma_0}{m_0\Delta t}\left(\frac{\eta^2}{S} \right)^2 }
\right].
\end{split}
\right.
\end{equation}
The reformulation also results in the following condition that
the chosen constant $S$ must satisfy,
$S \geqslant \eta^2\sqrt{\frac{4\gamma_0}{m_0\Delta t}}$.
It is noted that this condition implies $\alpha <0$
and $\alpha+\frac{S}{\eta^2}>0$ in \eqref{equ:CH_psi} and \eqref{equ:CH_phi}.
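As a sanity check on these formulas, the following small sketch (Python; the
parameter values are illustrative only) computes $\alpha$ for an admissible $S$
and confirms that $\alpha<0$ and $\alpha+\frac{S}{\eta^2}>0$:
\begin{verbatim}
import numpy as np

def helmholtz_alpha(S, eta, m0, dt, gamma0):
    disc = 1.0 - (4.0 * gamma0 / (m0 * dt)) * (eta**2 / S)**2
    assert disc >= 0.0, "S violates S >= eta^2*sqrt(4*gamma0/(m0*dt))"
    return S / (2.0 * eta**2) * (-1.0 + np.sqrt(disc))

eta, m0, dt, gamma0 = 0.01, 1.0e-4, 1.0e-3, 1.5       # illustrative values
S = 1.1 * eta**2 * np.sqrt(4.0 * gamma0 / (m0 * dt))  # admissible S
alpha = helmholtz_alpha(S, eta, m0, dt, gamma0)
assert alpha < 0.0 and alpha + S / eta**2 > 0.0
\end{verbatim}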
Let $\varphi(\mathbf{x})$ denote an arbitrary function on $\Omega$ with
sufficient regularity and satisfying the condition
\begin{equation}
\varphi(\mathbf{x}) = 0, \quad \text{on} \ \partial\Omega_i.
\label{equ:cond_0_phi}
\end{equation}
Taking the $L^2$ inner product between $\varphi$ and equation \eqref{equ:CH_psi}
leads to
\begin{equation}
\begin{split}
\int_{\Omega} \nabla\psi_i^{n+1}\cdot\nabla\varphi
&+ \left(\alpha + \frac{S}{\eta^2} \right)\int_{\Omega} \psi_i^{n+1}\varphi
= -\int_{\Omega} Q_i\varphi + \int_{\Omega}\nabla R_i\cdot\nabla\varphi \\
&+ \int_{\partial\Omega_w}\left(\mathbf{n}\cdot\nabla\psi_i^{n+1}
- \mathbf{n}\cdot\nabla R_i \right)\varphi
+ \int_{\partial\Omega_o}\left(\mathbf{n}\cdot\nabla\psi_i^{n+1}
- \mathbf{n}\cdot\nabla R_i \right)\varphi,
\quad \forall \varphi,
\end{split}
\label{equ:psi_weak_1}
\end{equation}
where we have used integration by parts,
the divergence theorem
and the condition \eqref{equ:cond_0_phi}.
In light of \eqref{equ:CH_phi} and \eqref{equ:phi_5},
the condition \eqref{equ:phi_4} can be transformed into
\begin{equation}
\mathbf{n}\cdot\nabla\psi_i^{n+1} - \mathbf{n}\cdot\nabla R_i
= \left(\alpha + \frac{S}{\eta^2} \right)g_{ci}^{n+1} - g_{bi}^{n+1},
\quad \text{on} \ \partial\Omega_w.
\end{equation}
Similarly, for $\partial\Omega_o$ the condition
\eqref{equ:phi_4} can be transformed into
\begin{equation}
\mathbf{n}\cdot\nabla\psi_i^{n+1} - \mathbf{n}\cdot\nabla R_i
= \left(\alpha + \frac{S}{\eta^2} \right)\left(
-d_0\left.\frac{\partial c_i}{\partial t} \right|^{n+1}_{exp} + g_{ei}^{n+1}\right)- g_{bi}^{n+1},
\quad \text{on}\ \partial\Omega_o
\end{equation}
where we have used \eqref{equ:CH_phi} and \eqref{equ:phi_6}.
Substitution of the above two expressions into \eqref{equ:psi_weak_1}
leads to the weak form for $\psi_i^{n+1}$,
\begin{equation}
\begin{split}
\int_{\Omega} &\nabla\psi_i^{n+1}\cdot\nabla\varphi
+ \left(\alpha + \frac{S}{\eta^2} \right)\int_{\Omega} \psi_i^{n+1}\varphi
= -\int_{\Omega} Q_i\varphi + \int_{\Omega}\nabla R_i\cdot\nabla\varphi
- \int_{\partial\Omega_w\cup\partial\Omega_o}g_{bi}^{n+1}\varphi \\
&+ \left(\alpha + \frac{S}{\eta^2} \right)\int_{\partial\Omega_w}g_{ci}^{n+1}\varphi
+ \left(\alpha + \frac{S}{\eta^2} \right)
\int_{\partial\Omega_o}\left(-d_0\left.\frac{\partial c_i}{\partial t} \right|^{n+1}_{exp} + g_{ei}^{n+1}\right)\varphi,
\quad 1\leqslant i\leqslant N-1,
\ \ \forall \varphi.
\end{split}
\label{equ:psi_weakform}
\end{equation}
In light of \eqref{equ:CH_phi} and \eqref{equ:phi_3}, the discrete
condition \eqref{equ:phi_2} can be
transformed into
\begin{equation}
\psi_i^{n+1} = \alpha c_{bi}^{n+1}
+ \sum_{j=1}^{N-1}\zeta_{ij}h_j(\vec{c}_b^{n+1}) - g_{ai}^{n+1},
\quad 1\leqslant i\leqslant N-1,
\quad \text{on} \ \partial\Omega_i,
\label{equ:dbc_psi}
\end{equation}
where $\vec{c}_b=(c_{b1}, c_{b2},\dots,c_{b,N-1})$.
Taking the $L^2$ inner product between $\varphi(\mathbf{x})$
and equation \eqref{equ:CH_phi} leads to
\begin{equation}
\int_{\Omega}\nabla c_i^{n+1}\cdot\nabla\varphi
- \alpha\int_{\Omega}c_i^{n+1}\varphi
= -\int_{\Omega}\psi_i^{n+1}\varphi
+ \int_{\partial\Omega_w}\mathbf{n}\cdot\nabla c_i^{n+1}\varphi
+ \int_{\partial\Omega_o}\mathbf{n}\cdot\nabla c_i^{n+1}\varphi,
\quad \forall \varphi
\label{equ:phi_weak_1}
\end{equation}
where we have used integration by parts,
the divergence theorem and the condition \eqref{equ:cond_0_phi}.
Substitution of the expressions
\eqref{equ:phi_5} and \eqref{equ:phi_7} into
the above equation leads to the weak form for $c_i^{n+1}$,
\begin{multline}
\int_{\Omega}\nabla c_i^{n+1}\cdot\nabla\varphi
- \alpha\int_{\Omega}c_i^{n+1}\varphi
+ \frac{\gamma_0 d_0}{\Delta t}\int_{\partial\Omega_o}c_i^{n+1}\varphi
= -\int_{\Omega}\psi_i^{n+1}\varphi
+ \int_{\partial\Omega_w} g_{ci}^{n+1}\varphi \\
+ \int_{\partial\Omega_o}\left(\frac{d_0}{\Delta t}\hat{c}_i + g_{ei}^{n+1} \right)\varphi,
\quad 1\leqslant i\leqslant N-1,
\ \ \forall \varphi.
\label{equ:phi_weakform}
\end{multline}
We re-write \eqref{equ:pressure_1} as
\begin{equation}
\frac{\gamma_0}{\Delta t}\tilde{\mathbf{u}}^{n+1}
+ \frac{1}{\rho_0}\nabla P^{n+1}
= \mathbf{G}^{n+1}
-\frac{\mu^{n+1}}{\rho^{n+1}}\nabla\times\bm{\omega}^{*,n+1}
\label{equ:pressure_1_trans}
\end{equation}
where the vorticity is $\bm{\omega} = \nabla\times\mathbf{u}$ and
\begin{equation}
\begin{split}
\mathbf{G}^{n+1} =& \frac{1}{\rho^{n+1}}\left[
\mathbf{f}^{n+1} - \tilde{\mathbf{J}}^{n+1}\cdot\nabla\mathbf{u}^{*,n+1}
+ \nabla\mu^{n+1}\cdot\mathbf{D}(\mathbf{u}^{*,n+1})
- \sum_{i,j=1}^{N-1}\lambda_{ij}\left(\psi_j^{n+1}-\alpha c_j^{n+1}\right)\nabla c_i^{n+1}
\right] \\
&
- \mathbf{u}^{*,n+1}\cdot\nabla\mathbf{u}^{*,n+1}
+ \frac{\hat{\mathbf{u}}}{\Delta t}
+ \left(\frac{1}{\rho_0}-\frac{1}{\rho^{n+1}} \right)\nabla P^{*,n+1}
\end{split}
\label{equ:G_expr}
\end{equation}
Let $q(\mathbf{x})$ denote an arbitrary function with sufficient regularity
and satisfying the condition
\begin{equation}
q(\mathbf{x}) = 0, \quad \text{on} \ \partial\Omega_o.
\label{equ:cond_0_p}
\end{equation}
Taking the $L^2$ inner product between equation \eqref{equ:pressure_1_trans}
and $\nabla q$ leads to
\begin{equation}
\frac{1}{\rho_0}\int_{\Omega}\nabla P^{n+1}\cdot\nabla q
= \int_{\Omega}\mathbf{G}^{n+1}\cdot\nabla q
-\int_{\Omega}\frac{\mu^{n+1}}{\rho^{n+1}}\nabla\times\bm{\omega}^{*,n+1}\cdot\nabla q
- \frac{\gamma_0}{\Delta t}\int_{\partial\Omega_i\cup\partial\Omega_w}\mathbf{n}\cdot\mathbf{w}^{n+1} q,
\quad \forall q
\end{equation}
where we have used integration by parts, the divergence theorem
and the condition \eqref{equ:cond_0_p}.
In light of the identity
$
\frac{\mu}{\rho}\nabla\times\bm{\omega}\cdot\nabla q
=\nabla\cdot\left(\frac{\mu}{\rho}\bm{\omega}\times\nabla q \right)
-\nabla\left(\frac{\mu}{\rho} \right)\times\bm{\omega}\cdot\nabla q,
$
the above equation is transformed into
the weak form for $P^{n+1}$,
\begin{equation}
\begin{split}
\int_{\Omega}\nabla P^{n+1}\cdot\nabla q
=& \rho_0\int_{\Omega}\left[\mathbf{G}^{n+1}
+ \nabla\left(\frac{\mu^{n+1}}{\rho^{n+1}} \right)\times\bm{\omega}^{*,n+1} \right]\cdot\nabla q \\
&
-\rho_0\int_{\partial\Omega_i\cup\partial\Omega_w\cup\partial\Omega_o}\frac{\mu^{n+1}}{\rho^{n+1}}\mathbf{n}\times\bm{\omega}^{*,n+1}\cdot\nabla q
- \frac{\gamma_0\rho_0}{\Delta t}\int_{\partial\Omega_i\cup\partial\Omega_w}\mathbf{n}\cdot\mathbf{w}^{n+1} q,
\quad \forall q.
\end{split}
\label{equ:p_weakform}
\end{equation}
Summing up equations \eqref{equ:vel_1} and \eqref{equ:pressure_1} leads to
\begin{equation}
\frac{\gamma_0}{\nu_m\Delta t}\mathbf{u}^{n+1}
-\nabla^2\mathbf{u}^{n+1}
= \frac{1}{\nu_m}\left(\mathbf{G}^{n+1}-\frac{1}{\rho_0}\nabla P^{n+1} \right)
-\frac{1}{\nu_m}\left(\frac{\mu^{n+1}}{\rho^{n+1}}-\nu_m \right)
\nabla\times\bm{\omega}^{*,n+1}.
\label{equ:vel_1_trans}
\end{equation}
Let $\varpi(\mathbf{x})$ be an arbitrary scalar function with sufficient regularity
and satisfying the condition
\begin{equation}
\varpi(\mathbf{x}) = 0, \quad \text{on} \ \partial\Omega_i\cup\partial\Omega_w.
\label{equ:cond_0_vel}
\end{equation}
Taking the $L^2$ inner product between
$\varpi(\mathbf{x})$ and equation \eqref{equ:vel_1_trans}
leads to
\begin{equation}
\begin{split}
\frac{\gamma_0}{\nu_m\Delta t}\int_{\Omega}\mathbf{u}^{n+1}\varpi
&+ \int_{\Omega}\nabla\varpi\cdot\nabla\mathbf{u}^{n+1}
= \frac{1}{\nu_m}\int_{\Omega}\left(\mathbf{G}^{n+1}-\frac{1}{\rho_0}\nabla P^{n+1} \right)
\varpi \\
&- \frac{1}{\nu_m}\int_{\Omega}\left( \frac{\mu^{n+1}}{\rho^{n+1}}-\nu_m \right)\nabla\times\bm{\omega}^{*,n+1}\varpi
+ \int_{\partial\Omega_o}\mathbf{n}\cdot\nabla\mathbf{u}^{n+1}\varpi,
\quad \forall \varpi
\end{split}
\label{equ:vel_weak_1}
\end{equation}
where we have used integration by parts,
the divergence theorem and the condition \eqref{equ:cond_0_vel}.
Noting the relation
\begin{equation*}
\int_{\Omega}\left( \frac{\mu}{\rho}-\nu_m \right)\nabla\times\bm{\omega}\varpi =
\int_{\Omega}\left( \frac{\mu}{\rho}-\nu_m \right)\bm{\omega}\times\nabla\varpi -
\int_{\Omega}\nabla\left(\frac{\mu}{\rho} \right)\times\bm{\omega}\varpi
+ \int_{\partial\Omega}\left(\frac{\mu}{\rho}-\nu_m \right)\mathbf{n}\times\bm{\omega}\varpi
\end{equation*}
and in light of \eqref{equ:vel_3}, we can transform
\eqref{equ:vel_weak_1} into
\begin{equation}
\begin{split}
& \frac{\gamma_0}{\nu_m\Delta t}\int_{\Omega}\mathbf{u}^{n+1}\varpi
+ \int_{\Omega}\nabla\varpi\cdot\nabla\mathbf{u}^{n+1} \\
&= \frac{1}{\nu_m}\int_{\Omega}\left(\mathbf{G}^{n+1}
-\frac{1}{\rho_0}\nabla P^{n+1}
+\nabla\left(\frac{\mu^{n+1}}{\rho^{n+1}} \right)\times\bm{\omega}^{*,n+1} \right)\varpi \\
& \ \ \
- \frac{1}{\nu_m}\int_{\Omega}\left( \frac{\mu^{n+1}}{\rho^{n+1}}-\nu_m \right)\bm{\omega}^{*,n+1}\times\nabla\varpi
-\frac{1}{\nu_m}\int_{\partial\Omega_o}\left(\frac{\mu^{n+1}}{\rho^{n+1}}-\nu_m \right)\mathbf{n}\times\bm{\omega}^{*,n+1}\varpi \\
& \ \ \
+ \int_{\partial\Omega_o}\left\{
- \mathbf{n}\cdot(\nabla\mathbf{u}^{*,n+1})^T
+ \left(1 - \frac{\mu^{n+1}}{\mu_0} \right)\mathbf{n}\cdot\mathbf{D}(\mathbf{u}^{*,n+1})
\right. \\
& \qquad\qquad \ \ \
\left.
+ \frac{1}{\mu_0}\left[
P^{n+1}\mathbf{n} + H(\vec{c}^{n+1})\mathbf{n}
+ \mathbf{E}(\mathbf{n},\mathbf{u}^{*,n+1},\rho^{n+1})
+ \mathbf{f}_b^{n+1} -\mu_0(\nabla\cdot\mathbf{u}^{*,n+1})\mathbf{n}
\right]
\right\}\varpi,
\ \ \forall \varpi
\end{split}
\label{equ:vel_weakform}
\end{equation}
which is the weak form for $\mathbf{u}^{n+1}$.
Let $H^1(\Omega)$ denote the set of globally continuous
square-integrable functions on $\Omega$ with square-integrable
derivatives.
Define
\begin{equation}
\left\{
\begin{split}
&
H_{c0}^1(\Omega) = \left\{\
v\in H^1(\Omega) \ : \ v|_{\partial\Omega_i} = 0
\ \right\}, \\
&
H_{p0}^1(\Omega) = \left\{\
v\in H^1(\Omega) \ : \ v|_{\partial\Omega_o} = 0
\ \right\}, \\
&
H_{u0}^1(\Omega) = \left\{\
v\in H^1(\Omega) \ : \ v|_{\partial\Omega_i\cup\partial\Omega_w} = 0
\ \right\}.
\end{split}
\right.
\label{equ:def_space_0}
\end{equation}
We require that the equations \eqref{equ:psi_weakform} and
\eqref{equ:phi_weakform} hold for all $\varphi \in H_{c0}^1(\Omega)$,
that equation \eqref{equ:p_weakform} holds for all
$q \in H_{p0}^1(\Omega)$, and that
equation \eqref{equ:vel_weakform} holds for
all $\varpi \in H_{u0}^1(\Omega)$.
To discretize these equations using $C^0$ spectral
elements, we first partition the domain $\Omega$
using a spectral element mesh.
Let $\Omega_h$ denote the discretized $\Omega$,
$
\Omega_h = \cup_{e=1}^{N_{el}}\Omega_h^e,
$
where $\Omega_h^e$ ($1\leqslant e\leqslant N_{el}$) denotes the element $e$ and
$N_{el}$ is the number of elements in the mesh.
Let $\partial\Omega_h$, $\partial\Omega_{ih}$, $\partial\Omega_{wh}$
and $\partial\Omega_{oh}$ denote the
discretized boundaries of different types,
$
\partial\Omega_h = \partial\Omega_{ih}\cup\partial\Omega_{wh}\cup\partial\Omega_{oh}.
$
Let $d$ ($d=2$ or $3$) denote
the dimension in space, and $\Pi_{K}(\Omega_h^e)$ denote
the linear space of polynomials defined on $\Omega_h^e$
whose degrees are characterized by $K$ ($K$ is hereafter referred
to as the element order).
Define
\begin{equation}
\left\{
\begin{split}
&
X_{h} = \{\ v\in H^1(\Omega_h) \ :\ v|_{\Omega_h^e}\in \Pi_{K}(\Omega_h^e),
\ 1\leqslant e\leqslant N_{el} \ \}, \\
&
X_{h0}^{u} = \{\ v\in X_{h} \ :\ v|_{\partial\Omega_{ih}\cup\partial\Omega_{wh}}=0 \ \}, \\
&
X_{h0}^p = \{\ v\in X_h \ :\ v|_{\partial\Omega_{oh}}=0 \ \}, \\
&
X_{h0}^c = \{\ v\in X_h \ :\ v|_{\partial\Omega_{ih}}=0 \ \}.
\end{split}
\right.
\end{equation}
In the following we use the subscript $h$, as in $(\cdot)_h$, to denote
the discretized version of $(\cdot)$.
The fully discretized equations consist of
the following: \\
\underline{For $\psi_{ih}^{n+1}$:}
find $\psi_{ih}^{n+1} \in X_h$ such that
\begin{equation}
\begin{split}
\int_{\Omega_h} &\nabla\psi_{ih}^{n+1}\cdot\nabla\varphi_h
+ \left(\alpha + \frac{S}{\eta^2} \right)\int_{\Omega_h} \psi_{ih}^{n+1}\varphi_h
= -\int_{\Omega_h} Q_{ih}\varphi_h + \int_{\Omega_h}\nabla R_{ih}\cdot\nabla\varphi_h \\
&
- \int_{\partial\Omega_{wh}\cup\partial\Omega_{oh}}g_{bih}^{n+1}\varphi_h
+ \left(\alpha + \frac{S}{\eta^2} \right)\int_{\partial\Omega_{wh}}g_{cih}^{n+1}\varphi_h \\
&
+ \left(\alpha + \frac{S}{\eta^2} \right)
\int_{\partial\Omega_{oh}}\left(-d_0\left.\frac{\partial c_{ih}}{\partial t} \right|^{n+1}_{exp} + g_{eih}^{n+1}\right)\varphi_h,
\quad 1\leqslant i\leqslant N-1,
\quad \forall \varphi_h \in X_{h0}^{c},
\end{split}
\label{equ:psi_weakform_disc}
\end{equation}
and
\begin{equation}
\psi_{ih}^{n+1} = \alpha c_{bih}^{n+1}
+ \sum_{j=1}^{N-1}\zeta_{ij}h_j(\vec{c}_{bh}^{n+1}) - g_{aih}^{n+1},
\quad 1\leqslant i\leqslant N-1,
\quad \text{on} \ \partial\Omega_{ih}.
\label{equ:dbc_psi_disc}
\end{equation}
\underline{For $c_{ih}^{n+1}$:} find $c_{ih}^{n+1} \in X_h$ such that
\begin{multline}
\int_{\Omega_h}\nabla c_{ih}^{n+1}\cdot\nabla\varphi_h
- \alpha\int_{\Omega_h}c_{ih}^{n+1}\varphi_h
+ \frac{\gamma_0 d_0}{\Delta t}\int_{\partial\Omega_{oh}}c_{ih}^{n+1}\varphi_h
= -\int_{\Omega_h}\psi_{ih}^{n+1}\varphi_h
+ \int_{\partial\Omega_{wh}} g_{cih}^{n+1}\varphi_h \\
+ \int_{\partial\Omega_{oh}}\left(\frac{d_0}{\Delta t}\hat{c}_{ih} + g_{eih}^{n+1} \right)\varphi_h,
\quad 1\leqslant i\leqslant N-1,
\quad \forall \varphi_h \in X_{h0}^c,
\label{equ:phi_weakform_disc}
\end{multline}
and
\begin{equation}
c_{ih}^{n+1} = c_{bih}^{n+1},
\quad 1\leqslant i\leqslant N-1,
\quad \text{on} \ \partial\Omega_{ih}.
\label{equ:dbc_phi_disc}
\end{equation}
\underline{For $P^{n+1}_h$:} find $P_h^{n+1}\in X_h$ such that
\begin{equation}
\begin{split}
&\int_{\Omega_h}\nabla P_h^{n+1}\cdot\nabla q_h
= \rho_0\int_{\Omega_h}\left[\mathbf{G}_h^{n+1}
+ \nabla\left(\frac{\mu_h^{n+1}}{\rho_h^{n+1}} \right)\times\bm{\omega}_h^{*,n+1} \right]\cdot\nabla q_h \\
&
-\rho_0\int_{\partial\Omega_{ih}\cup\partial\Omega_{wh}\cup\partial\Omega_{oh}}\frac{\mu_h^{n+1}}{\rho_h^{n+1}}\mathbf{n}_h\times\bm{\omega}_h^{*,n+1}\cdot\nabla q_h
- \frac{\gamma_0\rho_0}{\Delta t}\int_{\partial\Omega_{ih}\cup\partial\Omega_{wh}}\mathbf{n}_h\cdot\mathbf{w}_h^{n+1} q_h,
\quad \forall q_h\in X_{h0}^p,
\end{split}
\label{equ:p_weakform_disc}
\end{equation}
and
\begin{equation}
P_h^{n+1} = \mu_h^{n+1}\mathbf{n}_h\cdot\mathbf{D}(\mathbf{u}_h^{*,n+1})\cdot\mathbf{n}_h
-H(\vec{c}_h^{n+1}) - \mathbf{n}_h\cdot\mathbf{E}(\mathbf{n}_h,\mathbf{u}_h^{*,n+1},\rho_h^{n+1})
-\mathbf{f}_{bh}^{n+1}\cdot\mathbf{n}_h,
\quad \text{on} \ \partial\Omega_{oh}.
\label{equ:dbc_p_disc}
\end{equation}
\underline{For $\mathbf{u}_h^{n+1}$:} find
$\mathbf{u}_h^{n+1} \in [X_h]^d$ such that
\begin{equation}
\begin{split}
& \frac{\gamma_0}{\nu_m\Delta t}\int_{\Omega_h}\mathbf{u}_h^{n+1}\varpi_h
+ \int_{\Omega_h}\nabla\varpi_h\cdot\nabla\mathbf{u}_h^{n+1} \\
&= \frac{1}{\nu_m}\int_{\Omega_h}\left(\mathbf{G}_h^{n+1}
-\frac{1}{\rho_0}\nabla P_h^{n+1}
+\nabla\left(\frac{\mu_h^{n+1}}{\rho_h^{n+1}} \right)\times\bm{\omega}_h^{*,n+1} \right)\varpi_h \\
& \ \ \
- \frac{1}{\nu_m}\int_{\Omega_h}\left( \frac{\mu_h^{n+1}}{\rho_h^{n+1}}-\nu_m \right)\bm{\omega}_h^{*,n+1}\times\nabla\varpi_h
-\frac{1}{\nu_m}\int_{\partial\Omega_{oh}}\left(\frac{\mu_h^{n+1}}{\rho_h^{n+1}}-\nu_m \right)\mathbf{n}_h\times\bm{\omega}_h^{*,n+1}\varpi_h \\
& \ \ \
+ \int_{\partial\Omega_{oh}}\left\{
- \mathbf{n}_h\cdot(\nabla\mathbf{u}_h^{*,n+1})^T
+ \left(1 - \frac{\mu_h^{n+1}}{\mu_0} \right)\mathbf{n}_h\cdot\mathbf{D}(\mathbf{u}_h^{*,n+1})
\right. \\
& \qquad\qquad
\left.
+ \frac{1}{\mu_0}\left[
P_h^{n+1}\mathbf{n}_h + H(\vec{c}_h^{n+1})\mathbf{n}_h
+ \mathbf{E}(\mathbf{n}_h,\mathbf{u}_h^{*,n+1},\rho_h^{n+1})
+ \mathbf{f}_{bh}^{n+1} -\mu_0(\nabla\cdot\mathbf{u}_h^{*,n+1})\mathbf{n}_h
\right]
\right\}\varpi_h, \\
& \quad \forall \varpi_h \in X_{h0}^u,
\end{split}
\label{equ:vel_weakform_disc}
\end{equation}
and
\begin{equation}
\mathbf{u}_h^{n+1} = \mathbf{w}_h^{n+1},
\quad \text{on} \ \partial\Omega_{ih}\cup\partial\Omega_{wh}.
\label{equ:dbc_vel_disc}
\end{equation}
So the final solution procedure is as follows.
Given $(\mathbf{u}_h^n, P_h^n, \psi_{ih}^{n},c_{ih}^n)$,
we compute $\psi_{ih}^{n+1}$, $c_{ih}^{n+1}$,
$P_h^{n+1}$ and $\mathbf{u}_{h}^{n+1}$
successively through these steps:
\begin{itemize}
\item
Solve \eqref{equ:psi_weakform_disc}, together with
the Dirichlet condition \eqref{equ:dbc_psi_disc}, for $\psi_{ih}^{n+1}$;
\item
Solve \eqref{equ:phi_weakform_disc}, together with
the Dirichlet condition \eqref{equ:dbc_phi_disc},
for $c_{ih}^{n+1}$;
\item
Solve \eqref{equ:p_weakform_disc}, together with
the Dirichlet condition \eqref{equ:dbc_p_disc},
for $P_h^{n+1}$;
\item
Solve \eqref{equ:vel_weakform_disc}, together with
the Dirichlet condition \eqref{equ:dbc_vel_disc},
for $\mathbf{u}_h^{n+1}$.
\end{itemize}
When implementing the Dirichlet condition \eqref{equ:dbc_p_disc}
with $C^0$ elements, the computed pressure data
need to be projected onto $H^1(\partial\Omega_{oh})$
because of the spatial derivatives involved in
the $\mathbf{D}(\mathbf{u})$ term.
Note that the final algorithm requires only the solution of a set of
de-coupled individual Helmholtz-type equations
(including Poisson) within each time step.
The linear algebraic systems resulting from the discretization
involve only constant and time-independent
coefficient matrices for all flow variables,
even when large density contrasts and large viscosity
contrasts are present among the different fluids.
Therefore, these coefficient matrices can be pre-computed,
which makes the computation very efficient in cases with
large density ratios and large viscosity ratios.
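To illustrate why constant, pre-computable coefficient matrices make this procedure efficient, here is a minimal self-contained sketch (in Python with NumPy/SciPy; the one-dimensional finite-difference Helmholtz problem and all parameter values are illustrative stand-ins for the spectral-element systems above, not the actual implementation): the matrix is factorized once before the time loop, so each time step reduces to a back-substitution.
\begin{verbatim}
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import factorized

# Minimal 1D stand-in for one Helmholtz-type solve of the scheme:
# (alpha - d^2/dx^2) u = f with homogeneous Dirichlet BCs on (0,1).
M = 200                                  # number of interior grid points
h = 1.0 / (M + 1)
alpha = 100.0                            # plays the role of gamma_0/(nu_m*dt)
main = (2.0 / h**2 + alpha) * np.ones(M)
off = (-1.0 / h**2) * np.ones(M - 1)
A = diags([off, main, off], [-1, 0, 1]).tocsc()

solve = factorized(A)                    # LU-factorize once, before the loop

x = np.linspace(h, 1.0 - h, M)
u = np.zeros(M)
for n in range(100):                     # each step costs only a back-solve
    f = np.sin(np.pi * x) * np.cos(0.01 * n) + alpha * u  # toy right-hand side
    u = solve(f)
\end{verbatim}
The same pattern applies to each of the four solves listed above: one factorization per flow variable at start-up, and only right-hand-side assemblies and back-substitutions within the time loop.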
\subsection{A Thermodynamically Consistent N-Fluid Mixture Model}
We summarize below the phase field model proposed in \cite{Dong2017}
for an isothermal mixture
of $N$ ($N\geqslant 2$) immiscible incompressible fluids.
This model modifies and generalizes the N-phase model
developed in \cite{Dong2014}, and it
satisfies the conservations of mass and momentum,
the second law of thermodynamics, and the Galilean invariance
principle. This model forms the basis for the development
of outflow/open boundary conditions in subsequent sections.
Consider a mixture of $N$ ($N\geqslant 2$) immiscible incompressible
fluids contained in some flow domain $\Omega$ with boundary $\partial\Omega$.
Let $\tilde{\rho}_i$ and $\tilde{\mu}_i$
($1\leqslant i\leqslant N$) denote
the constant densities and constant dynamic
viscosities of these $N$ pure fluids (before mixing).
Define auxiliary parameters
\begin{equation}
\tilde{\gamma}_i = \frac{1}{\tilde{\rho}_i}, \ 1\leqslant i\leqslant N;
\quad
\Gamma = \sum_{i=1}^N \tilde{\gamma}_i;
\quad
\Gamma_{\mu} = \sum_{i=1}^N \frac{\tilde{\mu}_i}{\tilde{\rho}_i}.
\end{equation}
Let $\phi_i$ ($1\leqslant i\leqslant N-1$) denote
the ($N-1$) independent order parameters,
or interchangeably the phase field variables,
that characterize the system,
and $\vec{\phi}=(\phi_1,\dots,\phi_{N-1})$.
Let $\rho_i(\vec{\phi})$ and $c_i(\vec{\phi})$
($1\leqslant i\leqslant N$)
denote the density and volume fraction of
fluid $i$ {\em within the mixture}, and
let $\rho(\vec{\phi})$ denote the density of
the N-phase mixture. Then we have the relations \cite{Dong2014}
\begin{equation}
c_i = \frac{\rho_i}{\tilde{\rho}_i}, \ 1\leqslant i\leqslant N; \quad
\sum_{i=1}^N c_i = 1; \quad
\rho = \sum_{i=1}^N \rho_i.
\label{equ:volfrac_expr}
\end{equation}
Let $W(\vec{\phi},\nabla\vec{\phi})$ denote
the free energy density function of the system
which satisfies the condition,
$
\sum_{i=1}^{N-1}\nabla\phi_i \otimes
\frac{\partial W}{\partial(\nabla\phi_i)}
= \sum_{i=1}^{N-1}\frac{\partial W}{\partial(\nabla\phi_i)}
\otimes \nabla\phi_i,
$
where $\otimes$ denotes the tensor product.
Then the motion of this N-phase system
is described by the following equations~\cite{Dong2017}:
\begin{subequations}
\begin{equation}
\rho(\vec{\phi})\left(
\frac{\partial\mathbf{u}}{\partial t}
+ \mathbf{u}\cdot\nabla\mathbf{u}
\right)
+ \tilde{\mathbf{J}}\cdot\nabla\mathbf{u}
=
-\nabla p
+ \nabla\cdot\left[
\mu(\vec{\phi}) \mathbf{D}(\mathbf{u})
\right]
- \sum_{i=1}^{N-1} \nabla\cdot\left(
\nabla\phi_i \otimes \frac{\partial W}{\partial(\nabla\phi_i)}
\right),
\label{equ:nse_original}
\end{equation}
\begin{equation}
\nabla\cdot\mathbf{u} = 0,
\label{equ:continuity_original}
\end{equation}
\begin{equation}
\sum_{j=1}^{N-1}\frac{\partial\varphi_i}{\partial\phi_j}\left(
\frac{\partial\phi_j}{\partial t} + \mathbf{u}\cdot\nabla\phi_j
\right)
=
\sum_{j=1}^{N-1}\nabla\cdot\left[
\tilde{m}_{ij}(\vec{\phi}) \nabla \mathcal{C}_j
\right],
\qquad 1 \leqslant i \leqslant N-1,
\label{equ:CH_original}
\end{equation}
\end{subequations}
where $\mathbf{u}(\mathbf{x},t)$ is the velocity,
$p(\mathbf{x},t)$ is the pressure,
$
\mathbf{D}(\mathbf{u}) = \nabla\mathbf{u} + \nabla\mathbf{u}^T
$
(the superscript $T$ denoting transpose),
and $\mathbf{x}$ and $t$
are respectively the spatial and temporal coordinates.
$\tilde{m}_{ij}$ ($1\leqslant i,j\leqslant N-1$)
are coefficients and the matrix formed by these coefficients
\begin{equation}
\tilde{\mathbf{m}} = \begin{bmatrix} \tilde{m}_{ij} \end{bmatrix}_{(N-1)\times (N-1)}
\end{equation}
is required to be symmetric positive definite (SPD)~\cite{Dong2017}.
$\varphi_i(\vec{\phi})$ are defined by
\begin{equation}
\varphi_i \equiv \rho_i(\vec{\phi}) - \rho_N(\vec{\phi}) = \varphi_i(\vec{\phi}),
\quad 1\leqslant i\leqslant N-1.
\label{equ:varphi_expr}
\end{equation}
The chemical potentials
$\mathcal{C}_i(\vec{\phi},\nabla\vec{\phi})$ ($1\leqslant i\leqslant N-1$)
are given by the following linear
algebraic system
\begin{equation}
\sum_{j=1}^{N-1} \frac{\partial\varphi_j}{\partial\phi_i} \mathcal{C}_j
=
\frac{\partial W}{\partial \phi_i}
- \nabla\cdot \frac{\partial W}{\partial(\nabla\phi_i)},
\quad 1\leqslant i\leqslant N-1,
\label{equ:chem_potential}
\end{equation}
which can be solved given $W(\vec{\phi},\nabla\vec{\phi})$
and $\varphi_i(\vec{\phi})$.
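As a concrete illustration of how \eqref{equ:chem_potential} is used, the following sketch (a minimal NumPy example; the matrix entries and the right-hand side are arbitrary illustrative numbers, not derived from any particular $W$) solves the $(N-1)\times(N-1)$ system for the chemical potentials at a single spatial point.
\begin{verbatim}
import numpy as np

N = 4                              # a four-fluid example, so N-1 = 3 unknowns
# M[i, j] = d(varphi_j)/d(phi_i); constant if varphi_i is linear in phi_j.
M = np.array([[ 2.0, -1.0,  0.0],  # illustrative values only
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])
# b[i] = dW/dphi_i - div(dW/d(grad phi_i)), evaluated at one point.
b = np.array([0.3, -0.1, 0.2])     # illustrative values only

C = np.linalg.solve(M, b)          # chemical potentials C_1, ..., C_{N-1}
\end{verbatim}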
$\tilde{\mathbf{J}}(\vec{\phi},\nabla\vec{\phi})$ takes the form
\begin{equation}
\tilde{\mathbf{J}} = -\sum_{i,j=1}^{N-1}\left(
1 - \frac{N}{\Gamma}\tilde{\gamma}_i
\right)
\tilde{m}_{ij}(\vec{\phi})\nabla \mathcal{C}_j.
\label{equ:J_expr}
\end{equation}
The density $\rho_i$ of fluid $i$ within the mixture,
the volume fraction $c_i$,
and the mixture density $\rho$ and dynamic viscosity $\mu$
are given by
\begin{equation}
\left\{
\begin{split}
&
\rho_i(\vec{\phi}) =
\frac{1}{\Gamma} + \sum_{j=1}^{N-1}\left(
\delta_{ij} - \frac{\tilde{\gamma}_j}{\Gamma}
\right)\varphi_j(\vec{\phi}),
\quad 1\leqslant i\leqslant N, \\
&
c_i(\vec{\phi}) = \tilde{\gamma}_i\rho_i(\vec{\phi}) =
\frac{\tilde{\gamma}_i}{\Gamma} + \sum_{j=1}^{N-1}\left(
\tilde{\gamma}_i\delta_{ij} - \frac{\tilde{\gamma}_i\tilde{\gamma}_j}{\Gamma}
\right)\varphi_j(\vec{\phi}),
\quad 1\leqslant i\leqslant N, \\
&
\rho(\vec{\phi})=\sum_{i=1}^N \rho_i =
\frac{N}{\Gamma} + \sum_{i=1}^{N-1}\left(1 - \frac{N}{\Gamma}\tilde{\gamma}_i \right)\varphi_i(\vec{\phi}), \\
&
\mu(\vec{\phi}) = \sum_{i=1}^N \tilde{\mu}_i c_i(\vec{\phi})
= \frac{\Gamma_{\mu}}{\Gamma} + \sum_{i=1}^{N-1}\left(
\tilde{\mu}_i - \frac{\Gamma_{\mu}}{\Gamma}
\right) \tilde{\gamma}_i \varphi_i(\vec{\phi})
\end{split}
\right.
\label{equ:density_expr}
\end{equation}
where $\delta_{ij}$ is the Kronecker delta.
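For reference, the relations \eqref{equ:volfrac_expr} and \eqref{equ:density_expr} translate directly into code. The sketch below is a NumPy helper of our own; the example pure-fluid property values and $\varphi_i$ values in the call are placeholders.
\begin{verbatim}
import numpy as np

def mixture_properties(varphi, rho_pure, mu_pure):
    """Evaluate rho_i, c_i, rho and mu from varphi_i (equ:density_expr)."""
    rho_pure = np.asarray(rho_pure, dtype=float)
    mu_pure = np.asarray(mu_pure, dtype=float)
    N = len(rho_pure)
    gamma = 1.0 / rho_pure                     # gamma_i = 1 / tilde-rho_i
    Gamma = gamma.sum()
    rho_i = np.empty(N)
    for i in range(N):
        delta = np.zeros(N - 1)
        if i < N - 1:
            delta[i] = 1.0                     # Kronecker delta_{ij}
        rho_i[i] = 1.0 / Gamma + np.dot(delta - gamma[:N-1] / Gamma, varphi)
    c_i = gamma * rho_i                        # volume fractions
    rho = rho_i.sum()                          # mixture density
    mu = (mu_pure * c_i).sum()                 # mixture dynamic viscosity
    return rho_i, c_i, rho, mu

# Example call with placeholder pure-fluid properties and varphi values:
rho_i, c_i, rho, mu = mixture_properties(
    varphi=np.array([0.2, -0.1, 0.05]),
    rho_pure=[1.0, 3.0, 2.0, 4.0],
    mu_pure=[0.01, 0.02, 0.03, 0.04])
\end{verbatim}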
In this model the functions $\varphi_i(\vec{\phi})$,
the free energy density function $W(\vec{\phi},\nabla\vec{\phi})$,
and the coefficients $\tilde{m}_{ij}$ ($1\leqslant i,j\leqslant N-1$)
remain to be specified. Once they are known, all the other
quantities can be computed.
Note that equation \eqref{equ:varphi_expr} serves to define
the set of order parameters $\vec{\phi}$:
once $\varphi_i(\vec{\phi})$ is given, the order parameters
$\phi_i$ ($1\leqslant i\leqslant N-1$) are fixed.
\section{Concluding Remarks}
\label{sec:summary}
We have developed a set of effective outflow/open boundary
conditions (and also inflow boundary conditions) for
simulating multiphase flows consisting of
$N$ ($N\geqslant 2$) immiscible incompressible fluids
in domains involving outflow and inflow boundaries.
These boundary conditions are designed to satisfy
two properties: energy stability and reduction
consistency. The proposed boundary conditions ensure that,
at the continuum level,
their contributions to the N-phase energy balance
will not cause the total system energy to increase over time,
regardless of the flow state at the outflow/open boundary.
In other words, this property holds even in situations
where strong vortices or backflows occur at the outflow/open
boundary.
This is the reason why the proposed boundary conditions are
effective in overcoming the backflow instability in
N-phase flow problems.
The reduction consistency of the boundary conditions is
a physical consistency requirement for N-phase formulations~\cite{Dong2017}.
This property means that the boundary conditions
honor the inherent equivalence relations
between N-phase systems and the resultant smaller multiphase systems
when some fluid components are absent from the N-phase
system.
We have also presented an efficient numerical algorithm for
the proposed outflow/inflow boundary
conditions together with the N-phase governing equations.
The main issue lies in the numerical treatments of
the inertia term in the open boundary conditions
for the phase field equations and the variable viscosity
in the open boundary condition for the momentum equation.
With appropriate reformulations and treatments of such terms
in our algorithm, the computations for different flow variables
and the computations for different phase field variables
have been completely de-coupled. The proposed
algorithm involves only the solution of a number of
Helmholtz-type equations within each time step.
The linear algebraic systems resulting from discretizations involve
only constant and time-independent coefficient matrices,
which can be pre-computed,
even though large density contrasts and large viscosity contrasts
may be present in the N-phase system.
These characteristics make the algorithm computationally very
efficient and attractive.
We have tested the proposed method with extensive
numerical experiments for several problems involving multiple
fluid components and in domains with outflow and inflow boundaries.
In particular, we have compared in detail our simulation results
for the three-phase capillary wave problem
with Prosperetti's exact physical solution~\cite{Prosperetti1981}
under various physical and simulation parameters.
These comparisons demonstrate that the proposed method
produces physically accurate results.
Multiphase flows involving inflow/outflow boundaries are
an important class of problems, which have widespread applications
in oil/gas industries, carbon sequestration, microfluidics
and optofluidics~\cite{HuppertN2014,PsaltisQY2006,RodriguezSMG2015}.
These problems are also critical to the study of long-time behaviors and
statistical features of multiphase flows.
The key to simulating multiphase inflow/outflow
problems lies in the treatment of the multiphase
outflow/open boundaries.
The method developed in the current work provides an effective
and powerful tool for simulating this class of problems.
We anticipate that it will be useful and instrumental in
the investigation of long-time statistics of multiphase problems
and in the development of a number of related areas.
While the outflow/open boundary conditions proposed here
ensure the energy stability of the N-phase system at
the continuum level, this property is not guaranteed
by the numerical algorithm presented here at the discrete level.
The current algorithm is only conditionally stable,
and requires sufficient spatial resolution and small enough
time step size to achieve stable and accurate simulations.
An interesting question is how to devise an algorithm
for these outflow/open boundary conditions together with
the N-phase governing equations to guarantee the energy
stability at the discrete level.
This problem seems to be highly non-trivial.
It would be an interesting problem to contemplate
for future research.
\section{Representative Numerical Examples}
\label{sec:tests}
In this section we provide extensive numerical results for several flow problems
involving multiple fluid components and inflow/outflow boundaries
in two dimensions
to test the set of open/outflow boundary conditions and the numerical
algorithm developed in the previous section. The results demonstrate that
the proposed method can serve as an accurate and reliable tool for the
investigation of multi-phase flow problems in unbounded domains.
Note that all numerical simulations presented here are performed
by using the volume fractions $c_i\,(1\leq i \leq N-1)$ as the order parameters, as defined in \eqref{equ:varphi_expr}.
To begin with, we briefly comment on the normalization of physical variables and parameters, which has been addressed in detail in the previous works \cite{Dong2014,Dong2015,Dong2017}. Let $L$ denote a length scale, $U_0$ denote a velocity scale and $\varrho_d$ denote a density scale. By consistently normalizing the physical variables and parameters
based on the normalization constants given in Table \ref{table:normalization}, the resultant non-dimensionalized problem (governing equations, boundary/initial conditions) will retain the same form as its dimensional problem. Hereafter, all the flow variables and parameters
have been appropriately normalized based on Table \ref{table:normalization}, unless otherwise specified.
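As a brief worked example of how Table \ref{table:normalization} is applied (the dimensional values here are hypothetical and chosen only for illustration): with $L=0.01\,m$, $U_0=0.1\,m/s$ and $\varrho_d=1000\,kg/m^3$, a dimensional dynamic viscosity $\mu=0.001\,kg/(m\cdot s)$ normalizes to $\mu/(\varrho_d U_0 L)=0.001/(1000\times 0.1\times 0.01)=10^{-3}$, and a dimensional time step $\Delta t=0.01\,s$ normalizes to $\Delta t/(L/U_0)=0.01/0.1=0.1$.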
\begin{table}[t]
\centering
\begin{tabular}{l c| l c}
\hline
Variables/parameters & normalization constant & Variables/parameters & normalization constant \\
\hline
$\bs x,\, \eta$& $L$ &$t,\,\Delta t$& $L/U_0$\\
$\bs u,\,\bs w$ &$U_0$ & $\rho,\,\rho_i,\,\tilde \rho_i,\rho_0,\,\varphi_i$ & $\varrho_d$\\
$S,\,c_i,c_{bi}$& 1&$g_i$&$U_0/L$\\
$\alpha,\,g_{ai}$&$1/L^2$&$g_{bi}$&$1/L^3$\\
$g_{ci},\,g_{ei}$&$1/L$&$d_0$&$1/U_0$\\
$\bs g_r$&$U_0^2/L$ &$\bs f$&$\varrho_d U_0^2/L$\\
$P,\,p,\, {\bs f}_b$&$\varrho_dU_0^2$&$\mu,\tilde \mu_i,\,\mu_0$&$\varrho_d U_0 L$\\
$\Gamma_{\mu},\,\nu_m$&$U_0L$&$\tilde \gamma_i,\, \Gamma$&$1/\varrho_d$\\
$\lambda_{ij}$&$\varrho_dU_0^2L^2$&$\sigma_{ij}$&$\varrho_d U_0^2L$\\
$m_0$&$U_0L^3$&$\zeta_{ij}$&$1/\varrho_dU_0^2L^2$\\
\hline
\end{tabular}
\caption{Normalization of flow variables and simulation parameters. $L$ is a length scale, $U_0$ is a velocity scale, and $\varrho_d$ is a density scale. }
\label{table:normalization}
\end{table}
\subsection{Convergence Rates}
\begin{figure}[tbp]
\centering
\subfigure[Spatial and temporal convergence test]{ \includegraphics[scale=.50]{standconfigure.pdf}}\\
\subfigure[$L^2$ errors vs element order]{ \includegraphics[scale=.42]{spatialconvL21.pdf}} \qquad
\subfigure[$L^2$ errors vs $\Delta t$]{ \includegraphics[scale=.42]{temporalconvL21.pdf}}
\caption{Spatial/temporal convergence tests: (a) Problem configuration; (b) $L^{2}$ errors of flow variables versus element order (fixed $\Delta t=0.001$ and $t_f=0.1$); (c) $L^{2}$ errors of flow variables versus $\Delta t$ (fixed element order $16$ and $t_f=0.1$). }
\label{standtest}
\end{figure}
The goal of this subsection is to demonstrate numerically the spatial and temporal convergence rates of the method developed herein using a contrived analytic solution with the proposed N-phase energy-stable open boundary conditions.
Consider the computational domain $\Omega=\overline{ABCD}:=\{(x,y)| 0\leq x \leq 2, -1 \leq y \leq 1 \}$ shown in Fig. \ref{standtest}(a) and a four-fluid (i.e., $N=4$) mixture contained in this domain. We assume the following analytic expressions for the flow variables of this four-phase system,
\begin{equation}\label{equ:contrivedsolu}
\begin{cases}
&u=A_0 \sin(ax) \cos(\pi y) \sin(\omega_0 t)\, ,\\
&v=-(A_0a/\pi)\cos(ax)\sin(\pi y) \sin (\omega_0 t)\,,\\
&P=A_0 \sin(ax)\sin(\pi y) \cos(\omega_0 t)\,,\\
& c_1=\cfrac{1}{6}\big[1+A_1\cos(a_1x)\cos(b_1 y)\sin(\omega_1 t)\big]\, ,\\
& c_2=\cfrac{1}{6}\big[1+A_2\cos(a_2x)\cos(b_2 y)\sin(\omega_2 t)\big] \, ,\\
& c_3=\cfrac{1}{6}\big[1+A_3\cos(a_3x)\cos(b_3 y)\sin(\omega_3 t)\big]\, ,\\
&c_4=1-c_1-c_2-c_3,
\end{cases}
\end{equation}
where $(u,v)$ are the two components of the velocity $\bs u.$ The above expressions satisfy the system of equations with an appropriate choice of the source terms. The source term $\bs f$ in \eqref{equ:nse_trans} is chosen such that the analytic expressions given in \eqref{equ:contrivedsolu} satisfy equation \eqref{equ:nse_trans}. We choose $ g_i\, (i=1,2,3)$ in equations \eqref{equ:CH} such that \eqref{equ:contrivedsolu} satisfies each of the equations \eqref{equ:CH}. The initial conditions \eqref{equ:ic_vel}-\eqref{equ:ic_phi} are imposed for the velocity and phase field functions, respectively, where $\bs u^{\rm in}$ and $c_i^{\rm in}\, (i=1,2,3)$ are obtained by setting $t=0$ in the contrived solution \eqref{equ:contrivedsolu}.
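To make the role of the manufactured solution concrete, the following sketch (a NumPy evaluation on a uniform sampling grid; the constants follow Table \ref{table:standtest}, while the perturbed ``numerical'' field and the simple mean-square quadrature are illustrative stand-ins for the actual spectral-element solution and quadrature) evaluates the exact velocity component $u$ and a discrete $L^2$ error.
\begin{verbatim}
import numpy as np

A0, a, w0 = 2.0, np.pi, 1.0                  # constants from Table: standtest
x, y = np.meshgrid(np.linspace(0, 2, 101), np.linspace(-1, 1, 101))

def exact_u(t):
    return A0 * np.sin(a * x) * np.cos(np.pi * y) * np.sin(w0 * t)

tf = 0.1
u_num = exact_u(tf) + 1e-5 * np.random.rand(*x.shape)  # placeholder field
err_L2 = np.sqrt(np.mean((u_num - exact_u(tf)) ** 2))  # discrete L2 error
\end{verbatim}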
The flow domain $\Omega$ is discretized using two quadrilateral spectral elements of equal size ($\overline{AEFD}$ and $\overline{EBCF}$). On the sides $\overline{AD},\,\overline{AB},\,\overline{BC},$ we impose the Dirichlet boundary condition \eqref{equ:dbc_vel} for the velocity field, where the boundary velocity $\bs w$ is chosen according to the analytic expressions in \eqref{equ:contrivedsolu}. For the phase field functions, we impose the wall contact-angle conditions \eqref{equ:bc_chempot} and \eqref{equ:wbc_phi_1_mod} on $\overline{AD}$ and $\overline{BC},$ and impose the Dirichlet conditions \eqref{equ:ibc_phi_1} and \eqref{equ:ibc_phi_2_mod} on $\overline{AB}.$ On the side $\overline{DC}$ we impose the open boundary conditions \eqref{equ:obc_vel_mod} with $(\theta,\alpha_1,\alpha_2)=(1,1,0)$ for the momentum equations,
and \eqref{equ:bc_chempot} and \eqref{equ:obc_phi_2_mod} for the phase field functions. The source terms $g_{ai},\, g_{bi},\, g_{ci},\,g_{ei},\, c_{bi}$ ($i=1,2,3$) and $\bs f_b$ therein are chosen such that the contrived solution in \eqref{equ:contrivedsolu} satisfies the boundary conditions.
\begin{table}[tb]
\centering
\begin{tabular}{l c| l c}
\hline
Parameter & Value & Parameter & Value \\
\hline
$a,\,a_1,\,a_2,\,a_3$ & $\pi$ & $b_1,\,b_2,\,b_3$ & $\pi$ \\
$A_0$ & 2.0 & $A_1,\,A_2,\,A_3$ & 1.0 \\
$\omega_0,\,\omega_1$ & 1.0 & $\omega_2$ & 1.2 \\
$\omega_3$ & 0.8 & $\eta$, $t_f$ & 0.1 \\
${\tilde \rho}_1$ & 1.0 &${\tilde \rho}_2$ & 3.0 \\
${\tilde \rho}_3$ &2.0 &${\tilde \rho}_4$ & 4.0 \\
${\tilde \mu}_1$ &0.01 &${\tilde \mu}_2$ &0.02 \\
${\tilde \mu}_3$ &0.03 &${\tilde \mu}_4$ &0.04 \\
$\sigma_{12}$ &$6.236\times 10^{-3}$ & $\sigma_{13} $&$7.265\times 10^{-3}$ \\
$\sigma_{14} $ &$3.727\times 10^{-3}$ &$\sigma_{23} $ & $8.165\times 10^{-3}$ \\
$\sigma_{24} $ &$5.270\times 10^{-3}$ &$\sigma_{34} $ & $ 6.455\times 10^{-3}$ \\
$\alpha_1,\, \theta$&1.0 & $\alpha_2$& 0 \\
$ \delta$&0.05 &
$d_0$& 0.2\\
$m_0$ &$1.0\times 10^{-5}$ &$\mu_0$ & ${\rm max}(\tilde \mu_1,\cdots, \tilde \mu_4)$ \\
$\rho_0$ & ${\rm min}(\tilde \rho_1,\cdots,\tilde \rho_4) $ &$\nu_m$ & $\frac{1}{2} \big[{\rm max}\{ \frac{\tilde \mu_i}{\tilde \rho_i} \}_{i=1}^4+{\rm min}\{\frac{\tilde \mu_i}{\tilde \rho_i} \}_{i=1}^4\big]$ \\
$J$ (temporal order) & 2& Number of elements & 2 \\
$\Delta t$ & (varied) & Element order & (varied) \\
\hline
\end{tabular}
\caption{Simulation parameter values for the convergence-rate tests.}
\label{table:standtest}
\end{table}
The numerical algorithm from Sections \ref{sec:algorithm}--\ref{sect: SEM} is employed to integrate in time the governing equations for this four-phase system from $t=0$ to $t=t_f.$ Then the numerical solution and the exact solution as given by \eqref{equ:contrivedsolu} at $t=t_f$ are compared, and the errors in the $L^2$ norm for various flow variables are computed.
All the physical and numerical parameters involved in the simulation of this problem, including the values of constants $A_i$ and $\omega_i$ ($i=0,\cdots,3$), $a,$ $a_i$ and $b_i$ ($i=1,2,3$) in the contrived solution \eqref{equ:contrivedsolu}, are tabulated in Table \ref{table:standtest}.
Both spatial and temporal convergence tests have been performed to demonstrate the reliability of the proposed algorithm. In the first test, we fix the integration time at $t_f=0.1$ and the time step size at $\Delta t=0.001$ ($100$ time steps), and vary the element order systematically between $2$ and $20.$ The same element order has been used for these two spectral elements. Fig. \ref{standtest}(b) plots the numerical errors at $t=t_f$ in $L^2$ norm for different flow variables as a function of the element order. It is evident that within a specific range of the element order (below around $12$), the errors decrease exponentially when increasing element order, displaying an exponential convergence rate in space. Beyond the element order of about $12,$ the error curves level off as the element order further increases, showing a saturation caused by the temporal truncation error.
In the second test, we fix the integration time at $t_f=0.1$ and the element order at a large value $16,$ and vary the time step size systematically between $\Delta t=1.953125\times 10^{-4}$ and $\Delta t=0.025.$ Fig. \ref{standtest}(c) shows the numerical errors at $t=t_f$ in $L^2$ norm for different variables as a function of $\Delta t$ in logarithmic scales. It can be observed that the numerical errors exhibit a second order convergence rate in time.
The above numerical results indicate that the numerical algorithm developed herein has a spatial exponential convergence rate and a temporal second-order convergence rate for multi-phase problems with energy-stable open boundary conditions.
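A standard way to quantify such convergence rates from measured errors is to compare errors at successive resolutions. The sketch below (with hypothetical error values constructed to be consistent with second-order behavior, not the actual measured data) estimates the observed temporal order.
\begin{verbatim}
import numpy as np

dts = np.array([4e-4, 2e-4, 1e-4])           # time step sizes
errs = np.array([1.6e-6, 4.0e-7, 1.0e-7])    # hypothetical L2 errors, ~O(dt^2)
orders = np.log(errs[:-1] / errs[1:]) / np.log(dts[:-1] / dts[1:])
print(orders)                                # prints approximately [2.0, 2.0]
\end{verbatim}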
\subsection{A Three-Phase Capillary Wave Problem}
\begin{figure}[tbp]
\centering
\subfigure[Configuration for three-phase capillary wave problem ]{ \includegraphics[scale=.4]{capillaryconfigure.pdf}} \qquad
\subfigure[Spectral element mesh ]{ \includegraphics[scale=.4]{capillarymesh.pdf}}
\caption{Three-phase capillary wave problem: (a) Computational domain and configuration. (b) Spectral element mesh of $800$ quadrilateral elements.}
\label{CapillaryConfig}
\end{figure}
In this subsection, we use a three-phase capillary wave problem as a benchmark to
test the physical accuracy of the current method with energy-stable open boundary conditions.
The problem setting is as follows.
We consider three immiscible incompressible fluids contained in an infinite domain
(see Fig.~\ref{CapillaryConfig}(a) for an illustration).
The upper portion of the domain is occupied by the lightest fluid (fluid $\#1$), the lower portion by the heaviest fluid (fluid $\#3$), and the middle by fluid $\#2.$ Gravity is assumed to point in the downward direction.
The interfaces formed between fluid $\#1$ and fluid $\#2$ (interface $\#1$) and between fluid $\#2$ and fluid $\#3$ (interface $\#2$) are perturbed from their horizontal equilibrium positions by a small amplitude sinusoidal wave form, and start to oscillate at $t=0.$
The objective here is to study the motion of the interfaces over time.
Although this is a three-phase problem,
if the two interfaces are far apart and the capillary-wave amplitudes are sufficiently
small compared with the distance between the interfaces and
the dimension of the domain in the vertical direction,
the interaction between the interfaces will be weak.
The motion of each interface will therefore be essentially the same as
that of the interface alone in a two-phase setting, i.e.~with
the third fluid absent.
This allows us to compare qualitatively and quantitatively
the numerical results for the three-phase capillary-wave simulations
with e.g.~Prosperetti's exact physical solution
(see \cite{Prosperetti1981}) for two-phase capillary-wave problems.
In \cite{Prosperetti1981} an exact time-dependent standing-wave solution to the
two-phase capillary-wave problem was derived, under the condition that the two fluids have matched kinematic viscosities (their densities and dynamic viscosities can be different).
In what follows, we will simulate the three-phase capillary-wave problem
under the following settings:
(i) the two interfaces are far apart;
(ii) the capillary amplitudes
are small compared with both the distance between the interfaces and the
vertical dimension of the domain; and (iii)
the kinematic viscosity $\nu$ satisfies $ \nu=\frac{\tilde \mu_1}{\tilde \rho_1}=\frac{\tilde \mu_2}{\tilde \rho_2}=\frac{\tilde \mu_3}{\tilde \rho_3}$.
Specifically, the simulation setting is illustrated in Fig.~\ref{CapillaryConfig}(a).
We consider the computational domain $\Omega=\{(x,y)| 0\leq x \leq 1, -3 \leq y \leq 1 \}.$ The bottom side of the domain is a solid wall of neutral wettability,
and the top side is open where the fluid can freely leave (or enter) the domain.
On the left and right sides, all the variables are assumed to be periodic at $x=0$ and $x=1.$ The equilibrium positions of the fluid interface $\#1$ and interface $\#2$ are assumed to coincide with $y=0$ and $y=-2$, respectively. The initial perturbed profiles of the fluid interface $\#1$ and interface $\#2$ are given by $y=H_0 \cos(k_w x)$ and $y=y_1+H_0\cos(k_w x),$ respectively, where $y_1=-2,$ $H_0=0.01$ is the initial amplitude, $\lambda_w=1$ is the wavelength of the perturbation profiles, and $k_w=2\pi/\lambda_w$ is the wave number. Note that the initial capillary amplitude $H_0$ is small compared with the dimension of the domain in the vertical direction and the distance between the two fluid interfaces.
Therefore, the effect of the wall at the domain bottom and
the influence of the third fluid on the motion of
the fluid interface will be small.
\begin{table}[tbp]
\centering
\begin{tabular}{l c| l c}
\hline
Parameter & Value & Parameter & Value \\
\hline
$H_0$ &0.01 & $k_w$ (wave number) & $2\pi$ \\
$\sigma_{ij}\, (1\leq i\neq j\leq 3)$ & 1.0 & $|\bs g_r|$ (gravity) & 1.0 \\
$\tilde \rho_1$ & 1.0 & $\tilde \rho_2,\,\tilde \rho_3$ & (varied) \\
$\tilde \mu_1$ & 0.01 & $\tilde \mu_2$ & $\tilde \mu_1 \frac{\tilde \rho_2}{\tilde \rho_1}$ \\
$\tilde \mu_3$ & $\tilde \mu_1 \frac{\tilde \rho_3}{\tilde \rho_1}$ & $\nu=\frac{\tilde \mu_1}{\tilde \rho_1}=\frac{\tilde \mu_2}{\tilde \rho_2}=\frac{\tilde \mu_3}{\tilde \rho_3}$ (kinematic viscosity) & $0.01$ \\
$\delta$&0.05&$\mu_0$ & ${\rm max}(\tilde \mu_1,\tilde \mu_2, \tilde \mu_3)$\\
$\theta$ &1.0&$\alpha_1,$ $\alpha_2$ & $0$ \\
$d_0$& 0 & Element order & 8 \\
$\rho_0$ & ${\rm min}(\tilde \rho_1,\cdots,\tilde \rho_3) $ &$\nu_m$ & 0.01 \\
$J$ (temporal order) & 2& Number of elements & 800 \\
$m_0$ &(varied) & $\eta$ & (varied) \\
$\Delta t$ & (varied) \\
\hline
\end{tabular}
\caption{Simulation parameter values for the three-phase capillary wave problem.}
\label{table:capillary}
\end{table}
The computational domain is partitioned with $800$ quadrilateral elements,
with $10$ and $80$ elements respectively in $x$ and $y$ directions (Fig.~\ref{CapillaryConfig}(b)). The elements are uniform in the $x$ direction, and are non-uniform and clustered around the regions $-0.012\leq y \leq 0.012$ and $-2.012 \leq y \leq -1.988.$
In the simulations, the external body force $\bs f$ in equation \eqref{equ:nse_trans} is set to $\bs f=\rho \bs g_r,$ where $\bs g_r$ is the gravitational acceleration, and
the source terms in \eqref{equ:CH} are set to $g_i=0 \,(i=1,2).$ On the bottom wall, the boundary condition \eqref{equ:dbc_vel} with $\bs w=\bs 0$ is imposed for the velocity, and the boundary conditions \eqref{equ:bc_chempot} and \eqref{equ:wbc_phi_1_mod} with $g_{bi}=g_{ci}=0\,(i=1,2)$ are imposed for the phase field functions.
On the top domain boundary, the energy-stable open boundary condition \eqref{equ:obc_vel_mod}
with $\bs f_b=\bs 0$ and $(\theta,\alpha_1,\alpha_2)=(1,0,0)$ is imposed for the momentum equation, and the conditions
\eqref{equ:bc_chempot} and \eqref{equ:obc_phi_2_mod}
with $g_{bi}=g_{ei}=0\,(i=1,2)$ and $d_0=0$ are imposed for the phase field functions.
The initial velocity is set to zero, and the initial volume fractions are prescribed as follows:
\begin{equation}\label{equ:capiinitial}
\begin{cases}
&c_1=\cfrac{1}{2}\Big[ 1+\tanh \cfrac{y-H_0 \cos ( k_w x)}{\sqrt{2} \eta} \Big],\\
&c_2=\cfrac{1}{2}\Big[ \tanh \cfrac{y-y_1-H_0 \cos( k_w x)}{\sqrt{2} \eta} - \tanh \cfrac{y-H_0 \cos( k_w x)}{\sqrt{2} \eta} \Big],\\
&c_3=1-c_1-c_2.
\end{cases}
\end{equation}
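For concreteness, the initial profiles \eqref{equ:capiinitial} can be evaluated as in the following sketch (a NumPy version on a uniform sampling grid; the grid is illustrative and unrelated to the spectral-element mesh, and $\eta=0.005$ is one of the values used in the tests below).
\begin{verbatim}
import numpy as np

H0, kw, y1, eta = 0.01, 2 * np.pi, -2.0, 0.005
x, y = np.meshgrid(np.linspace(0, 1, 101), np.linspace(-3, 1, 401))

h1 = H0 * np.cos(kw * x)                     # perturbed interface #1
h2 = y1 + H0 * np.cos(kw * x)                # perturbed interface #2
c1 = 0.5 * (1.0 + np.tanh((y - h1) / (np.sqrt(2) * eta)))
c2 = 0.5 * (np.tanh((y - h2) / (np.sqrt(2) * eta))
            - np.tanh((y - h1) / (np.sqrt(2) * eta)))
c3 = 1.0 - c1 - c2
\end{verbatim}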
We list in Table \ref{table:capillary} the values for the physical and numerical parameters involved in this problem.
\begin{figure}[tbp]
\centering
\subfigure[Interface $\#1$ with various element orders]{ \includegraphics[scale=.4]{interf1order.pdf}}
\qquad
\subfigure[Interface $\#2$ with various element orders]{ \includegraphics[scale=.4]{interf2order.pdf}}
\subfigure[Interface $\#1$ with various $\Delta t$]{ \includegraphics[scale=.4]{interf1timestep.pdf}}
\qquad
\subfigure[Interface $\#2$ with various $\Delta t$]{ \includegraphics[scale=.4]{interf2timestep.pdf}}
\caption{Three-phase capillary wave problem (matched density $\tilde \rho_1=\tilde \rho_2=\tilde \rho_3=1$). (a)-(b): Effect of spatial resolution (element order) on the capillary amplitude history. Simulation results are obtained with a fixed time step size $\Delta t=10^{-4},$ interfacial thickness $\eta=0.01,$ mobility $m_0=10^{-5}$ and various element orders.
(c)-(d): Effect of time step size on the capillary amplitude history. Simulation results are obtained with a fixed element order 8, interfacial thickness $\eta=0.005$, mobility $m_0=10^{-5}$ and various time step sizes $\Delta t$. }
\label{varyordertime}
\end{figure}
Let us first focus on a matched density for the three fluids, i.e., $\tilde \rho_1=\tilde \rho_2=\tilde \rho_3=1$, and study the effects of several parameters on
the simulation results.
We have performed extensive tests to ensure that our simulation results
have converged with respect to the spatial and temporal resolutions.
Fig.~\ref{varyordertime}(a)-(b) show a spatial resolution test.
Here we compare the time histories of the capillary wave amplitudes of the interfaces $\#1$ and $\#2$ obtained with several element orders ranging from 6 to 12 in the simulations.
The history curves corresponding to different element orders overlap with one another,
suggesting independence of the results with respect to the grid resolution.
Fig.~\ref{varyordertime}(c)-(d) show a temporal resolution test.
We compare the capillary wave amplitude histories obtained using several time step sizes.
The results clearly indicate convergence with respect to $\Delta t.$
These resolution tests suggest that an element order of $8$
and a time step size of $\Delta t=10^{-4}$ are sufficient for the spatial
and temporal resolutions with the current spectral element mesh.
Therefore, the majority of subsequent simulations will be conducted
using these parameter values.
\begin{figure}[tbp]
\centering
\subfigure[Interface $\#1$ with various $m_0$]{ \includegraphics[scale=.4]{interf1varymob.pdf}}
\qquad
\subfigure[Interface $\#2$ with various $m_0$]{ \includegraphics[scale=.4]{interf2varymob.pdf}}
\subfigure[Interface $\#1$ with various $\eta$]{ \includegraphics[scale=.4]{interf1varyeta.pdf}}
\qquad
\subfigure[Interface $\#2$ with various $\eta$]{ \includegraphics[scale=.4]{interf2varyeta.pdf}}
\caption{Capillary wave problem (matched density $\tilde \rho_1=\tilde \rho_2=\tilde \rho_3=1$). (a)-(b): Comparison of capillary amplitude histories corresponding to
different mobility $m_0$ values and Prosperetti's exact solution~\cite{Prosperetti1981}.
Simulation results correspond to a time step size $\Delta t=10^{-4},$ element order $8$,
and interfacial thickness $\eta=0.005.$
(c)-(d): Comparison of capillary amplitude histories corresponding to
different interfacial thickness $\eta$ values and Prosperetti's exact solution.
Simulation results correspond to a time step size $\Delta t=10^{-4},$ element order $8$,
and mobility $m_0=5 \times 10^{-7}.$}
\label{varymobeta}
\end{figure}
The effect of the mobility coefficient $m_0$ on the simulation results
is shown by Fig.~\ref{varymobeta}(a)-(b), in which we
compare the time histories of the capillary wave amplitudes of the two interfaces
obtained with a fixed interfacial thickness scale $\eta=0.005$ and
various mobility values ranging between $m_0=3\times 10^{-5}$ and $m_0=10^{-8}$.
The exact physical solution given by \cite{Prosperetti1981} for
this case is also
included in the figure for comparison.
It is observed that the computation becomes unstable
if $m_0$ is too large (larger than around $m_0=3\times 10^{-5}$).
As $m_0$ decreases from $3\times 10^{-5}$ to $10^{-8}$, we initially observe
an effect on the amplitude and phase of the history signals obtained from
the simulations.
But as $m_0$ becomes sufficiently small,
the difference in the simulated capillary amplitude histories becomes very small,
and the history curves converge to the exact solution by \cite{Prosperetti1981}.
In fact, when $m_0$ decreases below $10^{-6}$, the difference between the numerical results and the theoretical solution is negligible.
Fig.~\ref{varymobeta}(c)-(d) show the effect of the interfacial thickness
scale $\eta$ on the simulation results.
In this figure we compare time histories of the capillary amplitude obtained with the interfacial thickness scale parameter $\eta$ ranging from $0.02$ to $0.003$ with a fixed mobility $m_0=5\times 10^{-7}.$ The exact physical solution is also included in the plots.
Some influence on the amplitude and the phase of the history curves can be observed
as $\eta$ decreases from $0.02$ to $0.01.$ As $\eta$ decreases further to $\eta=0.0075$ and below, on the other hand, the history curves essentially overlap with one another and little difference can be discerned among them, suggesting a convergence of the results with respect to $\eta.$
\begin{figure}[tbp]
\caption{Three-phase capillary wave (different density ratios): Comparison of time histories of the capillary wave amplitude between simulation and Prosperetti's exact solution for densities $(\tilde \rho_1,\tilde \rho_2,\tilde \rho_3)=(1,10,10)$ ((a)-(b)), $(1,10,100)$ ((c)-(d)),
$(1,100,100)$ ((e)-(f)), $(1,10,1000)$ ((g)-(h)), and $(1,1,1000)$ ((i)-(j)).
The simulation results are obtained with a time step size $\Delta t=10^{-4}$ for (a)-(f), $\Delta t=2\times 10^{-5}$ for (g)-(j), an element order 8, interfacial thickness $\eta=0.003$, and mobility $m_0=5 \times 10^{-7}.$}
\centering
\subfigure[Interface $\#1$ with $(\tilde \rho_1,\tilde \rho_2,\tilde \rho_3)=(1,10,10)$]{ \includegraphics[scale=.4]{interf1rho1_10_10.pdf}}
\qquad
\subfigure[Interface $\#2$ with $(\tilde \rho_1,\tilde \rho_2,\tilde \rho_3)=(1,10,10)$]{ \includegraphics[scale=.4]{interf2rho1_10_10.pdf}}
\subfigure[Interface $\#1$ with $(\tilde \rho_1,\tilde \rho_2,\tilde \rho_3)=(1,10,100)$]{ \includegraphics[scale=.4]{interf1rho1_10_100.pdf}}
\qquad
\subfigure[Interface $\#2$ with $(\tilde \rho_1,\tilde \rho_2,\tilde \rho_3)=(1,10,100)$]{ \includegraphics[scale=.4]{interf2rho1_10_100.pdf}}
\subfigure[Interface $\#1$ with $(\tilde \rho_1,\tilde \rho_2,\tilde \rho_3)=(1,100,100)$]{ \includegraphics[scale=.4]{interf1rho1_100_100.pdf}}
\qquad
\subfigure[Interface $\#2$ with $(\tilde \rho_1,\tilde \rho_2,\tilde \rho_3)=(1,100,100)$]{ \includegraphics[scale=.4]{interf2rho1_100_100.pdf}}
\label{densityvary1}
\end{figure}
\begin{figure}[htbp]\ContinuedFloat
\centering
\subfigure[interface $\#1$ with $(\tilde \rho_1,\tilde \rho_2,\tilde \rho_3)=(1,10,1000)$]{ \includegraphics[scale=.4]{interf1rho1_10_1000.pdf}}
\qquad
\subfigure[interface $\#2$ with $(\tilde \rho_1,\tilde \rho_2,\tilde \rho_3)=(1,10,1000)$]{ \includegraphics[scale=.4]{interf2rho1_10_1000.pdf}}
\subfigure[interface $\#1$ with $(\tilde \rho_1,\tilde \rho_2,\tilde \rho_3)=(1,1,1000)$]{ \includegraphics[scale=.4]{interf1rho1_1_1000.pdf}}
\qquad
\subfigure[interface $\#2$ with $(\tilde \rho_1,\tilde \rho_2,\tilde \rho_3)=(1,1,1000)$]{ \includegraphics[scale=.4]{interf2rho1_1_1000.pdf}}
\end{figure}
Let us next investigate the effect of density ratios
on the motion of the fluid interfaces.
In these tests we vary the densities and dynamic viscosities of the fluid $\#2$ and fluid $\#3$
($\tilde \rho_2$, $\tilde \rho_3$ and $\tilde \mu_2$, $\tilde \mu_3$) systematically
while the relation $\nu=\frac{\tilde \mu_1}{\tilde \rho_1}=\frac{\tilde \mu_2}{\tilde \rho_2}=\frac{\tilde \mu_3}{\tilde \rho_3}$ is maintained, as required by the theoretical solution in \cite{Prosperetti1981}.
In Fig. \ref{densityvary1}, we show the time histories of the capillary amplitudes
corresponding to five density contrasts,
$(\tilde \rho_1,\tilde \rho_2,\tilde \rho_3)$ equal (a)-(b): $(1,10,10)$, (c)-(d): $(1,10,100)$,
(e)-(f): $(1,100,100)$, (g)-(h): $(1,10,1000)$, and (i)-(j): $(1,1,1000)$,
and compare them with the theoretical solutions from \cite{Prosperetti1981}.
The simulation results are obtained with an
element order 8, interfacial thickness $\eta=0.003$, and mobility $m_0=5 \times 10^{-7}$.
The time step size in the simulations is $\Delta t=10^{-4}$ for the plots (a)-(f),
and a smaller $\Delta t=2\times 10^{-5}$ for the cases involving
$\tilde{\rho}_3=1000$ (plots (g)-(j)) in order to ensure the stability of simulations.
We observe that the density contrasts have a dramatic effect on the motions of
the interfaces, and the dynamics of the two interfaces become very different.
Under the same density ratio, an increase in the density values appears to cause
the period of oscillation to increase
and the attenuation of the oscillation amplitude to be more pronounced;
see e.g.~Figs.~\ref{densityvary1}(c) and (d).
An increase in the density ratio seems to have a similar effect with respect to
the oscillation amplitude and period; compare e.g.~Figs.~\ref{densityvary1}(a) and (e).
It can also be observed that
the history curves from the simulations essentially overlap with those of the
exact solutions for all the density contrasts and little difference can be perceived,
indicating that our method has captured the dynamics of the
fluid interfaces correctly.
The three-phase capillary wave problem and in particular the comparisons
with Prosperetti's exact solution for this problem demonstrate that the N-phase formulation with the proposed open boundary conditions and the numerical method
developed herein (with $N=3$) have produced physically accurate results for a wide range of density ratios (up to density ratio $1000$ tested here).
\subsection{Interaction of Two Liquid Jets in Ambient Water}
\begin{figure}[tb]
\centering
\includegraphics[width=0.35\textwidth]{oiljetconfigure.pdf}
\caption{\small Configuration of the interaction of $F_1$-oil jets in water. }
\label{oiljetconfig}
\end{figure}
In this subsection, we test the proposed open boundary conditions and
the numerical method by considering the interactions of two fluid jets
in an infinite expanse of ambient water.
The two jets consist of two different liquids. One of the jets is oil,
and the other is a liquid referred to as ``F$_1$''. The F$_1$ liquid
is assumed to be lighter than water and immiscible with both oil and water.
This test problem involves multiphase inflow/outflow boundaries. How to
deal with such boundaries is critical to the successful simulation of this
problem.
Specifically,
we consider a rectangular flow domain $\Omega=\{(x,y)|-0.5L \leq x\leq 0.5L, 0 \leq y \leq 1.5L \}$ where $L=6cm$, as shown in Fig.~\ref{oiljetconfig}. The bottom side of the domain ($y=0$) is a solid wall of neutral wettability. The other three sides of the domain are all open
where the fluids can enter or leave the domain freely.
The domain initially contains water inside. The bottom wall has two orifices, each having a diameter $0.2L$. The centers of the two orifices are located at $(x_1,y_1)=(-0.2L,0)$ and $(x_2,y_2)=(0.2L,0)$, respectively. A jet of a certain fluid labeled by F$_1$ enters the domain through the left orifice, and
a jet of oil is introduced into the domain through the right orifice.
The gravity $\bs g_r$ is assumed to point downward ($-y$ direction).
The configuration of this problem models the motion of the $F_1$ jet and the oil jet
in an infinite expanse of water.
The two jets rise through the water due to buoyancy,
interact with each other, and move out of the domain through the open boundaries.
The goal here is to investigate the long-time behavior of this three-phase system.
\begin{table}[tbp]
\centering
\begin{tabular}{l l l l}
\hline
Density [$kg/m^3$]:& $F_1$ - 600 & water - 998.2071& oil - 400 or 100\\
Dynamic viscosity [$kg/(m\cdot s)$]: &$F_1$ - $2\times 10^{-2}$ & water - $1.002\times 10^{-3}$ & oil - $9.15\times 10^{-2}$\\
Surface tension [$kg/s^2$]:&$F_1/$water - $4.5\times 10^{-2}$ &$F_1/$oil - $4.8\times 10^{-2}$ & oil/water - $4.4\times 10^{-2}$ \\
Gravity [$m/s^2$]:&9.8&&\\
\hline
\end{tabular}
\caption{Physical property values of fluids $F_1$, water and oil.}
\label{table:oiljet}
\end{table}
The physical properties (including the densities, viscosities, pair-wise surface tensions) of F$_1$, water, and oil employed in this problem, as well as the gravitational acceleration, are listed in Table \ref{table:oiljet}. We choose $L=6cm$ as the length scale, the density of F$_1$ as the density scale $\varrho_d,$ and the centerline velocity at
the orifices as the velocity scale $U_0$.
Then the problem is non-dimensionalized based on Table \ref{table:normalization}. In what follows, all physical and numerical parameters have been properly normalized.
\begin{table}[tbp]
\centering
\begin{tabular}{l c| l c}
\hline
Parameter & Value & Parameter & Value \\
\hline
$x_1$&-0.2&$x_2$&0.2\\
$R$&0.1&$d_0$&0.5\\
$\alpha_1,$ $\theta$ &1 & $\alpha_2$& 0\\
$\delta$&0.01&$\mu_0$ & ${\rm max}(\tilde \mu_1, \tilde \mu_2, \tilde \mu_3)$\\
$\rho_0$ & ${\rm min}(\tilde \rho_1,\cdots,\tilde \rho_3) $ &$\nu_m$ & $1.56\times 10^{-2}$ \\
$m_0$ &$1\times 10^{-8}$ & $\eta$ & 0.01 \\
$J$ (temporal order) & 2& Number of elements & 600 \\
$\Delta t$ & $2\times 10^{-5}$ & Element order & 6 \\
\hline
\end{tabular}
\caption{Simulation parameter values for the interaction of two liquid jets in ambient water.}
\label{table:oiljet2}
\end{table}
In the numerical experiments, we specify $F_1$, water and oil as the first, second, and the third fluids, with the normalized densities $\tilde \rho_1$, $\tilde \rho_2$, and $ \tilde \rho_3,$ respectively. We discretize the computational domain with a mesh of $600$ quadrilateral elements of uniform size, with $20$ elements in the $x$ direction and $30$ elements in the $y$ direction. The element order is $6$ for all the elements. The time step size is chosen as $\Delta t=2\times 10^{-5}$ and all the simulation results afterwards are obtained with interfacial thickness $\eta=0.01$, and mobility $m_0=10^{-8}.$
To balance the gravitational force on the water, in the simulations we also apply an external pressure gradient pointing upward ($y$ direction) in the whole domain with a magnitude $\rho_w |\bs g_r|,$ where $\rho_w$ is the density of water. As a result, the region occupied by water has no net external body force exerted on it.
The external body force $\bs f$ in equation \eqref{equ:nse_trans} is set to $\bs f=\rho \bs g_r-\tilde \rho_2 \bs g_r,$ where $\tilde \rho_2$ and $\bs g_r$ are the normalized density of water and gravitational acceleration, respectively. The source terms in \eqref{equ:CH} are set to $g_i=0 \,(i=1,2).$ On the bottom wall (excluding the fluid inlets), we impose the Dirichlet boundary condition \eqref{equ:dbc_vel} for the velocity with $\bs w=\bs 0$ and the boundary conditions \eqref{equ:bc_chempot} and \eqref{equ:wbc_phi_1_mod} with $g_{bi}=g_{ci}=0\,(i=1,2)$ for the phase field variables. At the F$_1$ and oil inlets, we assume a parabolic profile for
the velocity, i.e.~$\bs w=(0,w_y)$ in \eqref{equ:dbc_vel} with
\begin{equation}\label{equ:wxy}
w_y=U_0\left[1-\Big(\frac{x-x_1}{R}\Big)^2\right], \;\;x\in (x_1-R,x_1+R);\;\;\;
w_y=U_0\left[1-\Big(\frac{x-x_2}{R}\Big)^2\right], \;\;x\in (x_2-R,x_2+R),\quad
\end{equation}
where $R=0.1L$ is the radius of the orifice and $U_0=24.49cm/s$ is the centerline velocity at the orifices.
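For illustration, the inlet profile \eqref{equ:wxy} can be evaluated as follows (a minimal NumPy sketch in normalized units, where $U_0=1$ and $R=0.1$; the sampling grid is our own choice).
\begin{verbatim}
import numpy as np

U0, R = 1.0, 0.1                   # normalized centerline velocity and radius
x1, x2 = -0.2, 0.2                 # orifice centers

def inlet_wy(x):
    """Parabolic inlet profile equ:wxy; zero outside the two orifices."""
    wy = np.zeros_like(x)
    for xc in (x1, x2):
        m = np.abs(x - xc) < R
        wy[m] = U0 * (1.0 - ((x[m] - xc) / R) ** 2)
    return wy

x = np.linspace(-0.5, 0.5, 201)
wy = inlet_wy(x)                   # vanishes at the orifice edges, peaks at U0
\end{verbatim}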
For the phase field functions we impose the following distributions at
the two fluid inlets,
\begin{equation}\label{equ:oiljetphase}
c_1=1,\;\;c_2=0, \ x\in (x_1-R,x_1+R);\;\;\; c_1=0,\;\;c_2=0, \;\;x\in (x_2-R,x_2+R).
\end{equation}
This distribution means that only the F$_1$ fluid is present at the left inlet,
and only the oil is present at the right inlet.
On the other three sides, we impose the open boundary
conditions \eqref{equ:obc_vel_mod}, \eqref{equ:bc_chempot}
and \eqref{equ:obc_phi_2_mod}, respectively for the velocity and the phase field functions, where $\bs f_b=\bs 0$, $g_{bi}=g_{ei}=0,\ (i=1,2)$, and $(\theta,\alpha_1,\alpha_2)=(1,1,0)$.
In \eqref{equ:obc_vel_mod}, the $d_0$ value is determined by the following procedure.
We first perform a preliminary simulation using $d_0=0$, and then
estimate a convection velocity scale at the outlet boundary. Then $d_0$
is set to the inverse of this convection velocity scale.
For the current problem, $d_0$ is set to $0.5$ based on this procedure.
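For instance, if the preliminary run with $d_0=0$ indicated a convection velocity scale of about $2$ (in units of $U_0$) at the outflow boundary, this procedure would give $d_0=1/2=0.5$ in normalized units, which is the value adopted here; the intermediate estimate of the convection scale is our illustrative assumption.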
The initial velocity is set to zero, and the initial volume fractions are set as follows:
\begin{equation}\label{equ:oilphaseinit}
\begin{cases}
&c_1=\big[ H(x- x_1+R)-H(x-x_1-R)\big] \big[H(y)-H(y-2R)\big],\\
&c_2=1-c_1-c_3 ,\\
&c_3=\big[ H(x-x_2+R)-H(x-x_2-R)\big] \big[H(y)-H(y-2R)\big],
\end{cases}
\end{equation}
where $H(x)$ is the Heaviside step function, taking the unit value if $x\geqslant 0$
and vanishing otherwise.
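For illustration, the initial distributions \eqref{equ:oilphaseinit} can be evaluated as follows (a minimal NumPy sketch on a uniform sampling grid of our own choosing; note that \texttt{np.heaviside} with second argument $1.0$ matches the convention $H(0)=1$ used above).
\begin{verbatim}
import numpy as np

R, x1, x2 = 0.1, -0.2, 0.2
x, y = np.meshgrid(np.linspace(-0.5, 0.5, 201), np.linspace(0.0, 1.5, 301))

H = lambda s: np.heaviside(s, 1.0)           # H(0) = 1, per the convention
c1 = (H(x - x1 + R) - H(x - x1 - R)) * (H(y) - H(y - 2 * R))
c3 = (H(x - x2 + R) - H(x - x2 - R)) * (H(y) - H(y - 2 * R))
c2 = 1.0 - c1 - c3
\end{verbatim}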
It should be noted that these initial distributions for
the phase field functions and the velocity have no effect on
the long-term behavior of the system. Any transient influence
will be convected out of the domain eventually.
The values for the simulation parameters in this problem
are collected in Table \ref{table:oiljet2}.
We have considered two cases, corresponding to two different density
values for the oil: $400kg/m^3$ in the first case, and
$100 kg/m^3$ in the second case.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.4\textwidth,height=0.4\textwidth]{velocityrho1.pdf}
\caption{\small \small Time histories of the maximum and average velocity magnitudes for
the case $(\tilde \rho_1, \tilde \rho_2, \tilde \rho_3)=(1,1.664, 0.667)$, showing that the flow has reached a statistically stationary state.}
\label{velositymagrho1}
\end{figure}
Let us first consider the case with an oil density $400kg/m^3$.
The normalized densities for $F_1,$ water and oil are $(\tilde \rho_1,\tilde \rho_2,\tilde \rho_3)=(1,1.664, 0.667)$ for this case.
We have performed a long-time simulation of the problem so that the flow has reached a statistically stationary state.
We have monitored the following maximum magnitudes $(U_{\rm max}, V_{\rm max})$ and average magnitudes $(U_{\rm ave}, V_{\rm ave})$ of the $x$ and $y$ velocity components at each time step:
\begin{equation}\label{equ:velmagnitude}
\begin{aligned}
&U_{\rm max}(t)= {\rm max}_{\bs x\in \Omega}|u(\bs x, t)|,\quad V_{\rm max}(t)= {\rm max}_{\bs x\in \Omega}|v(\bs x, t)|; \\
&U_{\rm ave}(t)=\Big(\frac{1}{V_{\Omega}}\int_{\Omega}|u|^2 d{\Omega} \Big)^{\frac{1}{2}},\quad V_{\rm ave}(t)=\Big(\frac{1}{V_{\Omega}}\int_{\Omega}|v|^2 d{\Omega} \Big)^{\frac{1}{2}},
\end{aligned}
\end{equation}
where $V_{\Omega}=\int_{\Omega}d\Omega$ is the volume of the domain. Fig.~\ref{velositymagrho1}
shows a temporal window of the time histories of these velocity magnitudes.
It can be observed that while these physical quantities fluctuate over time, their fluctuations
are all around some constant mean values,
indicating that the flow has reached a statistically stationary state.
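A discrete analogue of the monitored quantities \eqref{equ:velmagnitude} is sketched below (a NumPy version on a uniform sampling grid with placeholder velocity fields; the actual computation uses spectral-element quadrature).
\begin{verbatim}
import numpy as np

x, y = np.meshgrid(np.linspace(-0.5, 0.5, 201), np.linspace(0.0, 1.5, 301))
u = np.sin(2 * np.pi * x) * y                # placeholder velocity components
v = np.cos(2 * np.pi * x) * (1.5 - y)

U_max, V_max = np.abs(u).max(), np.abs(v).max()
U_ave = np.sqrt(np.mean(u ** 2))             # (1/V int |u|^2 dOmega)^(1/2)
V_ave = np.sqrt(np.mean(v ** 2))
\end{verbatim}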
\begin{figure}[tbp]
\centering
\subfigure[$t=303.62$]{ \includegraphics[scale=.22]{rho1contoursnap0.pdf}} \hspace*{-10pt}
\subfigure[$t=303.72$]{ \includegraphics[scale=.22]{rho1contoursnap1.pdf}} \hspace*{-10pt}
\subfigure[$t=303.82$]{ \includegraphics[scale=.22]{rho1contoursnap2.pdf}} \hspace*{-10pt}
\subfigure[$t=303.92$]{ \includegraphics[scale=.22]{rho1contoursnap3.pdf}} \\
\subfigure[$t=304.02$]{ \includegraphics[scale=.22]{rho1contoursnap4.pdf}} \hspace*{-10pt}
\subfigure[$t=304.12$]{ \includegraphics[scale=.22]{rho1contoursnap5.pdf}} \hspace*{-10pt}
\subfigure[$t=304.22$]{ \includegraphics[scale=.22]{rho1contoursnap6.pdf}} \hspace*{-10pt}
\subfigure[$t=304.32$]{ \includegraphics[scale=.22]{rho1contoursnap7.pdf}} \\
\subfigure[$t=304.42$]{ \includegraphics[scale=.22]{rho1contoursnap8.pdf}} \hspace*{-10pt}
\subfigure[$t=304.52$]{ \includegraphics[scale=.22]{rho1contoursnap9.pdf}} \hspace*{-10pt}
\subfigure[$t=304.62$]{ \includegraphics[scale=.22]{rho1contoursnap10.pdf}} \hspace*{-10pt}
\subfigure[$t=304.72$]{ \includegraphics[scale=.22]{rho1contoursnap11.pdf}}
\caption{Temporal sequence of snapshots of fluid interfaces, visualized by the volume-fraction contours $c_i=1/2\, (i=1,2,3),$ showing the interaction of two liquid jets of $F_1$ and oil in water, with the normalized densities $(\tilde \rho_1,\tilde \rho_2,\tilde \rho_3 )=(1,1.664,0.667)$.
The jet inlets are centered at $x=-0.2,\,0.2,$ respectively with radius $0.1$.
}
\label{oiljetrho1}
\end{figure}
We look into the dynamical characteristics of the fluid $F_1$ and oil jets in water.
Fig.~\ref{oiljetrho1} shows a temporal sequence of snapshots of the fluid interfaces,
visualized by contours of the volume fractions $c_i=1/2 \,(i=1,2,3)$ for
the three fluids.
First we observe that at the bottom wall the F$_1$ fluid and the oil coming out of
the orifices spread on the wall and fill up the space in between. As a result, the
two fluids touch each other on the bottom wall and form a compound oil-$F_1$ jet.
Note that the base of the compound oil-F$_1$ jet is broader than
the combined size of the two orifices.
The compound jet exhibits distinct characteristics in different regions.
In the region near the orifices ($y/L \lesssim 0.5$ in this case), the compound
jet maintains a relatively stable configuration. The jet tapers off
along the vertical direction in this region,
due to the velocity increase caused by the buoyancy.
This is reminiscent of the behavior of a single oil jet in ambient water
studied in \cite{Dong2014obc}.
Beyond this stable region, the compound jet exhibits a wavy pattern in its profile. The jet diameter modulates along the vertical direction, and bulges form around
the jet continually and periodically
(Figs.~\ref{oiljetrho1}(a)-(b), (f)-(h)) due to a Plateau-Rayleigh
instability~\cite{Plateau1873,Rayleigh1892}.
Further downstream, the dynamics of the jet becomes very complicated.
The compound jet and the bulges along its profile appear to fold back
in certain regions at times, causing very large deformations of the jet;
see e.g.~Figs.~\ref{oiljetrho1}(e)-(g) and (j)-(l).
We observe that the regions occupied by the F$_1$ fluid and by the oil
in the compound jet are not symmetric.
It can also be observed that our method allows the compound oil-F$_1$ jet
and the fluid interfaces to
exit the domain through the open boundary in a fairly natural fashion;
see e.g.~Figs.~\ref{oiljetrho1}(a)-(d) and (h)-(k).
\begin{figure}[tbp]
\centering
\subfigure[$t=303.62$]{ \includegraphics[scale=.22]{rho1vectorsnap0.pdf}} \hspace*{-10pt}
\subfigure[$t=303.72$]{ \includegraphics[scale=.22]{rho1vectorsnap1.pdf}} \hspace*{-10pt}
\subfigure[$t=303.82$]{ \includegraphics[scale=.22]{rho1vectorsnap2.pdf}} \hspace*{-10pt}
\subfigure[$t=303.92$]{ \includegraphics[scale=.22]{rho1vectorsnap3.pdf}} \\
\subfigure[$t=304.02$]{ \includegraphics[scale=.22]{rho1vectorsnap4.pdf}} \hspace*{-10pt}
\subfigure[$t=304.12$]{ \includegraphics[scale=.22]{rho1vectorsnap5.pdf}} \hspace*{-10pt}
\subfigure[$t=304.22$]{ \includegraphics[scale=.22]{rho1vectorsnap6.pdf}} \hspace*{-10pt}
\subfigure[$t=304.32$]{ \includegraphics[scale=.22]{rho1vectorsnap7.pdf}} \\
\subfigure[$t=304.42$]{ \includegraphics[scale=.22]{rho1vectorsnap8.pdf}} \hspace*{-10pt}
\subfigure[$t=304.52$]{ \includegraphics[scale=.22]{rho1vectorsnap9.pdf}} \hspace*{-10pt}
\subfigure[$t=304.62$]{ \includegraphics[scale=.22]{rho1vectorsnap10.pdf}} \hspace*{-10pt}
\subfigure[$t=304.72$]{ \includegraphics[scale=.22]{rho1vectorsnap11.pdf}}
\caption{Temporal sequence of snapshots of velocity distributions of
two liquid jets in water with normalized densities $(\tilde \rho_1,\tilde \rho_2,\tilde \rho_3 )=(1,1.664,0.667)$.
Velocity vectors are plotted at every eighth quadrature point in each direction within each element.
}
\label{oiljetrhovector1}
\end{figure}
Fig.~\ref{oiljetrhovector1} shows a temporal sequence of snapshots of the velocity fields of this flow, taken at identical time instants as those of the volume-fraction plots of Fig.~\ref{oiljetrho1}.
Several characteristics are evident from these plots.
First, the velocity patterns clearly indicate that
the streams of the F$_1$ fluid and the oil bend toward each other
after exiting the orifices, and merge to form a flow stream of
the compound jet. The velocity in the region between the two
orifices near the wall is very weak. Note that this region is occupied
by the F$_1$ fluid and the oil.
Second,
the region occupied by the compound jet stream,
as shown by the velocity patterns, is wider than the region actually occupied by the oil/F$_1$ material (see Fig.~\ref{oiljetrho1}),
especially in the more downstream regions near
the upper open boundary.
This suggests that the water in the vicinity of the compound $F_1$-oil jet has been accelerated, forming a wider high-speed region.
Third,
the jet stream exhibits a lateral spread along the streamwise direction, as can be observed from the velocity patterns, and
pairs of vortices can be observed to form along the jet profile.
These vortices reside behind the $F_1$-oil bulges,
form periodically as new bulges emerge,
and travel downstream along with the bulges.
Finally, we note that on the side boundaries
the velocity generally points into the domain,
indicating that the water has in general been sucked into the domain from both sides.
The velocity patterns of Fig.~\ref{oiljetrhovector1}
indicate that the method developed herein allows the flow
to pass through the open/outflow boundaries in a smooth
and natural way.
\begin{figure}[tbp]
\centering
\includegraphics[height=0.4\textwidth]{velocityrho2.pdf}
\caption{\small Two liquid jets in water: time histories of the
maximum and average velocity magnitudes with normalized densities
$(\tilde \rho_1, \tilde \rho_2, \tilde \rho_3)=(1,1.664, 0.1664)$, showing that the flow has reached a statistically stationary state.}
\label{velositymagrho2}
\end{figure}
\begin{figure}[tbp]
\centering
\subfigure[$t=426.92$]{ \includegraphics[scale=.22]{rho2contoursnap0.pdf}} \hspace*{-10pt}
\subfigure[$t=427.02$]{ \includegraphics[scale=.22]{rho2contoursnap1.pdf}} \hspace*{-10pt}
\subfigure[$t=427.12$]{ \includegraphics[scale=.22]{rho2contoursnap2.pdf}} \hspace*{-10pt}
\subfigure[$t=427.22$]{ \includegraphics[scale=.22]{rho2contoursnap3.pdf}} \\
\subfigure[$t=427.32$]{ \includegraphics[scale=.22]{rho2contoursnap4.pdf}} \hspace*{-10pt}
\subfigure[$t=427.42$]{ \includegraphics[scale=.22]{rho2contoursnap5.pdf}} \hspace*{-10pt}
\subfigure[$t=427.52$]{ \includegraphics[scale=.22]{rho2contoursnap6.pdf}} \hspace*{-10pt}
\subfigure[$t=427.62$]{ \includegraphics[scale=.22]{rho2contoursnap7.pdf}} \\
\subfigure[$t=427.72$]{ \includegraphics[scale=.22]{rho2contoursnap8.pdf}} \hspace*{-10pt}
\subfigure[$t=427.82$]{ \includegraphics[scale=.22]{rho2contoursnap9.pdf}} \hspace*{-10pt}
\subfigure[$t=427.92$]{ \includegraphics[scale=.22]{rho2contoursnap10.pdf}} \hspace*{-10pt}
\subfigure[$t=428.02$]{ \includegraphics[scale=.22]{rho2contoursnap11.pdf}}
\caption{ Temporal sequence of snapshots of fluid interfaces, visualized by the volume-fraction contours $c_i=1/2\, (i=1,2,3)$,
showing the interaction of two liquid jets in water,
with the normalized densities $(\tilde \rho_1,\tilde \rho_2,\tilde \rho_3 )=(1,1.664,0.167)$.
}
\label{oiljetrho2contour}
\end{figure}
\begin{figure}[tbp]
\centering
\subfigure[$t=426.92$]{ \includegraphics[scale=.22]{rho2vectorsnap0.pdf}} \hspace*{-10pt}
\subfigure[$t=427.02$]{ \includegraphics[scale=.22]{rho2vectorsnap1.pdf}} \hspace*{-10pt}
\subfigure[$t=427.12$]{ \includegraphics[scale=.22]{rho2vectorsnap2.pdf}} \hspace*{-10pt}
\subfigure[$t=427.22$]{ \includegraphics[scale=.22]{rho2vectorsnap3.pdf}} \\
\subfigure[$t=427.32$]{ \includegraphics[scale=.22]{rho2vectorsnap4.pdf}} \hspace*{-10pt}
\subfigure[$t=427.42$]{ \includegraphics[scale=.22]{rho2vectorsnap5.pdf}} \hspace*{-10pt}
\subfigure[$t=427.52$]{ \includegraphics[scale=.22]{rho2vectorsnap6.pdf}} \hspace*{-10pt}
\subfigure[$t=427.62$]{ \includegraphics[scale=.22]{rho2vectorsnap7.pdf}} \\
\subfigure[$t=427.72$]{ \includegraphics[scale=.22]{rho2vectorsnap8.pdf}} \hspace*{-10pt}
\subfigure[$t=427.82$]{ \includegraphics[scale=.22]{rho2vectorsnap9.pdf}} \hspace*{-10pt}
\subfigure[$t=427.92$]{ \includegraphics[scale=.22]{rho2vectorsnap10.pdf}} \hspace*{-10pt}
\subfigure[$t=428.02$]{ \includegraphics[scale=.22]{rho2vectorsnap11.pdf}}
\caption{ Temporal sequence of snapshots of velocity distributions for the
two liquid jets in water problem,
with normalized densities $(\tilde \rho_1,\tilde \rho_2, \tilde \rho_3 )=(1,1.664,0.1664).$
Velocity vectors are plotted at every eighth quadrature point in each direction within each element.
}
\label{oiljetrho2vector}
\end{figure}
Let us next consider the second case, with an oil density of
$100\,kg/m^3$.
The normalized densities for $F_1$, water and oil are
$(\tilde \rho_1, \tilde \rho_2, \tilde \rho_3)=(1,1.664, 0.1664)$.
All the other physical parameters are the same as in the first case.
Long-time simulations have been performed for this case, and
Fig.~\ref{velositymagrho2} shows time histories of the maximum and average
velocity magnitudes defined in \eqref{equ:velmagnitude}, indicating that
the flow has reached a statistically stationary state.
Figs.~\ref{oiljetrho2contour} and \ref{oiljetrho2vector}
show the temporal sequences of snapshots of the fluid interfaces and
the velocity fields corresponding to this case.
The general characteristics of the dynamics of jets and
the velocity distributions are similar to those of
the first case. But some marked differences can be noticed.
The compound oil-F$_1$ jet becomes notably more unstable because
of the stronger buoyancy force in the oil region.
We observe
a smaller region ($y/L\gtrsim 0.3$) with a relatively stable jet profile
near the base of the jet.
Downstream of this region, the deformation of the jet profiles
is much more pronounced than in the first case,
and droplets of the oil and F$_1$ fluid are observed
to break off from the compound jet.
The velocity field in the region occupied by the compound oil-F$_1$ jet
appears stronger and more violent than in the first case.
Vortices and backflows can also be observed at the upper or side
boundaries at times; see Fig.~\ref{oiljetrho2vector}(a)-(d).
The results indicate that with the proposed method the fluid interfaces
and the flow structures appear to be able to pass through
the open/outflow boundaries smoothly and seamlessly. |
1805.08353 | \section{Introduction}
Many methods for obtaining sentence embeddings have been researched in the past. These include Hamid et al.~\cite{5}, who used LSTMs, and Ozan et al.~\cite{6}, who made use of recursive networks. Another approach, used by Mikolov et al.~\cite{6}, is to include the paragraph in the context of the target word and then use gradient descent at inference time to find a new paragraph's embedding. This method suffers from the obvious limitation that the number of paragraphs could grow exponentially with the number of words. Chen~\cite{7} used the average of the embeddings of words sampled from the document. Lin et al.~\cite{8} used a bidirectional LSTM to find sentence embeddings, which are further used for classification tasks.\\
There have been attempts in the past to train sentence vectors using dictionary definitions \cite{2}. Tissier et al.~\cite{3} have trained word embeddings similar to Mikolov et al.~\cite{4} using dictionary definitions. However, they did not use these embeddings in the reverse dictionary application.\\
We train the LSTM model on an enlarged unsupervised set of data, where we randomise the words in the sentences, as described in section 4.\\
The results of our experiments are as follows:\\
1) Randomising the sentences increases the accuracy of the LSTM model by 25-40\%\\
2) RNNs perform better than LSTMs given limited data. The top-3 test set accuracy on the reverse definition task increases by 40\% using an RNN as compared to an LSTM on the same amount of data\\
3) Pretraining the LSTM model with the dictionary and fine-tuning does not change the classification accuracy on the test set but it becomes difficult to overfit the training set\\
All code is made publicly available. \footnote{\url{https://github.com/ansonb/reverse_dictionary}}
\section{System Architecture}
We make use of three architectures, as described below.\\
\subsection{LSTM}
We make use of the basic LSTM architecture as proposed by Zaremba et al. \cite{9}. The equations describing the network are as follows:\\
\begin{equation}
i = \sigma(W_{i_{1}}*h_{t}^{l-1} + W_{i_{2}}*h_{t-1}^{l} + B_{i})
\end{equation}
\begin{equation}
f = \sigma(W_{f_{1}}*h_{t}^{l-1} + W_{f_{2}}*h_{t-1}^{l} + B_{f})
\end{equation}
\begin{equation}
o = \sigma(W_{o_{1}}*h_{t}^{l-1} + W_{o_{2}}*h_{t-1}^{l} + B_{o})
\end{equation}
\begin{equation}
g = \tanh(W_{g_{1}}*h_{t}^{l-1} + W_{g_{2}}*h_{t-1}^{l} + B_{g})
\end{equation}
\begin{equation}
c_{t}^{l} = f{\odot}c_{t-1}^{l} + i{\odot}g
\end{equation}
\begin{equation}
h_{t}^{l} = o{\odot}\tanh(c_{t}^{l})
\end{equation}
where $h_{t}^{l-1}$ is the output of the previous layer (or the input) and
$h_{t-1}^{l}$ is the output of the same layer at the previous time step.
\begin{figure}[h]
\centering
\includegraphics[scale =0.5]{LSTM_architecture}
\caption{The architecture of the LSTM model. The example sentence used is 'I do enjoy parties'}
\end{figure}
The words are fed to the network in reverse order, as suggested by Sutskever et al. \cite{11}. The embeddings from the embedding matrix are then fed to a unidirectional LSTM containing 2 layers with 256 hidden units each. We use the output of the last layer at the final timestep as the vector representation of the sentence.\\
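For concreteness, a single step of this LSTM cell can be sketched in a few lines of NumPy (an illustrative sketch only; the dictionary-based weight containers and names are ours, not the original implementation):
\begin{verbatim}
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(h_below, h_prev, c_prev, W, B):
    # h_below: h_t^{l-1}, output of the previous layer (or the input)
    # h_prev:  h_{t-1}^{l}, output of this layer at the previous step
    # c_prev:  c_{t-1}^{l}, cell state at the previous step
    # W, B: dicts of weight matrices and bias vectors, e.g. W['i1']
    i = sigmoid(W['i1'] @ h_below + W['i2'] @ h_prev + B['i'])  # input gate
    f = sigmoid(W['f1'] @ h_below + W['f2'] @ h_prev + B['f'])  # forget gate
    o = sigmoid(W['o1'] @ h_below + W['o2'] @ h_prev + B['o'])  # output gate
    g = np.tanh(W['g1'] @ h_below + W['g2'] @ h_prev + B['g'])  # candidate
    c = f * c_prev + i * g          # new cell state, element-wise
    h = o * np.tanh(c)              # new hidden state h_t^l
    return h, c
\end{verbatim}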
\subsection{Recursive neural network (RNN) with shared weights}
In the recursive network we form the parse tree using the syntaxnet parser \cite{10}. For any node in the parse tree the following equations are computed\\
\begin{equation}
f(i) = relu(E_{i}*W + b + f_{children}(i))
\end{equation}
\begin{equation}
f_{children}(i) = \sum_{j\ {\in{\ children\ of\ node\ i}}}(f(j))
\end{equation}
where $E_{i}$ is the embedding of the word at the $i$-th node\\
\begin{figure}[h]
\includegraphics[width=0.5\textwidth]{Parse_tree}
\caption{The parse tree structure of the sentences 'I do enjoy parties' and 'I do not enjoy parties' obtained using syntaxnet}
\end{figure}
\begin{figure}[h]
\includegraphics[width=0.5\textwidth]{RNN_architecture}
\caption{The architecture of the RNN model with shared weights}
\end{figure}
Here we make use of a common shared weight matrix $W$; a sketch of the node computation is given below.\\
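A minimal sketch of this recursive computation (illustrative; the node structure with \texttt{word\_id} and \texttt{children} fields is our assumption, not the original implementation):
\begin{verbatim}
import numpy as np

def f(node, E, W, b):
    # f(i) = relu(E_i * W + b + sum of f over the children of i)
    # E: embedding matrix (vocab x d); W: shared d x d matrix; b: (d,)
    children_sum = sum((f(c, E, W, b) for c in node.children),
                       np.zeros(b.shape))
    return np.maximum(0.0, E[node.word_id] @ W + b + children_sum)
\end{verbatim}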
\subsection{RNN with unique weights}
This method is similar to the above method except that the weights $W$ are different for words with different POS tags, and the output of each node is multiplied by a weight.\\
\begin{equation}
f(i) = relu(E_{i}*W(i) + b + f_{children}(i))
\end{equation}
\begin{equation}
f_{children}(i) = \sum_{j\ {\in{\ children\ of\ node\ i}}}( f(j)*w(j) )
\end{equation}
$w(j)$ is the weight, between $-1$ and $1$, given to the output at node $j$; it is decided by the word at node $j$ and its immediate children.\\
Algorithm 1: Find w at node i\\
=============================\\
1) Initialise an array arr as empty\\
2) Put the current node and its children into a list nodes\\
3) for j in nodes :\\
4) \hspace{1cm} arr.append(classifier(E(j)))\\
5) $w(i) = tanh(maxpool(arr))$ \\
6) return $w(i)$
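Algorithm 1 translates directly into a short function (a sketch under the same node-structure assumption as above; \texttt{classifier} stands for the small hidden layer used for finding the weights):
\begin{verbatim}
import numpy as np

def w(node, E, classifier):
    # classifier maps a word embedding to a scalar score
    arr = []
    for j in [node] + list(node.children):   # current node and its children
        arr.append(classifier(E[j.word_id]))
    return np.tanh(max(arr))                 # max-pool, squash to (-1, 1)
\end{verbatim}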
\begin{figure}[h]
\includegraphics[width=0.5\textwidth]{RNN_2_architecture}
\caption{ The architecture of the RNN with separate weights for each POS, taking a weighted sum}
\end{figure}
\begin{figure}[h]
\includegraphics[width=0.5\textwidth]{w4}
\caption{ The architecture for determining the weight at node enjoy}
\end{figure}
The intuition behind this approach is that not all words should receive the same processing. For example, consider the sentence ``I do enjoy parties'' vs the sentence ``I do not enjoy parties''. The only difference between the two is the negation word ``not''. Suppose the model learns to take the average of all the word embeddings. In this case it will not be able to properly model the negative version of every sentence. Ideally we would expect the embedding of a negative sentence to point in the opposite direction of the corresponding positive sentence. This model computes a weighted sum of all the embeddings, the weights being decided by the current node and its direct children, if any. So in the case of the sentence ``I do not enjoy parties'', the final embedding at the root enjoy should be multiplied by a negative weight to give the opposite of ``I do enjoy parties''. Another advantage of this method is that it could learn to give more importance to some sibling nodes and less to others, as in the case of conjunctions. The method for finding the weights at each node is given above in Algorithm 1.
\section{Objective function}
The sentence embedding from each of the methods above is multiplied with the word embeddings to find the closest matching word from the dictionary. \\
Let $E_s$ be the sentence embedding and $E_d$ the embeddings of the dictionary words. Then the output word is given as\\
\begin{equation}
logits = E_{s}^{T}*E_{d}
\end{equation}
\begin{equation}
Output\ word = argmax(logits)
\end{equation}
In all three of the above methods we minimise the cross-entropy loss of the output.\\
\begin{equation}
loss = -\,labels\cdot\log\big(\sigma(logits)\big) - (1-labels)\cdot\log\big(1-\sigma(logits)\big)
\end{equation}
According to the above equation, the first term pulls the embeddings of similar sentences closer together and the second term pushes the embeddings of dissimilar sentences away from each other.
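In code, the lookup itself is a single matrix product followed by an argmax (a minimal sketch; the shapes are our assumption):
\begin{verbatim}
import numpy as np

def reverse_lookup(E_s, E_d):
    # E_s: sentence embedding, shape (d,)
    # E_d: dictionary word embeddings, shape (d, V)
    logits = E_s @ E_d              # one score per dictionary word
    return int(np.argmax(logits))   # index of the output word
\end{verbatim}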
\section{Preparing the data}
The Webster's dictionary provided by the Gutenberg project was used for obtaining the word definitions. After processing the dictionary, 296,39 definitions were obtained. The dataset contains definitions of 95,831 unique words. The vocabulary size was around 138,443.\\
As training on the entire set of words takes around a month on a dual-core system with 4 GB of RAM, we took a small subset of only 144 words for the comparison between the three models. The test set was prepared by manually framing the definitions for these words. For training with the LSTM we randomise the sentences to obtain 10, 100 and 1000 times the data. We compare the performance on the test set for these three augmented datasets using only the LSTM, as it would not make sense to obtain a parse tree from randomised sentences. \\
For the classification task we take all the definitions of the words present in the rotten tomatoes training dataset. We obtain 7779 unique word definitions and a vocabulary size of 40,990.
\section{Experimental setup and training}
The word embeddings are of size 32. For a large vocabulary, Felix et al. \cite{2} suggest using a size of 200-500; however, for our sample dataset a dimension of 32 suffices. For the LSTM we use 2 layers, each containing 256 hidden units. For the RNNs the weight matrix is of size $32\times32$. The hidden layer used for finding the weights for the weighted sum has 10 units.\\
The training was done on a dual-core Ubuntu system with 4 GB of RAM. It takes around 3-4 hours to train the LSTM on the 144-word augmented (144*1000) dataset, 4 hours to train the RNNs on the 144-word dataset, and 5-6 hours to train the LSTM on the rotten tomatoes vocabulary.
For RNNs we split the data into buckets by maximum tree depth: 2, 4, 6 and so on up to 14. This makes the tree size uniform for all examples within each bucket during training. Using a single uniform tree size for all the data would consume a huge amount of memory, around 3 million nodes or 63 MB per training example. The extra nodes in each example are filled with $-1$s and their embeddings are kept constant at 0.
\section{Results}
\begin{table}[!htbp]
\centering
\label{tab :result}
\begin{tabular}{|c|p{1cm}|p{1cm}|p{1cm}|p{1cm}|}
\hline
Method & Without separate embedding & Data Augmentation & Accuracy (top 1) & Accuracy (top 3)\\
\hline
LSTM & Yes & x1 & 24.32\% & 29.73\%\\
\hline
LSTM & No & x1 & 27.03\% & 29.73\%\\
\hline
RNN with shared weights & Yes & x1 & \bf{54.05}\% & \bf{72.97}\%\\
\hline
RNN with shared weights & No & x1 & 45.94\% & 64.86\%\\
\hline
RNN with separate weights & Yes & x1 & 43.24\% & 62.16\%\\
\hline
RNN with separate weights & No & x1 & 27.03\% & 37.84\%\\
\hline
LSTM & No & x10 & 21.62\% & 27.02\%\\
\hline
LSTM & No & x100 & 54.05\% & 67.57\%\\
\hline
LSTM & No & x1000 & 51.35\% & 70.27\%\\
\hline
\end{tabular}
\caption{Comparison of LSTMs with RNNs on the sampled words dataset. In this we present the accuracy of the three methods trained on the dataset of 144 words and tested on 37 words.}
\end{table}
It can be seen from Table 1 that RNNs achieve a higher accuracy than LSTMs when trained on the same data. Even with 1000 times more data, the LSTMs perform only comparably, not substantially better. This may be because of the padding used in the LSTMs to make all sequences a constant maximum size (66 in our case). Another reason could be that the LSTMs use a much higher hidden dimension (256 vs 32 for the RNN) and so require more data to prevent overfitting.\\
We try two configurations of the output embedding: keeping it the same as the input embedding, and the more general case of a separate new embedding. The new embedding may or may not learn the same values as the original input embedding. We observe that the accuracy increases when a separate embedding is used. It seems to be more difficult to obtain an embedding for the sentence in the word embedding space.\\
\begin{table}[!htbp]
\centering
\label{tab :result}
\begin{tabular}{|p{2cm}|p{1cm}|p{1cm}|}
\hline
Method & Training set accuracy & Test set accuracy\\
\hline
LSTM end to end training & 96\% & 60\%\\
\hline
LSTM pretrained on dictionary words without fine-tuning & 50\% & 43\%\\
\hline
LSTM pretrained on dictionary words with fine-tuning & 70\% & 60\%\\
\hline
\end{tabular}
\caption{Comparison of accuracies obtained on the rotten tomatoes dataset. We compare the training and test accuracies of the three methods using only LSTMs.}
\end{table}
In Table 2 we see that there is a huge difference between the training and test set accuracies when we train end to end using LSTMs without pre-training on the dictionary words. When we retrain with the LSTM embeddings kept fixed and train only the classification layer, the training accuracy falls to 50\% and so does the test accuracy. On pre-training with the dictionary words and then fine-tuning the embeddings, the test set accuracy is maintained and the training accuracy is also closer to the test accuracy. We were expecting an increase in the test accuracy without a change in the training accuracy; however, the results suggest that overfitting becomes difficult when pre-trained. Every method has been trained for around 50,000 steps. When the pre-trained model was trained further, to 200,000 steps, the training accuracy increased to only 80\% and the test accuracy decreased to 57\%. Further research might help confirm whether fine-tuning such a pretrained model could give a correct representation of the actual error in the absence of a test set.\\
\section{Conclusion}
In this paper we implement a different method to find sentence embeddings using recursive neural networks (RNNs). We find that RNNs perform at least comparably to LSTMs with far fewer parameters (a hidden size of 32 compared to 256). We train word embeddings using the LSTM, use these pretrained embeddings on the rotten tomatoes dataset, and find that pretraining makes it difficult to overfit the dataset, so the train and test set errors are comparable.\\
Further research could examine whether the weighted-sum method in RNNs is actually able to identify semantically opposite sentences accurately. We could also train embeddings and verify the performance on the rotten tomatoes dataset using RNNs. Training on the entire dataset could be done to check the performance on the reverse dictionary application.
\bibliographystyle{IEEEtran} |
2009.07664 | \section{Introduction}
Bio-signals, such as Electroencephalograms and Electrocardiograms, are multivariate time-series generated by biological processes that can be used to assess seizures, sleep disorders, head injuries, memory problems, and heart diseases, to name a few \cite{nait2009advanced}. Although clinicians can successfully learn to correctly interpret such bio-signals, their protocols cannot be directly converted into a set of numerical rules yielding a comparable assessment performance.
Currently, the most effective way to transfer this expertise into an automated system is to gather a large number of examples of bio-signals with the corresponding labeling provided by a clinician, and to use them to train a deep neural network. However, collecting such labeling is expensive and time-consuming. In contrast, bio-signals without labeling are more readily available in large numbers.
Recently, self-supervised learning (SelfSL) techniques have been proposed to limit the amount of required labeled data. These techniques define a so-called \emph{pretext} task that can be used to train a neural network in a supervised manner on data without manual labeling. The pretext task is an artificial problem, where a model is trained to output what transformation was applied to the data. For instance, a model could be trained to output the probability that a time-series had been time-reversed \cite{wei2018learning}. This step is often called pre-training and it can be carried out on large data sets as no manual labeling is required. The training of the pre-trained neural network then continues with a small learning rate on the small target data set, where labels are available. This second step is called \emph{fine-tuning}, and it yields a substantial boost in performance \cite{noroozi2016unsupervised}. Thus, SelfSL can be used to automatically learn physiologically relevant features from unlabelled bio-signals and improve classification performance.
SelfSL is most effective if the pretext task focuses on features that are relevant to the target task. Typical features work with the amplitude or the power of the bio-signals, but as shown in the literature, the phase carries information about the underlying biological processes \cite{busch2009phase,ng2013eeg,lopez2019dynamic}. Thus, in this paper, we propose a pretext task to learn the coupling between the amplitude and the phase of the bio-signals, which we call \emph{phase swap} (PS). The objective is to predict whether the phase of the Fourier transform of a multivariate physiological time-series segment was swapped with the phase of another segment.
We show that features learned through this task help classification tasks generalize better, regardless of the neural network architecture.
Our contributions are summarized as follows
\begin{itemize}
\item We introduce phase swap, a novel self-supervised learning task to detect the coupling between the phase and the magnitude of physiological time-series;
\item With phase swap, we demonstrate experimentally the importance of incorporating the phase in bio-signal classification;
\item We show that the learned representation generalizes better than current state of the art methods to new subjects and to new recording sessions;
\item We evaluate the method on four different data sets and analyze the effect of various hyper-parameters and of the amount of available labeled data on the learned representations.
\end{itemize}
\section{Related Work}
\noindent\textbf{Self-supervised Learning.} Self-supervised learning refers to the practice of pre-training deep learning architectures on user-defined pretext tasks. This can be done on large volumes of unlabeled data since the annotations can be automatically generated for these tasks. This is a common practice in the Natural Language Processing literature. Examples of such works include Word2Vec \cite{mikolov2013efficient}, where the task is to predict a word from its context, and BERT \cite{devlin2018bert}, where the model is pretrained as a masked language model and on the task of detecting consecutive sentences. The self-supervision framework has also been gaining popularity in Computer Vision.
Pretext tasks such as solving a jigsaw puzzle \cite{noroozi2016unsupervised}, predicting image rotations \cite{gidaris2018unsupervised} and detecting local inpainting \cite{jenni2020steering} have been shown to be able to learn useful data representations for downstream tasks.
Recent work explores the potential of self-supervised learning for EEG signals \cite{banville2019self} and time series in general \cite{jawed2020self}. In \cite{banville2019self}, the focus is on long-term/global tasks such as determining whether two given windows are nearby temporally or not. \\
\noindent\textbf{Deep Learning for Bio-signals.} Bio-signals include a variety of physiological measures across time such as: Electroencephalogram (EEG), Electrocardiogram (ECG), Electromyogram (EMG), Electrooculography (EOG), etc.
These signals are used by clinicians in various applications, such as sleep scoring \cite{mourtazaev1995age} or seizure detection \cite{shoeb2009application}.
Similarly to many other fields, bio-signals analysis has also seen the rise in popularity of deep learning methods for both classification \cite{humayun2019end} and representation learning \cite{banville2019self}. The literature review \cite{roy2019deep} showcases the application of deep learning methods to various EEG classification problems such as brain computer interfaces, emotion recognition and seizure detection. The work by Banville et al.~\cite{banville2019self} leverages self-supervised tasks based on the relative temporal positioning of pairs/triplets of EEG segments to learn a useful representation for a downstream sleep staging application.\\
\noindent\textbf{Phase Analysis.} The phase component of bio-signals has been analyzed before. Busch et al.~\cite{busch2009phase} show a link between the phase of the EEG oscillations, in the alpha (8-12Hz) and theta (4-8Hz) frequency bands, and the subjects' ability to perceive the flash of a light. The phase of the EEG signal is also shown to be more discriminative for determining firing patterns of neurons in response to certain types of stimuli \cite{ng2013eeg}.
More recent work, such as \cite{lopez2019dynamic}, highlights the potential link between the phase of the different EEG frequency bands and cognition during proactive control of task switching.
\begin{figure}[t]
\centering
\includegraphics[width=.75\textwidth,trim=0 1.95cm 0 2cm,clip]{Plots/PhaseSwapDiagram.pdf}
\caption{Illustration of the phase-swap operator $\Phi$. The operator takes two signals as input and then combines the amplitude of the first signal with the phase of the second signal in the output.}
\label{fig:phaseswap_diag}
\end{figure}
\section{Learning to Detect the Phase-Amplitude Coupling}
\label{sec:phaseswap}
In this section, we define the \emph{phase swap} operator and the corresponding SelfSL task, and present the losses used for pre-training and fine-tuning.
Let $D^W_{i,j} = \{(x^{i, j, k}, y^{i, j, k})\}_{k=1}^N$ be the set of samples associated with the i-th subject during the j-th recording session. Each sample $x^{i, j, k} \in \mathbf{R}^{C \times W}$ is a multivariate physiological time-series window where $C$ and $W$ are the number of channels and the window size respectively. $y^{i, j, k}$ is the class of the k-th sample.
Let $\mathcal{F}$ and $\mathcal{F}^{-1}$ be the Discrete Fourier Transform operator and its inverse, respectively.
These operators will be applied to a given vector $x$ extracted from the bio-signals. In the case of multivariate signals, we apply these operators channel-wise.
For the sake of clarity, we provide the definitions of the absolute value and the phase element-wise operators. Let $z\in \mathbf{C}$, where $\mathbf{C}$ denotes the set of complex numbers. Then, the absolute value, or \emph{magnitude}, of $z$ is denoted $|z|$ and the phase of $z$ is denoted $\measuredangle z$. With such definitions, we have the trivial identity $z = |z|\measuredangle z$.
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth,trim=0 .2cm 0 .2cm,clip]{Plots/example.pdf}
\caption{Illustration of the PS operator on a pair of 1.25 seconds segments taken from the Pz-Oz channel in the SC data set \cite{mourtazaev1995age}. The original signals are $x_1$ and $x_2$.}
\label{fig:phaseswap}
\end{figure}
Given two samples $x^{i, j, k}$, $x^{i, j, k'} \in D^W_{i, j}$, the \emph{phase swap} (PS) operator $\Phi$ is
\begin{align}
\label{eq:swap}
\textstyle
\Phi \left(x^{i, j, k}, x^{i, j, k'}\right) \doteq \mathcal{F}^{-1} \left[ \left|\mathcal{F}\left(x^{i, j, k}\right)\right| \odot \measuredangle \mathcal{F}\left(x^{i, j, k'}\right)\right] = x^{i, j, k}_{swap},
\end{align}
where $\odot$ is the element-wise multiplication (see Fig.~\ref{fig:phaseswap_diag}).
Note that the energy per frequency is the same for both $x^{i,j,k}_{swap}$ and $x^{i,j,k}$ and that only the phase, \emph{i.e.}, the synchronization between the different frequencies, changes. Examples of phase swapping between different pairs of signals are shown in Fig.~\ref{fig:phaseswap}.
Notice how the shape of the oscillations change drastically when the PS operator is applied and no trivial shared patterns seem to emerge.
The PS pretext task is defined as a binary classification problem. A sample belongs to the positive class if it is transformed using the PS operator, otherwise it belongs to the negative class. In all our experiments, both inputs to the PS operator are sampled from the same patient during the same recording session.
Because the phase is decoupled from the amplitude of white noise, our model has no incentive to detect noise patterns. On the contrary, it will be encouraged to focus on the structural patterns in the signal in order to detect whether the phase and magnitude of the segment are coupled or not.
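In practice, eq.~\eqref{eq:swap} amounts to a few lines of NumPy. The sketch below is illustrative rather than the exact released implementation; the FFT is applied channel-wise along the last axis, and the inverse transform is real up to rounding because the swapped spectrum remains conjugate-symmetric:
\begin{verbatim}
import numpy as np

def phase_swap(x1, x2):
    # x1, x2: real arrays of shape (C, W); magnitude of x1, phase of x2
    X1 = np.fft.fft(x1, axis=-1)
    X2 = np.fft.fft(x2, axis=-1)
    swapped = np.abs(X1) * np.exp(1j * np.angle(X2))
    return np.real(np.fft.ifft(swapped, axis=-1))
\end{verbatim}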
\label{sec:archi}
\begin{figure}[t]
\centering
\includegraphics[width=1\textwidth,trim=0 2.1cm 0cm 0cm,clip]{Plots/Network2.pdf}
\caption{Training with either the self-supervised or the supervised learning task.}
\label{fig:overview}
\end{figure}
We use the FCN architecture proposed by Wang et al.~\cite{wang2017time} as our core neural network model $E: \mathbf{R}^{C\times W} \to \mathbf{R}^{H \times W/128}$. It consists of three convolutional blocks, each using a Batch Normalization layer \cite{ioffe2015batch} and a ReLU activation followed by a pooling layer. The output of $E$ is then flattened and fed to two Softmax layers $C_{Self}$ and $C_{Sup}$, which are trained on the self-supervised and supervised tasks respectively.
Instead of a global pooling layer, we use an average pooling layer with a stride of 128. This allows us to keep the number of weights of the supervised network $C_{Sup} \circ E$ constant when the self-supervised task is defined on a different window size.
The overall framework is illustrated in Fig.~\ref{fig:overview}. Note that the encoder network $E$ is the same for both tasks.
The loss function for training on the SelfSL task is the cross-entropy
\begin{align}
\mathcal{L}_{Self}\left(y^{Self}, E, C_{Self}\right) = -\frac{1}{N} \sum_{i=1}^N \sum_{k=1}^{K_{Self}} y^{Self}_{i,k} \log \left(C_{Self} \circ E(x_i)\right)_k,
\label{eq:loss_ssl}
\end{align}
where $y^{Self}_{i,k}$ is the one-hot representation of the true SelfSL pretext label and $(C_{Self} \circ E(x_i))_k$ is the predicted probability for class $k$.
We optimize eq.~\eqref{eq:loss_ssl} with respect to the parameters of both $E$ and $C_{Self}$.
Similarly, we define the loss function for the (supervised) fine-tuning as the cross-entropy
\begin{align}
\mathcal{L}_{Sup}\left(y^{Sup}, E, C_{Sup}\right) = -\frac{1}{N} \sum_{i=1}^N \sum_{k=1}^{K_{Sup}} y^{Sup}_{i,k} \log \left(C_{Sup} \circ E(x_i)\right)_k,
\label{eq:loss_sup}
\end{align}
where $y^{Sup}_{i,k}$ denotes the label for the target task. The $y^{Sup/Self}_{i,k}$
vectors are in $\mathbf{R}^{N \times K_{Sup/Self}}$, where $N$ and $K_{Sup/Self}$ are the number of samples and classes respectively. In the fine-tuning, $E$ is initialized with the parameters obtained from the optimization of eq.~\eqref{eq:loss_ssl} and $C_{Sup}$ with random weights, and then they are both updated to optimize eq.~\eqref{eq:loss_sup}, but with a small learning rate.
\section{Experiments}
\subsection{Data Sets}
In our experiments, we use the Expanded SleepEDF \cite{mourtazaev1995age,kemp2000analysis,goldberger2000physiobank}, the CHB-MIT \cite{shoeb2009application} and ISRUC-Sleep \cite{khalighi2016isruc} data sets as they contain recordings from multiple patients. This allows us to study the generalization capabilities of the learned feature representation to new recording sessions and new patients.
The Expanded SleepEDF database contains two different sleep scoring data sets
\begin{itemize}
\item Sleep Cassette Study (SC) \cite{mourtazaev1995age}: Collected between 1987 and 1991 in order to study the effect of age on sleep. It includes 78 patients with 2 recording sessions each (3 recording sessions were lost due to hardware failure).
\item Sleep Telemetry Study (ST) \cite{kemp2000analysis}: Collected in 1994 as part of a study of the effect of Temazepam on sleep in 22 different patients with 2 recordings sessions each.
\end{itemize}
Both data sets define sleep scoring as a 5-way classification problem. The 5 classes in question are the sleep stages: Wake, NREM 1, NREM 2, NREM 3/4, REM. The NREM 3 and 4 are merged into one class due to their small number of samples (these two classes are often combined together in sleep studies). \\
The third data set we use in our experiments is the CHB-MIT data set \cite{shoeb2009application} recorded at the Children’s Hospital Boston from pediatric patients with intractable seizures. It includes multiples recording files across 22 different patients. We retain the 18 EEG channels that are common to all recording files.
The sampling rate for all channels is 256Hz. The target task defined on this data set is predicting whether a given segment is a seizure event or not, \emph{i.e.}, a binary classification problem.
For all the data sets, the international 10-20 system \cite{malmivuo1995bioelectromagnetism} was adopted for the choice of the positioning of the EEG electrodes.
The last data set we use is ISRUC-Sleep \cite{khalighi2016isruc}, for sleep scoring as a 4-way classification problem. We use the 14 channels extracted in the Matlab version of the data set. This data set consists of three subgroups: subgroups I and II contain respectively recordings from 100 and 8 subjects with sleep disorders, whereas subgroup III contains recordings from 10 healthy subjects. This allows us to test the generalization from diagnosed subjects to healthy ones.
For the SC, ST and ISRUC-sleep data sets we resample the signals to 102.4Hz. This resampling allows us to simplify the neural network architectures we use, because in this case most window sizes can be represented by a power of 2, \emph{e.g.}, a window of 2.5sec corresponds to 256 samples.
We normalize each channel per recording file in all data sets to have zero mean and a standard deviation of one.
\subsection{Training Procedures and Models}
In the supervised baseline (respectively, self-supervised pre-training), we train the randomly initialized model $C_{Sup} \circ E$ (respectively, $C_{Self} \circ E$) on the labeled data set for 10 (respectively, 5) epochs using the Adam optimizer \cite{KingmaB14Adam} with a learning rate of $10^{-3}$ and $\beta = (0.9, 0.999)$. We balance the classes present in the data set using resampling (no need to balance classes in the self-supervised learning task).
In fine-tuning, we initialize $E$'s weights with those obtained from the SelfSL training and then train $C_{Sup} \circ E$ on the labeled data set for 10 epochs using the Adam optimizer \cite{KingmaB14Adam}, but with a learning rate of $10^{-4}$ and $\beta = (0.9, 0.999)$. As in the fully supervised training, we also balance the classes using re-sampling.
In all training cases, we use a default batch size of 128.
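Both stages therefore share the same loop and differ only in the classification head, the learning rate and the number of epochs. A minimal PyTorch sketch of this schedule (ours, not the released code; since \texttt{CrossEntropyLoss} expects logits, the Softmax heads are represented by plain linear layers here):
\begin{verbatim}
import torch
import torch.nn as nn

def run_stage(model, loader, lr, epochs):
    opt = torch.optim.Adam(model.parameters(), lr=lr,
                           betas=(0.9, 0.999))
    ce = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            ce(model(x), y).backward()
            opt.step()

# pre-training: 5 epochs at lr 1e-3; fine-tuning: 10 epochs at lr 1e-4
# run_stage(nn.Sequential(E, C_self), pretext_loader, 1e-3, 5)
# run_stage(nn.Sequential(E, C_sup),  labeled_loader, 1e-4, 10)
\end{verbatim}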
\label{sec:models}
We evaluate our self-supervised framework using the following models
\begin{itemize}
\item\textbf{PhaseSwap}: The model is pre-trained on the self-supervised task and fine-tuned on the labeled data;
\item\textbf{Supervised}: The model is trained solely in a supervised fashion;
\item\textbf{Random}: $C_{Sup}$ is trained on top of a frozen randomly initialized $E$;
\item\textbf{PSFrozen}: We train $C_{Sup}$ on top of the frozen weights of the model $E$ pre-trained on the self-supervised task.
\end{itemize}
\subsection{Evaluation Procedures}
We evaluate our models on train/validation/test splits. In total we use at most four sets, which we refer to as the training set, the Validation Set, Test set A and Test set B.
The training set, the Validation Set and Test set A share the same patient identities, while Test set B contains recordings from other patients. Test set A uses recording sessions distinct from those of the training and Validation Sets. The Validation Set and the training set share the same patient identities and recording sessions, with a $75\%$ (training set) / $25\%$ (Validation Set) split.
We use each test set for the following purposes
\begin{itemize}
\item \textbf{Validation Set}: this set serves as a validation set for model selection;
\item \textbf{Test set A}: this set allows us to evaluate the generalization error on new recording sessions for patients observed during training;
\item \textbf{Test set B}: this set allows us to evaluate the generalization error on new recording sessions for patients not observed during training.
\end{itemize}
We use the same set of recordings and patients for both the training of the self-supervised and supervised tasks.
For the ST, SC and ISRUC data sets we use class re-balancing only during the supervised fine-tuning. However, for the CHB-MIT data set, the class imbalance is much more extreme: The data set consists of less than $0.4\%$ positive samples. Because of that, we under-sample the majority class both during the self-supervised and supervised training. This prevents the self-supervised features from completely ignoring the positive class. Unless specified otherwise, we use $W_{Self}=5 sec$ and $W_{Sup}=30 sec$ for the ISRUC, ST and SC data sets, $W_{Self}= 2sec$ and $W_{Sup} = 10 sec$ for the CHB-MIT data set, where $W_{Self}$ and $W_{Sup}$ are the window size for the self-supervised and supervised training respectively. For the ISRUC, ST and SC data sets, the choice of $W_{Sup}$ corresponds to the granularity of the provided labels. For the CHB-MIT data set, although labels are provided at a rate of 1Hz, the literature in neuroscience usually defines a minimal duration of around 10sec for an epileptic event in humans \cite{fisher2014can}, which motivates our choice of $W_{Sup} = 10 sec$. \\
\noindent\textbf{Evaluation Metric.} As an evaluation metric, we use the balanced accuracy
\begin{equation}
\label{eq:balanced}
Acc^{Balanced} (y, \hat{y}) = \frac{1}{K} \sum_{k=1}^K \frac{ \sum_{i=1}^N \hat{y}_{i,k} y_{i,k}
}{\sum_{i=1}^N y_{i,k}},
\end{equation}
which is defined as the average of the recall values per class, where $K$, $N$, $y$ and $\hat{y}$ are respectively the number of classes, the number of samples, the one-hot representation of true labels and the predicted labels.
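Concretely, eq.~\eqref{eq:balanced} is the mean per-class recall and can be computed as follows (a short NumPy sketch, assuming integer class labels):
\begin{verbatim}
import numpy as np

def balanced_accuracy(y_true, y_pred, K):
    recalls = []
    for k in range(K):
        mask = (y_true == k)
        if mask.any():                    # skip classes absent from y_true
            recalls.append(np.mean(y_pred[mask] == k))
    return float(np.mean(recalls))
\end{verbatim}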
\subsection{Generalization on the Sleep Cassette Data Set}
We explore the generalization of the self-supervised trained model by varying the number of different patients used in the training set for the SC data set. Here $r_{train}$ is the percentage of patient identities used in the training set, the Validation Set and Test set A. In Table~\ref{tab:generalization}, we report the balanced accuracy on all test sets for various values of $r_{train}$. The self-supervised training was done using a window size of $W_{Self}=5sec$. We observe that the \textbf{PhaseSwap} model performs the best for all values of $r_{train}$. We also observe that the performance gap between the \textbf{PhaseSwap} and \textbf{Supervised} models is narrower for larger values of $r_{train}$. This is to be expected, since including more identities in the training set allows the \textbf{Supervised} model to generalize better. For $r_{train} = 100\%^*$ we use all recording sessions across all identities for the training set and the Validation Set (since all identities and sessions are used, Test sets A and B are empty). The results obtained for this setting show that there is still a slight benefit from the \textbf{PhaseSwap} pre-training even when labels are available for most of the data.
\begin{table}[t]
\centering
\caption{Comparison of the performance of the \textbf{PhaseSwap} model on the SC data set for various values of $r_{train}$. For $r_{train} = 100\%^*$ we use all recording sessions across all identities for the training set and the Validation Set. Notice that results obtained with different $r_{train}$ are not directly comparable.}
\label{tab:generalization}
\begin{tabular}{c@{\hspace{2em}}c@{\hspace{2em}}r@{\hspace{2em}}r@{\hspace{2em}}r}
\toprule
$r_{train}$ & Experiment & Validation Set & Test set A & Test set B \\ \midrule
20\% & \textbf{PhaseSwap} & \textbf{84.3\%} & \textbf{72.0\%} & \textbf{69.6\%} \\
20\% & \textbf{Supervised} & 79.4\% & 67.9\% & 66.0\% \\ \hline
50\% & \textbf{PhaseSwap} & \textbf{84.9\%} & \textbf{75.1\%} & \textbf{73.3\%} \\
50\% & \textbf{Supervised} & 81.9\% & 71.7\% & 69.4\% \\ \hline
75\% & \textbf{PhaseSwap} & \textbf{84.9\%} & \textbf{77.6\%} & \textbf{76.1\%} \\
75\% & \textbf{Supervised} & 81.6\% & 73.7\% & 72.8\% \\ \hline
100\%* & \textbf{PhaseSwap} & \textbf{84.3\%} & - & - \\
100\%* & \textbf{Supervised} & 83.5\% & - & - \\ \bottomrule
\end{tabular}
\end{table}
\subsection{Generalization on the ISRUC-Sleep Data Set}
Using the ISRUC-Sleep data set \cite{khalighi2016isruc}, we aim to evaluate the performance of the PhaseSwap model on healthy subjects when it was trained on subjects with sleep disorders. For the self-supervised training, we use $W_{Self} = 5 sec$. The results are reported in Table~\ref{tab:isruc}. Note that we combined the recordings of subgroup II and the ones not used for the training from subgroup I into a single test set since they are from subjects with sleep disorders. We observe that for both experiments, $r_{train} = 25 \%$ and $r_{train} = 50 \%$, the PhaseSwap model outperforms the supervised baseline for both test sets. Notably, the performance gap on subgroup III is larger than $10 \%$. This can be explained by the fact that sleep disorders can drastically change the sleep structure of the affected subjects, which in turn leads the supervised baseline to learn features that are specific to the disorders/subjects present in the training set.
\begin{table}[t]
\centering
\caption{Comparison of the performance of the \textbf{PhaseSwap} model on the ISRUC-Sleep data set for various values of $r_{train}$.}
\label{tab:isruc}
\begin{tabular}{c@{\hspace{2em}}c@{\hspace{2em}}r@{\hspace{2em}}r@{\hspace{2em}}r}
\toprule
$r_{train}$ & Model & Validation set & \begin{tabular}[c]{@{}c@{}}Test set B \\ (subgroup I + II)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Test set B\\ (subgroup III)\end{tabular} \\ \midrule
25\% & PhaseSwap & 75.8\% & \textbf{67.3\%} & \textbf{62.8\%} \\
25\% & Supervised & \textbf{75.9\%} & 63.1\% & 47.9\% \\ \hline
50\% & PhaseSwap & \textbf{76.3\%} & 68.2\% & \textbf{67.1\%} \\
50\% & Supervised & 75.5\% & \textbf{68.3}\% & 57.3\% \\ \bottomrule
\end{tabular}
\end{table}
\subsection{Comparison to the Relative Positioning Task}
The Relative Positioning (RP) task was introduced by Banville et al.~\cite{banville2019self} as a self-supervised learning method for EEG signals, which we briefly recall here.
Given $x_t$ and $x_{t'}$, two samples with a window size $W$ and starting points $t$ and $t'$ respectively, the RP task defines the following labels
$C_{Self}(|h_t - h_{t'}|) = \mathbbm{1}\left(|t- t'| \leq \tau_{pos}\right)-\mathbbm{1}\left(|t- t'| > \tau_{neg}\right)$,
where $h_t = E(x_t)$, $h_{t'} = E(x_{t'})$, $\mathbbm{1}(\cdot)$ is the indicator function, and $\tau_{pos}$ and $\tau_{neg}$ are predefined quantities. Pairs that yield $C_{Self}=0$ are discarded. $|\cdot|$ denotes the element-wise absolute value operator.
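For reference, the RP labeling rule reduces to a few lines (a sketch; the function name is ours):
\begin{verbatim}
def rp_label(t, t_prime, tau_pos, tau_neg):
    d = abs(t - t_prime)
    if d <= tau_pos:
        return 1      # positive pair: temporally nearby windows
    if d > tau_neg:
        return -1     # negative pair: distant windows
    return 0          # ambiguous pair, discarded
\end{verbatim}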
Next, we compare our self-supervised task to the RP task \cite{banville2019self}. For both settings, we use $W_{Self} = 5 sec$ and $r_{train}=20\%$. For the RP task we choose $\tau_{pos}=\tau_{neg}= 12 \times W_{Self}$. We report the balanced accuracy for all test sets on the SC data set in Table~\ref{tab:comparison}. We observe that our self-supervised task outperforms the RP task, meaning that the features learned through the PS task allow the model to perform better on unseen data.
\begin{table}[t]
\centering
\caption{Comparison between the PS and RP pre-training on the SC data set.}
\label{tab:comparison}
\begin{tabular}{c@{\hspace{1em}}c@{\hspace{1em}}c@{\hspace{1em}}c@{\hspace{2em}}c}
\toprule
Pre-training & Validation Set & Test set A & Test set B & SelfSL validation accuracy \\ \midrule
Supervised & 79.4\% & 67.9\% & 66.0\% & - \\
\midrule
PS & \textbf{84.3\%} & \textbf{72.0\%} & \textbf{69.6\%} & 86.9\% \\
RP & 80.3\% & 66.2\% & 65.4\% & 56.9\% \\ \bottomrule
\end{tabular}
\end{table}
\subsection{Results on the Sleep Telemetry and CHB-MIT Data Sets}
In this section, we evaluate our framework on the ST and CHB-MIT data sets. For the ST data set, we use $W_{Self}= 1.25sec$, $W_{Sup}=30sec$ and $r_{train}=50\%$. For the CHB-MIT data set, we use $W_{Self}= 2sec$, $W_{Sup}= 10sec$, $r_{train}=25\%$ and 30 epochs for the supervised fine-tuning/training.
As shown in Table~\ref{tab:other-datasets}, we observe that for the ST data set, the features learned through the PS task produce a significant improvement, especially on Test sets A and B.
For the CHB-MIT data set, the PS task fails to provide the performance gains observed on the previous two data sets. We believe this is because the PS task is too easy on this particular data set: notice that the self-supervised validation accuracy is above $99\%$. With a trivial task, self-supervised pre-training fails to learn any meaningful feature representations.
In order to make the task more challenging, we introduce a new variant, which we call \textbf{PS + Masking}, where we randomly zero out all but 6 randomly selected channels for each sample during the self-supervised pre-training. The model obtained through this scheme performs the best on both sets A and B and is comparable to the \textbf{Supervised} baseline on the validation set.
As for the reason why the PS training was trivial on this particular data set, we hypothesize that this is due to the high spatial correlation in the CHB-MIT data set samples.
This data set contains a high number of homogeneous channels (all of them are EEG channels), which in turn results in a high spatial resolution of the brain activity. At such a spatial resolution, the oscillations due to the brain activity show a correlation both in space and time \cite{ito2005spatial}. However, our PS operator ignores the spatial aspect of the oscillations. When applied, it often corrupts the spatial coherence of the signal, which is then easier to detect than the temporal phase-amplitude incoherence. This hypothesis is supported by the fact that the random channel masking, which reduces the spatial resolution during the self-supervised training, yields a lower training accuracy, \emph{i.e.}, it is a non-trivial task.
\begin{table}[t]
\centering
\caption{Evaluation of the \textbf{PhaseSwap} model on the ST and CHB-MIT datasets.}
\label{tab:other-datasets}
\begin{tabular}{c@{\hspace{1em}}c@{\hspace{1em}}r@{\hspace{1em}}r@{\hspace{1em}}r@{\hspace{1em}}r}
\toprule
Dataset & Experiment & Val. Set & Test set A & Test set B \quad & SelfSL val. accuracy \\ \midrule
ST & \textbf{Supervised} & 69.2\% & 52.3\% & 46.7\% & - \\
ST & \textbf{PhaseSwap} & \textbf{74.9\%} & \textbf{60.4\%} & \textbf{52.3\%} & 71.3\% \\ \hline
CHB-MIT & \textbf{Supervised} & \textbf{92.6\%} & 89.5\% & 58.0\% & - \\
CHB-MIT & \textbf{PhaseSwap} & 92.2\% & 86.8\% & 55.1\% & 99.8\% \\
CHB-MIT & \textbf{PS+Masking} & 91.7\% & \textbf{90.6\%} & \textbf{59.8\%} & 88.1\% \\ \bottomrule
\end{tabular}
\end{table}
\begin{table}[t]
\centering
\caption{Comparison of the performance of the \textbf{PhaseSwap} model on the SC data set for various values of the window size $W_{Self}$.}
\label{tab:window_size}
\begin{tabular}{c@{\hspace{2em}}c@{\hspace{2em}}r@{\hspace{2em}}r@{\hspace{2em}}r}
\toprule
$W_{Self}$ & Experiment & Validation Set & Test set A & Test set B \\ \midrule
1.25sec & \textbf{PhaseSwap} & 84.3\% & 72.0\% & 69.6\% \\
2.5sec & \textbf{PhaseSwap} & \textbf{84.6\%} & 71.9\% & 70.0\% \\
5sec & \textbf{PhaseSwap} & 83.4\% & \textbf{72.5\%} & \textbf{70.9\%} \\
10sec & \textbf{PhaseSwap} & 83.6\% & 71.6\% & 69.9\% \\
30sec & \textbf{PhaseSwap} & 83.9\% & 71.0\% & 69.2\% \\
- & \textbf{Supervised} & 79.4\% & 68.1\% & 66.1\% \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Impact of the Window Size}
In this section, we analyze the effect of the window size $W_{Self}$ used for the self-supervised training on the final performance. We report the balanced accuracy on all our test sets for the SC data set in Table~\ref{tab:window_size}. For all these experiments, we use $20\%$ of the identities in the training set. The capacity of the \textbf{Supervised} model $C_{Sup} \circ E$ is independent of $W_{Self}$ (see sec.~\ref{sec:archi}), and thus so is its performance. We observe that the best performing models are the ones using $W_{Self}=2.5 sec$ for the Validation Set and $W_{Self}=5 sec$ for sets A and B. We argue that the features learned by the self-supervised model are less specific for larger window sizes. The PS operator drastically changes structured parts of the time series, but barely affects pure noise segments. As discussed in sec.~\ref{sec:phaseswap}, white noise is invariant with respect to the PS operator. With smaller window sizes, most of the segments are either noise or structured patterns, but as the window size grows, its content becomes a combination of the two.
\subsection{Frozen vs Fine-tuned Encoder}
In Table~\ref{tab:summary}, we analyze the effect of freezing the weights of $E$ during the supervised fine-tuning. We compare the performance of the four variants described in sec.~\ref{sec:models} on the SC data set. All variants use $W_{Self}= 5 sec$, $W_{Sup}=30 sec$ and $r_{train}=20\%$. As expected, we observe that the \textbf{PhaseSwap} variant is the most performant one since it is less restricted in terms of training procedure than \textbf{PSFrozen} and \textbf{Random}. Moreover, the \textbf{PSFrozen} outperforms the \textbf{Random} variant on all test sets and is on par with the \textbf{Supervised} baseline on the Test set B. This confirms that the features learned during pre-training are useful for the downstream classification even when the encoder model $E$ is frozen during the fine-tuning.
The last variant, \textbf{Random}, allows us to disentangle the contribution of the self-supervised task from the prior imposed by the architecture choice for $E$. As we can see in Table~\ref{tab:summary}, the performance of the \textbf{PhaseSwap} variant is significantly higher than the latter variant, confirming that the self-supervised task chosen here is the main factor behind the performance gap.
\begin{table}[t]
\centering
\caption{Balanced accuracy reported on the SC data set for the four training variants.}
\label{tab:summary}
\begin{tabular}{c@{\hspace{2em}}r@{\hspace{2em}}r@{\hspace{2em}}r}
\toprule
Experiment & Validation Set & Test set A & Test set B \\ \midrule
\textbf{Supervised} & 79.4\% & 67.9\% & 66.0\% \\
\textbf{PhaseSwap} & \textbf{84.3\%} & \textbf{72.0\%} & \textbf{69.6\%} \\
\textbf{PSFrozen} & 75.2\% & 68.1\% & 67.1\% \\
\textbf{Random} & 70.1\% & 62.1\% & 63.9\% \\ \bottomrule
\end{tabular}
\end{table}
\subsection{Architecture}
Most of the experiments in this paper use the FCN architecture \cite{wang2017time}. In this section, we illustrate that the performance boost of the PhaseSwap method does not depend on the neural network architecture. To do so, we also analyze the performance of a deeper architecture in the form of the Residual Network (ResNet) proposed by Humayun et al.~\cite{humayun2019end}. We report in Table~\ref{tab:resnet-comparison} the balanced accuracy computed using the SC data set for two choices of $W_{Self} \in \{2.5 sec, 30 sec\}$ and two choices of $r_{train} \in \{ 20\% , 100\%^*\}$. The table also contains the performance of the FCN model trained using the PS task as a reference. We do not report the results for the RP experiment using $W_{Self}=30sec$ as we did not manage to make the self-supervised pre-training converge. All ResNet models were trained for 15 epochs for the supervised fine-tuning.
For $r_{train}=20\%$, we observe that pre-training the ResNet on the PS task outperforms both the supervised and RP pre-training. We also observe that for this setting, the model pre-trained with $W_{Self}=30sec$ performs better on both the validation set and test set B compared to the one pre-trained using $W_{Self}=5sec$. Nonetheless, the model using the simpler architecture still performs the best on those sets and is comparable to the best performing one on set A. We believe that the lower capacity of the FCN architecture prevents the learning of feature representations that are too specific to the pretext task, compared to the ones learned with the more powerful ResNet.
For the setting $r_{train} = 100\%^*$, the supervised ResNet is on par with a model pre-trained on the PS task with $W_{Self}=30sec$. Recall that $r_{train} = 100\%^*$ refers to the setting where all recording sessions and patients are used for the training set. Based on these results, we conclude that there is a point of diminishing returns in terms of available data, beyond which the self-supervised pre-training might even deteriorate the performance of the downstream classification tasks.
\begin{table}[t]
\centering
\caption{Evaluation of the \textbf{PhaseSwap} model using the ResNet architecture on the SC data set. Values denoted with a * are averages across two runs.}
\label{tab:resnet-comparison}
\begin{tabular}{c@{\hspace{1em}}c@{\hspace{1em}}c@{\hspace{1em}}c@{\hspace{1.5em}}c@{\hspace{1.5em}}c@{\hspace{1.5em}}c}
\toprule
$r_{train}$ & $W_{Self}$ & Architecture & Experiment & Val. Set & Test set A & Test set B \\ \midrule
20\% & 5sec & FCN & FCN + PS & 84.3\% & 72.0\% & 69.6\% \\ \hline
20\% & 5sec & ResNet & phase swap & 82.1\% & \textbf{72.5\%} & 69.6\% \\
20\% & 5sec & ResNet & RP & 72.3\% & 67.4\% & 65.9\% \\
20\% & - & ResNet & supervised & 79.1\%* & 70.0\%* & 66.5\%* \\
20\% & 30sec & ResNet & phase swap & \textbf{83.6\%} & 70.7\% & \textbf{69.3\%} \\ \hline
100\%* & 5sec & FCN & FCN + PS & 84.3\% & - & - \\ \hline
100\%* & 5sec & ResNet & phase swap & 81.2\% & - & - \\
100\%* & 5sec & ResNet & RP & 79.1\% & - & - \\
100\%* & - & ResNet & supervised & \textbf{84.2\%*} & - & - \\
100\%* & 30sec & ResNet & phase swap & \textbf{84.2\%} & - & - \\ \bottomrule
\end{tabular}
\end{table}
\section{Conclusions}
We have introduced the phase swap pretext task, a novel self-supervised learning approach suitable for bio-signals. This task aims to detect when bio-signals have mismatching phase and amplitude components. Since the phase and amplitude of white noise are uncorrelated, features learned with the phase swap task do not focus on noise patterns. Moreover, these features exploit signal patterns present both in the amplitude and phase domains. We have demonstrated the benefits of learning features from the phase component of bio-signals in several experiments and comparisons with competing methods. Most importantly, we find that pre-training a neural network with limited capacity on the phase swap task builds features with a strong generalization capability across subjects and observed sessions. One possible future extension of this work, as suggested by the results on the CHB-MIT data set \cite{shoeb2009application}, is to incorporate spatial correlations in the PS operator through the use of a spatio-temporal Fourier transformation.
\section*{Acknowledgements}
This research is supported by the Interfaculty Research Cooperation ``Decoding Sleep: From Neurons to Health \& Mind'' of the University of Bern.
\bibliographystyle{splncs03} |
1805.09346 | \section{INTRODUCTION}
\label{sec:intro}
The Event Horizon Telescope\footnote{\indent http://www.eventhorizontelescope.org/} (EHT) is a very-long-baseline interferometry (VLBI) experiment operating at observing frequencies of 230 and 345~GHz\cite{2009astro2010S..68D}. The EHT aims to study the immediate environment of supermassive black holes such as Sagittarius A* (Sgr~A*) at the center of our galaxy\cite{2008Natur.455...78D} and the black hole in the center of galaxy M87\cite{2012Sci...338..355D} with angular resolution sufficient to resolve the event horizons of these black holes. The EHT array is composed of submillimeter telescopes and telescope arrays around the globe, each of which is outfitted with precise time standards and systems for fast digitization and recording of data. As of 2017, the EHT has performed its first 230~GHz observation with an array consisting of the Submillimeter Array (SMA) and the James Clerk Maxwell Telescope (JCMT) in Hawaii, the Submillimeter Telescope (SMT) in Arizona, the Large Millimeter Telescope (LMT) in Mexico, the Institut de Radioastronomie Millim\'{e}trique (IRAM) 30 m telescope at Pico Veleta, Spain, the Atacama Large Millimeter/submillimeter Array (ALMA)\cite{2018PASP..130a5002M} and the Atacama Pathfinder Experiment (APEX)\cite{2015A&A...581A..32W} in Chile, and the South Pole Telescope (SPT) in Antarctica.
The inclusion of the SPT is crucial to the EHT because of its geographic location. The South Pole is not only an outstanding place for submillimeter observations, due to its low precipitable water vapor and stable atmosphere\cite{2016PASP..128g5001R}, but it also provides the most extended baselines in the array when the SPT is paired with the other EHT sites, most of which are located in the northern hemisphere. For example, the baseline between the SPT and the SMT is greater than 10,000~km and gives 15~micro-arcsecond ($\sim$ 14 G$\lambda$) angular resolution at 345~GHz. For the main target, Sgr~A*, the apparent diameter of the black hole event horizon is approximately 50~$\mu$as, and this high resolution is necessary for imaging of the innermost region around the black hole event horizon. Polarization-sensitive EHT observations also enable the study of magnetic structure in the accretion flow\cite{2015Sci...350.1242J}. Finally, the SPT can serve as a continuous observing partner for any other EHT station because Sgr~A* never sets at the South Pole. This allows the longest time series observations, which will be an important resource for time variability studies of this unique object.
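As a rough check of these figures, the fringe spacing follows from $\theta \approx \lambda/B$; the sketch below (ours, with an assumed $\sim$12,200 km baseline consistent with the quoted $\sim$14 G$\lambda$) reproduces the $\sim$15 $\mu$as resolution:
\begin{verbatim}
import math

C = 299792458.0                        # speed of light, m/s

def resolution_uas(baseline_m, freq_hz):
    lam = C / freq_hz                  # observing wavelength, m
    theta = lam / baseline_m           # fringe spacing, radians
    return theta * (180.0 / math.pi) * 3600.0 * 1e6  # -> micro-arcsec

print(resolution_uas(1.22e7, 345e9))   # ~14.7 micro-arcseconds
\end{verbatim}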
The SPT is a 10-meter diameter, off-axis telescope built to observe the cosmic microwave background (CMB) radiation at millimeter wavelengths\cite{2011PASP..123..568C}. It is sited at the Dark Sector Lab (DSL) of the Amundsen-Scott South Pole Station, Antarctica, along with the Background Imaging of Cosmic Extragalactic Polarization (BICEP) project. The SPT CMB camera has gone through several upgrades, including adding polarization sensitivity and increasing the number of detectors in the array\cite{2009AIPC.1185..475C, Austermann:2012jl, Benson:2014br}. Currently, the third generation SPT-3G is in operation. Since the camera uses a transition-edge sensor (TES) bolometer array that is insensitive to the phase of incoming radiation, a new coherent signal chain is required to perform interferometric observation in coordination with the other EHT sites. We have developed a dual-frequency VLBI receiver system to incorporate the SPT into the EHT array. Both the 230 and 345 GHz receivers facilitate dual-polarization, two-single-sideband observations. We have deployed the receiver system, including a hydrogen maser and the VLBI recording setup, to the South Pole. The 230 GHz receiver had its first on-sky test in January 2015 and successfully detected an interferometric fringe with APEX\cite{Kim2018CenA}. It began scientific operation with the EHT observation in April 2017. In this paper, we present the receiver design, optics, and software, and report the lab test results of the system components.
\section{RECEIVER SYSTEM}
\label{sec:rxsystem}
The VLBI receiving system of the SPT comprises the receiver, the receiver electronics, the VLBI backend, and the optics, as illustrated schematically in Figure~\ref{fig:system}. The receiver combines electromagnetic waves from the sky with a high-purity and stable reference tone (local oscillator; LO) in a superconducting mixer, downconverting the sky signal to an intermediate frequency (IF) of a few GHz. The IF signal is amplified through the receiver electronics and forwarded to the VLBI backend. The VLBI backend digitizes the analog signal and records the data to arrays of hard disk drives with accurate timestamping. The whole receiver system is synchronized to the 10 MHz reference signal from the hydrogen maser, an atomic clock. In this section, we explain each element of the receiver system; we describe the optics separately in the next section.
\begin{figure}[t]
\begin{center}
\begin{tabular}{c}
\includegraphics[height=12cm]{systemdiagram.pdf}
\end{tabular}
\end{center}
\caption[]
{\label{fig:system} SPT VLBI receiver system diagram. The coherent receiver operates at 230 and 345 GHz. The receiver electronics provide the maser-locked LO to the mixers and deliver the IF signal to the VLBI backend for digitization and recording. Receiver control, telescope control, and VLBI control computers run the software for the system.}
\end{figure}
\subsection{Receiver}
\label{sec:rxsystem_rx}
The receiver operates at both the 230 and 345 GHz frequency bands. Following ALMA terminology, we will often refer to the 230 GHz and 345 GHz portions of the receiver as band 6 (211$-$275 GHz) and band 7 (275$-$373 GHz), respectively. Telescopes in the EHT use identical LO and IF frequencies, with the IF chosen to match the ALMA receivers. The LO frequencies were chosen to avoid the rest frequencies of carbon monoxide (CO), which would absorb emission from the galactic center, and to maximize atmospheric transmission in both sidebands in bands 6 and 7. Table~\ref{tab:eht_freq} shows the LO frequency, IF range, and corresponding sky frequencies for each band. The band 7 mixer for the SPT receiver is under development as of May 2018.
\begin{table}[t]
\vspace{10pt}
\caption{SPT VLBI receiver frequency setup}
\label{tab:eht_freq}
\begin{center}
\begin{tabular}{|l|l|l|}
\hline
\rule[-1ex]{0pt}{3.5ex} & Band 6 & Band 7 \\
\hline
\rule[-1ex]{0pt}{3.5ex} Gunn oscillator frequency & 73.7 GHz & 114.2 GHz \\
\hline
\rule[-1ex]{0pt}{3.5ex} Local oscillator (LO) frequency & 221.1 GHz & 342.6 GHz \\
\hline
\rule[-1ex]{0pt}{3.5ex} Intermediate frequency (IF) & 5$-$9 GHz & 4$-$8 GHz \\
\hline
\multirow{2}{*}{\hspace{0.75ex}Sky frequency}& 212.1$-$216.1 GHz & 334.6$-$338.6 GHz \\
& 226.1$-$230.1 GHz & 346.6$-$350.6 GHz \\
\hline
\end{tabular}
\end{center}
\end{table}
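The sideband arithmetic behind Table~\ref{tab:eht_freq} is simple: for a sideband-separating mixer, the lower and upper sidebands are LO~$-$~IF and LO~$+$~IF. The following minimal Python sketch (illustrative only, not part of the control software) reproduces the sky-frequency rows of the table:
\begin{verbatim}
# Sketch: sky-frequency ranges from the LO and IF values in the text.
# For a 2SB mixer: LSB = LO - IF, USB = LO + IF.
def sky_bands(lo, if_lo, if_hi):
    return (lo - if_hi, lo - if_lo), (lo + if_lo, lo + if_hi)

for name, lo, (if_lo, if_hi) in [("band 6", 221.1, (5, 9)),
                                 ("band 7", 342.6, (4, 8))]:
    (l1, l2), (u1, u2) = sky_bands(lo, if_lo, if_hi)
    print(f"{name}: LSB {l1:.1f}-{l2:.1f} GHz, "
          f"USB {u1:.1f}-{u2:.1f} GHz")
\end{verbatim}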
The SPT VLBI receiver incorporates bands 6 and 7 in a package that fits within the confined space of the climate-controlled SPT receiver cabin. The cabin is dominated by the SPT-3G receiver, its optics, and its optical path, so the VLBI receiver is positioned behind the SPT-3G tertiary mirror, toward the primary (Figure~\ref{fig:optics_model}), and is illuminated by a separate optical system.
The receiver cryostat surrounds a Sumitomo RDK-408D2P closed-cycle refrigerator, which is connected to an F-70L helium compressor. The two-stage Gifford-McMahon cold head achieves temperatures of 43~K on the first stage and 4~K on the second. Eight LakeShore DT-670 silicon diodes monitor both stages of the cold head and the mixer block temperatures.
The feed horns, ortho-mode transducers (OMTs), and mixers are cooled to 4~K, and the rest of the cold assembly, including frequency triplers for the LOs, LO waveguide, and wiring harnesses, is coupled to the first stage (Figure~\ref{fig:rx_assy}). Gold-plated heat straps between the mixer blocks and the second stage ensure that the blocks cool efficiently. To prevent thermal contact between the stages via the electrically conductive LO waveguide, we use waveguide thermal isolators with a periodic bandgap structure\cite{2003stt..conf..148H}, supported by G-10 fiberglass. There are two sets of isolators, separating the ambient and first-stage waveguide segments and the first- and second-stage segments.
\begin{figure}[t]
\begin{center}
\begin{tabular}{c}
\includegraphics[height=10cm]{vlbioptics_note.png}
\end{tabular}
\end{center}
\caption[]
{\label{fig:optics_model} CAD model of the SPT receiver cabin. The SPT-3G and VLBI systems are indicated by white and yellow arrows, respectively. Prime and Cassegrain foci are shown with black stars. The primary is located beyond the right side of the figure. The grey box shows the location of receiver cabin roof and walls.}
\end{figure}
\begin{figure}[ht]
\begin{center}
\begin{tabular}{cc}
\includegraphics[height=7.5cm]{rx_assy.jpg} & \includegraphics[height=7.5cm]{rx_assy_230ghz_note.png}
\end{tabular}
\end{center}
\caption[]
{ \label{fig:rx_assy} {\it Left}: The 4~K stage of the receiver. The left half is the 230 GHz receiving system; the right half is the 345 GHz system. Feed horns for both frequencies are tilted by 5.74 degrees with respect to the central vertical plane to share the rotatable tertiary mirror. {\it Right}: The 230 GHz receiver assembly. The feed horn, OMT, and mixer blocks are attached to the second refrigeration stage and cooled to 4~K.}
\end{figure}
The 230 GHz receiver employs an ALMA band 6 corrugated feed horn and two mixer/preamplifier modules developed by the National Radio Astronomy Observatory (NRAO)\cite{2014ITTST...4..201K}. The mixers are driven by the LO signal generated from a Spacek Labs bias-tuned Gunn oscillator and tripled by Virginia Diodes WR3.4$\times$3 broadband triplers. The LO is waveguide-injected. The fixed-frequency Gunn oscillator reduces complexity compared to frequency/backshort tuning for broadband oscillators, which is useful for winter operation of the receiver. We fabricated a waveguide quadrature hybrid\cite{SIRKANTH:gZT_O6qs} to equally divide the Gunn output for the two mixer blocks, and electronically control the LO power to each using QuinStar Technology PIN diode variable attenuators. The LO system, including a harmonic mixer and a cross-guide coupler, is attached outside the dewar, and waveguide vacuum feedthroughs\cite{ediss:2005vr} bridge the vacuum shield. All the waveguide designs of the assembly follow the ALMA standard\cite{Kerr99waveguideflanges}.
Each mixer block delivers two isolated sidebands in a single polarization, with a 4$-$12 GHz IF. The polarization splitting is achieved by the OMT, which is a version of the design in Ref.~\citenum{2009stt..conf..191D}, proportionally scaled to operate in the 230 GHz frequency band. The S-parameters and the polarization isolation of the scaled design were simulated using the frequency domain solver of CST Microwave Studio. Figure~\ref{fig:omt} shows the cross-section of the OMT block. The OMT separates the input radiation from the horn into two linearly polarized signals. We add a polarization twist \cite{Chattopadhyay:2010kba} at the end of one OMT output to simplify the geometry of the mixer blocks in the dewar. The IF outputs of the mixers are sent via stainless steel coaxial cable to the SMA feedthroughs on the bottom of the dewar, and then to the electronics rack inside the cabin through LMR-240 coaxial cables.
\begin{figure}[ht]
\vspace{10pt}
\begin{center}
\begin{tabular}{cc}
\includegraphics[height=7.5cm]{omt_block.png} & \includegraphics[height=7.5cm]{omt_large_note.png}
\end{tabular}
\end{center}
\caption[]
{ \label{fig:omt} {\it Left}: The inner surface of the split-block OMT, with a polarization twist at one end. {\it Right}: The T-shaped waveguide of the OMT for the polarization separation of the input signal.}
\end{figure}
The receiver assembly is surrounded by an aluminum radiation shield with Mylar super-insulation. There are separate vacuum windows above the two feed horns. The window diameter is chosen to be greater than five times the beam waist size. We use Z-cut quartz manufactured by Boston Piezo-Optics for the windows and bond Teflon sheets to both sides of each window as an anti-reflection (AR) coating\cite{2001stt..conf..410K}. From thickness measurements of the Teflon-bonded window, we estimate the insertion loss to be less than 0.05 dB in the receiver sky frequency bands. An AR-coated quarter-wave plate is installed on top of the window to convert circular polarization to linear polarization. We also ran a mechanical finite element analysis (FEA) of the dewar. For dewar tilt angles between 0 and 90 degrees, the maximum displacement of the horn aperture is less than 80 $\mu$m, corresponding to $\sim$2\% of the aperture diameter.
The receiver electronics shown in Figure~\ref{fig:system} are installed in the SPT receiver cabin under the optical bench, together with the SPT-3G electronics. An external computer controls mixer and amplifier bias settings. The LO phase-locked loop (PLL) incorporates a 100~MHz crystal reference oscillator that is locked to the 10~MHz maser signal. For band 6, a 12.3~GHz dielectric resonator oscillator (DRO) is locked to this crystal, and the Gunn oscillator is locked to the 6$^{\rm th}$ harmonic of the DRO signal with a 100~MHz offset, again referenced to the 100~MHz crystal. The band 7 Gunn oscillator is locked to a 12.7~GHz DRO at the 9$^{\rm th}$ harmonic. The PLL box has computer-adjustable loop parameters and lock monitoring. The warm IF amplifier box contains four chains (two sidebands, two polarizations) of amplifiers and computer-controlled variable attenuators. Since the cabin tilts and rotates with the telescope, the IF signal is transferred via an optical fiber link to the stationary SPT control room, where the VLBI backend is installed. The iBOB spectrometer is explained in a later section.
We verified the receiver performance with noise temperature measurements, tone injection tests, and phase stability tests. Figure~\ref{fig:Trx} shows the noise temperature of the band 6 receiver across the IF band for one polarization. It is $\sim$40~K for all four IF channels, measured with the Y-factor technique using a nitrogen-temperature cold load. The same test measured at the VLBI backend shows no degradation in noise temperature from the subsequent gain, fiber, and downconversion stages.
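For reference, the Y-factor arithmetic behind this measurement is sketched below; the load temperatures and the Y value are illustrative, not the measured values.
\begin{verbatim}
# Sketch: receiver noise temperature from a hot/cold (Y-factor)
# measurement, where Y = P_hot / P_cold. Values are illustrative.
def t_rx(y, t_hot=295.0, t_cold=77.0):
    # P is proportional to G * (T_load + T_rx), so Y determines T_rx.
    return (t_hot - y * t_cold) / (y - 1.0)

print(t_rx(2.86))  # ~40 K for a Y-factor of about 2.86
\end{verbatim}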
\begin{figure}[t]
\begin{center}
\begin{tabular}{c}
\includegraphics[height=7.5cm]{Trx_20141019_avg15.png}
\end{tabular}
\end{center}
\caption[]
{\label{fig:Trx} 230~GHz band spectral noise temperature for both sidebands of one polarization. The receiver temperature is $\sim$40~K in the 5-9~GHz IF passband.}
\end{figure}
\subsection{Frequency reference}
A hydrogen maser manufactured by T4Science was installed at the DSL to serve as the fundamental frequency reference for VLBI observations. It provides 10 MHz references for synchronizing the receiver system with the LO and for time-stamping the recorded data. We set up the maser inside a temperature-controlled enclosure placed on a stack of urethane to vibrationally isolate the maser from the DSL building and to maintain a stable temperature environment. The maser signal is connected to the receiver cabin via a long run of coax: approximately 100 feet of LMR-400 through the interior of the SPT/DSL building and approximately 200 feet of Times Microwave Phase Track 210 (PT210) cable through the azimuth and elevation cable wraps of the SPT. These cables are selected to be particularly phase stable at 10~MHz at their operating temperatures\cite{rogers2008_mk5_069}. The PT210 cable is run through silicone foam sponge tubing to slow thermal changes, and this is further encased in a flexible metal conduit. Both the LMR-400 and PT210 segments are composed of a pair of cables so that the round-trip phase of the 10~MHz can be monitored at the maser. A low noise distribution amplifier (LNDA) inside the receiver cabin distributes the \mbox{10 MHz} coming from the maser and provides a 10~MHz loopback signal that returns on the second coax. During observations the 10~MHz round-trip phase is continuously monitored for changes induced by thermal, mechanical, or other disturbances. The Allan deviation of the round-trip maser phase is virtually identical to what was found by the manufacturer when beating the maser against an identical model, around $1\times10^{-13}$ at 1 s and $1.2\times10^{-15}$ at 1000 s. To ensure that the maser is operating properly before observations, it is compared against an Oscilloquartz crystal oscillator to verify its frequency stability and phase noise characteristics.
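As an illustration, the statistic quoted above can be estimated from a fractional-frequency time series with a short sketch like the following (a simple non-overlapping estimator, not the analysis code used at the site):
\begin{verbatim}
import numpy as np

# Sketch: non-overlapping Allan deviation of fractional-frequency
# data y at an averaging time of m samples.
def adev(y, m):
    y = np.asarray(y, dtype=float)
    n = len(y) // m
    ybar = y[: n * m].reshape(n, m).mean(axis=1)   # block averages
    return np.sqrt(0.5 * np.mean(np.diff(ybar) ** 2))
\end{verbatim}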
\subsection{VLBI backend}
\label{sec:rxsystem_vlbi}
The VLBI backend is designed to ingest four IF bands from the receiver (two sidebands of two polarizations), each spanning 4~GHz. For band 6 these are 5-9~GHz, for band 7 they are 4-8~GHz.
The block downconverter (BDC) divides each IF into two 0--2 GHz basebands, using an internal LO at 7 or 6 GHz. When all of these bands are digitized to two bits of precision at the Nyquist rate, the instantaneous recording rate is 64~Gbits per second. The digitization is done by ROACH2 (Reconfigurable Open Architecture Computing Hardware) digital backend (R2DBE) units, which have demonstrated 4096 megasamples per second sampling for two channels\cite{2015PASP..127.1226V}. The 64~Gbps EHT backend system consists of four R2DBEs and four Mark 6 recorders\cite{2011evga.conf...31W, 2013PASP..125..196W}. Figure~\ref{fig:vlbirack} shows the VLBI backend setup at the SPT. The recorded data are correlated on the DiFX correlators \cite{2011PASP..123..275D} at the MIT Haystack Observatory and the Max Planck Institute for Radio Astronomy.
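The aggregate rate follows directly from the channelization described above, as the following sketch illustrates (all values taken from the text):
\begin{verbatim}
# Sketch: aggregate VLBI recording rate for this configuration.
n_if = 4               # two sidebands x two polarizations
basebands_per_if = 2   # BDC splits each IF into two 0-2 GHz basebands
bandwidth_hz = 2e9     # per baseband
bits = 2               # quantization
# Nyquist sampling: 2 samples per second per Hz of bandwidth.
rate_bps = n_if * basebands_per_if * (2 * bandwidth_hz) * bits
print(rate_bps / 1e9)  # 64.0 Gbps
\end{verbatim}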
\begin{figure}[hb]
\vspace{10pt}
\begin{center}
\begin{tabular}{c}
\includegraphics[height=12cm]{SPT_VLBIrack.png}
\end{tabular}
\end{center}
\caption[]
{\label{fig:vlbirack} VLBI backend installed in the SPT control room. These racks comprise the 64~Gbps data recording system.}
\end{figure}
\subsection{Calibration system and spectrometer}
\label{sec:rxsystem_cal}
We developed a calibration system\footnote{Receiver Selection and Calibration Unit for EHT-SPT (RESCUES); http://hdl.handle.net/10150/579318} for the receiver to keep track of the system temperature during observations. The system temperature can be derived from the ratio of the powers received from the sky and from a load of known temperature. Due to the limited space inside the receiver cabin, the calibration system is installed above the receiver cryostat, along with the tertiary mirror mount (Figure~\ref{fig:cal_tert}). We use a microwave absorber as an ambient load, coupled to a Schneider Electric Motion LMDCE421 motor with a rotation shaft; it covers the receiver beam during calibration. An AD-590 temperature transducer is embedded inside the absorber to read the load temperature. The atmospheric opacity is useful information for the calibration because the system temperature depends on the source elevation. A 350~$\mu$m tipping radiometer is installed at the telescope site, and its 350~$\mu$m opacity can be converted to 225 GHz opacity by the relation given in Ref.~\citenum{2016PASP..128g5001R}. The calibration load also carries a feed horn and a harmonic mixer that can be positioned in the receiver beam to generate a coherent tone for signal path verification and coherence testing.
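As an illustration of this single-load calibration, the sketch below computes a system temperature from the measured load/sky power ratio; it assumes the load and the atmosphere are near a common ambient temperature, and the numbers are illustrative.
\begin{verbatim}
# Sketch: chopper-wheel-style system temperature from the ratio
# Y = P_load / P_sky, assuming T_load ~ T_atm ~ t_amb (illustrative).
def t_sys(p_load, p_sky, t_amb=250.0):
    y = p_load / p_sky
    return t_amb / (y - 1.0)

print(t_sys(3.0, 1.0))  # 125 K for Y = 3
\end{verbatim}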
\begin{figure}[t]
\begin{center}
\begin{tabular}{cc}
\includegraphics[height=7.5cm]{tert_front1.png} & \includegraphics[height=7.5cm]{tert_front2.png}
\end{tabular}
\end{center}
\caption[]
{ \label{fig:cal_tert} {\it Left}: The tertiary mirror assembly installed on top of the receiver dewar for observations. {\it Right}: The calibration load for the system temperature measurement, on the tertiary mount. A feed horn with a harmonic mixer for tone injection is located on the calibration load. The tone injection feed horn assembly is tilted downward so that it can illuminate the 230 GHz receiver feed horn.}
\end{figure}
To aid in pointing, we installed a digital spectrometer for measurements of CO lines that lie near the EHT observing bands. The spectrometer itself is a pair of FPGA spectrometers based on the CASPER iBOB board\footnote{https://casper.berkeley.edu/wiki/1\_GHz\_-\_1024\_Channel\_Wideband\_Spectrometer}. They are fed by a special signal chain that converts the CO 2$-$1 line (230.538 GHz, IF $=$ 9.438 GHz for band 6) and the CO 3$-$2 line (345.796 GHz, IF $=$ 3.196 GHz for band 7) to approximately 750~MHz. An example of a spectral line map made with this system is provided in Figure~\ref{fig:ibob_spec}.
\begin{figure}[t]
\begin{center}
\begin{tabular}{c}
\includegraphics[height=10cm]{iBOB_I16293.png}
\end{tabular}
\end{center}
\caption[]
{\label{fig:ibob_spec} Spectral channel map of the protostar IRAS 16293-2422, a pointing source. The map shows sixteen 1~MHz channels starting from a second IF of 695 MHz. Pointing offsets can be determined from one or many channels, as appropriate for each source.}
\end{figure}
\section{OPTICS}
\label{sec:optics}
The SPT has an off-axis Gregorian design with a 10-meter primary mirror to minimize blockage and scattering of incident light from the faint CMB (see Ref.~\citenum{2008ApOpt..47.4418P} for details). The SPT was not initially designed to illuminate any instrument other than its CMB camera, so special optics are required to redirect the light from the primary mirror to the VLBI receiver.
\subsection{Design}
\label{sec:optics_design}
The VLBI optical system has a Cassegrain design, with a hyperbolic secondary and an ellipsoidal tertiary mirror. The mirror parameters were optimized with the Zemax optical design software. The model was chosen such that the optics illuminate the 10 m dish to greater than 12 dB at both frequencies, given the beam parameters and locations of the 230 and 345 GHz feed horns. In Figure~\ref{fig:optics_model}, we show the VLBI optics installed around the SPT-3G receiver and its optics. The VLBI secondary mirror blocks the 3G secondary mirror and reflects the beam from the primary to the VLBI tertiary mirror, and then to the feed horns of the VLBI receiver. The tertiary mirror is mounted on top of the dewar and, to simplify winter operation, rotates around the optical axis so that it focuses the beam toward either the 230 or the 345 GHz side of the receiver.
The mirrors are easily removable to clear the optical path for the CMB receiver and are installed only for EHT observing campaigns. We use a SpitzLift portable crane to transport the mirrors to the top of the receiver cabin. Figure~\ref{fig:vlbioptics} shows the secondary and tertiary mirrors installed for VLBI observation.
\begin{figure}[ht]
\begin{center}
\begin{tabular}{c}
\includegraphics[height=7.5cm]{mirrors.png}
\end{tabular}
\end{center}
\caption[]
{\label{fig:vlbioptics} The secondary and tertiary mirrors of the SPT VLBI receiver system installed at the South Pole, viewed from the primary mirror. The mirror assemblies are covered by environmental seals to prevent cold air from flowing into the receiver cabin.}
\end{figure}
\subsection{Beam measurement}
\label{sec:optics_beam}
The receiver beam pattern has been measured in both frequency bands (Figure~\ref{fig:rx_beam}) by measuring the response to a coherent tone as it is scanned across a plane above the receiver\cite{2018KimMarroneISSTT}. As shown in Figure~\ref{fig:rx_assy}, both the 230 and 345 GHz receivers are inside a single dewar, and the feed horns are intentionally tilted inward, toward the centerline of the receiver between the two horns, so that both can face the shared tertiary mirror. We model the near-field scan as a Gaussian beam propagating at an angle to the measurement plane. We characterize the model parameters, including the three-dimensional location of the feed horn phase center and its tilt angle, by fitting the model to the data. The inferred parameters indicate that the feed horn assemblies are correctly positioned and oriented.
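For concreteness, a schematic of the Gaussian-beam model used in such fits is sketched below, reduced to one transverse axis; the function name and parametrization are illustrative and are not the actual fitting code.
\begin{verbatim}
import numpy as np

# Sketch: 1-D cut of a fundamental Gaussian beam a distance z from
# the waist (z != 0), tilted by tilt_rad; lam = 1.3 mm at 230 GHz.
def gaussian_beam(x, w0, z, tilt_rad, lam=1.3e-3):
    k = 2 * np.pi / lam
    zr = np.pi * w0 ** 2 / lam            # Rayleigh range
    w = w0 * np.sqrt(1 + (z / zr) ** 2)   # beam radius at z
    R = z * (1 + (zr / z) ** 2)           # phase-front curvature
    amp = np.exp(-x ** 2 / w ** 2)        # field amplitude (power = amp**2)
    phase = k * (x ** 2 / (2 * R) + x * np.sin(tilt_rad))
    return amp, phase
\end{verbatim}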
\begin{figure}[ht]
\begin{center}
\begin{tabular}{cc}
\includegraphics[height=6cm]{20161014_230_1_high_fit_x_power.png} & \includegraphics[height=6cm]{20161014_230_1_high_fit_x_phase.png}
\end{tabular}
\end{center}
\caption[]
{ \label{fig:rx_beam} The vector beam measurement of the 230 GHz receiver provides both power ({\it left}) and phase ({\it right}). The blue dots and the red squares show the data and the best-fit model, respectively.}
\end{figure}
We also measured the beam pattern after the tertiary mirror using the same technique in planes near the Cassegrain focus. These data were used to verify the beam propagation direction between tertiary and secondary, based on the measured propagation angle and the position of the beam in several parallel planes along the propagation direction.
\section{SOFTWARE}
\label{sec:software}
The SPT VLBI receiving system has three types of software: receiver control, calibration, and telescope control. The receiver control software runs on a BeagleBone Black (BBB), a single-board computer that runs Linux. The software controls the receiver and related electronics up to the optical fiber relay. The receiver tuning screen in Figure~\ref{fig:tuningscreen} is the primary control interface; it was originally developed by Thomas W. Folkers for the Kitt Peak 12-m telescope and the Submillimeter Telescope (SMT) on Mount Graham, operated by the Arizona Radio Observatory (ARO), and has been adapted for the SPT application.
The primary parameters controlled/monitored by this software are the mixer and amplifier bias settings, local oscillator power levels and phase-locked loop parameters, thermometry, and the gain of the warm amplifier chain.
\begin{figure}[t]
\begin{center}
\begin{tabular}{c}
\includegraphics[height=8cm]{tuningscreen.png}
\end{tabular}
\end{center}
\caption[]
{\label{fig:tuningscreen} The tuning screen for the 230 GHz receiver. The screen controls mixer bias, the Gunn oscillator lock, and LO injection, and monitors the temperature inside the receiver.}
\end{figure}
The calibration software obtains data for the system temperature measurement and for a priori calibration of the telescope. It monitors the calibration load temperature and the IF power, and positions the feed horn for tone injection. The software interacts with the telescope control software, the Generic Control Program (GCP)\cite{Story:2012dr}, which handles SPT control and data acquisition.
\section{SUMMARY}
In this paper, we describe the development of the VLBI receiver for the SPT. We deployed the receiver system, including a hydrogen maser and VLBI recording backend system, to the South Pole in the 2016-17 austral summer, and the SPT joined the EHT array in its first campaign in April 2017. This system samples two polarizations near LO frequencies of either 221.1 or 342.6 GHz and instantaneously digitizes 16~GHz of receiver bandwidth, yielding a 64~Gbps VLBI data rate. The clean receiver optical path, low-noise mixers, and the atmospheric environment of the South Pole combine to create a high-sensitivity VLBI station that significantly extends the baseline coverage of the EHT array.
\acknowledgments
J.K. and D.P.M. acknowledge support from NSF grants AST-1207752 and AST-1440254. The South Pole Telescope program is supported by the National Science Foundation through grant PLR-1248097. Partial support is also provided by the NSF Physics Frontier Center grant PHY-0114422 to the Kavli Institute of Cosmological Physics at the University of Chicago, the Kavli Foundation, and the Gordon and Betty Moore Foundation through Grant GBMF\#947 to the University of Chicago. We thank Chris Kendall and Dave Pernic for their assistance at the South Pole. We acknowledge essential support for this system that is provided through the loan of several key components. The maser and quartz crystal are on loan from the Academia Sinica Institute of Astronomy and Astrophysics. The National Radio Astronomy Observatory has loaned the band 6 feed horn, which does not meet ALMA specifications at non-EHT frequencies. The Smithsonian Astrophysical Observatory has loaned cryogenic IF amplifiers that are used in the band 7 receiving system.
\section{Introduction}
The purpose of this paper is to determine the signature pair (defined momentarily) for Hermitian polynomials arising from group-invariant CR mappings from spheres to hyperquadrics. Let $\Gamma$ be a finite subgroup of the unitary group $U(n)$. Let $S^{2n-1}$ denote the unit sphere in $\mathbb{C}^{n}$. We assume $n\geq2$.
A natural question is: when does there exist a non-constant $\Gamma$-invariant CR mapping from $S^{2n-1}$ to $S^{2N-1}$? Forstneri{\v c} showed that a smooth CR mapping from $S^{2n-1}$ to $S^{2N-1}$ must be a rational mapping \cite{F2}. He also found restrictions on the possible groups $\Gamma$ for which such a rational map exists \cite{F1}. Lichtblau \cite{L} proved that for non-constant $\Gamma$-invariant rational maps between spheres to exist, $\Gamma$ must be cyclic. Later D'Angelo and Lichtblau \cite{D0,DL} answered this question by finding the complete list of cyclic $\Gamma$ for which such a rational map exists. To do so they introduced the $\Gamma$-invariant Hermitian polynomial defined by \begin{equation}\label{e:phigamma}\Phi_{\Gamma}(z, \bar{z})=1-\prod_{\gamma \in \Gamma}{\left(1-\langle \gamma z, z \rangle \right)}.\end{equation} This polynomial also determines, by diagonalizing its underlying Hermitian matrix of coefficients, a group-invariant CR map from a sphere to a hyperquadric \cite{D2}. Let $N(\Gamma)$, $N^{+}(\Gamma)$, and $N^{-}(\Gamma)$ be the numbers of total eigenvalues, positive eigenvalues, and negative eigenvalues respectively of this underlying Hermitian matrix of coefficients of $\Phi_{\Gamma}$. We refer the reader to section 2 for precise definitions. The {\it signature pair} $S(\Gamma)$ is $$S(\Gamma)=(N^{+}(\Gamma), N^{-}(\Gamma)),$$ and the {\it positivity ratio} is $$L(\Gamma)=\frac{N^{+}(\Gamma)}{N(\Gamma)}.$$ Because the positivity ratio is often difficult to compute, we study its asymptotic behavior. For a family of subgroups $\Gamma_p$ of $U(n)$, we define the {\it asymptotic positivity ratio} to be $$\lim_{p\to \infty}{L(\Gamma_p)}.$$ Here the index $p$ is closely related to the order of the group (see section 3). The polynomial $\Phi_\Gamma$ canonically induces a CR mapping to a hyperquadric with $N^{+}(\Gamma)$ positive eigenvalues and $N^{-}(\Gamma)$ negative eigenvalues in its defining equation (see \cite{D2}). We do not pursue this aspect of the polynomial $\Phi_{\Gamma}$.
The main results of this paper compute the signature pair for finite subgroups of $SU(2)$, calculate the asymptotic positivity ratio for cyclic subgroups of $U(2)$, and determine the signature pair for the dihedral groups in $U(2)$. In this paper we work in $U(2)$; however, many of the results can be extended to $U(n)$. The results for arbitrary $n$ will appear in the author's doctoral thesis \cite{G}. We first restrict to finite subgroups of $SU(2)$. The only finite subgroups of $SU(2)$ are isomorphic to one of the following:
\begin{itemize}
\item Cyclic group of order $p$: $C_p=\left< a \; | \; a^p=1\right>$.
\item Binary Dihedral group of order $4p$: $$Q_p=\left< a, b \; | a^p=b^2, a^{2p}=1, b^{-1}ab=a^{-1}\right>.$$
\item Binary Tetrahedral group of order 24: $T=\left< a, b \; | \; a^3=b^3=(a b)^2\right>$.
\item Binary Octahedral group of order 48: $O=\left< a, b \; | \; a^4=b^3=(a b)^2\right>$.
\item Binary Icosahedral group of order 120: $I=\left< a, b \; | \; a^5=b^3=(a b)^2\right>$.
\end{itemize}
The first main result of this paper computes the signature pair for finite subgroups of $SU(2)$. While we are primarily interested in families of groups, for completeness we consider the signature pair for the three exceptional groups. Using Mathematica \cite{Mathematica} we obtain the signature pair for the binary polyhedral groups. Here is the complete story for the subgroups of $SU(2)$.
\begin{theorem} \label{mt:subgroups} Let $\Gamma$ be a finite subgroup of $SU(2)$.
\begin{enumerate}
\item[1.] If $\Gamma$ is isomorphic to a cyclic group of order $p$, then $$S(\Gamma)=\left( \left \lfloor \frac{p+2}{4} \right \rfloor+2, \left \lfloor \frac{p}{4} \right \rfloor \right).$$
\item[2.] If $\Gamma$ is isomorphic to a binary dihedral group of order $4p$, then $$S(\Gamma)=\left( \left \lfloor \frac{p}{2} \right \rfloor+p+2, \left \lfloor \frac{p-1}{2} \right \rfloor +1\right).$$
\item[3.] If $\Gamma$ is isomorphic to a binary tetrahedral group of order 24, then $$S(\Gamma)=\left( 9,5 \right).$$
\item[4.] If $\Gamma$ is isomorphic to a binary octahedral group of order 48, then $$S(\Gamma)=\left( 17,9 \right).$$
\item[5.] If $\Gamma$ is isomorphic to a binary icosahedral group of order 120, then $$S(\Gamma)=\left( 40,22 \right).$$
\end{enumerate}
\end{theorem}
We next consider cyclic and dihedral groups in $U(2)$. For a cyclic subgroup of order $p$ in $U(2)$, several different signature pairs are possible. For dihedral subgroups of $U(2)$, the signature pair depends only on the isomorphism type of the group. For the cyclic group $C_p$ with $p$ elements, we consider the group representations $\pi : C_p \to \Gamma(p,q) <U(2)$ generated by \begin{equation}\label{e:gammagen}s\mapsto \begin{pmatrix}
\omega & 0\\
0 & \omega^q
\end{pmatrix}\end{equation} where $\omega$ is a primitive $p$-th root of unity and $s$ is an element of order $p$ in $C_p$. Up to conjugation, every finite cyclic subgroup of $U(2)$ is of the form $\Gamma(p,q)$ for some $p$ and $q$. We compute the asymptotic positivity ratio of $\Gamma(p,q)$ for any $q$; we show that the asymptotic positivity ratio is a rational expression depending on $q$. Further we take the limit as $q$ goes to infinity to obtain the following theorem.
\begin{theorem} \label{mt:cyclic} Let $\Gamma(p,q)$ be as in \eqref{e:gammagen}, then
$$\lim_{p\to \infty}{L(\Gamma(p,q))} = \begin{cases}
\frac{3q+1}{4q} & \text{if $q$ is odd},\\
\frac{3q-2}{4(q-1)} & \text{if $q$ is even},
\end{cases}$$ and hence
$$\lim_{q\to \infty}{\lim_{p\to \infty}{L(\Gamma(p,q))}}= \frac{3}{4}.$$
\end{theorem}
The details of the proof appear in section 4, but we give a short description now. First we recall from \cite{D4} the weight of a monomial appearing in $\Phi_{\Gamma(p,q)}$. Then we find bounds for the total number of terms and the number of terms of each weight. Using this information, we calculate bounds on the fraction of terms of odd weight and the fraction of terms of even weight. Then we show that asymptotically the numbers of even and odd weight terms are equal. We then interpret a result of Loehr, Warrington, and Wilf \cite{LWW} in terms of weights. Their result implies that all the odd weight terms are positive, and the even weight terms alternate sign. Since $\Gamma(p,q)$ is diagonal, the number of terms is the same as the number of eigenvalues. It follows that the limit as $q$ goes to infinity of the asymptotic positivity ratio is $\frac{3}{4}$.
The third main result calculates the signature pair for dihedral subgroups of $U(2)$.
\begin{theorem} \label{mt:dihedral} Let $\Delta_p$ be a dihedral subgroup of order $2p$ in $U(2)$, then
$$S(\Delta_p)=\left( \left \lfloor \frac{p}{2} \right \rfloor +\left \lfloor \frac{p}{4} \right \rfloor +2, \left \lfloor \frac{3(p+1)}{4} \right \rfloor \right),$$
and hence
$$\lim_{p\to \infty}{L(\Delta_p)}=\frac{1}{2}.$$
\end{theorem}
We conclude the introduction by outlining the rest of the paper. In section 2 we give relevant definitions, introduce the weight of a polynomial, and prove some basic facts about Hermitian polynomials. In section 3 we compute the signature pairs for finite subgroups of $SU(2)$. In sections 4 and 5 we prove the main results for subgroups of $U(2)$. In section 6 we recall some basic definitions from representation theory, and we show that in this context the polynomials $\Phi_{\Gamma(p,q)}$ are an alternating sum of orbit Chern classes.
\section{Definitions and Preliminaries}
In this section we recall some basic facts about unitary representations and Hermitian polynomials. We begin by defining Hermitian polynomials.
\begin{definition} Let $R:\mathbb{C}^n \times \mathbb{C}^n \to \mathbb{C}$ be a polynomial. We call $R$ Hermitian if $$R(z,\bar{w})=\overline{R(w,\bar{z})}.$$
\end{definition}
Given any polynomial $$r(z, \bar{w})=\sum c_{\alpha \beta} z^{\alpha} \bar{w}^{\beta},$$ then $r$ is Hermitian if and only if the matrix $(c_{\alpha \beta})$ is Hermitian if and only if $r(z, \bar{z})$ is real-valued (see \cite{D0}). We call $(c_{\alpha \beta})$ the {\it underlying matrix of $r$}. We define $N(r)$, $N^{+}(r)$, and $N^{-}(r)$ to be the numbers of total eigenvalues, positive eigenvalues, and negative eigenvalues respectively of $(c_{\alpha \beta})$. We define the {\it signature pair} $S(r)$ of a Hermitian polynomial to be the pair $S(r)=(N^{+}(r), N^{-}(r))$. Define the {\it positivity ratio} by $L(r)=\frac{N^{+}(r)}{N(r)}$. We recall the definition of the polynomial $\Phi_{\Gamma}$:
\begin{equation*}\Phi_{\Gamma}(z, \bar{z})=1-\prod_{\gamma \in \Gamma}{\left(1-\langle \gamma z, z \rangle \right)}.\end{equation*}
\begin{notation} For any $\Gamma<U(n)$, put $N^{+}(\Gamma)=N^{+}(\Phi_\Gamma)$, $N^{-}(\Gamma)=N^{-}(\Phi_\Gamma)$, and $N(\Gamma)=N^{+}(\Gamma)+N^{-}(\Gamma)$. Put $S(\Gamma)=(N^{+}(\Gamma), N^{-}(\Gamma))$, and $L(\Gamma)=\frac{N^{+}(\Gamma)}{N(\Gamma)}$.
\end{notation}
For families of subgroups $\Gamma_p$ of $U(n)$, we define the {\it asymptotic positivity ratio} to be $\lim_{p\to \infty}{L(\Gamma_p)}$.
\begin{definition} Let $C_p$ be a cyclic group of order $p$ with generator $s$. Define a unitary representation $\pi : C_p \to U(2)$ by $$\pi(s) =
\begin{pmatrix}
\omega & 0\\
0 & \omega^{q}
\end{pmatrix}$$
where $\omega$ is a $p$-th primitive root of unity. Let $\Gamma(p,q)=\pi(C_p)$.
\end{definition}
\begin{definition} Two group representations $\pi_1 : G \to U(n)$ and $\pi_2 : G \to U(n)$ are called {\it equivalent} if there exists an element $A \in U(n)$ such that $$A \pi_1(g) A^{-1}= \pi_2(g)$$ for every $g \in G$.
\end{definition}
\begin{definition}A polynomial $f(x,y)$ has {\it weight} $j$ with respect to $\Gamma(p,q)$ if $$f(\lambda x, \lambda^q y) = \lambda^{jp} f(x,y)$$ for all $\lambda \in \mathbb{C}$. In particular, the monomial $x^ay^b$ has weight $j$ if $a+qb=jp$.
\end{definition}
Because $\Phi_{\Gamma(p,q)}$ depends only on $|z_1|^2$ and $|z_2|^2$, we define the polynomial $f_{p,q}$ by
\begin{equation}\label{e:fpq} f_{p,q}(\left| z_1 \right|^2, \left| z_2 \right|^2)=f_{p,q}(x,y)=1-\prod_{j=0}^{p-1}{\left( 1-\omega^{j}x-\omega^{qj}y\right)}=\Phi_{\Gamma(p,q)}(z,\bar{z}).
\end{equation}
The polynomials $f_{p,q}$ have many interesting number-theoretic and combinatorial properties (see \cite{D4,D3,LWW,M,O}). In the case $q=1$, we obtain $f_{p,1}=(x+y)^p$. In the case $q=2$, we get a variant of the Chebyshev polynomials. The importance of the polynomials in these two cases motivates the study of $f_{p,q}$ for higher $q$. For the reader's convenience, we list $f_{p,4}$ for $1\leq p \leq 9$ in Table 1.
\begin{table}[ht]
\caption{List of $f_{p,4}$ for $1\leq p\leq 9$.}
\hrule
{\begin{tabular}{cl}
$f_{1,4}(x,y)$ & = $x+y$ \\
$f_{2,4}(x,y)$ & = $x^2+2y-y^2$ \\
$f_{3,4}(x,y)$ & = $x^3+3x^2y+3xy^2+y^3$ \\
$f_{4,4}(x,y)$ & = $x^4+4y-6y^2+4y^3-y^4$ \\
$f_{5,4}(x,y)$ & = $x^5+5xy-5x^2y^2+y^5$ \\
$f_{6,4}(x,y)$ & = $x^6+6x^2y-3x^4y^2+2y^3+3x^2y^4-y^6$\\
$f_{7,4}(x,y)$ & = $x^7+7x^3y+14x^2y^3+7xy^5+y^7$ \\
$f_{8,4}(x,y)$ & = $x^8+8x^4y+4y^2+8x^4y^3-6y^4+4y^6-y^8$ \\
$f_{9,4}(x,y)$ & = $x^9+9x^5y+9xy^2+3x^6y^3-18x^2y^4+3x^3y^6+y^9$\\
\end{tabular}}
\hrule
\end{table}
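The entries of Table 1 can be reproduced directly from \eqref{e:fpq}; the following Python sketch (illustrative, not part of the computations in this paper) builds the coefficient array of $f_{p,q}$ numerically and rounds to the integer coefficients:
\begin{verbatim}
import numpy as np

# Sketch: coefficient array of f_{p,q}(x, y) from its defining
# product; entry [a, b] is the coefficient of x^a y^b.
def f_coeffs(p, q):
    w = np.exp(2j * np.pi / p)
    poly = np.ones((1, 1), dtype=complex)
    for j in range(p):
        nxt = np.zeros((poly.shape[0] + 1, poly.shape[1] + 1),
                       dtype=complex)
        nxt[:-1, :-1] += poly                # multiply by 1
        nxt[1:, :-1] -= w ** j * poly        # ... by -w^j x
        nxt[:-1, 1:] -= w ** (q * j) * poly  # ... by -w^(qj) y
        poly = nxt
    poly = -poly
    poly[0, 0] += 1.0                        # f = 1 - product
    return np.rint(poly.real).astype(int)

c = f_coeffs(2, 4)                # f_{2,4} = x^2 + 2y - y^2
print(c[2, 0], c[0, 1], c[0, 2])  # 1 2 -1
\end{verbatim}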
We now summarize some properties of the $f_{p,q}$. A beautiful result from \cite{D4} is that for all $q$, $f_{p,q}$ is congruent to $(x+y)^p$ mod $(p)$ if and only if $p$ is prime. We naturally ask what other properties of $(x+y)^p$ generalize to $f_{p,q}$ for all $q$. In \cite{D1} D'Angelo constructs the $f_{p,q}$ and shows that the coefficients are integers. The $f_{p,2}$ polynomials have an extremal property studied in \cite{D3}. Dilcher and Stolarsky consider a generalization of the $f_{p,2}$ polynomials in \cite{DS}. Osler uses a variant of the $f_{p,2}$ to denest radicals \cite{O}. Musiker uses the $f_{p,2}$ polynomials while studying the combinatorics of elliptic curves \cite{M}. Loehr, Warrington, and Wilf give a combinatorial interpretation for the coefficients of $f_{p,q}$ using circulant determinants \cite{LWW}. They also gave a simple method of determining the sign of each term in $f_{p,q}$, which we use to calculate the asymptotic positivity ratio for the $\Gamma(p,q)$. They also study the asymptotics of the largest coefficient appearing in $f_{p,q}$. In \cite{D5} D'Angelo uses methods from complex analysis to obtain asymptotic information; for example, he gives an asymptotic formula for the sum of the coefficients of $f_{p,q}$.
The following basic property of Hermitian polynomials will be needed in the next section. The signature pair for Hermitian polynomials is unchanged under a change of basis, and therefore equivalent representations have the same signature pair.
\begin{proposition} \label{p:changebasis} Given a Hermitian polynomial $r(z, \bar{z})$, then for every $U\in U(n)$ we have $S(r)=S(r \circ U)$.
\end{proposition}
\begin{proof} Let $r(z, \overline{z})$ be a Hermitian polynomial of degree $d$; using multi-index notation, we write $$r(z, \overline{z})=\sum_{|\alpha|, |\beta|\leq d} c_{\alpha \beta} z^{\alpha} \overline{z}^{\beta}.$$ The polynomial $r(z, \overline{z})$ is a Hermitian form on the vector space of polynomials of degree at most $d$. Composing with $U$, we have $$r(Uz, \overline{Uz})=\sum c_{\alpha \beta} (Uz)^{\alpha} (\overline{Uz})^{\beta}.$$ Since $U$ is non-singular and the monomials $z^\alpha$ form a basis of the vector space of polynomials of degree at most $d$ in $z$, the polynomials $(Uz)^{\alpha}$ also form a basis of this space. Thus by Sylvester's law of inertia (see page 223 of \cite{Horn}), $r(z, \overline{z})$ and $r(Uz, \overline{Uz})$ have the same numbers of eigenvalues of each sign.
\end{proof}
\begin{corollary} \label{c:invsig} If $\pi_1 : G \to U(n)$ and $\pi_2 : G \to U(n)$ are equivalent representations, then $S(\pi_1(G))= S(\pi_2(G))$.
\end{corollary}
\begin{proof}
The result follows by a change of coordinates. Since $\pi_1$ and $\pi_2$ are equivalent, there exists $A\in U(n)$ such that $$A \pi_1(g) A^{-1}= \pi_2(g)$$ for every $g \in G$. Thus
\begin{eqnarray*}
\Phi_{\pi_1(G)}(z, \bar{z}) & = & 1 - \prod_{\gamma \in \pi_1(G)}{\left(1-\langle \gamma z, z \rangle \right)}\\
& = & 1 - \prod_{g \in G}{\left(1-\langle \pi_1(g) z, z \rangle \right)}\\
& = & 1 - \prod_{g \in G}{\left(1-\langle A^{-1} \pi_2(g) A z, z \rangle \right)}\\
& = & 1 - \prod_{g \in G}{\left(1-\langle \pi_2(g) A z, A z \rangle \right)}\\
& = & 1 - \prod_{g \in G}{\left(1-\langle \pi_2(g) w, w \rangle \right)}\\
& = & \Phi_{\pi_2(G)}(w, \bar{w})\\
\end{eqnarray*}
where $w=Az$. By Proposition \ref{p:changebasis}, the signature pair of a Hermitian polynomial is invariant under a change of coordinates, and the result follows.
\end{proof}
\section{Subgroups of $SU(2)$}
Given a unitary representation $\pi : G \to U(n)$, a natural question is: for fixed $n$, what are the possible finite groups $G$? For $n=1$, the only finite groups are cyclic. For $n>1$, the question becomes difficult. In this paper we restrict to the case $n=2$. When $n=2$, Du Val \cite{Du1} classified the finite groups while studying what are now called Du Val singularities. He found nine families of groups. We further restrict to $SU(2)$, and in this case, five types of subgroups arise:
\begin{itemize}
\item Cyclic group of order $p$: $C_p:=\left< a \; | \; a^p=1\right>$.
\item Binary Dihedral group of order $4p$: $Q_p:=\left< a, b \; | \; a^p=b^2, a^{2p}=1, b^{-1}ab=a^{-1}\right>$.
\item Binary Tetrahedral group of order 24: $T:=\left< a, b \; | \; a^3=b^3=(a b)^2\right>$.
\item Binary Octahedral group of order 48: $O:=\left< a, b \; | \; a^4=b^3=(a b)^2\right>$.
\item Binary Icosahedral group of order 120: $I:=\left< a, b \; | \; a^5=b^3=(a b)^2\right>$.
\end{itemize}
The purpose of this section is to give a complete analysis of the signature pair for each of these groups.
\subsection{Cyclic Groups}
Let $\Gamma$ be a cyclic subgroup of order $p$ in $SU(2)$. Let $A$ be a generator of $\Gamma$ in $SU(2)$. By the results of section 2, we can diagonalize $A$ without affecting the signature pair. Thus it suffices to consider $A$ of the form $$\begin{pmatrix}
\omega^{a} & 0\\
0 & \omega^{b}
\end{pmatrix}$$ where $\omega$ is a $p$-th root of unity. Since $A$ is in $SU(2)$, we also know that $a+b=p$. Moreover, for $A$ to have order $p$, the integers $a$, $b$, and $p$ must be relatively prime. Since $b=p-a$, the integers $a$ and $p$ are then relatively prime, and hence $\omega^{a j}=\omega$ for some $j$. Thus we can always choose $a$ to be 1 without loss of generality. Hence $b=p-1$, and then $A$ generates $\Gamma(p,p-1)$. Therefore, up to conjugation, $\Gamma(p,p-1)$ is the only cyclic subgroup of order $p$ in $SU(2)$. Thus the only possible signature pair for a cyclic subgroup of order $p$ in $SU(2)$ is given by $\Gamma(p,p-1)$. We recall some useful facts about $\Phi_{\Gamma(p,p-1)}$ from \cite{D2}. We will use these properties for computing the asymptotic positivity ratio for other groups.
\begin{theorem} {({D'Angelo,} \cite{D2})} \label{thm:cyclicproposition} The following hold for $\Phi_{\Gamma(p,p-1)}$.
\begin{enumerate}
\item We have the following exact formula:
\begin{align*}\Phi_{\Gamma(p,p-1)}=1+|z_1|^{2p} +|z_2|^{2p}&-\left( \frac{1+\sqrt{1-4 |z_1|^2 |z_2|^2}}{2} \right)^p\\ &- \left( \frac{1-\sqrt{1-4|z_1|^2 |z_2|^2}}{2}\right)^p.\end{align*}
\item The coefficients $c_{p,j}$ in the following formula are positive integers: $$\Phi_{\Gamma(p,p-1)}(z,\bar{z})= |z_1|^{2p}+|z_2|^{2p}+\sum_{j=1}^{\left \lfloor \frac{p}{2} \right \rfloor}{(-1)^{j-1}c_{p,j} |z_1|^{2j}|z_2|^{2j}}.$$
\item These coefficients are given by $$c_{p,j}= \frac{p}{p-j}\binom{p-j}{j}.$$
\item Finally the signature pair is $$S(\Gamma(p,p-1))=\left(\left \lfloor \frac{p+2}{4} \right \rfloor + 2,\left \lfloor \frac{p}{4} \right \rfloor \right).$$
\end{enumerate}
\end{theorem}
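Parts (2)--(4) of the theorem are easy to check by direct counting; the sketch below (illustrative only) tallies the coefficient signs in part (2) and compares the count with the formula in part (4):
\begin{verbatim}
from math import comb

# Sketch: count coefficient signs in part (2) of the theorem.
def signature_cyclic(p):
    pos, neg = 2, 0          # |z_1|^(2p) and |z_2|^(2p) are positive
    for j in range(1, p // 2 + 1):
        c = p * comb(p - j, j) // (p - j)   # c_{p,j}, a positive integer
        assert c > 0
        if j % 2 == 1:       # sign of the term is (-1)^(j-1)
            pos += 1
        else:
            neg += 1
    return pos, neg

for p in range(2, 50):
    assert signature_cyclic(p) == ((p + 2) // 4 + 2, p // 4)
\end{verbatim}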
\begin{remark} D'Angelo also showed that the coefficients of $f_{p, p-1}$ are the same as the coefficients of $f_{p,2}$ up to sign. More generally the coefficients of $f_{p, q}$ are the same as the coefficients of $f_{p,p-q+1}$ up to sign.
\end{remark}
\begin{corollary} The asymptotic positivity for cyclic subgroups of $SU(2)$ is $$\lim_{p\to \infty}{L(\Gamma(p,p-1))}=\frac{1}{2}.$$
\end{corollary}
\subsection{Binary Dihedral Groups}
Consider the binary dihedral groups $$Q_p:=\left< a, b \; | a^p=b^2, a^{2p}=1, b^{-1}ab=a^{-1}\right>.$$ The group $Q_p$ has order $4p$. Let $\eta : Q_p \to SU(2)$ be the faithful representation generated by
\begin{eqnarray*}
\eta(a) & = &
\begin{pmatrix}
\omega & 0\\
0 & \omega^{-1}
\end{pmatrix}\\
\eta(b) & = &
\begin{pmatrix}
0 & 1\\
-1 & 0
\end{pmatrix}\\
\end{eqnarray*}
where $\omega$ is a $2p$-th primitive root of unity. Here set $\Lambda_p=\eta(Q_p)$.
Before stating the main results of this section we illustrate the techniques with an example. We compute the number of positive and negative eigenvalues of $\Phi_{\Lambda_2}(z, \bar{z})$. Expanding the product in the definition we get
\begin{eqnarray*}
\lefteqn{\Phi_{\Lambda_2} =}\\
& & z_1^4 \bar{z_1}^4+z_2^4 \bar{z_1}^4-z_1^4 z_2^4 \bar{z_1}^8+4
z_1^5 z_2 \bar{z_1}^5 \bar{z_2}-4 z_1 z_2^5 \bar{z_1}^5 \bar{z_2}+12
z_1^2 z_2^2 \bar{z_1}^2 \bar{z_2}^2\\
& & +2 z_1^6 z_2^2 \bar{z_1}^6 \bar{z_2}^2+2
z_1^2 z_2^6 \bar{z_1}^6 \bar{z_2}^2+z_1^4 \bar{z_2}^4+z_2^4 \bar{z_2}^4-z_1^8
\bar{z_1}^4 \bar{z_2}^4-4 z_1^4 z_2^4 \bar{z_1}^4 \bar{z_2}^4\\
& &-z_2^8
\bar{z_1}^4 \bar{z_2}^4-4 z_1^5 z_2 \bar{z_1} \bar{z_2}^5+4 z_1
z_2^5 \bar{z_1} \bar{z_2}^5+2 z_1^6 z_2^2 \bar{z_1}^2 \bar{z_2}^6+2
z_1^2 z_2^6 \bar{z_1}^2 \bar{z_2}^6-z_1^4 z_2^4 \bar{z_2}^8.\\
\end{eqnarray*}
Notice that in contrast to the cyclic case, we get off-diagonal terms, so it is not enough to simply count the number of terms to get the number of eigenvalues. Rewriting in terms of polynomials invariant under the $Q_2$-action, we get
\begin{eqnarray*}
\lefteqn{\Phi_{\Lambda_2} =}\\
& & (z_1^4+z_2^4)(\bar{z_1}^4 +\bar{z_2}^4)-4 z_1^4 z_2^4 \bar{z_1}^4 \bar{z_2}^4 +12z_1^2 z_2^2 \bar{z_1}^2 \bar{z_2}^2 +4 z_1 z_2 (z_1^4 -z_2^4) \bar{z_1} \bar{z_2} (\bar{z_1}^4 -\bar{z_2}^4)\\
& & +2 z_1^2 z_2^2 (z_1^4 +z_2^4) \bar{z_1}^2 \bar{z_2}^2 (\bar{z_1}^4 +\bar{z_2}^4)-z_1^4 z_2^4 (\bar{z_1}^8+\bar{z_2}^8)-\bar{z_1}^4 \bar{z_2}^4 (z_1^8+z_2^8).\\
\end{eqnarray*}
Equivalently we get $$\Phi_{\Lambda_2} =
\begin{pmatrix}
\bar{z_1}^4 + \bar{z_2}^4\\
\bar{z_1} \bar{z_2} (\bar{z_1}^4 -\bar{z_2}^4)\\
\bar{z_1}^2 \bar{z_2}^2 (\bar{z_1}^4 +\bar{z_2}^4)\\
\bar{z_1}^2 \bar{z_2}^2\\
\bar{z_1}^4 \bar{z_2}^4\\
\bar{z_1}^8+\bar{z_2}^8\\
\end{pmatrix}^{T}
\begin{pmatrix}
1 & 0 & 0 & 0 & 0 & 0\\
0 & 4 & 0 & 0 & 0 & 0\\
0 & 0 & 2 & 0 & 0 & 0\\
0 & 0 & 0 & 12 & 0 & 0\\
0 & 0 & 0 & 0 & -4 & -1\\
0 & 0 & 0 & 0 & -1 & 0\\
\end{pmatrix}
\begin{pmatrix}
z_1^4 + z_2^4\\
z_1 z_2 (z_1^4 -z_2^4)\\
z_1^2 z_2^2 (z_1^4 +z_2^4)\\
z_1^2 z_2^2\\
z_1^4 z_2^4\\
z_1^8+z_2^8\\
\end{pmatrix}.$$
Hence the eigenvalues of $\Phi_{\Lambda_2}$ are 1, 4, 2, 12, $-2+\sqrt{5}$, $-2-\sqrt{5}$. Therefore $S(\Lambda_2)=(5,1)$.
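This count is easy to confirm numerically; for instance (an illustrative check, not part of the proof):
\begin{verbatim}
import numpy as np

# Sketch: eigenvalues of the 6x6 coefficient matrix displayed above.
M = np.array([[1, 0, 0,  0,  0,  0],
              [0, 4, 0,  0,  0,  0],
              [0, 0, 2,  0,  0,  0],
              [0, 0, 0, 12,  0,  0],
              [0, 0, 0,  0, -4, -1],
              [0, 0, 0,  0, -1,  0]], dtype=float)
print(np.sort(np.linalg.eigvalsh(M)))
# [-4.236  0.236  1.  2.  4.  12.]: five positive, one negative
\end{verbatim}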
For clarity, we explicitly write $\Phi_{\Lambda_2}$ as a difference of squared norms. Let
$$A(z)=\begin{pmatrix}
z_1^4+z_2^4\\
2 z_1 z_2 (z_1^4-z_2^4)\\
\sqrt{2} z_1^2 z_2^2 (z_1^4 +z_2^4)\\
\sqrt{12} z_1^2 z_2^2\\
\frac{1}{\sqrt[4]{20}}\left(z_1^8 + (2 - \sqrt{5}) z_1^4 z_2^4 + z_2^8\right)\\
\end{pmatrix},$$ and $$B(z)=\left( \frac{1}{\sqrt[4]{20}}\left(z_1^8 + (2 + \sqrt{5}) z_1^4 z_2^4 + z_2^8\right)\right).$$ Then we have $\Phi_{\Lambda_2}=||A(z)||^2-||B(z)||^2$. (The factor $20^{-1/4}$ normalizes the eigenvectors of the last $2\times 2$ block; note that $\frac{1}{\sqrt{20}}\left((2-\sqrt{5})^2-(2+\sqrt{5})^2\right)=-4$.)
We proceed along these lines for general $p$. First we prove a proposition relating $\Phi_{\Lambda_p}$ to the cyclic case. Since the elements of $\Lambda_p$ split equally into diagonal and anti-diagonal matrices, we can prove a result analogous to Theorem \ref{Dihedral} for this case.
\begin{proposition} \label{Quaternion} The invariant polynomial corresponding to the representation $\eta$ satisfies:
\begin{eqnarray*}\Phi_{\Lambda_p} = f_{2p,2p-1}(\left|z_1 \right|^2, \left|z_2 \right|^2)+
f_{2p,2p-1}(z_2 \bar{z_1}, -z_1 \bar{z_2})\\
\qquad-f_{2p,2p-1}(\left|z_1 \right|^2,\left|z_2 \right|^2)f_{2p,2p-1}(z_2 \bar{z_1}, -z_1 \bar{z_2}).
\end{eqnarray*}
\end{proposition}
\begin{proof} As we alluded to above, the key idea is to notice that $$\Lambda_p=\left\{\begin{pmatrix}
\omega^j & 0\\
0 & \omega^{-j}
\end{pmatrix},\begin{pmatrix}
0 & \omega^{j}\\
-\omega^{-j} & 0
\end{pmatrix}|\; j=0, \cdots , 2p-1 \right\}.$$ Then the result follows from the calculation below
\begin{eqnarray*}
\lefteqn{\Phi_{\Lambda_p}(z,\bar{z}) = 1- \prod_{\gamma \in \Lambda_p}{\left( 1-\left< \gamma z, z \right> \right)}}\\
&= & 1- \left( \prod_{j=0}^{2p-1}{\left( 1-\left< \begin{pmatrix}
\omega^j & 0\\
0 & \omega^{-j}
\end{pmatrix} z, z \right> \right)}\right)\left( \prod_{j=0}^{2p-1}{\left( 1-\left< \begin{pmatrix}
0 & \omega^{j}\\
-\omega^{-j} & 0
\end{pmatrix}z, z \right> \right)}\right)\\
&= & 1- \left( \prod_{j=0}^{2p-1}{\left( 1 - \omega^j z_1 \bar{z_1} - \omega^{-j} z_2 \bar{z_2} \right)}\right)\left( \prod_{j=0}^{2p-1}{\left( 1 - \omega^j z_2 \bar{z_1} + \omega^{-j} z_1 \bar{z_2} \right)}\right)\\
& = & 1- \left( 1- f_{2p,2p-1}(|z_1|^2, |z_2|^2)\right)\left( 1-f_{2p,2p-1}(z_2 \bar{z_1}, -z_1\bar{z_2})\right)\\
&= & f_{2p,2p-1}(\left|z_1 \right|^2, \left|z_2 \right|^2)+
f_{2p,2p-1}(z_2 \bar{z_1}, -z_1 \bar{z_2})\\
& & \qquad-f_{2p,2p-1}(\left|z_1 \right|^2,\left|z_2 \right|^2)f_{2p,2p-1}(z_2 \bar{z_1}, -z_1 \bar{z_2}).
\end{eqnarray*}
\end{proof}
Now we proceed by using the previous theorem to express the polynomial $\Phi_{\Lambda_p}$ in terms of the $f_{2p,2p-1}$ polynomials. The goal then is to express $\Phi_{\Lambda_p}$ in terms of the following linearly independent $Q_p$-invariant polynomials: $$z_1^{2p} + z_2^{2p},\; z_1^j z_2^j (z_1^{2p}+ (-1)^j z_2^{2p}),\; (z_1 z_2)^{2j}, z_1^{4p}+z_2^{4p}$$ for $j=1, \cdots, \, p$.
For the reader's convenience, we again recall $$f_{2p,2p-1}(x,y)= x^{2p}+y^{2p}+\sum_{j=1}^{p}{(-1)^{j-1}c_{2p,j}(xy)^j}.$$ Then by Proposition \ref{Quaternion}
\begin{eqnarray*}
\lefteqn{\Phi_{\Lambda_p}(z, \bar{z}) = } \\
& & (z_1 \bar{z_1})^{2p} + (z_2 \bar{z_2})^{2p} + \sum_{j=1}^{p}{(-1)^{j-1}c_{2p,j}(z_1 z_2 \bar{z_1} \bar{z_2})^j}+(z_2 \bar{z_1})^{2p}+(z_1 \bar{z_2})^{2p}\\
& &+ \sum_{j=1}^{p}{(-1) c_{2p,j} (z_1 z_2 \bar{z_1} \bar{z_2})^j}-z_1^{2p}z_2^{2p}\bar{z_1}^{4p}-z_1^{4p}\bar{z_1}^{2p}\bar{z_2}^{2p}-z_2^{4p}\bar{z_1}^{2p}\bar{z_2}^{2p}\\
& &-z_1^{2p}z_2^{2p}\bar{z_2}^{4p}+\sum_{j=1}^{p}{c_{2p,j}z_1^{2p+j}z_2^j\bar{z_1}^{2p+j}\bar{z_2}^j}+\sum_{j=1}^{p}{c_{2p,j}z_1^{j}z_2^{2p+j}\bar{z_1}^{j}\bar{z_2}^{2p+j}}\\
& &+\sum_{j=1}^{p}{(-1)^{j}c_{2p,j}z_1^{j}z_2^{2p+j}\bar{z_1}^{2p+j}\bar{z_2}^j}+\sum_{j=1}^{p}{(-1)^{j}c_{2p,j}z_1^{2p+j}z_2^j\bar{z_1}^{j}\bar{z_2}^{2p+j}}\\
& & -\left( \sum_{j=1}^{p}{(-1)^{j-1}c_{2p,j}(z_1 z_2 \bar{z_1} \bar{z_2})^j} \right) \left( \sum_{j=1}^{p}{(-1) c_{2p,j} (z_1 z_2 \bar{z_1} \bar{z_2})^j} \right).\\
\end{eqnarray*}
Notice that all the odd power terms drop from the product, and we get the following simplification.
\begin{eqnarray*}
\lefteqn{\left( \sum_{j=1}^{p}{(-1)^{j-1}c_{2p,j}(z_1 z_2 \bar{z_1} \bar{z_2})^j} \right) \left( \sum_{j=1}^{p}{(-1) c_{2p,j} (z_1 z_2 \bar{z_1} \bar{z_2})^j} \right) = } \\
& &-\sum_{j=1}^{p}{\left( 2 \sum_{k=j+1}^{\text{min}(2j-1,p)}{(-1)^{k-1} c_{2p,k}c_{2p,2j-k}} +(-1)^{j-1}c_{2p,j}^2\right)(z_1 z_2 \bar{z_1} \bar{z_2})^{2j}}.
\end{eqnarray*}
With some additional effort, we write the previous expression in terms of the invariant polynomials given above,
\begin{eqnarray*}
\lefteqn{\Phi_{\Lambda_p}(z, \bar{z}) = }\\
& & (z_1^{2p}+z_2^{2p})(\bar{z_1}^{2p}+\bar{z_2}^{2p})+\sum_{j=1}^{p}{c_{2p,j}z_1^j z_2^j(z_1^{2p}+(-1)^j z_2^{2p})\bar{z_1}^j \bar{z_2}^j(\bar{z_1}^{2p}+(-1)^j \bar{z_2}^{2p})}\\
& & + \sum_{j=1}^{\left \lfloor \frac{p}{2} \right \rfloor}{\left( 2 \sum_{k=j+1}^{2j-1}{(-1)^{k-1} c_{2p,k}c_{2p,2j-k}} +(-1)^{j-1}c_{2p,j}^2-2c_{2p,2j}\right) (z_1 z_2 \bar{z_1} \bar{z_2})^{2j}}\\
& & + \sum_{j=\left \lfloor \frac{p}{2}\right \rfloor+1}^{p}{\left( 2 \sum_{k=j+1}^{p}{(-1)^{k-1} c_{2p,k}c_{2p,2j-k}} +(-1)^{j-1}c_{2p,j}^2\right) (z_1 z_2 \bar{z_1} \bar{z_2})^{2j}}\\
& & - z_1^{2p}z_2^{2p}(\bar{z_1}^{4p}+\bar{z_2}^{4p})-\bar{z_1}^{2p}\bar{z_2}^{2p}(z_1^{4p}+z_2^{4p}).\\
\end{eqnarray*}
The goal now is to determine the signs of the coefficients in the above expression. First we take care of the obvious cases. The coefficient of $|z_1^j z_2^j (z_1^{2p}+ (-1)^j z_2^{2p})|^2$ is $c_{2p,j}$, which is positive. The coefficient of $|z_1^{2p}+z_2^{2p}|^2$ is positive. We now consider the coefficient of $|z_1 z_2|^{4j}$.
\begin{definition}
Define $d_{k}$ to be the coefficient of $(z_1 \overline{z_1} z_2 \overline{z_2})^{2k}$ in $\Phi_{\Lambda_p}(z, \bar{z})$. Further we define the polynomial $D_p$ by $$D_{p}(t)=\sum_{k=1}^p{d_{k}t^{2k}}.$$
\end{definition}
We give an exact formula for $D_p(t)$ in the next proposition.
\begin{proposition} The polynomial $D_{p}(t)$ is given by
\begin{eqnarray*}
D_{p}(t)=1&-&\frac{1}{16^p}\big( \left(1+ a + b + a b\right)^{2p}+ \left(1- a + b - a b\right)^{2p}\\
&+&\left(1+ a - b - a b\right)^{2p}+ \left(1- a - b + a b\right)^{2p}\big)
\end{eqnarray*} where $a=\sqrt{1-4t}$ and $b=\sqrt{1+4t}$.
\end{proposition}
\begin{proof}
By Proposition \ref{Quaternion} and Theorem \ref{thm:cyclicproposition},
\begin{eqnarray*}
\lefteqn{\Phi_{\Lambda_p} =}\\
& & 1+|z_1|^{4p} +|z_2|^{4p}-\left( \frac{1+\sqrt{1-4 |z_1 z_2|^2}}{2} \right)^{2p} - \left( \frac{1-\sqrt{1-4|z_1 z_2|^2}}{2}\right)^{2p}\\
&+&1+(z_2 \bar{z_1})^{2p} +(z_1 \bar{z_2})^{2p}-\left( \frac{1+\sqrt{1+4 |z_1 z_2|^2}}{2} \right)^{2p} - \left( \frac{1-\sqrt{1+4|z_1 z_2|^2}}{2}\right)^{2p}\\
&-&\left(1+|z_1|^{4p} +|z_2|^{4p}-\left( \frac{1+\sqrt{1-4 |z_1 z_2|^2}}{2} \right)^{2p} - \left( \frac{1-\sqrt{1-4|z_1 z_2|^2}}{2}\right)^{2p}\right)\\
& \times&\left(1+(z_2 \bar{z_1})^{2p} +(z_1 \bar{z_2})^{2p}-\left( \frac{1+\sqrt{1+4 |z_1 z_2|^2}}{2} \right)^{2p} - \left( \frac{1-\sqrt{1+4|z_1 z_2|^2}}{2}\right)^{2p}\right).
\end{eqnarray*}
Next we let $t=|z_1 z_2|^2$. Take all the terms involving $t$ in the previous expression to get the following:
\begin{eqnarray*}
D_p(t)& =& 1-\left( \frac{1+\sqrt{1-4 t}}{2} \right)^{2p} - \left( \frac{1-\sqrt{1-4t}}{2}\right)^{2p}\\
&+&1-\left( \frac{1+\sqrt{1+4t}}{2} \right)^{2p} - \left( \frac{1-\sqrt{1+4t}}{2}\right)^{2p}\\
&-&\left(1-\left( \frac{1+\sqrt{1-4t}}{2} \right)^{2p} - \left( \frac{1-\sqrt{1-4t}}{2}\right)^{2p}\right)\\
&\times &\left(1-\left( \frac{1+\sqrt{1+4 t}}{2} \right)^{2p} - \left( \frac{1-\sqrt{1+4t}}{2}\right)^{2p}\right).\\
\end{eqnarray*}
Multiply this expression out to get the desired result:
\begin{eqnarray*}
D_p(t) & = & 1- \frac{1}{16^p}\Bigg( \left( 1+\sqrt{1+4 t} \right)^{2p}\left( 1+\sqrt{1-4 t} \right)^{2p} + \left( 1+\sqrt{1-4 t} \right)^{2p}\left( 1-\sqrt{1+4 t} \right)^{2p}\\
& +& \left( 1+\sqrt{1+4 t} \right)^{2p}\left( 1-\sqrt{1-4 t} \right)^{2p}+ \left( 1-\sqrt{1-4 t} \right)^{2p}\left( 1-\sqrt{1+4 t} \right)^{2p} \Bigg).
\end{eqnarray*}
\end{proof}
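A quick symbolic check of the closed form (illustrative only) reproduces the coefficients $d_1=12$ and $d_2=-4$ visible in the $\Lambda_2$ example above:
\begin{verbatim}
from sympy import symbols, sqrt, expand

t = symbols('t')

# Sketch: the closed form for D_p(t) with the 16^(-p) normalization.
def D(p):
    a, b = sqrt(1 - 4 * t), sqrt(1 + 4 * t)
    s = ((1 + a + b + a * b) ** (2 * p) + (1 - a + b - a * b) ** (2 * p)
         + (1 + a - b - a * b) ** (2 * p) + (1 - a - b + a * b) ** (2 * p))
    return expand(1 - s / 16 ** p)

print(D(1))  # 4*t**2
print(D(2))  # -4*t**4 + 12*t**2, so d_1 = 12 and d_2 = -4
\end{verbatim}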
\begin{lemma} \label{l:product} The following identity holds:
\begin{align*}
&\left(1+ a + b + a b\right)^{2p}+ \left(1- a + b - a b\right)^{2p}+\left(1+ a - b - a b\right)^{2p}+ \left(1- a - b + a b\right)^{2p}\\
&=4 \sum_{j=0}^{p}{\sum_{k=0}^p{\binom{2p}{2j}\binom{2p}{2k}a^{2j}b^{2k}}}\\
\end{align*}
\end{lemma}
\begin{proof} We begin by factoring and using the binomial theorem.
\begin{align*}
&\left(1+ a + b + a b\right)^{2p}+ \left(1- a + b - a b\right)^{2p}+\left(1+ a - b - a b\right)^{2p}+ \left(1- a - b + a b\right)^{2p}\\
&=\left((1+ a)(1+b)\right)^{2p}+ \left((1- a)(1+ b)\right)^{2p}+\left((1+ a)(1-b)\right)^{2p}+ \left((1- a)(1-b)\right)^{2p}\\
&=\left( \sum_{j=0}^{2p}{\binom{2p}{j}a^{j}}\right)\left( \sum_{k=0}^{2p}{\binom{2p}{k}b^{k}}\right)+
\left( \sum_{j=0}^{2p}{\binom{2p}{j}(-1)^{j}a^{j}}\right)\left( \sum_{k=0}^{2p}{\binom{2p}{k}b^{k}}\right)\\
&+\left( \sum_{j=0}^{2p}{\binom{2p}{j}a^{j}}\right)\left( \sum_{k=0}^{2p}{\binom{2p}{k}(-1)^{k}b^{k}}\right)+
\left( \sum_{j=0}^{2p}{\binom{2p}{j}(-1)^{j}a^{j}}\right)\left( \sum_{k=0}^{2p}{\binom{2p}{k}(-1)^{k}b^{k}}\right).\\
\end{align*}
After multiplying out the right hand side and collecting terms we get
\begin{align*}
\sum_{j=0}^{2p}{\sum_{k=0}^{2p}{\binom{2p}{j}\binom{2p}{k}a^j b^k \left( 1+(-1)^j+(-1)^k+(-1)^{j+k}\right)}}.\\
\end{align*}
\noindent Since $$\left( 1+(-1)^j+(-1)^k+(-1)^{j+k}\right)=\begin{cases}4 & \text{if $j$ and $k$ are both even,}\\
0 & \text{otherwise,}\\\end{cases}$$ the identity follows after reindexing.
\end{proof}
Let $a=\sqrt{z}$ and $b=\sqrt{\bar{z}}$ in Lemma \ref{l:product}; then we have the following:
$$4 \sum_{j=0}^{p}{\sum_{k=0}^p{\binom{2p}{2j}\binom{2p}{2k}z^{j}\bar{z}^{k}}}=\left| 2\sum_{k=0}^p{\binom{2p}{2k}z^k} \right|^2.$$
\begin{lemma} If a polynomial $p(x+i y)$ has all negative real roots, then $\left| p(x+i y) \right|^2$ has positive coefficients as a polynomial in $x$ and $y$.
\end{lemma}
\begin{proof} Let $a_0$, $\cdots$, $a_d$ be the absolute values of the roots of $p$. Then expanding and simplifying $p$, we get
\begin{eqnarray*}
\left| p(x+i y) \right|^2 &= &\left| \prod_{j=0}^d{x+iy+a_j}\right|^2\\
&=&\prod_{j=0}^d{\left(x+iy+a_j\right)\left(x-iy+a_j\right)}=\prod_{j=0}^d{\left( x^2+2x a_j+y^2+a_j^2\right)}.\\
\end{eqnarray*}
In the last expression only positive real coefficients occur, so after expanding the product we get the desired result.
\end{proof}
D'Angelo provided me with the statement and proof of the following lemma.
\begin{lemma} The following identity holds: $$P(z)=2\sum_{k=0}^{p}{\binom{2p}{2k}z^k}=2\prod_{j=0}^{p-1}{\left(z+\tan^2{\left(\frac{(2j+1)\pi}{4p}\right)}\right)},$$ and hence all the roots of $P$ are negative.
\end{lemma}
\begin{proof} Taking proper care of the choice of square root, we can rewrite the given polynomial in the following way: $$P(z)=2\sum_{k=0}^{p}{\binom{2p}{2k}z^k}=\left( 1-\sqrt{z} \right)^{2p}+\left( 1+\sqrt{z} \right)^{2p}.$$
Setting the right hand side equal to zero yields
\begin{equation*}
\left( \frac{1-\sqrt{z}}{1+\sqrt{z}}\right)^{2p}=-1.
\end{equation*}
Taking $2p$-th roots gives
\begin{equation*}
\frac{1-\sqrt{z}}{1+\sqrt{z}}= e^{\frac{(2n+1)\pi i}{2p}}
\end{equation*}
for $n=0, \cdots, 2p-1$.
We solve for $\sqrt{z}$:
\begin{align*}
\sqrt{z} &= \frac{1-e^{\frac{(2n+1)\pi i}{2p}}}{1+e^{\frac{(2n+1)\pi i}{2p}}}
= \frac{e^{-\frac{(2n+1)\pi i}{4p}}-e^{\frac{(2n+1)\pi i}{4p}}}{e^{-\frac{(2n+1)\pi i}{4p}}+e^{\frac{(2n+1)\pi i}{4p}}}= i \tan{\left( \frac{(2n+1)\pi}{4p}\right)}.
\end{align*}
The roots of $P$ are therefore $-\tan^2{\left(\frac{(2j+1)\pi}{4p}\right)}$ for $j=0, \cdots, p-1$; since both sides are polynomials of degree $p$ with leading coefficient $2$, the identity follows.
\end{proof}
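The identity, including the overall factor of $2$ (which matches the leading coefficient of $P$), is easy to verify numerically; an illustrative check:
\begin{verbatim}
import numpy as np
from math import comb, tan, pi

# Sketch: numerical check of the product formula for P(z).
p, z = 4, 0.37
lhs = 2 * sum(comb(2 * p, 2 * k) * z ** k for k in range(p + 1))
rhs = 2 * np.prod([z + tan((2 * j + 1) * pi / (4 * p)) ** 2
                   for j in range(p)])
print(lhs, rhs)  # the two values agree to rounding error
\end{verbatim}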
Finally we combine the previous lemmas to determine the sign of $d_{k}$.
\begin{proposition} \label{proposition:dksign} For $1 \leq k \leq p$,
\begin{enumerate}
\item $d_{k} > 0$ for $k$ odd.
\item $d_{k} < 0$ for $k$ even.
\end{enumerate}
\end{proposition}
\begin{proof} We have the following relationship between $D_{p}(t)$ and $P(z)$: $$D_p(i t)= 1-\frac{1}{16^p} \left| P(1+4 i t)\right|^2.$$ By the previous lemmas, $P$ has all negative real roots, so $\left| P(1+4 i t)\right|^2$ has positive coefficients as a polynomial in $t$; thus the nonconstant coefficients of $D_{p}(it)$ must all be negative. Since $D_{p}(t)$ is a polynomial with only even powers, the transformation $t \mapsto i t$ changes the sign of $d_{k}$ for odd $k$ and does not change the sign of $d_{k}$ when $k$ is even. Therefore $d_{k}$ must be positive for $k$ odd and negative for $k$ even.
\end{proof}
We rephrase the results in terms of matrices: $$\Phi_{\Lambda_p}(z, \bar{z})= d^{*}M_p d$$ where
$$d=\begin{pmatrix}
z_1^{2p}+z_2^{2p}\\
z_1 z_2(z_1^{2p}+(-1) z_2^{2p})\\
\vdots\\
z_1^p z_2^p(z_1^{2p}+(-1)^p z_2^{2p})\\
z_1^{2} z_2^{2}\\
\vdots\\
z_1^{2p} z_2^{2p}\\
z_1^{4p}+z_2^{4p}\\
\end{pmatrix},$$ and $$M_p=\begin{pmatrix}
1 & 0 & 0 & 0\\
0 & E_{p,1} & 0 & 0\\
0 & 0 & E_{p,2} & 0\\
0 & 0 & 0 & E_{p,3}\\
\end{pmatrix}$$ where $E_{p,1}$ is the $p$ by $p$ matrix with $c_{2p,j}$ on the diagonal. Also $E_{p,2}$ is the square matrix of size $p-1$ with diagonal entries $$(E_{p,2})_{jj}=2 \sum_{k=j+1}^{2j-1}{(-1)^{k-1} c_{2p,k}c_{2p,2j-k}} +(-1)^{j-1}c_{2p,j}^2-2c_{2p,2j}$$ for $1 \leq j \leq \left \lfloor \frac{p}{2} \right \rfloor$, and $$(E_{p,2})_{jj}=2 \sum_{k=j+1}^{p}{(-1)^{k-1} c_{2p,k}c_{2p,2j-k}} +(-1)^{j-1}c_{2p,j}^2$$ for $\left \lfloor \frac{p}{2} \right \rfloor < j \leq p-1$. Finally we have the 2 by 2 matrix $$E_{p,3}=\begin{pmatrix} (-1)^{p-1}c_{2p,p}^2 & -1 \\ -1 &0\\ \end{pmatrix}.$$
Now we are left with the task of computing the signature pair of $M_p$. We proceed by counting the number of eigenvalues of each sign in the submatrices.
\begin{proposition} For $1 \leq j \leq p-1$,
\begin{enumerate}
\item $(E_{p,2})_{jj}>0$ if $j$ is odd.
\item $(E_{p,2})_{jj}<0$ if $j$ is even.
\end{enumerate}
\end{proposition}
\begin{proof} Follows from Proposition \ref{proposition:dksign}.
\end{proof}
The diagonal matrices $E_{p,1}$ and $E_{p,2}$ have non-zero diagonal entries. The submatrix $E_{p,1}$ has $p$ eigenvalues, all of which are positive. Moreover, by the proposition, the diagonal entries in the matrices $E_{p,2}$ alternate sign. Also the matrix $E_{p,3}$ has one eigenvalue of each sign. Thus combining these results for the submatrices of $M_p$, we obtain one of our main results.
\begin{theorem} The signature pair of the binary dihedral group with $4p$ elements is given by $$S(\Lambda_p)=\left(2+p+\left \lfloor \frac{p}{2} \right \rfloor, 1+\left \lfloor \frac{p-1}{2} \right \rfloor \right).$$
\end{theorem}
Taking the limit as $p$ goes to infinity we get the following theorem.
\begin{theorem} The asymptotic positivity ratio for $\Lambda_p$ is $\frac{3}{4}$.
\end{theorem}
\subsection{Binary Tetrahedral Group}
The binary tetrahedral group is given by $$T:=\left< a, b \; | \; a^3=b^3=(a b)^2\right>$$ and has order 24. We represent $T$ in $SU(2)$ using the Springer description \cite{Sp77}. Let
\begin{equation} \label{tetrahedralgens}
r =
\begin{pmatrix}
\epsilon & 0\\
0 & \epsilon^{-1}
\end{pmatrix} \; \;
s =
\begin{pmatrix}
0 & 1\\
-1 & 0
\end{pmatrix} \; \;
t =
\frac{1}{\sqrt{2}}\begin{pmatrix}
\epsilon^{-1} & \epsilon^{-1}\\
-\epsilon & \epsilon
\end{pmatrix}
\end{equation}
where $\epsilon=e^{\frac{\pi i}{4}}$.
Define the faithful unitary representation $\kappa : T \to SU(2)$ by $$\kappa(a)= st^{-1} \;\text{ and } \; \kappa(b)= t.$$
Let $\Gamma = \kappa(T)$. For all $\gamma \in \Gamma$ we can represent $\gamma$ in the following way: $$\gamma= r^{2j} s^k t^l$$ for some $0 \leq j < 4$, $0 \leq k < 2$, and $0\leq l < 3$. We remark that all faithful representations of $T$ in $SU(2)$ are equivalent to the representation given by $\kappa$.
We express $\Phi_\Gamma$ in terms of $\Gamma$-invariant polynomials. The following 14 linearly independent $\Gamma$-invariant polynomials appear:
\begin{tabular}{lcl}
$C_1$ & $=$ & $z_1^{16}+28 z_1^{12} z_2^4+198 z_1^8 z_2^8+28 z_1^4 z_2^{12}+z_2^{16}$ \\[2pt]
$C_2$ & $=$ & $z_1^{20}-19 z_1^{16} z_2^4-494 z_1^{12} z_2^8-494 z_1^8 z_2^{12}-19 z_1^4 z_2^{16}+z_2^{20}$
\\[2pt]
$C_3$ & $=$ & $z_1^{18} z_2^2+12 z_1^{14} z_2^6-26 z_1^{10} z_2^{10}+12 z_1^6 z_2^{14}+z_1^2 z_2^{18}$
\\[2pt]
$C_4$ & $=$ & $-z_1^{21} z_2-27 z_1^{17} z_2^5-170 z_1^{13} z_2^9+170 z_1^9 z_2^{13}+27 z_1^5 z_2^{17}+z_1
z_2^{21}$ \\[2pt]
$C_5$ & $=$ & $-z_1^{15} z_2^3+3 z_1^{11} z_2^7-3 z_1^7 z_2^{11}+z_1^3 z_2^{15}$ \\[2pt]
$C_6$ & $=$ & $z_1^{12}-33 z_1^8 z_2^4-33 z_1^4 z_2^8+z_2^{12}$ \\[2pt]
$C_7$ & $=$ & $-z_1^{13} z_2-13 z_1^9 z_2^5+13 z_1^5 z_2^9+z_1 z_2^{13}$ \\[2pt]
$C_8$ & $=$ & $-z_1^{17} z_2+34 z_1^{13} z_2^5-34 z_1^5 z_2^{13}+z_1 z_2^{17}$ \\[2pt]
$C_9$ & $=$ & $z_1^8+14 z_1^4 z_2^4+z_2^8$ \\[2pt]
$C_{10}$ & $=$ & $z_1^{24}+\left(-\frac{4692}{35}+\frac{1}{35} \left(2382+\sqrt{119948010}\right)\right) z_1^{20} z_2^4$\\[2pt]
& $+$& $\left(\frac{45333}{35}+\frac{4}{35}\left(-2382-\sqrt{119948010}\right)\right) z_1^{16} z_2^8$\\[2pt]
& $+$ &$\left(\frac{62008}{35}-\frac{6}{35} \left(-2382-\sqrt{119948010}\right)\right)z_1^{12} z_2^{12}$\\[2pt]
& $+$& $\left(\frac{45333}{35}+\frac{4}{35} \left(-2382-\sqrt{119948010}\right)\right) z_1^8 z_2^{16}$\\[2pt]
& $+$& $\left(-\frac{4692}{35}+\frac{1}{35}\left(2382+\sqrt{119948010}\right)\right) z_1^4 z_2^{20}+z_2^{24}$ \\[2pt]
$C_{11}$ & $=$ & $z_1^{10} z_2^2-2 z_1^6 z_2^6+z_1^2 z_2^{10}$ \\[2pt]
$C_{12}$ & $=$ & $z_1^{24}+\left(-\frac{4692}{35}+\frac{1}{35} \left(2382-\sqrt{119948010}\right)\right) z_1^{20} z_2^4$\\[2pt]
&$+$& $\left(\frac{45333}{35}+\frac{4}{35}\left(-2382+\sqrt{119948010}\right)\right) z_1^{16} z_2^8$\\[2pt]
&$+$&$\left(\frac{62008}{35}-\frac{6}{35} \left(-2382+\sqrt{119948010}\right)\right)z_1^{12} z_2^{12}$\\[2pt]
&$+$&$\left(\frac{45333}{35}+\frac{4}{35} \left(-2382+\sqrt{119948010}\right)\right) z_1^8 z_2^{16}$\\[2pt]
& $+$&$\left(-\frac{4692}{35}+\frac{1}{35}
\left(2382-\sqrt{119948010}\right)\right) z_1^4 z_2^{20}+z_2^{24}$ \\[2pt]
$C_{13}$ & $=$ & $z_1^{22} z_2^2-35 z_1^{18} z_2^6+34 z_1^{14} z_2^{10}+34 z_1^{10} z_2^{14}-35 z_1^6 z_2^{18}+z_1^2
z_2^{22}$ \\[2pt]
$C_{14}$ & $=$ & $-z_1^5 z_2+z_1 z_2^5$.\\
\end{tabular}
Define
$$A(z)=\begin{pmatrix}
\sqrt{\frac{305805}{128}} C_1\\
\sqrt{\frac{122199}{64}} C_2\\
\sqrt{\frac{14815}{16}} C_4\\
\sqrt{740} C_5\\
\sqrt{\frac{2725}{4}} C_6\\
\sqrt{\frac{495}{4}} C_9\\
\sqrt{\frac{1}{128} (-2382 + \sqrt{119948010})} C_{12}\\
\sqrt{\frac{1191}{32}} C_{13}\\
\sqrt{24} C_{14}
\end{pmatrix}$$ and $$B(z)=\begin{pmatrix}
\sqrt{\frac{48783}{32}}C_3\\
\sqrt{680} C_7\\
\sqrt{\frac{1157}{2}} C_8\\
\sqrt{\frac{1}{128} (2382 + \sqrt{119948010})} C_{10}\\
\sqrt{\frac{171}{2}} C_{11}
\end{pmatrix}.$$
Using Mathematica \cite{Mathematica} one can verify that $\Phi_\Gamma$ decomposes in the following way $$\Phi_{\Gamma}=||A(z)||^2-||B(z)||^2.$$ Therefore $S(\Gamma)=(9,5)$.
\begin{remark} If $\Gamma<SU(2)$, and $\Gamma$ is isomorphic to $T$, then $S(\Gamma)=\left( 9,5 \right)$.
\end{remark}
\subsection{Binary Octahedral Group}
The binary octahedral group is given by $$O:=\left< a, b \; | \; a^4=b^3=(a b)^2\right>$$ and has order 48. We again represent $O$ in $SU(2)$ using the Springer description \cite{Sp77}. Recall the generators of the binary tetrahedral group $r$, $s$, and $t$ given above in \eqref{tetrahedralgens}. Let $\tau : O \to SU(2)$ be a faithful unitary representation generated by $$\tau(a)= rt \;\text{ and } \; \tau(b)= t.$$ Notice that $$(rt)^4=t^3=(rt^2)^2=-1.$$
Let $\Gamma = \tau(O)$. For all $\gamma\in \Gamma$ we can represent $\gamma$ in the following way:$$\gamma= r^{j} s^k t^l$$ for some $0 \leq j < 8$, $0 \leq k < 2$, and $0\leq l < 3$. We remark that all faithful representations of $O$ in $SU(2)$ are equivalent to the representation given by $\tau$.
The $\Gamma$-invariant polynomial $\Phi_\Gamma$ has 1143 terms. Using Mathematica we decompose $$\Phi_\Gamma=d^{*}Md$$ where $M$ is the Hermitian coefficient matrix and $d$ is the vector of 135 monomials that appear in $\Phi_\Gamma$. Again using Mathematica we find that $M$ has rank 26 with 17 positive eigenvalues and 9 negative eigenvalues.
\begin{remark} Let $\Gamma<SU(2)$ such that $\Gamma$ is isomorphic to $O$, then $$S(\Gamma)=\left( 17,9 \right).$$
\end{remark}
\subsection{Binary Icosahedral Group}
The binary icosahedral group is given by $$I:=\left< a, b \; | \; a^5=b^3=(a b)^2\right>$$ and has order 120. We again represent $I$ in $SU(2)$ using the Springer description \cite{Sp77}. Let
\begin{equation} \label{icosahedralgens}
r =
-\begin{pmatrix}
\epsilon^3 & 0\\
0 & \epsilon^{2}
\end{pmatrix} \; \;
s =
\begin{pmatrix}
0 & 1\\
-1 & 0
\end{pmatrix} \; \;
t =
\frac{1}{\epsilon^2-\epsilon^{-2}}\begin{pmatrix}
\epsilon+\epsilon^{-1} & 1\\
1 & -\epsilon-\epsilon^{-1}
\end{pmatrix}
\end{equation}
where $\epsilon=e^{\frac{2\pi i}{5}}$. Then we define a representation of $I$ in $SU(2)$ by $a=r$ and $b=r^4ts$. Notice that $$(r)^5=(r^4ts)^3=(r^5ts)^2=-1.$$ In \cite{Sp77}, Springer describes the 120 elements in the binary icosahedral group as follows: $$\Gamma = \left\{r^h, s r^h, r^ht r^{j},r^h t s r^j | 0\leq h <10, 0 \leq j<5 \right\}.$$
The invariant polynomial $\Phi_\Gamma$ has about 500,000 terms in this case. Using Mathematica we decompose $$\Phi_\Gamma=d^{*}Md$$ where $M$ is the Hermitian coefficient matrix and $d$ is the vector of monomials that appear in $\Phi_\Gamma$. Again using Mathematica we find that $M$ has rank 62 with 40 positive eigenvalues and 22 negative eigenvalues.
\begin{remark} Let $\Gamma<SU(2)$ such that $\Gamma$ is isomorphic to $I$, then $$S(\Gamma)=\left( 40,22 \right).$$
\end{remark}
\section{The Asymptotic Positivity Ratio for the Cyclic Case}
In this section we study the signs of the coefficients of the polynomial $f_{p,q}$. Since $\Gamma(p,q)$ is a diagonal subgroup, the sign of a coefficient of $f_{p,q}$ corresponds to the sign of an eigenvalue of the underlying matrix of $\Phi_{\Gamma(p,q)}$. When $q=1$ or $q=2$, we know the exact numbers of positive and negative coefficients. For general $q$, however, it becomes difficult to determine these numbers exactly. Instead, we find upper and lower bounds for the number of coefficients of each sign. We use these bounds to compute the asymptotic positivity ratio as a rational function of $q$. Then we take the limit as $q$ goes to infinity to show that for $\Gamma(p,q)$ the asymptotic positivity ratio is $\frac{3}{4}$.
Suppose $$f_{p,q}=\sum_{0\leq r,s \leq p}{c_{r,s}x^r y^s}.$$ Since $f_{p,q}$ is $\Gamma(p,q)$-invariant and the degree is at most $p$, we have $r+qs=kp$ for some $k\in \{1,\cdots,q \}$ and $r+s\leq p$. In \cite{D2} D'Angelo shows that $c_{r,s}$ is a non-zero integer whenever $x^r y^s$ is an invariant monomial, so the problem translates into determining the number of non-negative integer solutions to the equations $r+qs=kp$ for $k\in \{1,\cdots,q \}$ when $0<r+s \leq p$. For clarity we formally introduce the following notation.
\begin{notation} The number of weight $k$ terms in $f_{p,q}$ is $N_{k}(\Gamma(p,q))$. Denote the number of terms of odd weight by $N_{odd}(\Gamma(p,q))$, and the number of terms of even weight by $N_{even}(\Gamma(p,q))$.
\end{notation}
Also notice that the number of terms of $f_{p,q}$ is the same as the number of eigenvalues since we are using the diagonally generated cyclic group $\Gamma(p,q)$.
The following two lemmas estimate the number of terms of each weight and the total number of terms.
\begin{lemma} \label{lemmaNWt} The following inequality holds: $$\left|N_k(\Gamma(p,q))-\frac{q-k}{q-1}\cdot\frac{p}{q}\right| \leq 1. $$
\end{lemma}
\begin{proof} Fix $p$ and $q$. We want to count the number of non-negative integer solutions $(r,s)$ such that $r+qs=kp$ and $r+s\leq p$ where $1\leq k\leq q$. Notice that $r=kp-qs$, so $r$ is an integer whenever $s$ is an integer. Further notice that the two lines $r+qs=kp$ and $r+s = p$ intersect at the point $\left( \frac{p(q-k)}{q-1},\frac{(k-1)p}{q-1}\right)$. Projecting onto the $s$ coordinate, we observe that $N_k(\Gamma(p,q))$ is equal to the number of integers $s$ such that $\frac{(k-1)p}{q-1} \leq s \leq \frac{kp}{q}$. Thus $N_k(\Gamma(p,q))$ is within 1 of $\left \lfloor \frac{kp}{q} - \frac{(k-1)p}{q-1} \right \rfloor=\left \lfloor \frac{q-k}{q-1}\cdot\frac{p}{q} \right \rfloor$.
\end{proof}
\begin{remark} For $k=1$ the total number of solutions is $N_1(\Gamma(p,q))=\left \lfloor \frac{p}{q}\right \rfloor+1$; for $k=q$ we have $N_q(\Gamma(p,q))=1$.
\end{remark}
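For small parameters, Lemma \ref{lemmaNWt} and the remark can be checked mechanically. A minimal Mathematica sketch (the function name \texttt{NWeight} is ours) counts the integers $s$ in the interval identified in the proof:
\begin{verbatim}
(* Number of non-negative integer solutions (r, s) of r + q s == k p
   with r + s <= p, i.e. integers s with
   (k-1)p/(q-1) <= s <= k p/q; assumes q >= 2. *)
NWeight[p_, q_, k_] :=
  Max[0, Floor[k p/q] - Ceiling[(k - 1) p/(q - 1)] + 1]

NWeight[7, 3, 1]  (* 3, which equals Floor[7/3] + 1 as in the remark *)
\end{verbatim}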
\begin{lemma} \label{lemmaN} The following inequality holds: $$\left| N(\Gamma(p,q)) - \frac{p}{2} \right| \leq q.$$
\end{lemma}
\begin{proof} By Lemma \ref{lemmaNWt}, $$\frac{q-k}{q-1}\cdot\frac{p}{q}-1\leq N_k(\Gamma(p,q))\leq \frac{q-k}{q-1}\cdot\frac{p}{q}+ 1,$$ and by definition $N(\Gamma(p,q))=\sum_{k=1}^{q}{N_k(\Gamma(p,q))}$. Therefore applying Lemma \ref{lemmaNWt} $q$ times yields $$\sum_{k=1}^{q}{\frac{q-k}{q-1}\cdot\frac{p}{q}}-q\leq N(\Gamma(p,q)) \leq \sum_{k=1}^{q}{\frac{q-k}{q-1}\cdot\frac{p}{q}}+q.$$ Factoring and rearranging the sum gives
\begin{eqnarray*}
\sum_{k=1}^{q}{\frac{q-k}{q-1}\cdot\frac{p}{q}} & = & \frac{p}{q(q-1)}\sum_{k=1}^{q}{(q-k)}\\
& = & \frac{p}{q(q-1)}(q^2-\frac{q(q+1)}{2}) = \frac{p}{2}.
\end{eqnarray*}
Thus combining the last two calculations gives the result $$\frac{p}{2}-q\leq N(\Gamma(p,q)) \leq \frac{p}{2}+q.$$
\end{proof}
Next we show that, in the limit, the ratio of the number of odd-weight terms to the total number of terms equals the corresponding ratio for even-weight terms.
\begin{lemma} \label{lemma2} The following limit holds: $$\lim_{q\to\infty}{\lim_{p\to \infty}{\frac{N_{odd}(\Gamma(p,q))}{N(\Gamma(p,q))}}}=\lim_{q\to\infty}{\lim_{p\to \infty}{\frac{N_{even}(\Gamma(p,q))}{N(\Gamma(p,q))}}}=\frac{1}{2}$$
\end{lemma}
\begin{proof} There are four similar cases depending on the residue of $q$ modulo 4. We consider the case where $q=4r$. Recall $$N_{odd}(\Gamma(p,q))=\sum_{k=1}^{2r}{N_{2k-1}(\Gamma(p,q))}.$$ By Lemma \ref{lemmaNWt}
\begin{eqnarray*}
\sum_{k=1}^{2r}{\frac{q-(2k-1)}{q-1}\cdot\frac{p}{q}}-2r & \leq & N_{odd} \leq \sum_{k=1}^{2r}{\frac{q-(2k-1)}{q-1}\cdot\frac{p}{q}}+2r.
\end{eqnarray*}
Rearranging and simplifying yields
\begin{eqnarray*}
\frac{p}{q-1}\cdot\frac{q}{4}-\frac{q}{2} & \leq & N_{odd} \leq \frac{p}{q-1}\cdot\frac{q}{4}+\frac{q}{2}.
\end{eqnarray*}
Next apply Lemma \ref{lemmaN} to get
\begin{eqnarray*}
\frac{\frac{p}{q-1}\cdot\frac{q}{4}-\frac{q}{2}}{\frac{p}{2}+\frac{q}{2}} & \leq & \frac{N_{odd}}{N} \leq \frac{\frac{p}{q-1}\cdot\frac{q}{4}+\frac{q}{2}}{\frac{p}{2}-\frac{q}{2}}\\
\frac{pq - 2q(q-1)}{2(p+q)(q-1)} &\leq& \frac{N_{odd}}{N} \leq \frac{pq + 2q(q-1)}{2(p-q)(q-1)}.
\end{eqnarray*}
Taking the limit as $p \to \infty$ gives
\begin{eqnarray*}
\frac{q}{2(q-1)} &\leq& \lim_{p\to \infty}{\frac{N_{odd}}{N}} \leq \frac{q}{2(q-1)}.
\end{eqnarray*}
Thus
\begin{eqnarray*}
\lim_{p\to \infty}{\frac{N_{odd}}{N}} = \frac{q}{2(q-1)}.
\end{eqnarray*}
We proceed similarly for the even case. First $$N_{even}(\Gamma(p,q))=\sum_{k=1}^{2r}{N_{2k}(\Gamma(p,q))}.$$ Again we apply Lemma \ref{lemmaNWt} to get
\begin{eqnarray*}
\sum_{k=1}^{2r}{\frac{q-(2k)}{q-1}\cdot\frac{p}{q}}-2r & \leq & N_{even} \leq \sum_{k=1}^{2r}{\frac{q-(2k)}{q-1}\cdot\frac{p}{q}}+2r.
\end{eqnarray*}
Rearranging and simplifying yields
\begin{eqnarray*}
\frac{p}{q-1}\cdot\frac{q-2}{4}-\frac{q}{2} & \leq & N_{even} \leq \frac{p}{q-1}\cdot\frac{q-2}{4}+\frac{q}{2}.
\end{eqnarray*}
Next apply Lemma \ref{lemmaN} to get
\begin{eqnarray*}
\frac{\frac{p}{q-1}\cdot\frac{q-2}{4}-\frac{q}{2}}{\frac{p}{2}+\frac{q}{2}} & \leq & \frac{N_{even}}{N} \leq \frac{\frac{p}{q-1}\cdot\frac{q-2}{4}+\frac{q}{2}}{\frac{p}{2}-\frac{q}{2}}\\
\frac{p(q-2) - 2q(q-1)}{2(p+q)(q-1)} &\leq& \frac{N_{even}}{N} \leq \frac{p(q-2) + 2q(q-1)}{2(p-q)(q-1)}.
\end{eqnarray*}
Taking the limit as $p \to \infty$ gives
\begin{eqnarray*}
\frac{q-2}{2(q-1)} &\leq& \lim_{p\to \infty}{\frac{N_{even}}{N}} \leq \frac{q-2}{2(q-1)}.
\end{eqnarray*}
Thus
\begin{eqnarray*}
\lim_{p\to \infty}{\frac{N_{even}}{N}} = \frac{q-2}{2(q-1)}.
\end{eqnarray*}
While we have shown only one case, the others are similar; Table \ref{EvenOddTable} summarizes them.
\begin{table}[ht]
\caption{Summary of other cases.}
{\begin{tabular}{|c|c|c|}
\hline
$q$ & $\lim_{p}{\frac{N_{even}}{N}}$ & $\lim_{p}{\frac{N_{odd}}{N}}$\\
\hline
$0 \pmod{4}$ & $\frac{q-2}{2(q-1)}$& $\frac{q}{2(q-1)}$\\
$1 \pmod{4}$ & $\frac{q-1}{2q}$& $\frac{q+1}{2q}$\\
$2 \pmod{4}$ & $\frac{q-2}{2(q-1)}$& $\frac{q}{2(q-1)}$\\
$3 \pmod{4}$ & $\frac{q-1}{2q}$& $\frac{q+1}{2q}$\\
\hline
\end{tabular}}
\label{EvenOddTable}
\end{table}
Taking the limit as $q$ goes to infinity in all cases gives the desired result $$\lim_{q\to \infty}{\lim_{p\to \infty}{\frac{N_{odd}}{N}}} = \frac{1}{2}.$$
Hence, in the limit, the ratio of the number of even-weight terms to the total number of terms equals the ratio of the number of odd-weight terms to the total number of terms.
\end{proof}
So far we have ignored the sign of coefficients. Using our notion of weight, we restate a theorem from \cite{LWW}.
\begin{theorem}{(Loehr, Warrington, Wilf \cite{LWW})} \label{LWW} The coefficient $c_{r,s}$ of the weight $w$ monomial $x^r y^s$ in $f_{p,q}$ is positive when $\gcd \left( r,s,w \right)$ is odd, and negative when $\gcd \left( r,s,w \right)$ is even.
\end{theorem}
\begin{corollary} \label{Cor1} The odd weight terms in $f_{p,q}$ are all positive, and the even weight terms in $f_{p,q}$ alternate sign.
\end{corollary}
\begin{proof} For odd weight terms $\gcd \left( r,s,w \right)$ is always odd, thus by Theorem \ref{LWW}, the coefficients are all positive.
When $w$ is even, $\gcd \left( r,s,w \right)$ is even whenever both $r$ and $s$ are even. But $r=wp-qs$, so $r$ is even if $s$ and $w$ are even. Finally, for fixed weight $w$ the possible $s$-values are consecutive integers. Hence for $w$ even, $s$ alternates between even and odd values, thereby making $\gcd \left( r,s,w \right)$ alternate between even and odd values. By Theorem \ref{LWW} the terms of even weight in $f_{p,q}$ therefore alternate signs.
\end{proof}
For ease of notation we make the following definition.
\begin{definition} Let $T(q)$ denote the asymptotic positivity ratio for $\Gamma(p,q)$, then $$T(q)=\lim_{p\to \infty}{L(\Gamma(p,q))}.$$
\end{definition}
The sequence $T(q)$ is of interest. We list the first few terms:
$$T(q)=\left( 1, \; 1, \; \frac{5}{6}, \; \frac{5}{6}, \; \frac{4}{5}, \; \frac{4}{5}, \; \frac{11}{14}, \; \frac{11}{14}, \; \frac{7}{9}, \cdots \right).$$
In Corollary \ref{c:apvp} we show that the sequence $T(q)$ is monotone non-increasing, and each value repeats twice.
Now we combine Theorem \ref{LWW} with our previous estimates on the number of terms of each weight to compute the asymptotic positivity ratio.
\begin{proposition} \label{lemmaSq} The limit in the definition of the asymptotic positivity ratio exists, and
\begin{equation}\label{e:rational}T(q)=\begin{cases}
\frac{3q+1}{4q} & \text{if $q$ is odd},\\
\frac{3q-2}{4(q-1)} & \text{if $q$ is even}.
\end{cases}\end{equation}
\end{proposition}
\begin{proof} First consider the even and odd weights separately.
\begin{eqnarray}
T(q)=\lim_{p\to \infty}{\frac{N^{+}(\Gamma(p,q))}{N(\Gamma(p,q))}} & = & \lim_{p\to \infty}{\frac{N_{odd}^{+}(\Gamma(p,q))+N_{even}^{+}(\Gamma(p,q))}{N(\Gamma(p,q))}}\\
& = & \lim_{p\to \infty}{\frac{N_{odd}^{+}(\Gamma(p,q))}{N(\Gamma(p,q))}}+\lim_{p\to \infty}{\frac{N_{even}^{+}(\Gamma(p,q))}{N(\Gamma(p,q))}}. \label{eq19}
\end{eqnarray}
By Corollary \ref{Cor1} all odd weight terms are positive and the even weight terms alternate in sign. Hence $$N_{odd}=N_{odd}^{+},$$ and
\begin{eqnarray}{\lim_{p\to \infty}{\frac{N_{even}^{+}(\Gamma(p,q))}{N(\Gamma(p,q))}}}=\frac{1}{2}\cdot{\lim_{p\to \infty}{\frac{N_{even}(\Gamma(p,q))}{N(\Gamma(p,q))}}}. \label{eq20}
\end{eqnarray}
When $q$ is even, combining equations \ref{eq19} and \ref{eq20} with Table \ref{EvenOddTable} in Lemma \ref{lemma2} yields $$T(q)=\lim_{p\to \infty}{L(\Gamma(p,q))}=\frac{q}{2(q-1)}+\frac{1}{2}\cdot\frac{q-2}{2(q-1)}=\frac{3q-2}{4(q-1)}.$$
Similarly when $q$ is odd, we have $$T(q)=\lim_{p\to \infty}{L(\Gamma(p,q))}=\frac{q+1}{2q}+\frac{1}{2}\cdot\frac{q-1}{2q}=\frac{3q+1}{4q}.$$
\end{proof}
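As a sanity check, the closed form \eqref{e:rational} reproduces the first few values of $T(q)$ listed above; a short Mathematica sketch:
\begin{verbatim}
(* Asymptotic positivity ratio from the closed form above. *)
T[q_] := If[OddQ[q], (3 q + 1)/(4 q), (3 q - 2)/(4 (q - 1))]

Table[T[q], {q, 1, 9}]
(* {1, 1, 5/6, 5/6, 4/5, 4/5, 11/14, 11/14, 7/9} *)
\end{verbatim}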
\begin{corollary} \label{c:apvp}
\mbox{}\\
\noindent (i) The sequence $T(q)$ is monotone non-increasing.\\
\noindent (ii) The limit as $q$ goes to infinity of $T(q)$ exists.\\
\noindent (iii) $T(2r-1)=T(2r)$ for all $r\in \mathbb{Z}^{+}$.\\
\end{corollary}
Now taking the limit as $q$ goes to infinity in Proposition \ref{lemmaSq} gives one of our main results.
\begin{theorem} Let $T(q)$ denote the asymptotic positivity ratio of $\Gamma(p,q)$, then
$$\lim_{q\to \infty}{T(q)}=\frac{3}{4}.$$
\end{theorem}
\section{Dihedral Group}
The analysis can be extended to other groups in $U(2)$; for example, in this section, we define families of unitary representations of dihedral groups, and we determine their asymptotic positivity ratio to be $\frac{1}{2}$.
Let $D_p$ denote the dihedral group with $2p$ elements; namely, $$D_p:=\left< a, b \; | \; a^p=b^2=1, bab=a^{-1} \right>.$$ Without loss of generality, let $\iota : D_p \to U(2)$ be the faithful representation generated by
\begin{eqnarray*}
\iota(a) & = &
\begin{pmatrix}
\omega & 0\\
0 & \omega^{-1}
\end{pmatrix}\\
\iota(b) & = &
\begin{pmatrix}
0 & 1\\
1 & 0
\end{pmatrix}.\\
\end{eqnarray*} Here $a$ corresponds to a rotation, and $b$ corresponds to a reflection. Let $$\Delta_p = \iota(D_p).$$
Before stating the main results of this section we begin with an example. We compute the number of positive and negative eigenvalues of $\Phi_{\Delta_3}(z, \bar{z})$. Expanding the product in the definition we get
\begin{eqnarray*}
\lefteqn{\Phi_{\Delta_3} =}\\
& & z_1^3 \bar{z_1}^3+z_2^3\bar{z_1}^3-z_1^3 z_2^3 \bar{z_1}^6+6z_1 z_2 \bar{z_1} \bar{z_2} - 3z_1^4 z_2 \bar{z_1}^4 \bar{z_2} - 3z_1 z_2^4 \bar{z_1}^4 \bar{z_2}-9z_1^2 z_2^2 \bar{z_1}^2 \bar{z_2}^2 \\
& &+ z_1^3 \bar{z_2}^3+z_2^3\bar{z_2}^3-z_1^6\bar{z_1}^3 \bar{z_2}^3-z_2^6 \bar{z_1}^3 \bar{z_2}^3 - 3 z_1^4 z_2 \bar{z_1} \bar{z_2}^4 -3 z_1 z_2^4 \bar{z_1} \bar{z_2}^4 -z_1^3 z_2^3 \bar{z_2}^6.\\
\end{eqnarray*}
In contrast to the cyclic case we get off-diagonal terms, and hence it is not enough to simply count the number of terms to get the number of eigenvalues. Rewriting in terms of polynomials invariant under the $D_3$-action, we get
\begin{eqnarray*}
\lefteqn{\Phi_{\Delta_3} =}\\
& & (z_1^3+z_2^3)(\bar{z_1}^3 +\bar{z_2}^3)- z_1^3 z_2^3 (\bar{z_1}^6+\bar{z_2}^6)- \bar{z_1}^3 \bar{z_2}^3 (z_1^6 +z_2^6)+ 6z_1 z_2 \bar{z_1} \bar{z_2}\\
& &-9z_1^2 z_2^2 \bar{z_1}^2 \bar{z_2}^2-3 z_1 z_2 (z_1^3 +z_2^3) \bar{z_1} \bar{z_2} (\bar{z_1}^3 +\bar{z_2}^3).\\
\end{eqnarray*}
Equivalently we get $$\Phi_{\Delta_3} =
\begin{pmatrix}
\bar{z_1}^3 + \bar{z_2}^3\\
\bar{z_1} \bar{z_2} (\bar{z_1}^3 +\bar{z_2}^3)\\
\bar{z_1} \bar{z_2}\\
\bar{z_1}^2 \bar{z_2}^2\\
\bar{z_1}^3 \bar{z_2}^3\\
\bar{z_1}^6+\bar{z_2}^6\\
\end{pmatrix}^{T}
\begin{pmatrix}
1 & 0 & 0 & 0 & 0 & 0\\
0 & -3 & 0 & 0 & 0 & 0\\
0 & 0 & 6 & 0 & 0 & 0\\
0 & 0 & 0 & -9 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & -1\\
0 & 0 & 0 & 0 & -1 & 0\\
\end{pmatrix}
\begin{pmatrix}
z_1^3 + z_2^3\\
z_1 z_2 (z_1^3 +z_2^3)\\
z_1 z_2\\
z_1^2 z_2^2\\
z_1^3 z_2^3\\
z_1^6+z_2^6\\
\end{pmatrix}.$$
Hence the eigenvalues of the underlying matrix of $\Phi_{\Delta_3}$ are $1$, $-3$, $6$, $-9$, $1$, and $-1$. Therefore $$S(\Delta_3)=(3,3).$$
We proceed along these lines for general $p$. We compute the asymptotic positivity ratio in the following theorem.
\begin{theorem} Let $\Delta_p$ be a dihedral group of order $2p$ in $U(2)$, then
$$\lim_{p\to \infty}{L(\Delta_p)}=\frac{1}{2}.$$
\end{theorem}
\begin{proof} We first invoke Theorem \ref{Dihedral}, which relates $\Phi_{\Delta_p}$ to the more familiar $f_{p,p-1}$. In Lemma \ref{l:DNumTerms} below we count the numbers of positive and negative eigenvalues. In Corollary \ref{c:DRatio} we compute the positivity ratio and show that the limit as $p$ goes to infinity exists. The conclusion of the theorem then follows by taking the limit as $p$ goes to infinity in Corollary \ref{c:DRatio}.
\end{proof}
D'Angelo \cite{D4} proves the following result relating $\Phi_{\Delta_p}$ to $f_{p,p-1}$. The key idea is that the elements of $\Delta_p$ are either diagonal matrices or anti-diagonal matrices, and hence we can consider them separately as $f_{p,p-1}$ evaluated at different points.
\begin{theorem}{({D'Angelo} \cite{D4})} \label{Dihedral} The invariant polynomial corresponding to the representation $\iota$ satisfies: $$\Phi_{\Delta_p} = f_{p,p-1}(\left|z_1 \right|^2, \left|z_2 \right|^2)+
f_{p,p-1}(z_2 \bar{z_1}, z_1 \bar{z_2})-f_{p,p-1}(\left|z_1 \right|^2,\left|z_2 \right|^2)f_{p,p-1}(z_2 \bar{z_1}, z_1 \bar{z_2}).$$
\end{theorem}
Unlike the general cyclic case, here we can exactly determine the numbers of positive and negative eigenvalues.
\begin{lemma} \label{l:DNumTerms} The total number of eigenvalues is
$$N(\Delta_{p})=p+\left \lfloor \frac{p}{2} \right \rfloor + 2.$$
The number of positive eigenvalues is $$N^{+}(\Delta_{p})=\left \lfloor \frac{p}{2} \right \rfloor +\left \lfloor \frac{p}{4} \right \rfloor +2.$$
\end{lemma}
\begin{proof} Recall \begin{equation} \label{e:fpminus1}f_{p,p-1}(x,y)=x^p + y^p + \sum_{j=1}^{\left \lfloor \frac{p}{2} \right \rfloor}{(-1)^j c_{p,j} x^j y^j}.\end{equation} Let $$B_p(x,y)=\sum_{j=1}^{\left \lfloor \frac{p}{2} \right \rfloor}{(-1)^j c_{p,j} x^j y^j}.$$
We invoke Theorem \ref{Dihedral} to decompose $\Phi_{\Delta_p}$; namely,
\begin{equation} \label{e:decomp}
\begin{split}
\Phi_{\Delta_p}(z,\bar{z}) &= (z_1 \bar{z_1})^p +(z_2 \bar{z_2})^p + B_p(z_1 \bar{z_1},z_2 \bar{z_2})+(z_2 \bar{z_1})^p +(z_1 \bar{z_2})^p + B_p(z_2 \bar{z_1},z_1 \bar{z_2}) \\
& -(z_1 \bar{z_1})^p (f_{p,p-1}(z_2 \bar{z_1}, z_1 \bar{z_2}))-(z_2 \bar{z_2})^p(f_{p,p-1}(z_2 \bar{z_1}, z_1 \bar{z_2}))\\
&- (z_2 \bar{z_1})^pB_p(z_1 \bar{z_1}, z_2 \bar{z_2}) -(z_1 \bar{z_2})^p B_p(z_1 \bar{z_1}, z_2 \bar{z_2})\\
&- B_p(z_1 \bar{z_1}, z_2 \bar{z_2})B_p(z_2 \bar{z_1}, z_1 \bar{z_2}).
\end{split}
\end{equation}
First notice $$B_p(z_1 \bar{z_1},z_2 \bar{z_2})+B_p(z_2 \bar{z_1},z_1 \bar{z_2})=2 B_p(z_1 \bar{z_1},z_2 \bar{z_2}).$$
Second we expand the last term in \eqref{e:decomp} to get
$$B_p(z_1 \bar{z_1}, z_2 \bar{z_2})B_p(z_2 \bar{z_1}, z_1 \bar{z_2})= \sum_{k=2}^{2 \left \lfloor \frac{p}{2} \right \rfloor}{(-1)^k \sum_{\substack{a+b=k\\ 1\leq a,b \leq \left \lfloor \frac{p}{2} \right \rfloor}}{c_{p,a}c_{p,b}}(z_1 \bar{z_1} z_2 \bar{z_2})^k}.$$
Define $$E_k=
\sum_{\substack{a+b=k\\ 1\leq a,b \leq \left \lfloor \frac{p}{2} \right \rfloor}}{c_{p,a}c_{p,b}}+2c_{p,k}$$ for $1 \leq k \leq \left \lfloor \frac{p}{2} \right \rfloor$, and $$E_k=
\sum_{\substack{a+b=k\\ 1\leq a,b \leq \left \lfloor \frac{p}{2} \right \rfloor}}{c_{p,a}c_{p,b}}$$ for $\left \lfloor \frac{p}{2} \right \rfloor < k \leq 2 \left \lfloor \frac{p}{2} \right \rfloor$.
Observe that $$E_k>0$$ for all $1 \leq k \leq 2 \left \lfloor \frac{p}{2} \right \rfloor$.
Now we want to write $\Phi_{\Delta_p}$ in terms of invariant polynomials. The polynomials $$z_1^p+z_2^p,\; z_1^j z_2^j,\; z_1^j z_2^j(z_1^p+z_2^p), \; z_1^{2p}+z_2^{2p}$$ are linearly independent and invariant under the $D_p$-action. Writing $\Phi_{\Delta_p}$ in terms of these invariant polynomials we get
\begin{eqnarray*}
\Phi_{\Delta_p}(z,\bar{z}) &=& (z_1^p+z_2^p)(\bar{z_1}^p+\bar{z_2}^p) + \sum_{k=1}^{2 \left \lfloor \frac{p}{2} \right \rfloor}{(-1)^{k+1}E_k(z_1 z_2 \bar{z_1} \bar{z_2})^k}-z_1^p z_2^p(\bar{z_1}^{2p}+\bar{z_2}^{2p})\\
& & - \bar{z_1}^p \bar{z_2}^p (z_1^{2p}+z_2^{2p})+\sum_{j=1}^{\left \lfloor \frac{p}{2} \right \rfloor}{(-1)^{j} c_{p,j} z_1^j z_2^j (z_1^p +z_2^p)\bar{z_1}^j \bar{z_2}^j(\bar{z_1}^p+\bar{z_2}^p)}.\\
\end{eqnarray*}
The underlying Hermitian matrix is nearly diagonal; there are only two off-diagonal entries. We now explicitly write out the polynomial in matrix form. Let
$$b=\begin{pmatrix}
z_1^{p}+z_2^{p}\\
z_1^j z_2^j(z_1^{p}+z_2^{p})\\
z_1^{k} z_2^{k}\\
z_1^{2p}+z_2^{2p}\\
\end{pmatrix}$$ for $1\leq j \leq \left \lfloor \frac{p}{2} \right \rfloor$ and $1\leq k \leq p$. Also let $$H_p=\begin{pmatrix}
1 & 0 & 0 & 0\\
0 & A_{p,1} & 0 & 0\\
0 & 0 & A_{p,2} & 0\\
0 & 0 & 0 & A_{p,3}\\
\end{pmatrix}.$$ The $\left \lfloor \frac{p}{2} \right \rfloor$ by $\left \lfloor \frac{p}{2} \right \rfloor$ diagonal matrix $A_{p,1}$ has diagonal entries given by $$(A_{p,1})_{jj}=(-1)^{j} c_{p,j}.$$ Next the matrix $A_{p,2}$ is $p$ by $p$ diagonal with diagonal entries given by $$(A_{p,2})_{kk}=(-1)^{k+1}E_k.$$ Finally the matrix
$$A_{p,3}=\begin{pmatrix}
0 & -1\\
-1& 0\\
\end{pmatrix}$$ when $p$ is odd, and
$$A_{p,3}=\begin{pmatrix}
(-1)E_p& -1\\
-1& 0\\
\end{pmatrix}$$ when $p$ is even.
Thus $$\Phi_{\Delta_p}(z,\bar{z})=b^{*}H_pb.$$
Now we just count the eigenvalues of each diagonal submatrix. The matrix $A_{p,1}$ has $\left \lfloor \frac{p}{2} \right \rfloor$ eigenvalues and $\left \lfloor \frac{p}{4} \right \rfloor$ positive eigenvalues. The matrix $A_{p,2}$ has $p$ eigenvalues and $\left \lfloor \frac{p}{2} \right \rfloor$ positive eigenvalues. In either case the matrix $A_{p,3}$ has 1 positive eigenvalue and 1 negative eigenvalue. Finally adding up the eigenvalues for each submatrix we get the desired result.
\end{proof}
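As a consistency check, for $p=3$ the lemma gives $N(\Delta_3)=3+1+2=6$ and $N^{+}(\Delta_3)=1+0+2=3$, matching the eigenvalues $1$, $-3$, $6$, $-9$, $1$, and $-1$ computed in the example above.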
\begin{corollary} \label{c:DRatio} The following positivity ratios hold for the dihedral group: $$L(\Delta_p)=\begin{cases}
\frac{1}{2}+\frac{2}{3p+4} & p \equiv 0 \bmod 4,\\
\frac{1}{2}+\frac{1}{3p+3} & p \equiv 1 \bmod 4,\\
\frac{1}{2}+\frac{1}{3p+4} & p \equiv 2 \bmod 4,\\
\frac{1}{2} & p \equiv 3 \bmod 4.
\end{cases}$$ Moreover, the limit as $p$ goes to infinity of the ratio equals $\frac{1}{2}$.
\end{corollary}
\begin{proof} The four cases are similar. We consider only the case where $p\equiv 0 \bmod 4$. By Lemma \ref{l:DNumTerms} it follows that
\begin{equation*}
N(\Delta_p)=p +\frac{p}{2} +2=\frac{3p+4}{2}
\end{equation*}
and
\begin{equation*}
N^{+}(\Delta_p)=\frac{p}{2} +\frac{p}{4}+2=\frac{3p+8}{4}.
\end{equation*}
Hence the ratio is:
$$L(\Delta_p)=\frac{N^{+}(\Delta_p)}{N(\Delta_p)}=\frac{3p+8}{2(3p+4)}=\frac{1}{2}+\frac{2}{3p+4}.$$
\end{proof}
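The remaining cases can be checked mechanically. A small Mathematica sketch (function names ours) compares the counts of Lemma \ref{l:DNumTerms} against the case formulas:
\begin{verbatim}
(* Eigenvalue counts from Lemma l:DNumTerms. *)
ND[p_] := p + Floor[p/2] + 2
NDplus[p_] := Floor[p/2] + Floor[p/4] + 2

(* Excess of the positivity ratio over 1/2 for p = 4, ..., 8. *)
Table[NDplus[p]/ND[p] - 1/2, {p, 4, 8}]
(* {1/8, 1/18, 1/22, 0, 1/14}, matching 2/(3p+4), 1/(3p+3),
   1/(3p+4), 0, 2/(3p+4) for the four residues of p mod 4 *)
\end{verbatim}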
\section{Orbit Polynomial and Chern Orbit Classes}
The goal of this section is to describe how the group-invariant Hermitian polynomials $\Phi_\Gamma$ arise in the context of representation theory. Theorem \ref{thm:chern} will express the invariant polynomial $\Phi_\Gamma$ in terms of the orbit Chern classes. We begin by recalling some basic definitions. In particular, we define the orbit polynomial and orbit Chern classes as in \cite{S0}.
Let $\pi : G \to U(n)$ be a unitary representation of a finite group $G$. Let $\mathbb{C}[z_1,\cdots, z_n]$ denote the polynomial algebra in $n$ variables over $\mathbb{C}$. We define a group action on the polynomial algebra by \[(g \cdot h)(z_1,\cdots,z_n) = h(\pi(g^{-1})(z_1,\cdots,z_n))\] where $h\in \mathbb{C}[z_1,\cdots, z_n]$ and $g \in G$. The set of fixed points of this action is the set of group-invariant polynomials in $\mathbb{C}[z_1,\cdots,z_n]$. Denote the set of fixed points of the action by $$\mathbb{C}[z_1,\cdots,z_n]^{G} = \left\{ h\in \mathbb{C}[z_1,\cdots,z_n] : g \cdot h = h \; \forall g \in G \right\}.$$ Despite this notation the set of fixed points depends on the representation of the group.
Define $G \cdot h$ to be the $G$-orbit corresponding to $h \in \mathbb{C}[z_1, \cdots,z_n]$. Following \cite{S0} define the orbit polynomial of $G \cdot h$ by $$\phi_{G\cdot h}(X)= \prod_{b \in G \cdot h}{\left( X+b \right)}.$$ Expanding the product we get $$\phi_{G \cdot h}(X)=\sum_{a+b=\left| G \right|}{c_a(G \cdot h)X^b}$$ where $c_a(G \cdot h) \in \mathbb{C}[z_1,\cdots,z_n]^{G}$ are called the orbit Chern classes of the orbit $G \cdot h$. The definition of orbit Chern class agrees with the usual topological definition of Chern class; this construction is given in \cite{SS}.
Now we restrict our attention to the case where $n=2$, $G$ is a cyclic group of order $p$, and the representation is given by $\pi(G)= \Gamma(p,q)$. Let $h=-(z_1+z_2)$; then $G \cdot h = \left\{ -\omega^j z_1 - \omega^{q j} z_2: j = 0, \cdots, p-1 \right\}$. The orbit polynomial is $\phi_{G\cdot h}(X)= \prod_{j=0}^{p-1}{\left( X-\omega^j z_1 - \omega^{q j} z_2 \right)}.$ Thus the orbit polynomial evaluated at $X=1$ is $f_{p,q}$; equivalently, the total Chern class of the orbit $G\cdot h$ is exactly $f_{p,q}$.
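To make this concrete, a small Mathematica sketch (function name ours) of the orbit polynomial in the cyclic case:
\begin{verbatim}
(* Orbit polynomial of h = -(z1 + z2) under Gamma(p, q). *)
orbitPoly[p_, q_, X_] :=
  Product[X - Exp[2 Pi I j/p] z1 - Exp[2 Pi I q j/p] z2, {j, 0, p - 1}]

Expand[FullSimplify[orbitPoly[2, 1, X]]]
(* X^2 - z1^2 - 2 z1 z2 - z2^2 *)
\end{verbatim}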
We can polarize $\Phi_{\Gamma}$ by treating $z$ and $\bar{z}$ as independent variables. Thus we write $$\Phi_{\Gamma}(z, \bar{w})=1-\prod_{\gamma \in \Gamma}{\left(1-\langle \gamma z, w \rangle \right)}.$$ If we set $w= (1,1,\cdots,1)$, then $\Phi_{\Gamma}(z, \bar{w})=1-\prod_{\gamma \in \Gamma}{\left(1-\sum_{j=1}^{n}{(\gamma z)_j \bar{w_j}} \right)},$ which is exactly the alternating sum of the orbit Chern classes of the orbit corresponding to $z_1+z_2+\cdots+z_n$. Therefore we have the following theorem.
\begin{theorem} \label{thm:chern} Let $\pi : G \to U(n)$ be a faithful, unitary representation of the finite group $G$. Put $\Gamma=\pi(G)$. Then $$\Phi_{\Gamma}(z, \bar{z})=\sum_{j=1}^{\left|G\right|}{(-1)^{j-1} c_j(G \cdot (z_1+\cdots +z_n))}.$$
\end{theorem}
In particular we have the following corollary.
\begin{corollary} \label{c:last} Suppose $\pi : G \to \Gamma(p,q)$ is the representation given above, then $$f_{p,q}(x,y)=\sum_{j=1}^{p}{(-1)^{j-1} c_j(G \cdot (x+y))}.$$
\end{corollary}
\newpage
\section{Source Code for Computing Signature Pairs in Mathematica}
Let $\Gamma$ be a finite subgroup of $U(2)$, and recall \begin{equation*}\Phi_{\Gamma}(z, \bar{w})=1-\prod_{\gamma \in \Gamma}{\left(1-\langle \gamma z, w \rangle \right)}.\end{equation*} We introduce the Mathematica \cite{Mathematica} function \texttt{GroupSignaturePair}. It takes a list of the group elements of $\Gamma$ and, using standard Mathematica commands, returns a list of the eigenvalues of the underlying Hermitian matrix of the polynomial $\Phi_{\Gamma}$. Computing signature pairs in this way is very memory intensive. To improve performance, one can use the Mathematica command \texttt{N[]} to find the eigenvalues numerically.
\begin{verbatim}
GroupSignaturePair[group_] :=
 Module[{matrix, hermitianmatrix, poly, order, eigenvalues},
  (* Polarized invariant polynomial Phi(z, w) = 1 - Prod(1 - <L z, w>);
     w1, w2 play the role of the conjugated variables. *)
  poly = First[
    Expand[1 - Product[1 -
        Transpose[(L.{{z1}, {z2}})].{w1, w2}, {L, group}]]];
  order = Length[group];
  (* 4-dimensional array of coefficients in z1, z2, w1, w2. *)
  matrix = CoefficientList[poly, {z1, z2, w1, w2}];
  (* Reshape into a square matrix: rows indexed by z-monomials,
     columns by w-monomials. *)
  hermitianmatrix = Partition[Flatten[matrix], (order + 1)^2];
  eigenvalues = Eigenvalues[hermitianmatrix];
  Return[eigenvalues]]
\end{verbatim}
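As a hedged usage sketch (variable names ours), one can recover the signature pair of the dihedral group $\Delta_3$ from the earlier example, for which $S(\Delta_3)=(3,3)$:
\begin{verbatim}
(* Build Delta_3 = iota(D_3) and count eigenvalues of each sign;
   exact eigenvalue computation may be slow, hence N[] and Chop. *)
omega = Exp[2 Pi I/3];
a = {{omega, 0}, {0, 1/omega}};
b = {{0, 1}, {1, 0}};
group = Flatten[Table[{MatrixPower[a, j],
     MatrixPower[a, j].b}, {j, 0, 2}], 1];
eigs = Chop[N[GroupSignaturePair[group]]];
{Count[eigs, _?Positive], Count[eigs, _?Negative]}  (* {3, 3} *)
\end{verbatim}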
\section{Conclusion}
This work proposes a comprehensive analysis of how a DNN can describe emotional states. To this end, we first studied how many dimensions are sufficient to accurately represent an emotion resulting from a facial expression. We concluded that three dimensions are a good trade-off between accuracy and compactness, agreeing with the arousal-valence-dominance~\cite{russell_circumplex_1980}\cite{mehrabian1996pleasure} psychological model.
Thereby, we came up with a DNN providing a 3-dimensional compact representation of emotion, learned in a multi-domain fashion on RAF~\cite{li_reliable_2017}, SFEW~\cite{dhall_static_2011} and AffectNet~\cite{mollahosseini_affectnet:_2017}. We set up a comparison with the state of the art and showed that our model can compete with models having much larger feature sizes. This shows that larger representations are not necessary for emotion recognition. In addition, we implemented a visualization process enabling a qualitative evaluation of the consistency of the compact features extracted from emotional faces by our model. We thus showed that DNNs trained on emotion recognition naturally learn an arousal-valence-like~\cite{russell_circumplex_1980} encoding of the emotion. As future work we plan to also apply state-of-the-art techniques -- such as Deep Locality Preserving Loss~\cite{li_reliable_2017} or Covariance Pooling~\cite{acharya_covariance_2018} -- to enhance our compact representation. In addition, nothing guarantees that the learned CAKE representation bears the same semantics as arousal-valence-dominance does: further interpreting the perceived semantics of the dimensions would therefore be an interesting piece of work.
\section{Introduction}
\begin{figure}
\begin{floatrow}
\ffigbox[0.45\textwidth]{\includegraphics[width=\linewidth]{Visualizations/EmotionArouval_TEST_bigger-cropped.pdf}}{\caption{Comparison of the discrete and continuous (arousal-valence) representations using AffectNet's annotations~\cite{mollahosseini_affectnet:_2017}.}\label{fig:affecnet_arouval}}
\ffigbox[.45\textwidth]{\input{plots/courbeAVdim.tex}}{\caption{Influence of adding supplementary dimensions to arousal-valence when predicting emotion on AffectNet~\cite{mollahosseini_affectnet:_2017}.}\label{fig:courbeAVdim}}
\end{floatrow}
\end{figure}
Facial expression is one of the most used human means of communication after language. Thus, the automated recognition of facial expressions -- such as emotions -- has a key role in affective computing, and its development could benefit human-machine interactions.
Different models are used to represent human emotional states. Ekman~\textit{et al.}~\cite{ekman_constants_1971} propose to classify the human facial expression resulting from an emotion into six classes (namely happiness, sadness, anger, disgust, surprise and fear) assumed to be independent across cultures. This model has the benefit of simplicity but may not suffice to address the whole complexity of human affect. Moreover, it suffers from serious intra-class variations since, for instance, a soft smile and laughter equally belong to \textit{happiness}. That is why Ekman's emotion classes are sometimes assembled into compound emotions~\cite{du2014compound} (\textit{e.g.} happily surprised).
Others have chosen to represent emotion with an n-dimensional continuous space, as opposed to Ekman's discrete classes. Russell built the \textit{Circumplex Model of Affect}~\cite{russell_circumplex_1980} in which emotional states are described by two values: arousal and valence.
\textit{Arousal} represents the excitation rate -- the higher the arousal, the more intense the emotion -- and \textit{valence} defines whether the emotion has a positive or a negative impact on the subject. Russell suggests in \cite{russell_circumplex_1980} that all Ekman's emotions~\cite{ekman_constants_1971} and compound emotions can be mapped into the \textit{circumplex model of affect}. Furthermore, this two-dimensional approach allows a more accurate specification of the emotional state, especially by taking its intensity into account.
A third dimension has been added by Mehrabian~\textit{et al.}~\cite{mehrabian1996pleasure} -- the \textit{dominance} -- which depends on the degree of control exerted by a stimulus. Last, Ekman and Friesen~\cite{ekman_measuring_1976} have come up with the \textit{Facial Action Code System} (FACS) using anatomically based action units. Developed for measuring facial movements, FACS is well suited for classifying facial expressions resulting from an affect.
Based on these emotion representations, several large databases of face images have been collected and annotated according to emotion. EmotioNet~\cite{benitez-quiroz_emotionet:_2016} gathers faces annotated with Action Units~\cite{ekman_measuring_1976}; SFEW~\cite{dhall_static_2011}, FER-13~\cite{goodfellow_challenges_2013} and RAF~\cite{li_reliable_2017} propose images in the wild annotated with basic emotions; AffectNet~\cite{mollahosseini_affectnet:_2017} is a database annotated in both discrete emotions~\cite{ekman_constants_1971} and arousal-valence~\cite{russell_circumplex_1980}.
The emergence of these large databases has enabled the development of automatic emotion recognition systems, such as the recent approaches based on Deep Neural Networks (DNN). AffectNet's authors~\cite{mollahosseini_affectnet:_2017} use three AlexNets~\cite{krizhevsky_imagenet_2017} to learn emotion classes, arousal and valence, respectively. In \cite{ng_deep_2015}, the authors make use of transfer learning to counteract the smallness of the SFEW~\cite{dhall_static_2011} dataset, by pre-training their model on ImageNet~\cite{deng_imagenet:_2009} and FER~\cite{goodfellow_challenges_2013}. In \cite{acharya_covariance_2018} the authors implement {\em Covariance Pooling} using second order statistics when training on emotion recognition (on RAF~\cite{li_reliable_2017} and SFEW~\cite{dhall_static_2011}).
Emotion labels, FACS and continuous representations have their own benefits -- simplicity of the emotion classes, accuracy of the arousal-valence, objectivity of the FACS, \textit{etc.} -- but also their own drawbacks -- imprecision, complexity, ambiguity, \textit{etc}.
Therefore several authors have tried to leverage the benefits of all these representations. Khorrami \textit{et al.}~\cite{khorrami_deep_2015} first showed that neural networks trained for expression recognition implicitly learn facial action units.
Contributing to highlighting the close relation between emotion and Action Units, Pons \textit{et al.}~\cite{pons_multi-task_2018} learned a multitask and multi-domain ResNet~\cite{he_deep_2015} on both discrete emotion classes (SFEW~\cite{dhall_static_2011}) and Action Units (EmotioNet~\cite{benitez-quiroz_emotionet:_2016}).
Finally, Li \textit{et al.}~\cite{li_reliable_2017} proposed a "\textit{Deep Locality-Preserving Learning}" to handle the variability inside an emotion class, by making classes as compact as possible.
In this context, this paper focuses on the links between arousal-valence and discrete emotion representations for image-based emotion recognition. More specifically, the paper proposes a methodology for learning a very compact embedding, with no more than 3 dimensions, that performs very well on the emotion classification task, makes the visualization of emotions easy, and bears similarity with the arousal-valence representation.
\section{Learning Very Compact Emotion Embeddings}
\label{methods}
\subsection{Some Intuitions About Emotion Representations}
\label{subsec:preliminary_study}
We first want to experimentally measure the dependence between emotion and arousal-valence as posited in~\cite{russell_circumplex_1980}. We thus display each sample of the AffectNet~\cite{mollahosseini_affectnet:_2017} validation subset in the arousal-valence space and color it according to its emotion label (Figure~\ref{fig:affecnet_arouval}). For instance, a face image labelled as \textit{neutral} with an arousal and a valence of zero is located at the center of Figure~\ref{fig:affecnet_arouval} and colored in blue. It clearly appears that a strong dependence exists between discrete emotion classes and arousal-valence. Admittedly, this is due in part to the annotations of the AffectNet~\cite{mollahosseini_affectnet:_2017} dataset, as the arousal-valence values were constrained to lie in a predefined confidence area based on the emotion annotation. Nevertheless, this dependence agrees with the \textit{Circumplex Model of Affect}~\cite{russell_circumplex_1980}.
To further evaluate how the arousal-valence representation is linked to emotion labels, we train a classifier made of one fully connected layer\footnote{By "fully connected layer" we denote a linear layer with biases and without activation function.} (fc-layer) to infer emotion classes from the arousal-valence values provided by the AffectNet~\cite{mollahosseini_affectnet:_2017} dataset. We obtain an accuracy of 83\%, confirming that arousal-valence can be an excellent \textit{2-d} compact emotion representation.
This raises the question of the optimality of this 2-\textit{d} representation. Would adding a third dimension to arousal-valence make the classification performance better? To address this question, we used the 512-\textit{d} hidden representation of a ResNet-18~\cite{he_deep_2015} trained to predict discrete emotions on the AffectNet dataset~\cite{mollahosseini_affectnet:_2017}. This representation is then projected into a more compact space using a fc-layer outputting $k$ dimensions, which are concatenated with the arousal-valence values. On top of this representation, we add another fc-layer predicting emotion classes. The two fc-layers are finally trained using Adam optimizer~\cite{kingma2014adam}.
Adding 1 dimension to arousal-valence gives a gain of +3 points on the accuracy. It agrees with the assumption that a three-dimensional representation is more meaningful than a two-dimensional one~\cite{mehrabian1996pleasure}. The benefit of adding more than 1 dimension is exponentially decreasing; with +512 dimensions, the gain is only of +0.6 points compared to adding 1 dimension, as shown in Figure~\ref{fig:courbeAVdim}.
From these observations, the use of a compact representation seems to be consistent with discrete emotion classes, as it enables an accuracy of 83\% and 86\% -- respectively for a 2-\textit{d} and a 3-\textit{d} representation -- and it may even allow describing affect states with more contrast and accuracy.
Even if arousal-valence is a good representation for emotion recognition, the question of its optimality has not been answered by these preliminary experiments. In other words, is it possible to learn 2-\textit{d} (or 3-\textit{d}) embeddings better than those built on arousal-valence? We positively answer this question in Section~\ref{subsec:method}.
\subsection{Learning Compact and Accurate Representations of Emotions}
\label{subsec:method}
Based on the previous observations, this section proposes a methodology for learning a compact embedding for emotion recognition from images.
\paragraph{Features extraction}
The basic input of our model is an image containing one face displaying a given emotion. We first extract 512-\textit{d} features specialized in emotion recognition.
To do so, we detect the face, align its landmarks by applying an affine transform, and crop the face region. The resulting face is then resized to $224\times224$ and fed to a ResNet-18~\cite{he_deep_2015} network (Figure~\ref{fig:cross_arch}, \textit{Features extraction}). The face image is augmented (\textit{e.g.} jittering, rotation), mostly to take the face detector noise into account. We also use cutout~\cite{devries_improved_2017} -- which consists in randomly cutting a $45\times45$ pixel patch from the image -- to regularize and improve the robustness of our model to facial occlusions.
Our ResNet outputs 512-\textit{d} features, on top of which a fc-layer can be added. At training time, we also use dropout~\cite{srivastava2014dropout} regularization.
The neural network can be learned from scratch on two given tasks: discrete emotion classification or arousal-valence regression.
\paragraph{Compact emotion encoding}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{Diagrams/model_schema.pdf}
\caption{Overview of our approach. Left: we use a ResNet-18 previously trained for discrete emotion recognition or arousal-valence regression to extract 512-d hidden representations from face images. Center: using these hidden representations, CAKE or AVk representations are learned to predict discrete emotions. Right: the learning process is multi-domain, predicting emotions on three different datasets with three different classifiers. Gray blocks are non-trainable weights while blue blocks are optimized weights.}
\label{fig:cross_arch}
\end{figure}
Compact embedding is obtained by projecting the 512-\textit{d} features provided by the ResNet-18 (pretrained on discrete emotion recognition) into smaller k-dimensional spaces (Figure~\ref{fig:cross_arch}, \textit{Emotion Encoding}) in which the final classification is done.
The $k$ features may be seen as a compact representation of the emotion, and the performance of the classifier can be measured for different values of $k$. CAKE-2, CAKE-3, \textit{etc.}, denote such classifiers with $k=2$, $k=3$, \textit{etc}.
In the same fashion, we can train the ResNet-18 using arousal-valence regression. In this case, the so-obtained arousal-valence regressor can be used to infer arousal-valence values from novel images and concatenate them to the $k$ features of the embedding. We thus reproduce the exact experiment done in Section~\ref{subsec:preliminary_study} in order to assess the benefit of a third (or more) dimension. The difference is that the arousal-valence values are not ground truth but predicted ones. These methods are denoted AV1, AV2, AV3, \textit{etc.} for the different values of $k$.
\paragraph{Domain independent embedding}
As we want to ensure a generic compact enough representation, independent of the datasets, we learn the previously described model jointly on several datasets, without any further fine-tuning.
Our corpus is composed of AffectNet~\cite{mollahosseini_affectnet:_2017}, RAF~\cite{li_reliable_2017} and SFEW~\cite{dhall_static_2011}, labelled with seven discrete emotion classes: \textit{neutral}, \textit{happiness}, \textit{sad}, \textit{surprise}, \textit{fear}, \textit{disgust} and \textit{anger}.
Our training subset is composed of those of AffectNet (283901 elts., \textit{95.9\%} of total), RAF (11271 elts., \textit{3.81\%} of total) and SFEW (871 elts., \textit{0.29\%} of total). Our testing subset is composed of the subsets commonly used for evaluation in the literature (\textit{validation} of SFEW and AffectNet, \textit{test} of RAF).
To ease the multi-domain training, we first pre-train our feature extractor on AffectNet and freeze its weights. Then we apply the same architectures as described before, but duplicate the last fc-layer in charge of emotion classification into three dataset-specific layers (Figure~\ref{fig:cross_arch}, \textit{multi-domain learning}). The whole model loss is a modified softmax cross entropy defined as follows:
\begin{equation}
Loss=\frac{1}{N} \sum_{i=1}^{N} \sum_{j=1}^{3} w_{class}^{i,j} w_{dataset}^{j} \ E(y^i,\hat{y}^{i,j})
\end{equation}
where $j$ ranges over [AffectNet, RAF, SFEW], $y^i$ is the label of the $i^{th}$ element, $\hat{y}^{i,j}$ is the prediction of the $j^{th}$ classifier on the $i^{th}$ element, $E$ is the softmax cross entropy loss, $N$ is the number of elements in the batch, $w_{class}^{i,j}$ is a weight given to the $i^{th}$ element of the batch depending on its emotion class and $w_{dataset}^{j}$ is a weight given to the $j^{th}$ classifier prediction.
Each sample of the multi-domain dataset is identified according to its original database, allowing to choose the correct classifier's output when computing the softmax cross entropy.
The $ w_{class}$ weight is defined as:
$
w_{class}^{i,j} = \frac{N_{total}^j}{N_{class}^{i,j} \times nbclass}
$
where $N_{total}^j$ is the number of elements in the $j^{th}$ dataset, $N_{class}^{i,j}$ is the number of elements in the class of the $i^{th}$ element of the $j^{th}$ dataset and $nbclass$ is the number of classes (7 in our case). The goal is to correct the strong class imbalance in the datasets by reweighting classes toward a uniform distribution, as previously done in \cite{mollahosseini_affectnet:_2017}.
The $w_{dataset}$ weight takes the imbalance between dataset sizes into account:
\begin{align}
w_{dataset}^{j}= \left\{
\begin{array}{cl}
\frac{1}{\log N_{total}^{j}} & \text{if the sample belongs to the } j^{th} \text{ dataset,}\\
0 & \text{otherwise.}
\end{array}
\right.
\end{align}
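To illustrate the two weights with hypothetical round numbers (not the exact dataset statistics), take $\log$ to be the natural logarithm and consider a class holding $20000$ of AffectNet's $280000$ training images:
\begin{equation*}
w_{class} = \frac{280000}{20000 \times 7} = 2, \qquad w_{dataset} = \frac{1}{\log{280000}} \approx 0.08,
\end{equation*}
so under-represented classes and smaller datasets are up-weighted relative to frequent classes and large datasets.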
We thus define a global loss enabling to optimize the last two layers of our model (namely \textit{Emotion Encoding} and \textit{Multi-domain Learning} in Figure~\ref{fig:cross_arch}) on the three datasets at the same time. The dimension $k$ (or $k+2$ in the case of the arousal-valence approach) can easily be changed and help to evaluate the interest of supplementary dimensions for emotion representation.
\section{Experiments}
\label{results}
\subsection{Evaluation Metrics}
We measure the classification performance with the \textit{accuracy} and the \textit{\textbf{macro} F1 Score}~\eqref{eq:f1}. \textit{Accuracy} measures the proportion of correctly classified samples. We mainly rely on the \textit{macro F1 score}, which gives the same importance to each class:
\begin{equation}\begin{aligned}
F_{1macro}=\frac{1}{N_c}\sum_{i}^{N_c}F_{1i} \quad
F_{1i}=2\frac{prec_i \cdot rec_i}{prec_i+rec_i} \quad
prec_i=\frac{tp_i}{tp_i+fp_i} \quad
rec_i=\frac{tp_i}{tp_i+fn_i} \quad
\label{eq:f1}
\end{aligned}
\end{equation}
where $i$ is the class index; $prec_i$, $rec_i$ and $F_{1i}$ are the precision, the recall and the F1-score of class $i$; $N_c$ is the number of classes; $tp$, $fp$ and $fn$ are the numbers of true positives, false positives and false negatives. All scores are averaged over 10 runs with different initializations and given with the associated standard deviations, on our multi-domain testing subset.
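As a toy illustration of the metric (with made-up counts), consider two classes with $(tp, fp, fn)$ equal to $(8,2,4)$ and $(5,5,1)$ respectively:
\begin{equation*}
F_{11}=2\cdot\frac{\frac{8}{10}\cdot\frac{8}{12}}{\frac{8}{10}+\frac{8}{12}}=\frac{8}{11}, \qquad
F_{12}=2\cdot\frac{\frac{5}{10}\cdot\frac{5}{6}}{\frac{5}{10}+\frac{5}{6}}=\frac{5}{8}, \qquad
F_{1macro}=\frac{1}{2}\left(\frac{8}{11}+\frac{5}{8}\right)\approx 0.68.
\end{equation*}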
\subsection{Compactness of the Representation}
\label{subsec:compactness}
\begin{figure}
\begin{floatrow}
\ffigbox[.5\textwidth]{\input{plots/compare_dim.tex}
}{\caption{Influence of representation size on the multi-domain F1 score.}\label{fig:repsize}}
\capbtabbox{\input{plots/multiDomainRes.tex}}{\caption{Evaluation of compact representations on AffectNet, SFEW, RAF.}
\label{table:compact}}
\end{floatrow}
\end{figure}
We first evaluate the quality of the representations in a multi-domain setting. Table~\ref{table:compact} reports the F1-score of CAKE-2, AV, CAKE-3 and AV1 trained on three datasets with three different classifiers, each one being specialized on a dataset as explained in Section~\ref{methods}. Among the 2-\textit{d} models (AV and CAKE-2), AV is better, benefiting from the knowledge transferred from the AffectNet dataset. This is no longer true for the 3-\textit{d} models, where CAKE-3 is better than AV1, probably because of its greater number of trainable parameters.
To validate the hypothesis of the important gain brought by adding a third dimension, we run the "CAKE" and "AVk" experiments with different representation sizes. To simplify the analysis of the results, we plot in Figure~\ref{fig:repsize} a multi-domain F1-score, \textit{i.e.} the weighted average of the F1-scores according to the respective validation set sizes.
We observe that the gain in multi-domain F1-score is exponentially decreasing for both representations -- note that the representation size axis is in log scale -- and thus the performance gap between a representation of size 2 and one of size 3 is the most important. We also observe that "CAKE" representations still seem to yield better results than "AVk" when the representation size is greater than 2.
This first experiment shows that a very compact representation can yield good performance for emotion recognition. It is also in line with the "dominance" dimension hypothesis, as the third dimension brings the most significant gain in performance. Beyond 3 dimensions, the gain is much less significant.
\subsection{Accuracy of the Representation}
To evaluate the efficiency of the CAKE-3 compact representation, we compare its accuracy with
state-of-the-art approaches (Table~\ref{table:comparison}) on the public datasets commonly used in the literature for evaluation (\textit{validation} of SFEW and AffectNet, \textit{test} of RAF). In order to get a fair comparison, we add a \textit{"Rep. Dim."} column corresponding to the size of the last hidden representation -- concretely, we take the penultimate fully connected output size.
We report the scores under the literature's metrics, namely the mean of the per-class recall for RAF~\cite{li_reliable_2017} and the accuracy for SFEW~\cite{dhall_static_2011} and AffectNet~\cite{mollahosseini_affectnet:_2017}. To the best of the authors' knowledge, no other model has previously been evaluated on AffectNet's seven classes.
CAKE-3 is outperformed by Covariance Pooling~\cite{acharya_covariance_2018} and Deep Locality Preserving~\cite{li_reliable_2017}.
Nevertheless, it is still competitive as the emotion representation is far more compact -- 3-\textit{d} \textit{versus} 2000-\textit{d} -- and learned in a multi-domain fashion. Moreover, we gain 1 point on RAF when we compare to models of the same size (2 million parameters), \textit{e.g.} \textit{Compact Model}~\cite{kuo2018compact}. These results support the conclusion made in Section~\ref{subsec:compactness}, as we show that a compact representation of the emotion learned by small models is competitive with larger representations. This finally underlines that facial expressions may be encoded efficiently into a 3-\textit{d} vector and that using a large embedding on small datasets may lead to exploiting dataset biases more than learning emotion recognition.
\begin{table}
\begin{tabular}{c|c|ccc|}
\cline{2-5}
& Rep. Dim. & \multicolumn{1}{c}{RAF~\cite{li_reliable_2017}} & \multicolumn{1}{c}{SFEW~\cite{dhall_static_2011}} & AffectNet~\cite{mollahosseini_affectnet:_2017} \\ \hline
\multicolumn{1}{|c|}{\multirow{2}{*}{Covariance Pooling~\cite{acharya_covariance_2018}}} & 2000 & 79.4 & - & - \\ \cline{2-5}
\multicolumn{1}{|c|}{} & 512 & - & 58.1 & - \\ \hline
\multicolumn{1}{|c|}{Deep Locality Preserving~\cite{li_reliable_2017}} & 2000 & 74.2 & 51.0 & - \\ \hline
\multicolumn{1}{|c|}{Compact Model~\cite{kuo2018compact}} & 64 & 67.6 & - & - \\ \hline
\multicolumn{1}{|c|}{VGG\cite{li_reliable_2017}} & 2000 & 58.2 & - & - \\ \hline
\multicolumn{1}{|c|}{Transfer Learning~\cite{ng_deep_2015}} & 4096 & - & 48.5 & - \\ \hline\hline
\multicolumn{1}{|c|}{{ours (CAKE-3)}} & {3} & {68.9} & {44.7} & {58.2} \\ \hline
\multicolumn{1}{|c|}{{ours (Baseline)}} & {512} & {71.7} & {48.7} & {61.7} \\ \hline
\end{tabular}
\caption{Accuracy of our model compared with state-of-the-art methods. The size of the representation is taken into account. Metrics are the average of per-class recalls for RAF and accuracy for SFEW and AffectNet.}
\label{table:comparison}
\end{table}
Our experiments also allow us to perform a cross-database study as done in \cite{li_reliable_2017}. This study consists in evaluating, on a dataset A, a model trained on a dataset B. We thereby obtain Table~\ref{table:cross_f1} with the evaluation of each classifier on each dataset.
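A minimal sketch of this protocol is given below; the prediction function is stubbed with random labels only to keep the snippet self-contained, and all names and sizes are illustrative:
\begin{verbatim}
# Minimal sketch of the cross-database protocol: each classifier,
# trained on one dataset, is evaluated on every dataset. predict_fn
# is a stub returning random labels; in the real study it would run
# the trained encoder and the dataset-specific classifier head.
import numpy as np
from sklearn.metrics import f1_score

datasets = ["AffectNet", "SFEW", "RAF"]
rng = np.random.default_rng(0)

def predict_fn(clf_name, data_name):
    y_true = rng.integers(0, 7, size=200)  # 7 emotion classes
    y_pred = rng.integers(0, 7, size=200)
    return y_true, y_pred

table = {c: {d: f1_score(*predict_fn(c, d), average="macro")
             for d in datasets} for c in datasets}
\end{verbatim}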
Results on SFEW~\cite{dhall_static_2011} -- whether trained or evaluated on -- are consistently lower than the others, with a higher standard deviation. This could be due to the insufficient number of samples in the SFEW training set or, more probably, to ambiguity in the annotation of SFEW compared to AffectNet and RAF. Supporting this last hypothesis, the \textit{RAF classifier} generalizes best across the datasets. This is in line with the claim of Li~\textit{et al.}~\cite{li_reliable_2017} that RAF has very reliable annotations with a large consensus between different annotators. Finally, it also underlines the difficulty of finding a reliable evaluation of an emotion recognition system, because of the important differences between dataset annotations.
\begin{table}
\begin{tabular}{cl|l|l|l|}
\cline{3-5}
\multicolumn{1}{l}{} & & \multicolumn{3}{c|}{Dataset} \\ \cline{3-5}
\multicolumn{1}{l}{} & & AffectNet & SFEW & RAF \\ \hline
\multicolumn{1}{|c|}{\multirow{3}{*}{Classifier}} & AffectNet & \textbf{58.1 } \textit{($\pm$ 0.5)} & 27.6 \textit{($\pm$ 2.6)} & 53.8 \textit{($\pm$ 0.6)} \\ \cline{2-5}
\multicolumn{1}{|c|}{} & SFEW & 35.1 \textit{($\pm$ 2.1)} & \textbf{34.1 } \textit{($\pm$ 1.0)} & 47.3 \textit{($\pm$ 1.2)} \\ \cline{2-5}
\multicolumn{1}{|c|}{} & RAF & 51.8 \textit{($\pm$ 0.4)} & 31.5 \textit{($\pm$ 1.7)} & \textbf{64.4 } \textit{($\pm$ 0.6)} \\ \hline
\end{tabular}
\caption{Cross-database evaluation on CAKE-3 model (F1-Score).}
\label{table:cross_f1}
\end{table}
\subsection{Visualizing Emotion Maps}
Visualizations are essential to better appreciate how DNNs perform classification, as well as to visualize emotion boundaries and their variations across datasets. Our visualization method consists in densely sampling the compact representation space -- 2-\textit{d} or 3-\textit{d} -- into a mesh grid, and feeding it to a previously trained model -- AV, CAKE-2 or CAKE-3 -- in order to compute a dense map of the predicted emotions. Not all coordinates of the mesh grid correspond to real emotions, and some of them would never occur in real applications.
The construction of the mesh grid depends on the model to be used.
For the AV and CAKE-2 models, we simply build it from 2-\textit{d} vectors whose values span intervals containing the minimum and maximum coordinates observed on real images. As the CAKE-3 model deals with a three-dimensional representation, it cannot be visualized directly on a plane figure.
To overcome this issue we modify CAKE-3 into a CAKE-3-Norm representation, in which all coordinates are constrained to lie on the surface of the unit sphere, and visualize the spherical coordinates.
Even if CAKE-3-Norm shows lower performance (about 2 points less than CAKE-3), the visualization is still interesting, bringing some insight into what has really been learned.
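The grids can be built as in the following minimal sketch, where the bounds and resolutions are illustrative rather than the exact values used in our experiments:
\begin{verbatim}
# Minimal sketch of the dense emotion maps: a 2-d mesh grid for AV /
# CAKE-2, and a spherical grid (phi, theta) mapped to unit-norm 3-d
# vectors for CAKE-3-Norm. Bounds and resolutions are illustrative.
import numpy as np

xs = np.linspace(-1.0, 1.0, 200)  # intervals covering the observed
ys = np.linspace(-1.0, 1.0, 200)  # coordinates of real images
grid2d = np.stack(np.meshgrid(xs, ys), axis=-1).reshape(-1, 2)

phi = np.linspace(0.0, 2.0 * np.pi, 200)
theta = np.linspace(0.0, np.pi, 100)
P, T = np.meshgrid(phi, theta)
grid3d = np.stack([np.sin(T) * np.cos(P),  # x on the unit sphere
                   np.sin(T) * np.sin(P),  # y
                   np.cos(T)], axis=-1).reshape(-1, 3)

# emotions = classifier(grid3d).argmax(-1)  # color each grid point
\end{verbatim}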
Figure~\ref{fig:visu} shows the visualization results for the CAKE-3-Norm, AV and CAKE-2 representations (\textit{resp.} from top to bottom).
Each dot is placed at the coordinates of its compact representation -- \((arousal, valence)\) for AV, \((k_1, k_2)\) for CAKE-2 and spherical coordinates ($\phi$ and $\theta$) for CAKE-3-Norm -- and colored according to the classifier prediction. The per-class macro F1-score is displayed inside each emotion area.
First, each compact representation -- CAKE-2, CAKE-3-Norm and AV -- exhibits a strong consistency across the datasets (in Figure~\ref{fig:visu}, compare visualizations on the same row). Indeed, the three classifiers show a very similar organization of the emotion classes, which demonstrates the reliability of the learned representation. In particular, the \textit{neutral} class -- in blue -- is always placed at the origin and tends to neighbor all other classes. This is in line with the idea of neutral as an emotion of very low intensity. Nevertheless, we can observe small inter-dataset variations, especially on SFEW~\cite{dhall_static_2011} (in Figure~\ref{fig:visu}, middle column) with \textit{disgust} and \textit{fear} -- \textit{resp.} brown and purple -- which are almost missing. This underlines the disparities of annotations across the datasets and confirms the need for multi-domain frameworks when aiming at a more general emotion recognition model.
Second, we can analyze variations between the different representations for a given dataset (in Figure~\ref{fig:visu}, compare visualizations on the same column). As AV is based on arousal-valence, we observe the same emotion organization as in Figure \ref{fig:affecnet_arouval}. In particular, as the majority of AffectNet's training (and validation) samples have a positive arousal, the classifier does not use the whole space (in Figure~\ref{fig:visu}, second row: see the green, blue and orange areas), unlike CAKE-2 and CAKE-3 which are not constrained by arousal-valence.
We can find many similarities between these three representations, but the most striking appears when comparing CAKE-2 and AV. Despite the difference in scaling -- which causes the \textit{neutral} area (blue) to be smaller in CAKE-2 -- the AV and CAKE-2 compact representations are very close. Indeed, the class areas are organized in exactly the same fashion. The only difference is that for AV they are arranged in clockwise order around \textit{neutral}, whereas for CAKE-2 they are arranged anticlockwise. This observation shows that a DNN trained on the emotion classification task is able to learn an arousal-valence-like representation of emotion. It contributes -- along with Khorrami~\cite{khorrami_deep_2015}, who shows that DNNs trained to recognize emotions learn action units~\cite{ekman_measuring_1976} -- to bring the dependencies among emotion representations to the forefront.
\begin{figure}[hb]
\begin{minipage}{.86\linewidth}
\includegraphics[width=\linewidth]{Visualizations/gridEval_K3_nolegend-cropped.pdf}
\end{minipage}
\begin{minipage}{.86\linewidth}
\includegraphics[width=\linewidth]{Visualizations/gridEval_AV_nolegend-cropped.pdf}
\end{minipage}
\begin{minipage}{.86\linewidth}
\includegraphics[width=\linewidth]{Visualizations/gridEval_K2-cropped_v2-cropped.pdf}
\end{minipage}
\caption{Visualization of CAKE-3-Norm, AV and CAKE-2. Rows indicate the evaluated representation -- \textit{resp.} from top to bottom: CAKE-3-Norm, AV, CAKE-2 -- and columns indicate datasets -- \textit{resp.} from left to right: AffectNet~\cite{mollahosseini_affectnet:_2017}, SFEW~\cite{dhall_static_2011} and RAF~\cite{li_reliable_2017}.}
\label{fig:visu}
\end{figure} |
2211.01326 | \section{Introduction}
Let $\mathcal{A}$ and $\mathcal{B}$ be two complex $\ast $-algebras. For $a,b\in \mathcal{A}$ (resp., $a,b\in \mathcal{B}$) write $a\filledsquare _{\eta} b=a^{*}b+\eta ba^{*}$ and $a\circ _{\nu} b=ab+\nu ba,$ where $\eta ,\nu $ are nonzero complex numbers. We say that a mapping $\Phi:\mathcal{A}\rightarrow \mathcal{B}$ {\it preserves triple product $a\filledsquare _{\eta }b\filledsquare _{\nu }c$} (resp., {\it preserves mixed product $a\filledsquare _{\eta }b\circ _{\nu }c$}), where $a\filledsquare _{\eta }b\filledsquare _{\nu }c=(a\filledsquare _{\eta }b)\filledsquare _{\nu }c$ (resp., $a\filledsquare _{\eta }b\circ _{\nu }c=(a\filledsquare _{\eta }b)\circ _{\nu }c$), if
{\allowdisplaybreaks\begin{align*}\allowdisplaybreaks
&\Phi (a\filledsquare _{\eta }b\filledsquare _{\nu }c)=\Phi (a)\filledsquare _{\eta }\Phi (b)\filledsquare _{\nu }\Phi (c)\nonumber \\
& \hspace{2.0cm} (\textrm{resp.,} \,\, \Phi (a\filledsquare _{\eta }b\circ _{\nu }c)=\Phi (a)\filledsquare _{\eta }\Phi (b)\circ _{\nu }\Phi (c)),
\end{align*}}
for all elements $a,b,c\in \mathcal{A}.$
Let $\mathcal{A}$ and $\mathcal{B}$ be two complex $\ast $-algebras, $\{\alpha _{k}\}_{k=1}^{6}$ complex numbers and $\Phi :\mathcal{A}\rightarrow \mathcal{B}$ a mapping. We say that $\Phi $ {\it preserves sum of triple products $\alpha _{1} ab^{*}c+\alpha _{2} acb^{*}+\alpha _{3} b^{*}ac +\alpha _{4} cab^{*}+\alpha _{5} b^{*}ca+\alpha _{6} cb^{*}a$} if
{\allowdisplaybreaks\begin{align}\allowdisplaybreaks\label{fundident}
&\Phi (\alpha _{1} ab^{*}c+\alpha _{2} acb^{*}+\alpha _{3} b^{*}ac +\alpha _{4} cab^{*}+\alpha _{5} b^{*}ca+\alpha _{6} cb^{*}a)\nonumber \\
&=\alpha _{1}\Phi (a)\Phi (b)^{*}\Phi (c)+\alpha _{2}\Phi (a)\Phi (c)\Phi (b)^{*}+\alpha _{3}\Phi (b)^{*}\Phi (a)\Phi (c)\nonumber \\
&+\alpha _{4}\Phi (c)\Phi (a)\Phi (b)^{*}+\alpha _{5}\Phi (b)^{*}\Phi (c)\Phi (a)+\alpha _{6}\Phi (c)\Phi (b)^{*}\Phi (a),
\end{align}}
for all elements $a,b,c\in \mathcal{A}.$
In recent years, there has been considerable interest in the study of mappings preserving different types of products, triple products or mixed products on $\ast $-algebras (for example, see the works \cite{Darvish}, \cite{Liu}, \cite{Taghavi}, \cite{Zhao1}, \cite{Zhao2} and the references therein). In particular, Darvish et al. \cite{Darvish} studied the structure of the mappings preserving product $a\filledsquare _{\eta} b$ on $C^{*}$-algebras, Liu and Ji \cite{Liu} studied the structure of the mappings preserving product $a\filledsquare _{1} b$ on factor von Neumann algebras and Taghavi et al. \cite{Taghavi} studied the structure of the mappings preserving triple product $a\filledsquare _{1} b\filledsquare _{1} c$ on $\ast $-algebras. Note that mappings preserving triple product $a\filledsquare _{1} b\filledsquare _{1} c$ satisfy (\ref{fundident}), for suitable scalars $\alpha _{k}$ $(k=1,2,\cdots ,6).$ Based on these facts, in this paper, we study the mappings that preserve sum of triple products $\alpha _{1} ab^{*}c+\alpha _{2} acb^{*}+\alpha _{3} b^{*}ac +\alpha _{4} cab^{*}+\alpha _{5} b^{*}ca+\alpha _{6} cb^{*}a$ on $\ast $-algebras. Applications of the obtained results are given to mappings preserving triple product $a\filledsquare _{\eta }b\filledsquare _{\nu }c$ and preserving mixed product $a\filledsquare _{\eta }b\circ _{\nu }c.$
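To make the link with (\ref{fundident}) explicit, note that a direct computation from the definitions gives
{\allowdisplaybreaks\begin{align*}\allowdisplaybreaks
a\filledsquare _{\eta }b\filledsquare _{\nu }c&=(a^{*}b+\eta ba^{*})^{*}c+\nu c(a^{*}b+\eta ba^{*})^{*}\\
&=\overline{\eta }\,ab^{*}c+b^{*}ac+\nu \overline{\eta }\,cab^{*}+\nu cb^{*}a,
\end{align*}}
which is of the form (\ref{fundident}) with $\alpha _{1}=\overline{\eta },$ $\alpha _{2}=0,$ $\alpha _{3}=1,$ $\alpha _{4}=\nu \overline{\eta },$ $\alpha _{5}=0$ and $\alpha _{6}=\nu ,$ so that $\sum _{k=1}^{6} \alpha _{k}=(1+\overline{\eta })(1+\nu ).$ Similarly, $a\filledsquare _{\eta }b\circ _{\nu }c=a^{*}bc+\eta ba^{*}c+\nu ca^{*}b+\nu \eta cba^{*},$ which, after exchanging the roles of $a$ and $b,$ is of the form (\ref{fundident}) with $\sum _{k=1}^{6} \alpha _{k}=(1+\eta )(1+\nu ).$ These factorizations account for the conditions imposed on $\eta $ and $\nu $ in the corollaries at the end of the paper.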
\section{The statement of the main results}
Two main results are given in this paper. The first reads as follows.
\begin{theorem}\label{thm21} Let $\{\alpha _{k}\}_{k=1}^{6}$ be complex numbers satisfying the condition $\sum _{k=1}^{6} \alpha _{k} \neq 0,$ $\mathcal{A}$ and $\mathcal{B}$ two unital complex $\ast $-algebras with $1_{\mathcal{A}}$ and $1_{\mathcal{B}}$ their multiplicative identities, respectively, and such that $\mathcal{A}$ is prime and has a nontrivial projection. Then every bijective mapping $\Phi :\mathcal{A}\rightarrow \mathcal{B}$ preserving sum of triple products $\alpha _{1} ab^{*}c+\alpha _{2} acb^{*}+\alpha _{3} b^{*}ac +\alpha _{4} cab^{*}+\alpha _{5} b^{*}ca+\alpha _{6} cb^{*}a$ is additive. In addition, (i) if $\Phi (1_{\mathcal{A}})$ is a projection, then $\Phi $ is a $\ast $-Jordan ring isomorphism and (ii) if $\mathcal{B}$ is prime and $\Phi (1_{\mathcal{A}})$ is a projection of $\mathcal{B},$ then $\Phi $ is either a $\ast $-ring isomorphism or a $\ast $-ring anti-isomorphism.
\end{theorem}
We organize the proof of Theorem \ref{thm21} in a series of claims. The following three claims, whose proofs are simple and therefore omitted here, will be used throughout this paper.
\begin{claim}\label{c21} If $\Phi $ preserves sum of triple products $\alpha _{1} ab^{*}c+\alpha _{2} acb^{*}+\alpha _{3} b^{*}ac +\alpha _{4} cab^{*}+\alpha _{5} b^{*}ca+\alpha _{6} cb^{*}a,$ then it also preserves sum of triple products $\alpha _{6} ab^{*}c+\alpha _{4} acb^{*}+\alpha _{5} b^{*}ac +\alpha _{2} cab^{*}+\alpha _{3} b^{*}ca+\alpha _{1} cb^{*}a.$
\end{claim}
\begin{claim}\label{c22} Let $a,b,c\in \mathcal{A}$ such that $\Phi (c)=\Phi (a)+\Phi (b)$. Then the following hold:
{\allowdisplaybreaks\begin{align*}\allowdisplaybreaks
(i)&\> \Phi (\alpha _{1} cs^{*}t+\alpha _{2} cts^{*}+\alpha _{3} s^{*}ct +\alpha _{4} tcs^{*}+\alpha _{5} s^{*}tc+\alpha _{6} ts^{*}c)\\
&=\Phi (\alpha _{1} as^{*}t+\alpha _{2} ats^{*}+\alpha _{3} s^{*}at +\alpha _{4} tas^{*}+\alpha _{5} s^{*}ta+\alpha _{6} ts^{*}a)\\
&+\Phi (\alpha _{1} bs^{*}t+\alpha _{2} bts^{*}+\alpha _{3} s^{*}bt +\alpha _{4} tbs^{*}+\alpha _{5} s^{*}tb+\alpha _{6} ts^{*}b),\\
(ii)&\> \Phi (\alpha _{1} st^{*}c+\alpha _{2} sct^{*}+\alpha _{3} t^{*}sc +\alpha _{4} cst^{*}+\alpha _{5} t^{*}cs+\alpha _{6} ct^{*}s)\\
&=\Phi (\alpha _{1} st^{*}a+\alpha _{2} sat^{*}+\alpha _{3} t^{*}sa +\alpha _{4} ast^{*}+\alpha _{5} t^{*}as+\alpha _{6} at^{*}s)\\
&+\Phi (\alpha _{1} st^{*}b+\alpha _{2} sbt^{*}+\alpha _{3} t^{*}sb +\alpha _{4} bst^{*}+\alpha _{5} t^{*}bs+\alpha _{6} bt^{*}s),
\end{align*}}
for all elements $s,t\in \mathcal{A}.$
\end{claim}
\begin{claim}\label{c23} $\Phi (0)=0.$
\end{claim}
The following well-known result will be used throughout this paper: let $p_{1}$ be an arbitrary nontrivial projection of $\mathcal{A}$ and write $p_{2}=1_{\mathcal{A}}-p_{1}.$ Then $\mathcal{A}$ has a Peirce decomposition $\mathcal{A}=\mathcal{A}_{11}\oplus \mathcal{A}_{12}\oplus \mathcal{A}_{21}\oplus \mathcal{A}_{22},$ where $\mathcal{A}_{ij}=p_{i}\mathcal{A}p_{j}$ $(i,j=1,2) ,$ satisfying the following multiplicative relations: $\mathcal{A}_{ij}\mathcal{A}_{kl}\subseteq \delta _{jk} \mathcal{A}_{il},$ where $\delta _{jk}$ is the {\it Kronecker delta function}.
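For instance (this example is only an illustration and plays no role in the proofs), if $\mathcal{A}=M_{2}(\mathbb{C})$ and $p_{1}=e_{11}$ is a matrix unit, then $p_{2}=e_{22}$ and $\mathcal{A}_{ij}=\mathbb{C}e_{ij}$ $(i,j=1,2),$ so the Peirce decomposition is simply the decomposition of a matrix into its four entries; the relations $e_{12}e_{21}=e_{11}$ and $e_{12}e_{12}=0$ illustrate the rule $\mathcal{A}_{ij}\mathcal{A}_{kl}\subseteq \delta _{jk} \mathcal{A}_{il}.$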
\begin{claim}\label{c24} For arbitrary elements $a_{ii}\in \mathcal{A}_{ii},$ $b_{ij}\in \mathcal{A}_{ij}$ and $c_{ji}\in \mathcal{A}_{ji}$ $(i\neq j;i,j=1,2)$ the following hold: (i) $\Phi (a_{ii}+b_{ij})=\Phi (a_{ii})+\Phi (b_{ij})$ and (ii) $\Phi (a_{ii}+c_{ji})=\Phi (a_{ii})+\Phi (c_{ji}).$
\end{claim}
\begin{proof} By the surjectivity of $\Phi ,$ there exists $f=f_{ii}+f_{ij}+f_{ji}+f_{jj}\in \mathcal{A}$ $(i\neq j;i,j=1,2)$ such that $\Phi (f)=\Phi (a_{ii}) + \Phi (b_{ij})$. By Claims \ref{c22}(i) and \ref{c23}, we have
{\allowdisplaybreaks\begin{align*}\allowdisplaybreaks
&\Phi (\alpha _{1} f1_{\mathcal{A}}^{*}p_{j}+\alpha _{2} fp_{j}1_{\mathcal{A}}^{*}+\alpha _{3} 1_{\mathcal{A}}^{*}fp_{j} +\alpha _{4} p_{j}f1_{\mathcal{A}}^{*}+\alpha _{5} 1_{\mathcal{A}}^{*}p_{j}f+\alpha _{6} p_{j}1_{\mathcal{A}}^{*}f)\\
&=\Phi (\alpha _{1} a_{ii}1_{\mathcal{A}}^{*}p_{j}+\alpha _{2} a_{ii}p_{j}1_{\mathcal{A}}^{*}+\alpha _{3} 1_{\mathcal{A}}^{*}a_{ii}p_{j} +\alpha _{4} p_{j}a_{ii}1_{\mathcal{A}}^{*}+\alpha _{5} 1_{\mathcal{A}}^{*}p_{j}a_{ii}+\alpha _{6} p_{j}1_{\mathcal{A}}^{*}a_{ii})\\
&+\Phi (\alpha _{1} b_{ij}1_{\mathcal{A}}^{*}p_{j}+\alpha _{2} b_{ij}p_{j}1_{\mathcal{A}}^{*}+\alpha _{3} 1_{\mathcal{A}}^{*}b_{ij}p_{j}+\alpha _{4} p_{j}b_{ij}1_{\mathcal{A}}^{*}+\alpha _{5} 1_{\mathcal{A}}^{*}p_{j}b_{ij}+\alpha _{6} p_{j}1_{\mathcal{A}}^{*}b_{ij})\\
&=\Phi ((\alpha _{1} +\alpha _{2} +\alpha _{3})b_{ij}).
\end{align*}}
This shows that $\alpha _{1} f1_{\mathcal{A}}^{*}p_{j}+\alpha _{2} fp_{j}1_{\mathcal{A}}^{*}+\alpha _{3} 1_{\mathcal{A}}^{*}fp_{j} +\alpha _{4} p_{j}f1_{\mathcal{A}}^{*}+\alpha _{5} 1_{\mathcal{A}}^{*}p_{j}f+\alpha _{6} p_{j}1_{\mathcal{A}}^{*}f=(\alpha _{1} +\alpha _{2} +\alpha _{3})b_{ij}$ which leads to $(\alpha _{1}+\alpha _{2}+\alpha _{3}) f_{ij}+(\alpha _{4}+\alpha _{5}+\alpha _{6}) f_{ji}+(\sum _{k=1}^{6} \alpha _{k})f_{jj}=(\alpha _{1}+\alpha _{2}+\alpha _{3}) b_{ij}.$ As a result of this identity, we deduce that $(\alpha _{6}+\alpha _{4}+\alpha _{5}) f_{ij}+(\alpha _{2}+\alpha _{3}+\alpha _{1}) f_{ji}+(\sum _{k=1}^{6} \alpha _{k})f_{jj}=(\alpha _{6}+\alpha _{4}+\alpha _{5}) b_{ij},$ in view of Claim \ref{c21}. Thus, by adding the two last identities we end up getting $(\sum _{k=1}^{6} \alpha _{k})(f_{ij}+f_{ji}+2f_{jj})=(\sum _{k=1}^{6} \alpha _{k})b_{ij}$ which allows the conclusion that $f_{ij}=b_{ij},$ $f_{ji}=0$ and $f_{jj}=0.$ Next, for an arbitrary element $t_{ij}\in \mathcal{A}_{ij},$ we have
{\allowdisplaybreaks\begin{align*}\allowdisplaybreaks
&\Phi (\alpha _{1} f1_{\mathcal{A}}^{*}t_{ij}+\alpha _{2} ft_{ij}1_{\mathcal{A}}^{*}+\alpha _{3} 1_{\mathcal{A}}^{*}ft_{ij} +\alpha _{4} t_{ij}f1_{\mathcal{A}}^{*}+\alpha _{5} 1_{\mathcal{A}}^{*}t_{ij}f+\alpha _{6} t_{ij}1_{\mathcal{A}}^{*}f)\\
&=\Phi (\alpha _{1} a_{ii}1_{\mathcal{A}}^{*}t_{ij}+\alpha _{2} a_{ii}t_{ij}1_{\mathcal{A}}^{*}+\alpha _{3} 1_{\mathcal{A}}^{*}a_{ii}t_{ij}+\alpha _{4} t_{ij}a_{ii}1_{\mathcal{A}}^{*}+\alpha _{5} 1_{\mathcal{A}}^{*}t_{ij}a_{ii}+\alpha _{6} t_{ij}1_{\mathcal{A}}^{*}a_{ii})\\
&+\Phi (\alpha _{1} b_{ij}1_{\mathcal{A}}^{*}t_{ij}+\alpha _{2} b_{ij}t_{ij}1_{\mathcal{A}}^{*}+\alpha _{3} 1_{\mathcal{A}}^{*}b_{ij}t_{ij}+\alpha _{4} t_{ij}b_{ij}1_{\mathcal{A}}^{*}+\alpha _{5} 1_{\mathcal{A}}^{*}t_{ij}b_{ij}+\alpha _{6} t_{ij}1_{\mathcal{A}}^{*}b_{ij})\\
&=\Phi ((\alpha _{1} +\alpha _{2} +\alpha _{3})a_{ii}t_{ij})
\end{align*}}
which implies that $\alpha _{1} f1_{\mathcal{A}}^{*}t_{ij}+\alpha _{2} ft_{ij}1_{\mathcal{A}}^{*}+\alpha _{3} 1_{\mathcal{A}}^{*}ft_{ij} +\alpha _{4} t_{ij}f1_{\mathcal{A}}^{*}+\alpha _{5} 1_{\mathcal{A}}^{*}t_{ij}f+\alpha _{6} t_{ij}1_{\mathcal{A}}^{*}f=(\alpha _{1} +\alpha _{2} +\alpha _{3})a_{ii}t_{ij}.$ As a consequence of this identity we get $(\alpha _{1}+\alpha _{2}+\alpha _{3})f_{ii}t_{ij}=(\alpha _{1}+\alpha _{2}+\alpha _{3})a_{ii}t_{ij}$ which allows us to deduce that $(\alpha _{6}+\alpha _{4}+\alpha _{5})f_{ii}t_{ij}=(\alpha _{6}+\alpha _{4}+\alpha _{5})a_{ii}t_{ij},$ in view again of Claim \ref{c21}. Adding the last two results we obtain $(\sum _{k=1}^{6} \alpha _{k})f_{ii}t_{ij}=(\sum _{k=1}^{6} \alpha _{k})a_{ii}t_{ij}$ which results in $f_{ii}t_{ij}=a_{ii}t_{ij}$ for every $t_{ij}\in \mathcal{A}_{ij}.$ Therefore $f_{ii}=a_{ii},$ by the primeness of $\mathcal{A}.$
By an entirely similar reasoning, we prove the case (ii).
\end{proof}
\begin{claim}\label{c25} For arbitrary elements $b_{ij}\in \mathcal{A}_{ij}$ and $c_{ji}\in \mathcal{A}_{ji}$ $(i\neq j; i,j=1,2)$ the following holds $\Phi (b_{ij}+c_{ji})=\Phi (b_{ij})+\Phi (c_{ji}).$
\end{claim}
\begin{proof} By our hypotheses, there is $f=f_{ii}+f_{ij}+f_{ji}+f_{jj}\in \mathcal{A}$ $(i\neq j;i,j=1,2)$ satisfying $\Phi (f)=\Phi (b_{ij}) + \Phi (c_{ji})$. We have by Claim \ref{c22}(i) that
{\allowdisplaybreaks\begin{align*}\allowdisplaybreaks
&\Phi (\alpha _{1} fp_{i}^{*}p_{i}+\alpha _{2} fp_{i}p_{i}^{*}+\alpha _{3} p_{i}^{*}fp_{i}+\alpha _{4} p_{i}fp_{i}^{*}+\alpha _{5} p_{i}^{*}p_{i}f+\alpha _{6} p_{i}p_{i}^{*}f)\\
&=\Phi (\alpha _{1} b_{ij}p_{i}^{*}p_{i}+\alpha _{2} b_{ij}p_{i}p_{i}^{*}+\alpha _{3} p_{i}^{*}b_{ij}p_{i}+\alpha _{4} p_{i}b_{ij}p_{i}^{*}+\alpha _{5} p_{i}^{*}p_{i}b_{ij}+\alpha _{6} p_{i}p_{i}^{*}b_{ij})\\
&+\Phi (\alpha _{1} c_{ji}p_{i}^{*}p_{i}+\alpha _{2} c_{ji}p_{i}p_{i}^{*}+\alpha _{3} p_{i}^{*}c_{ji}p_{i} +\alpha _{4} p_{i}c_{ji}p_{i}^{*}+\alpha _{5} p_{i}^{*}p_{i}c_{ji}+\alpha _{6} p_{i}p_{i}^{*}c_{ji})
\end{align*}}
which leads to the identity $\Phi ((\sum _{k=1}^{6} \alpha _{k})f_{ii}+(\alpha _{5}+\alpha _{6})f_{ij}+(\alpha _{1}+\alpha _{2})f_{ji})=\Phi ((\alpha _{5}+\alpha _{6})b_{ij})+\Phi ((\alpha _{1}+\alpha _{2})c_{ji}).$ Write $g_{ii}=(\sum _{k=1}^{6} \alpha _{k})f_{ii},$ $g_{ij}=(\alpha _{5}+\alpha _{6})f_{ij},$ $g_{ji}=(\alpha _{1}+\alpha _{2})f_{ji},$ $g=g_{ii}+g_{ij}+g_{ji},$ $h_{ij}=(\alpha _{5}+\alpha _{6})b_{ij}$ and $h_{ji}=(\alpha _{1}+\alpha _{2})c_{ji}.$ Then $\Phi (g)=\Phi (h_{ij}) + \Phi (h_{ji}).$ It therefore follows that, for an arbitrary $t_{ij}\in \mathcal{A}_{ij},$
{\allowdisplaybreaks\begin{align*}\allowdisplaybreaks
&\Phi (\alpha _{1} g1_{\mathcal{A}}^{*}t_{ij}+\alpha _{2} gt_{ij}1_{\mathcal{A}}^{*}+\alpha _{3} 1_{\mathcal{A}}^{*}gt_{ij}+\alpha _{4} t_{ij}g1_{\mathcal{A}}^{*}+\alpha _{5} 1_{\mathcal{A}}^{*}t_{ij}g+\alpha _{6} t_{ij}1_{\mathcal{A}}^{*}g)\\
&=\Phi (\alpha _{1} h_{ij}1_{\mathcal{A}}^{*}t_{ij}+\alpha _{2} h_{ij}t_{ij}1_{\mathcal{A}}^{*}+\alpha _{3} 1_{\mathcal{A}}^{*}h_{ij}t_{ij}+\alpha _{4} t_{ij}h_{ij}1_{\mathcal{A}}^{*}+\alpha _{5} 1_{\mathcal{A}}^{*}t_{ij}h_{ij}+\alpha _{6} t_{ij}1_{\mathcal{A}}^{*}h_{ij})\\
&+\Phi (\alpha _{1} h_{ji}1_{\mathcal{A}}^{*}t_{ij}+\alpha _{2} h_{ji}t_{ij}1_{\mathcal{A}}^{*}+\alpha _{3} 1_{\mathcal{A}}^{*}h_{ji}t_{ij} +\alpha _{4} t_{ij}h_{ji}1_{\mathcal{A}}^{*}+\alpha _{5} 1_{\mathcal{A}}^{*}t_{ij}h_{ji}+\alpha _{6} t_{ij}1_{\mathcal{A}}^{*}h_{ji})\\
&=\Phi ((\alpha _{1}+\alpha _{2}+\alpha _{3})h_{ji}t_{ij}+(\alpha _{4}+\alpha _{5}+\alpha _{6})t_{ij}h_{ji})
\end{align*}}
which shows that $\alpha _{1} g1_{\mathcal{A}}^{*}t_{ij}+\alpha _{2} gt_{ij}1_{\mathcal{A}}^{*}+\alpha _{3} 1_{\mathcal{A}}^{*}gt_{ij}+\alpha _{4} t_{ij}g1_{\mathcal{A}}^{*}+\alpha _{5} 1_{\mathcal{A}}^{*}t_{ij}g+\alpha _{6} t_{ij}1_{\mathcal{A}}^{*}g=(\alpha _{1}+\alpha _{2}+\alpha _{3})h_{ji}t_{ij}+(\alpha _{4}+\alpha _{5}+\alpha _{6})t_{ij}h_{ji}.$ As a result we get the identity $(\alpha _{1}+\alpha _{2}+\alpha _{3})g_{ii}t_{ij}+(\alpha _{1}+\alpha _{2}+\alpha _{3})g_{ji}t_{ij}+(\alpha _{4}+\alpha _{5}+\alpha _{6})t_{ij}g_{ji}=(\alpha _{1}+\alpha _{2}+\alpha _{3})h_{ji}t_{ij}+(\alpha _{4}+\alpha _{5}+\alpha _{6})t_{ij}h_{ji}.$ This shows that $(\alpha _{6}+\alpha _{4}+\alpha _{5})g_{ii}t_{ij}+(\alpha _{6}+\alpha _{4}+\alpha _{5})g_{ji}t_{ij}+(\alpha _{2}+\alpha _{3}+\alpha _{1})t_{ij}g_{ji}=(\alpha _{6}+\alpha _{4}+\alpha _{5})h_{ji}t_{ij}+(\alpha _{2}+\alpha _{3}+\alpha _{1})t_{ij}h_{ji},$ by Claim \ref{c21}. Adding these last two results we obtain $(\sum _{k=1}^{6} \alpha _{k})(g_{ii}t_{ij}+g_{ji}t_{ij}+t_{ij}g_{ji})=(\sum _{k=1}^{6} \alpha _{k})(h_{ji}t_{ij}+t_{ij}h_{ji})$ which, comparing the $\mathcal{A}_{ij}$ components, allows us to deduce that $g_{ii}t_{ij}=0.$ Thus $g_{ii}=0,$ by the primeness of $\mathcal{A},$ which implies that $f_{ii}=0.$ Using a similar reasoning as before, we prove that $f_{jj}=0.$ Next, for an arbitrary element $t_{ji}\in \mathcal{A}_{ji},$ we have
{\allowdisplaybreaks\begin{align*}\allowdisplaybreaks
&\Phi (\alpha _{1} f1_{\mathcal{A}}^{*}t_{ji}+\alpha _{2} ft_{ji}1_{\mathcal{A}}^{*}+\alpha _{3} 1_{\mathcal{A}}^{*}ft_{ji}+\alpha _{4} t_{ji}f1_{\mathcal{A}}^{*}+\alpha _{5} 1_{\mathcal{A}}^{*}t_{ji}f+\alpha _{6} t_{ji}1_{\mathcal{A}}^{*}f)\\
&=\Phi (\alpha _{1} b_{ij}1_{\mathcal{A}}^{*}t_{ji}+\alpha _{2} b_{ij}t_{ji}1_{\mathcal{A}}^{*}+\alpha _{3} 1_{\mathcal{A}}^{*}b_{ij}t_{ji}+\alpha _{4} t_{ji}b_{ij}1_{\mathcal{A}}^{*}+\alpha _{5} 1_{\mathcal{A}}^{*}t_{ji}b_{ij}+\alpha _{6} t_{ji}1_{\mathcal{A}}^{*}b_{ij})\\
&+\Phi (\alpha _{1} c_{ji}1_{\mathcal{A}}^{*}t_{ji}+\alpha _{2} c_{ji}t_{ji}1_{\mathcal{A}}^{*}+\alpha _{3} 1_{\mathcal{A}}^{*}c_{ji}t_{ji} +\alpha _{4} t_{ji}c_{ji}1_{\mathcal{A}}^{*}+\alpha _{5} 1_{\mathcal{A}}^{*}t_{ji}c_{ji}+\alpha _{6} t_{ji}1_{\mathcal{A}}^{*}c_{ji})\\
&=\Phi ((\alpha _{1}+\alpha _{2}+\alpha _{3})b_{ij}t_{ji}+(\alpha _{4}+\alpha _{5}+\alpha _{6})t_{ji}b_{ij}).
\end{align*}}
This shows that $\alpha _{1} f1_{\mathcal{A}}^{*}t_{ji}+\alpha _{2} ft_{ji}1_{\mathcal{A}}^{*}+\alpha _{3} 1_{\mathcal{A}}^{*}ft_{ji} +\alpha _{4} t_{ji}f1_{\mathcal{A}}^{*}+\alpha _{5} 1_{\mathcal{A}}^{*}t_{ji}f+\alpha _{6} t_{ji}1_{\mathcal{A}}^{*}f=(\alpha _{1}+\alpha _{2}+\alpha _{3})b_{ij}t_{ji}+(\alpha _{4}+\alpha _{5}+\alpha _{6})t_{ji}b_{ij}.$ As a consequence of this identity, we deduce that $(\alpha _{1}+\alpha _{2}+\alpha _{3})f_{ij}t_{ji}+(\alpha _{4}+\alpha _{5}+\alpha _{6})t_{ji}f_{ij}=(\alpha _{1}+\alpha _{2}+\alpha _{3})b_{ij}t_{ji}+(\alpha _{4}+\alpha _{5}+\alpha _{6})t_{ji}b_{ij}$ from which we can also deduce that $(\alpha _{6}+\alpha _{4}+\alpha _{5})f_{ij}t_{ji}+(\alpha _{2}+\alpha _{3}+\alpha _{1})t_{ji}f_{ij}=(\alpha _{6}+\alpha _{4}+\alpha _{5})b_{ij}t_{ji}+(\alpha _{2}+\alpha _{3}+\alpha _{1})t_{ji}b_{ij},$ by Claim \ref{c21}. Adding the last two identities we find $(\sum _{k=1}^{6} \alpha _{k}) (f_{ij}t_{ji}+t_{ji}f_{ij})=(\sum _{k=1}^{6} \alpha _{k})(b_{ij}t_{ji}+t_{ji}b_{ij})$ which leads to the conclusion that $f_{ij}t_{ji}=b_{ij}t_{ji}.$ Therefore, $f_{ij}=b_{ij}.$ Using a similar reasoning as before, we prove that $f_{ji}=c_{ji}.$
\end{proof}
\begin{claim}\label{c26} For arbitrary elements $a_{ij},b_{ij}\in \mathcal{A}_{ij}$ $(i\neq j;i,j=1,2)$ we have $\Phi (a_{ij}+b_{ij})=\Phi (a_{ij})+ \Phi (b_{ij}).$
\end{claim}
\begin{proof} First, note that
{\allowdisplaybreaks\begin{align*}\allowdisplaybreaks
&(\alpha _{1}+\alpha _{2}+\alpha _{3}) (a_{ij}+b_{ij})\\
&=\alpha _{1} (p_{i}+a_{ij})1_{\mathcal{A}}^{*}(p_{j}+b_{ij})+\alpha _{2} (p_{i}+a_{ij})(p_{j}+b_{ij})1_{\mathcal{A}}^{*}\\
&+\alpha _{3} 1_{\mathcal{A}}^{*}(p_{i}+a_{ij})(p_{j}+b_{ij})+\alpha _{4} (p_{j}+b_{ij})(p_{i}+a_{ij})1_{\mathcal{A}}^{*}\\
&+\alpha _{5} 1_{\mathcal{A}}^{*}(p_{j}+b_{ij})(p_{i}+a_{ij})+\alpha _{6} (p_{j}+b_{ij})1_{\mathcal{A}}^{*}(p_{i}+a_{ij}),
\end{align*}}
for all elements $a_{ij},b_{ij}\in \mathcal{A}_{ij}.$ Hence, by (\ref{fundident}) and Claim \ref{c24}(i), we have
{\allowdisplaybreaks\begin{align*}\allowdisplaybreaks
&\Phi ((\alpha _{1}+\alpha _{2}+\alpha _{3}) (a_{ij}+b_{ij}))\\
&=\Phi (\alpha _{1} (p_{i}+a_{ij})1_{\mathcal{A}}^{*}(p_{j}+b_{ij})+\alpha _{2} (p_{i}+a_{ij})(p_{j}+b_{ij})1_{\mathcal{A}}^{*}\\
&+\alpha _{3} 1_{\mathcal{A}}^{*}(p_{i}+a_{ij})(p_{j}+b_{ij})+\alpha _{4} (p_{j}+b_{ij})(p_{i}+a_{ij})1_{\mathcal{A}}^{*}\\
&+\alpha _{5} 1_{\mathcal{A}}^{*}(p_{j}+b_{ij})(p_{i}+a_{ij})+\alpha _{6} (p_{j}+b_{ij})1_{\mathcal{A}}^{*}(p_{i}+a_{ij}))\\
&=\alpha _{1} \Phi (p_{i}+a_{ij})\Phi (1_{\mathcal{A}})^{*}\Phi (p_{j}+b_{ij})+\alpha _{2} \Phi (p_{i}+a_{ij})\Phi (p_{j}+b_{ij})\Phi (1_{\mathcal{A}})^{*}\\
&+\alpha _{3} \Phi (1_{\mathcal{A}})^{*}\Phi (p_{i}+a_{ij})\Phi (p_{j}+b_{ij})+\alpha _{4} \Phi (p_{j}+b_{ij})\Phi (p_{i}+a_{ij})\Phi (1_{\mathcal{A}})^{*}\\
&+\alpha _{5} \Phi (1_{\mathcal{A}})^{*}\Phi (p_{j}+b_{ij})\Phi (p_{i}+a_{ij})+\alpha _{6} \Phi (p_{j}+b_{ij})\Phi (1_{\mathcal{A}})^{*}\Phi (p_{i}+a_{ij})\\
&=\alpha _{1} (\Phi (p_{i})+\Phi (a_{ij}))\Phi (1_{\mathcal{A}})^{*}(\Phi (p_{j})+\Phi (b_{ij}))\\
&+\alpha _{2} (\Phi (p_{i})+\Phi (a_{ij}))(\Phi (p_{j})+\Phi (b_{ij}))\Phi (1_{\mathcal{A}})^{*}\\
&+\alpha _{3} \Phi (1_{\mathcal{A}})^{*}(\Phi (p_{i})+\Phi (a_{ij}))(\Phi (p_{j})+\Phi (b_{ij}))\\
&+\alpha _{4} (\Phi (p_{j})+\Phi (b_{ij}))(\Phi (p_{i})+\Phi (a_{ij}))\Phi (1_{\mathcal{A}})^{*}\\
&+\alpha _{5} \Phi (1_{\mathcal{A}})^{*}(\Phi (p_{j})+\Phi (b_{ij}))(\Phi (p_{i})+\Phi (a_{ij}))\\
&+\alpha _{6} (\Phi (p_{j})+\Phi (b_{ij}))\Phi (1_{\mathcal{A}})^{*}(\Phi (p_{i})+\Phi (a_{ij}))\\
&=\alpha _{1} \Phi (p_{i})\Phi (1_{\mathcal{A}})^{*}\Phi (p_{j})+\alpha _{2} \Phi (p_{i})\Phi (p_{j})\Phi (1_{\mathcal{A}})^{*}+\alpha _{3} \Phi (1_{\mathcal{A}})^{*}\Phi (p_{i})\Phi (p_{j})\\
&+\alpha _{4} \Phi (p_{j})\Phi (p_{i})\Phi (1_{\mathcal{A}})^{*}+\alpha _{5} \Phi (1_{\mathcal{A}})^{*}\Phi (p_{j})\Phi (p_{i})+\alpha _{6} \Phi (p_{j})\Phi (1_{\mathcal{A}})^{*}\Phi (p_{i})\\
&+\alpha _{1} \Phi (a_{ij})\Phi (1_{\mathcal{A}})^{*}\Phi (p_{j})+\alpha _{2} \Phi (a_{ij})\Phi (p_{j})\Phi (1_{\mathcal{A}})^{*}+\alpha _{3} \Phi (1_{\mathcal{A}})^{*}\Phi (a_{ij})\Phi (p_{j})\\
&+\alpha _{4} \Phi (p_{j})\Phi (a_{ij})\Phi (1_{\mathcal{A}})^{*}+\alpha _{5} \Phi (1_{\mathcal{A}})^{*}\Phi (p_{j})\Phi (a_{ij})+\alpha _{6} \Phi (p_{j})\Phi (1_{\mathcal{A}})^{*}\Phi (a_{ij})\\
&+\alpha _{1} \Phi (p_{i})\Phi (1_{\mathcal{A}})^{*}\Phi (b_{ij})
+\alpha _{2} \Phi (p_{i})\Phi (b_{ij})\Phi (1_{\mathcal{A}})^{*}
+\alpha _{3} \Phi (1_{\mathcal{A}})^{*}\Phi (p_{i})\Phi (b_{ij})\\
&+\alpha _{4} \Phi (b_{ij})\Phi (p_{i})\Phi (1_{\mathcal{A}})^{*}
+\alpha _{5} \Phi (1_{\mathcal{A}})^{*}\Phi (b_{ij})\Phi (p_{i})
+\alpha _{6} \Phi (b_{ij})\Phi (1_{\mathcal{A}})^{*}\Phi (p_{i})\\
&+\alpha _{1} \Phi (a_{ij})\Phi (1_{\mathcal{A}})^{*}\Phi (b_{ij})
+\alpha _{2} \Phi (a_{ij})\Phi (b_{ij})\Phi (1_{\mathcal{A}})^{*}
+\alpha _{3} \Phi (1_{\mathcal{A}})^{*}\Phi (a_{ij})\Phi (b_{ij})\\
&+\alpha _{4} \Phi (b_{ij})\Phi (a_{ij})\Phi (1_{\mathcal{A}})^{*}
+\alpha _{5} \Phi (1_{\mathcal{A}})^{*}\Phi (b_{ij})\Phi (a_{ij})
+\alpha _{6} \Phi (b_{ij})\Phi (1_{\mathcal{A}})^{*}\Phi (a_{ij})\\
&=\Phi (\alpha _{1}p_{i}1_{\mathcal{A}}^{*}p_{j}+\alpha _{2} p_{i}p_{j}1_{\mathcal{A}}^{*}+\alpha _{3} 1_{\mathcal{A}}^{*}p_{i}p_{j}+\alpha _{4} p_{j}p_{i}1_{\mathcal{A}}^{*}+\alpha _{5} 1_{\mathcal{A}}^{*}p_{j}p_{i} \\
&+\alpha _{6} p_{j}1_{\mathcal{A}}^{*}p_{i})+\Phi (\alpha _{1} a_{ij}1_{\mathcal{A}}^{*}p_{j}+\alpha _{2} a_{ij}p_{j}1_{\mathcal{A}}^{*}+\alpha _{3} 1_{\mathcal{A}}^{*}a_{ij}p_{j}+\alpha _{4} p_{j}a_{ij}1_{\mathcal{A}}^{*}\\
&+\alpha _{5} 1_{\mathcal{A}}^{*}p_{j}a_{ij}+\alpha _{6} p_{j}1_{\mathcal{A}}^{*}a_{ij})+\Phi (\alpha _{1} p_{i}1_{\mathcal{A}}^{*}b_{ij}+\alpha _{2} p_{i}b_{ij}1_{\mathcal{A}}^{*}+\alpha _{3} 1_{\mathcal{A}}^{*}p_{i}b_{ij}\\
&+\alpha _{4} b_{ij}p_{i}1_{\mathcal{A}}^{*}
+\alpha _{5} 1_{\mathcal{A}}^{*}b_{ij}p_{i}
+\alpha _{6} b_{ij}1_{\mathcal{A}}^{*}p_{i})+\Phi (\alpha _{1} a_{ij}1_{\mathcal{A}}^{*}b_{ij}
+\alpha _{2} a_{ij}b_{ij}1_{\mathcal{A}}^{*}\\
&+\alpha _{3} 1_{\mathcal{A}}^{*}a_{ij}b_{ij}+\alpha _{4} b_{ij}a_{ij}1_{\mathcal{A}}^{*}
+\alpha _{5} 1_{\mathcal{A}}^{*}b_{ij}a_{ij}
+\alpha _{6} b_{ij}1_{\mathcal{A}}^{*}a_{ij})\\
&=\Phi ((\alpha _{1}+\alpha _{2}+\alpha _{3}) a_{ij})+\Phi ((\alpha _{1}+\alpha _{2}+\alpha _{3}) b_{ij}).
\end{align*}}
Thus
{\allowdisplaybreaks\begin{align}\allowdisplaybreaks\label{id03}
\Phi ((\alpha _{1}+\alpha _{2}+\alpha _{3}) (a_{ij}+b_{ij}))=\Phi ((\alpha _{1}+\alpha _{2}+\alpha _{3})a_{ij})+\Phi ((\alpha _{1}+\alpha _{2}+\alpha _{3}) b_{ij}).
\end{align}}
However, by Claim \ref{c21}, we have
{\allowdisplaybreaks\begin{align}\allowdisplaybreaks\label{id04}
\Phi ((\alpha _{6}+\alpha _{4}+\alpha _{5})(a_{ij}+b_{ij}))=\Phi ((\alpha _{6}+\alpha _{4}+\alpha _{5}) a_{ij})+\Phi ((\alpha _{6}+\alpha _{4}+\alpha _{5}) b_{ij}).
\end{align}}
Therefore, if $\alpha _{1}+\alpha _{2}+\alpha _{3}\neq 0,$ then the identity $\Phi (a_{ij}+b_{ij})=\Phi (a_{ij})+ \Phi (b_{ij})$ follows directly from (\ref{id03}). Otherwise, we must have $\alpha _{6}+\alpha _{4}+\alpha _{5}\neq 0$ which also leads to $\Phi (a_{ij}+b_{ij})=\Phi (a_{ij})+ \Phi (b_{ij}),$ by identity (\ref{id04}).
\end{proof}
\begin{claim}\label{c27} For arbitrary elements $a_{ii},b_{ii}\in \mathcal{A}_{ii}$ $(i=1,2),$ we have $\Phi (a_{ii}+b_{ii})=\Phi (a_{ii})+\Phi (b_{ii}).$
\end{claim}
\begin{proof} Take an element $f=f_{ii}+f_{ij}+f_{ji}+f_{jj}\in \mathcal{A}$ such that $\Phi (f)=\Phi (a_{ii})+\Phi (b_{ii})$. Then
{\allowdisplaybreaks\begin{align*}\allowdisplaybreaks
&\Phi (\alpha _{1} f1_{\mathcal{A}}^{*}p_{j}+\alpha _{2} fp_{j}1_{\mathcal{A}}^{*}+\alpha _{3} 1_{\mathcal{A}}^{*}fp_{j}+\alpha _{4} p_{j}f1_{\mathcal{A}}^{*}+\alpha _{5} 1_{\mathcal{A}}^{*}p_{j}f+\alpha _{6} p_{j}1_{\mathcal{A}}^{*}f)\\
&=\Phi (\alpha _{1} a_{ii}1_{\mathcal{A}}^{*}p_{j}+\alpha _{2} a_{ii}p_{j}1_{\mathcal{A}}^{*}+\alpha _{3} 1_{\mathcal{A}}^{*}a_{ii}p_{j}+\alpha _{4} p_{j}a_{ii}1_{\mathcal{A}}^{*}+\alpha _{5} 1_{\mathcal{A}}^{*}p_{j}a_{ii}+\alpha _{6} p_{j}1_{\mathcal{A}}^{*}a_{ii})\\
&+\Phi (\alpha _{1} b_{ii}1_{\mathcal{A}}^{*}p_{j}+\alpha _{2} b_{ii}p_{j}1_{\mathcal{A}}^{*}+\alpha _{3} 1_{\mathcal{A}}^{*}b_{ii}p_{j}+\alpha _{4} p_{j}b_{ii}1_{\mathcal{A}}^{*}+\alpha _{5} 1_{\mathcal{A}}^{*}p_{j}b_{ii}+\alpha _{6} p_{j}1_{\mathcal{A}}^{*}b_{ii})\\
&=0.
\end{align*}}
This shows that $\alpha _{1} f1_{\mathcal{A}}^{*}p_{j}+\alpha _{2} fp_{j}1_{\mathcal{A}}^{*}+\alpha _{3} 1_{\mathcal{A}}^{*}fp_{j}+\alpha _{4} p_{j}f1_{\mathcal{A}}^{*}+\alpha _{5} 1_{\mathcal{A}}^{*}p_{j}f+\alpha _{6} p_{j}1_{\mathcal{A}}^{*}f=0$ which leads to the identity $(\alpha _{1}+\alpha _{2}+\alpha _{3})f_{ij}+(\alpha _{4}+\alpha _{5}+\alpha _{6})f_{ji}+(\sum _{k=1}^{6} \alpha _{k})f_{jj}=0.$ From this we deduce that $(\alpha _{6}+\alpha _{4}+\alpha _{5})f_{ij}+(\alpha _{2}+\alpha _{3}+\alpha _{1})f_{ji}+(\sum _{k=1}^{6} \alpha _{k})f_{jj}=0,$ by Claim \ref{c21}. Adding the two last equations yields $(\sum _{k=1}^{6} \alpha _{k} )(f_{ij}+f_{ji}+2f_{jj})=0$ which results in $f_{ij}=0,$ $f_{ji}=0$ and $f_{jj}=0.$ It therefore follows that $\Phi (f_{ii})=\Phi (a_{ii})+\Phi (b_{ii})$. Hence, for an arbitrary element $t_{ij}\in \mathcal{A}_{ij},$ we have
{\allowdisplaybreaks\begin{align*}\allowdisplaybreaks
&\Phi (\alpha _{1} f_{ii}1_{\mathcal{A}}^{*}t_{ij}+\alpha _{2} f_{ii}t_{ij}1_{\mathcal{A}}^{*}+\alpha _{3} 1_{\mathcal{A}}^{*}f_{ii}t_{ij} +\alpha _{4} t_{ij}f_{ii}1_{\mathcal{A}}^{*}+\alpha _{5} 1_{\mathcal{A}}^{*}t_{ij}f_{ii}+\alpha _{6} t_{ij}1_{\mathcal{A}}^{*}f_{ii})\\
&=\Phi (\alpha _{1} a_{ii}1_{\mathcal{A}}^{*}t_{ij}+\alpha _{2} a_{ii}t_{ij}1_{\mathcal{A}}^{*}+\alpha _{3} 1_{\mathcal{A}}^{*}a_{ii}t_{ij}+\alpha _{4} t_{ij}a_{ii}1_{\mathcal{A}}^{*}+\alpha _{5} 1_{\mathcal{A}}^{*}t_{ij}a_{ii}+\alpha _{6} t_{ij}1_{\mathcal{A}}^{*}a_{ii})\\
&+\Phi (\alpha _{1} b_{ii}1_{\mathcal{A}}^{*}t_{ij}+\alpha _{2} b_{ii}t_{ij}1_{\mathcal{A}}^{*}+\alpha _{3} 1_{\mathcal{A}}^{*}b_{ii}t_{ij}+\alpha _{4} t_{ij}b_{ii}1_{\mathcal{A}}^{*}+\alpha _{5} 1_{\mathcal{A}}^{*}t_{ij}b_{ii}+\alpha _{6} t_{ij}1_{\mathcal{A}}^{*}b_{ii})\\
&=\Phi ((\alpha _{1} +\alpha _{2}+\alpha _{3})a_{ii}t_{ij})+\Phi ((\alpha _{1} +\alpha _{2}+\alpha _{3})b_{ii}t_{ij})\\
&=\Phi ((\alpha _{1} +\alpha _{2}+\alpha _{3})(a_{ii}+b_{ii})t_{ij}),
\end{align*}}
by Claim \ref{c26}, which shows that $(\alpha _{1} +\alpha _{2}+\alpha _{3})f_{ii}t_{ij}=(\alpha _{1} +\alpha _{2}+\alpha _{3})(a_{ii}+b_{ii})t_{ij}.$ This makes it possible to deduce that $(\alpha _{6} +\alpha _{4}+\alpha _{5})f_{ii}t_{ij}=(\alpha _{6} +\alpha _{4}+\alpha _{5})(a_{ii}+b_{ii})t_{ij},$ by Claim \ref{c21}. Thus, adding the two last identities we get $(\sum _{k=1}^{6} \alpha _{k}) f_{ii}t_{ij}=(\sum _{k=1}^{6} \alpha _{k} )(a_{ii}+b_{ii})t_{ij}$ which yields $f_{ii}t_{ij}=(a_{ii}+b_{ii})t_{ij}.$ As a consequence, we obtain $f_{ii}=a_{ii}+b_{ii}.$
\end{proof}
\begin{claim}\label{c28} For arbitrary elements $a_{ii}\in \mathcal{A}_{ii}$, $b_{ij}\in \mathcal{A}_{ij}$, $c_{ji}\in \mathcal{A}_{ji}$ and $d_{jj}\in \mathcal{A}_{jj}$ $(i\neq j;i,j=1,2)$ the following holds: (i) $\Phi (a_{ii}+b_{ij}+c_{ji})=\Phi (a_{ii})+\Phi (b_{ij})+\Phi (c_{ji})$ and (ii) $\Phi (b_{ij}+c_{ji}+d_{jj})=\Phi (b_{ij})+\Phi (c_{ji})+\Phi (d_{jj}).$
\end{claim}
\begin{proof} Take an element $f=f_{ii}+f_{ij}+f_{ji}+f_{jj}\in \mathcal{A}$ such that $\Phi (f)=\Phi (a_{ii})+\Phi (b_{ij})+\Phi (c_{ji})$ and write $\Phi (f)=\Phi (a_{ii})+\Phi (b_{ij}+c_{ji}),$ by Claim \ref{c25}. Then
{\allowdisplaybreaks\begin{align*}\allowdisplaybreaks
&\Phi (\alpha _{1} f1_{\mathcal{A}}^{*}p_{j}+\alpha _{2} fp_{j}1_{\mathcal{A}}^{*}+\alpha _{3} 1_{\mathcal{A}}^{*}fp_{j} +\alpha _{4} p_{j}f1_{\mathcal{A}}^{*}+\alpha _{5} 1_{\mathcal{A}}^{*}p_{j}f+\alpha _{6} p_{j}1_{\mathcal{A}}^{*}f)\\
&=\Phi (\alpha _{1} a_{ii}1_{\mathcal{A}}^{*}p_{j}+\alpha _{2} a_{ii}p_{j}1_{\mathcal{A}}^{*}+\alpha _{3} 1_{\mathcal{A}}^{*}a_{ii}p_{j}+\alpha _{4} p_{j}a_{ii}1_{\mathcal{A}}^{*}+\alpha _{5} 1_{\mathcal{A}}^{*}p_{j}a_{ii}+\alpha _{6} p_{j}1_{\mathcal{A}}^{*}a_{ii})\\
&+\Phi (\alpha _{1} (b_{ij}+c_{ji})1_{\mathcal{A}}^{*}p_{j}+\alpha _{2} (b_{ij}+c_{ji})p_{j}1_{\mathcal{A}}^{*}+\alpha _{3} 1_{\mathcal{A}}^{*}(b_{ij}+c_{ji})p_{j}\\
&+\alpha _{4} p_{j}(b_{ij}+c_{ji})1_{\mathcal{A}}^{*}+\alpha _{5} 1_{\mathcal{A}}^{*}p_{j}(b_{ij}+c_{ji})+\alpha _{6} p_{j}1_{\mathcal{A}}^{*}(b_{ij}+c_{ji}))\\
&=\Phi ((\alpha _{1}+\alpha _{2}+\alpha _{3})b_{ij}+(\alpha _{4}+\alpha _{5}+\alpha _{6})c_{ji}).
\end{align*}}
It follows directly from this that $\alpha _{1} f1_{\mathcal{A}}^{*}p_{j}+\alpha _{2} fp_{j}1_{\mathcal{A}}^{*}+\alpha _{3} 1_{\mathcal{A}}^{*}fp_{j} +\alpha _{4} p_{j}f1_{\mathcal{A}}^{*}+\alpha _{5} 1_{\mathcal{A}}^{*}p_{j}f+\alpha _{6} p_{j}1_{\mathcal{A}}^{*}f=(\alpha _{1}+\alpha _{2}+\alpha _{3})b_{ij}+(\alpha _{4}+\alpha _{5}+\alpha _{6})c_{ji}$ which results in $(\alpha _{1}+\alpha _{2}+\alpha _{3})f_{ij}+(\alpha _{4}+\alpha _{5}+\alpha _{6})f_{ji}+ (\sum _{k=1}^{6} \alpha _{k})f_{jj}=(\alpha _{1}+\alpha _{2}+\alpha _{3})b_{ij}+(\alpha _{4}+\alpha _{5}+\alpha _{6})c_{ji}.$ As a result, we can apply Claim \ref{c21} to conclude that $(\alpha _{6}+\alpha _{4}+\alpha _{5})f_{ij}+(\alpha _{2}+\alpha _{3}+\alpha _{1})f_{ji}+ (\sum _{k=1}^{6} \alpha _{k})f_{jj}=(\alpha _{6}+\alpha _{4}+\alpha _{5})b_{ij}+(\alpha _{2}+\alpha _{3}+\alpha _{1})c_{ji}.$ Adding these two identities we obtain $(\sum _{k=1}^{6} \alpha _{k})(f_{ij}+f_{ji}+2f_{jj})=(\sum _{k=1}^{6} \alpha _{k})(b_{ij}+c_{ji})$ which shows that $f_{ij}=b_{ij},$ $f_{ji}=c_{ji}$ and $f_{jj}=0.$ Next, write $\Phi (f)=\Phi (a_{ii}+b_{ij})+\Phi (c_{ji}),$ by Claim \ref{c24}(i). For an arbitrary element $t_{ji}\in \mathcal{A}_{ji},$ we have
{\allowdisplaybreaks\begin{align*}\allowdisplaybreaks
&\Phi (\alpha _{1} f1_{\mathcal{A}}^{*}t_{ji}+\alpha _{2} ft_{ji}1_{\mathcal{A}}^{*}+\alpha _{3} 1_{\mathcal{A}}^{*}ft_{ji} +\alpha _{4} t_{ji}f1_{\mathcal{A}}^{*}+\alpha _{5} 1_{\mathcal{A}}^{*}t_{ji}f+\alpha _{6} t_{ji}1_{\mathcal{A}}^{*}f)\\
&=\Phi (\alpha _{1} (a_{ii}+b_{ij})1_{\mathcal{A}}^{*}t_{ji}+\alpha _{2} (a_{ii}+b_{ij})t_{ji}1_{\mathcal{A}}^{*}+\alpha _{3} 1_{\mathcal{A}}^{*}(a_{ii}+b_{ij})t_{ji}\\
&+\alpha _{4} t_{ji}(a_{ii}+b_{ij})1_{\mathcal{A}}^{*}+\alpha _{5} 1_{\mathcal{A}}^{*}t_{ji}(a_{ii}+b_{ij})+\alpha _{6} t_{ji}1_{\mathcal{A}}^{*}(a_{ii}+b_{ij}))\\
&+\Phi (\alpha _{1} c_{ji}1_{\mathcal{A}}^{*}t_{ji}+\alpha _{2} c_{ji}t_{ji}1_{\mathcal{A}}^{*}+\alpha _{3} 1_{\mathcal{A}}^{*}c_{ji}t_{ji}+\alpha _{4} t_{ji}c_{ji}1_{\mathcal{A}}^{*}+\alpha _{5} 1_{\mathcal{A}}^{*}t_{ji}c_{ji}+\alpha _{6} t_{ji}1_{\mathcal{A}}^{*}c_{ji})\\
&=\Phi ((\alpha _{1}+\alpha _{2}+\alpha _{3})b_{ij}t_{ji}+(\alpha _{4}+\alpha _{5}+\alpha _{6})t_{ji}a_{ii}+(\alpha _{4}+\alpha _{5}+\alpha _{6})t_{ji}b_{ij}).
\end{align*}}
This implies that $\alpha _{1} f1_{\mathcal{A}}^{*}t_{ji}+\alpha _{2} ft_{ji}1_{\mathcal{A}}^{*}+\alpha _{3} 1_{\mathcal{A}}^{*}ft_{ji} +\alpha _{4} t_{ji}f1_{\mathcal{A}}^{*}+\alpha _{5} 1_{\mathcal{A}}^{*}t_{ji}f+\alpha _{6} t_{ji}1_{\mathcal{A}}^{*}f=(\alpha _{1}+\alpha _{2}+\alpha _{3})b_{ij}t_{ji}+(\alpha _{4}+\alpha _{5}+\alpha _{6})t_{ji}a_{ii}+(\alpha _{4}+\alpha _{5}+\alpha _{6})t_{ji}b_{ij}$ which results in $(\alpha _{4}+\alpha _{5}+\alpha _{6})t_{ji}f_{ii}=(\alpha _{4}+\alpha _{5}+\alpha _{6})t_{ji}a_{ii}.$ Now we can apply Claim \ref{c21} to conclude that $(\alpha _{2}+\alpha _{3}+\alpha _{1})t_{ji}f_{ii}=(\alpha _{2}+\alpha _{3}+\alpha _{1})t_{ji}a_{ii}.$ Thus, adding these two last identities we get $(\sum _{k=1}^{6} \alpha _{k})t_{ji}f_{ii}=(\sum _{k=1}^{6} \alpha _{k})t_{ji}a_{ii}$ which leads to $t_{ji}f_{ii}=t_{ji}a_{ii}.$ Therefore, $f_{ii}=a_{ii}.$
By an entirely similar reasoning, we prove the case (ii).
\end{proof}
\begin{claim}\label{c29} For arbitrary elements $a_{ii}\in \mathcal{A}_{ii}$, $b_{ij}\in \mathcal{A}_{ij}$, $c_{ji}\in \mathcal{A}_{ji}$ and $d_{jj}\in \mathcal{A}_{jj}$ $(i\neq j;i,j=1,2)$ the following holds $\Phi (a_{ii}+b_{ij}+c_{ji}+d_{jj})=\Phi (a_{ii})+\Phi (b_{ij})+\Phi (c_{ji})+\Phi (d_{jj}).$
\end{claim}
\begin{proof} Consider an element $f=f_{ii}+f_{ij}+f_{ji}+f_{jj}\in \mathcal{A}$ such that $\Phi (f)=\Phi (a_{ii})+\Phi (b_{ij})+\Phi (c_{ji})+\Phi (d_{jj})$ and write $\Phi (f)=\Phi (a_{ii}+b_{ij}+c_{ji})+\Phi (d_{jj})$, by Claim \ref{c28}(i). By Claim \ref{c25} we have
{\allowdisplaybreaks\begin{align*}\allowdisplaybreaks
&\Phi (\alpha _{1} f1_{\mathcal{A}}^{*}p_{i}+\alpha _{2} fp_{i}1_{\mathcal{A}}^{*}+\alpha _{3} 1_{\mathcal{A}}^{*}fp_{i} +\alpha _{4} p_{i}f1_{\mathcal{A}}^{*}+\alpha _{5} 1_{\mathcal{A}}^{*}p_{i}f+\alpha _{6} p_{i}1_{\mathcal{A}}^{*}f)\\
&=\Phi (\alpha _{1} (a_{ii}+b_{ij}+c_{ji})1_{\mathcal{A}}^{*}p_{i}+\alpha _{2} (a_{ii}+b_{ij}+c_{ji})p_{i}1_{\mathcal{A}}^{*}+\alpha _{3} 1_{\mathcal{A}}^{*}(a_{ii}+b_{ij}+c_{ji})p_{i}\\
&+\alpha _{4} p_{i}(a_{ii}+b_{ij}+c_{ji})1_{\mathcal{A}}^{*}+\alpha _{5} 1_{\mathcal{A}}^{*}p_{i}(a_{ii}+b_{ij}+c_{ji})+\alpha _{6} p_{i}1_{\mathcal{A}}^{*}(a_{ii}+b_{ij}+c_{ji}))\\
&+\Phi (\alpha _{1} d_{jj}1_{\mathcal{A}}^{*}p_{i}+\alpha _{2} d_{jj}p_{i}1_{\mathcal{A}}^{*}+\alpha _{3} 1_{\mathcal{A}}^{*}d_{jj}p_{i}+\alpha _{4} p_{i}d_{jj}1_{\mathcal{A}}^{*}+\alpha _{5} 1_{\mathcal{A}}^{*}p_{i}d_{jj}+\alpha _{6} p_{i}1_{\mathcal{A}}^{*}d_{jj})\\
&=\Phi ((\textstyle \sum _{k=1}^{6} \alpha _{k})a_{ii}+(\alpha _{4}+\alpha _{5}+\alpha _{6})b_{ij}+(\alpha _{1}+\alpha _{2}+\alpha _{3})c_{ji}).
\end{align*}}
It follows that $\alpha _{1} f1_{\mathcal{A}}^{*}p_{i}+\alpha _{2} fp_{i}1_{\mathcal{A}}^{*}+\alpha _{3} 1_{\mathcal{A}}^{*}fp_{i} +\alpha _{4} p_{i}f1_{\mathcal{A}}^{*}+\alpha _{5} 1_{\mathcal{A}}^{*}p_{i}f+\alpha _{6} p_{i}1_{\mathcal{A}}^{*}f=(\sum _{k=1}^{6} \alpha _{k})a_{ii}+(\alpha _{4}+\alpha _{5}+\alpha _{6})b_{ij}+(\alpha _{1}+\alpha _{2}+\alpha _{3})c_{ji}$ which implies that $(\sum _{k=1}^{6} \alpha _{k})f_{ii}+(\alpha _{4}+\alpha _{5}+\alpha _{6})f_{ij}+(\alpha _{1}+\alpha _{2}+\alpha _{3})f_{ji}=(\sum _{k=1}^{6} \alpha _{k})a_{ii}+(\alpha _{4}+\alpha _{5}+\alpha _{6})b_{ij}+(\alpha _{1}+\alpha _{2}+\alpha _{3})c_{ji}$ and in view of Claim \ref{c21} we arrive at $(\sum _{k=1}^{6} \alpha _{k})f_{ii}+(\alpha _{2}+\alpha _{3}+\alpha _{1})f_{ij}+(\alpha _{6}+\alpha _{4}+\alpha _{5})f_{ji}=(\sum _{k=1}^{6} \alpha _{k})a_{ii}+(\alpha _{2}+\alpha _{3}+\alpha _{1})b_{ij}+(\alpha _{6}+\alpha _{4}+\alpha _{5})c_{ji}.$ Adding these two identities we have $(\sum _{k=1}^{6} \alpha _{k})(2f_{ii}+f_{ij}+f_{ji})=(\sum _{k=1}^{6} \alpha _{k})(2a_{ii}+b_{ij}+c_{ji})$ which shows that $f_{ii}=a_{ii},$ $f_{ij}=b_{ij}$ and $f_{ji}=c_{ji}.$ Using a similar reasoning as before, we prove that $f_{jj}=d_{jj}.$
\end{proof}
\begin{claim}\label{c210} $\Phi $ is an additive mapping.
\end{claim}
\begin{proof} The result is a direct consequence of Claims \ref{c26}, \ref{c27} and \ref{c29}.
\end{proof}
To prove the second part of Theorem \ref{thm21}, we assume that the element $\Phi (1_{\mathcal{A}})$ is a projection of $\mathcal{B}.$
\begin{claim}\label{c211} (i) $\Phi (1_{\mathcal{A}})=1_{\mathcal{B}},$ (ii) $\Phi ((\sum _{k=1}^{6} \alpha _{k})a)=(\sum _{k=1}^{6} \alpha _{k})\Phi (a),$ for every element $a\in \mathcal{A},$ and (iii) $\Phi (b^{*})=\Phi (b)^{*},$ for every element $b\in \mathcal{A}.$
\end{claim}
\begin{proof} First, we observe that
{\allowdisplaybreaks\begin{align*}\allowdisplaybreaks
&\Phi ((\textstyle \sum _{k=1}^{6} \alpha _{k})1_{\mathcal{A}})=\Phi (\alpha _{1} 1_{\mathcal{A}}1_{\mathcal{A}}^{*}1_{\mathcal{A}}+\alpha _{2} 1_{\mathcal{A}}1_{\mathcal{A}}1_{\mathcal{A}}^{*}+\alpha _{3} 1_{\mathcal{A}}^{*}1_{\mathcal{A}}1_{\mathcal{A}} +\alpha _{4} 1_{\mathcal{A}}1_{\mathcal{A}}1_{\mathcal{A}}^{*}\\
&+\alpha _{5} 1_{\mathcal{A}}^{*}1_{\mathcal{A}}1_{\mathcal{A}}+\alpha _{6} 1_{\mathcal{A}}1_{\mathcal{A}}^{*}1_{\mathcal{A}})=\alpha _{1} \Phi (1_{\mathcal{A}})\Phi (1_{\mathcal{A}})^{*}\Phi (1_{\mathcal{A}})+\alpha _{2} \Phi (1_{\mathcal{A}})\Phi (1_{\mathcal{A}})\Phi (1_{\mathcal{A}})^{*}\\
&+\alpha _{3} \Phi (1_{\mathcal{A}})^{*}\Phi (1_{\mathcal{A}})\Phi (1_{\mathcal{A}})+\alpha _{4} \Phi (1_{\mathcal{A}})\Phi (1_{\mathcal{A}})\Phi (1_{\mathcal{A}})^{*}+\alpha _{5} \Phi (1_{\mathcal{A}})^{*}\Phi (1_{\mathcal{A}})\Phi (1_{\mathcal{A}})\\
&+\alpha _{6}\Phi (1_{\mathcal{A}})\Phi (1_{\mathcal{A}})^{*}\Phi (1_{\mathcal{A}})=(\textstyle \sum _{k=1}^{6} \alpha _{k})\Phi (1_{\mathcal{A}}).
\end{align*}}
Thus, if $b\in \mathcal{A}$ is an element such that $\Phi (b)=1_{\mathcal{B}},$ then
{\allowdisplaybreaks\begin{align*}\allowdisplaybreaks
&\Phi ((\textstyle \sum _{k=1}^{6} \alpha _{k})b^{*})=\Phi (\alpha _{1} 1_{\mathcal{A}}b^{*}1_{\mathcal{A}}+\alpha _{2} 1_{\mathcal{A}}1_{\mathcal{A}}b^{*}+\alpha _{3} b^{*}1_{\mathcal{A}}1_{\mathcal{A}} +\alpha _{4} 1_{\mathcal{A}}1_{\mathcal{A}}b^{*}\\
&+\alpha _{5} b^{*}1_{\mathcal{A}}1_{\mathcal{A}}+\alpha _{6} 1_{\mathcal{A}}b^{*}1_{\mathcal{A}})=\alpha _{1} \Phi (1_{\mathcal{A}})\Phi (b)^{*}\Phi (1_{\mathcal{A}})+\alpha _{2} \Phi (1_{\mathcal{A}})\Phi (1_{\mathcal{A}})\Phi (b)^{*}\\
&+\alpha _{3} \Phi (b)^{*}\Phi (1_{\mathcal{A}})\Phi (1_{\mathcal{A}})+\alpha _{4} \Phi (1_{\mathcal{A}})\Phi (1_{\mathcal{A}})\Phi (b)^{*}+\alpha _{5} \Phi (b)^{*}\Phi (1_{\mathcal{A}})\Phi (1_{\mathcal{A}})\\
&+\alpha _{6}\Phi (1_{\mathcal{A}})\Phi (b)^{*}\Phi (1_{\mathcal{A}})=(\textstyle \sum _{k=1}^{6} \alpha _{k}) \Phi (1_{\mathcal{A}})=\Phi ((\textstyle \sum _{k=1}^{6} \alpha _{k})1_{\mathcal{A}}).
\end{align*}}
This shows that $b^{*}=1_{\mathcal{A}},$ by the injectivity of $\Phi ,$ which leads to $b=1_{\mathcal{A}}$ and hence $\Phi (1_{\mathcal{A}})=1_{\mathcal{B}},$ proving (i). Next, for an arbitrary element $a\in \mathcal{A},$ replace both $b$ and $c$ by $1_{\mathcal{A}}$ in (\ref{fundident}). Then
{\allowdisplaybreaks\begin{align*}\allowdisplaybreaks
&\Phi ((\textstyle \sum _{k=1}^{6} \alpha _{k})a)=\Phi (\alpha _{1} a1_{\mathcal{A}}^{*}1_{\mathcal{A}}+\alpha _{2} a1_{\mathcal{A}}1_{\mathcal{A}}^{*}+\alpha _{3} 1_{\mathcal{A}}^{*}a1_{\mathcal{A}} +\alpha _{4} 1_{\mathcal{A}}a1_{\mathcal{A}}^{*}+\alpha _{5} 1_{\mathcal{A}}^{*}1_{\mathcal{A}}a\\
&+\alpha _{6} 1_{\mathcal{A}}1_{\mathcal{A}}^{*}a)=\alpha _{1}\Phi (a)\Phi (1_{\mathcal{A}})^{*}\Phi (1_{\mathcal{A}})+\alpha _{2}\Phi (a)\Phi (1_{\mathcal{A}})\Phi (1_{\mathcal{A}})^{*}\\
&+\alpha _{3}\Phi (1_{\mathcal{A}})^{*}\Phi (a)\Phi (1_{\mathcal{A}})+\alpha _{4}\Phi (1_{\mathcal{A}})\Phi (a)\Phi (1_{\mathcal{A}})^{*}+\alpha _{5}\Phi (1_{\mathcal{A}})^{*}\Phi (1_{\mathcal{A}})\Phi (a)\\
&+\alpha _{6}\Phi (1_{\mathcal{A}})\Phi (1_{\mathcal{A}})^{*}\Phi (a)=(\textstyle \sum _{k=1}^{6} \alpha _{k})\Phi (a).
\end{align*}}
Moreover, for an arbitrary element $b\in \mathcal{A},$ replacing both $a$ and $c$ by $1_{\mathcal{A}}$ in (\ref{fundident}) and using (i) and (ii), we arrive at $\Phi (b^{*})=\Phi (b)^{*}.$
\end{proof}
\begin{claim}\label{c212} $\Phi $ is a $\ast $-Jordan multiplicative mapping.
\end{claim}
\begin{proof} For arbitrary elements $a,c\in \mathcal{A}$ we have
{\allowdisplaybreaks\begin{align*}\allowdisplaybreaks
&\Phi (\alpha _{1} a1_{\mathcal{A}}^{*}c+\alpha _{2} ac1_{\mathcal{A}}^{*}+\alpha _{3} 1_{\mathcal{A}}^{*}ac +\alpha _{4} ca1_{\mathcal{A}}^{*}+\alpha _{5} 1_{\mathcal{A}}^{*}ca+\alpha _{6} c1_{\mathcal{A}}^{*}a)\\
&=\alpha _{1}\Phi (a)\Phi (1_{\mathcal{A}})^{*}\Phi (c)+\alpha _{2}\Phi (a)\Phi (c)\Phi (1_{\mathcal{A}})^{*}+\alpha _{3}\Phi (1_{\mathcal{A}})^{*}\Phi (a)\Phi (c)\\
&+\alpha _{4}\Phi (c)\Phi (a)\Phi (1_{\mathcal{A}})^{*}+\alpha _{5}\Phi (1_{\mathcal{A}})^{*}\Phi (c)\Phi (a)+\alpha _{6}\Phi (c)\Phi (1_{\mathcal{A}})^{*}\Phi (a)
\end{align*}}
which shows that
{\allowdisplaybreaks\begin{align}\allowdisplaybreaks\label{ident05}
&\Phi ((\alpha _{1}+\alpha _{2}+\alpha _{3}) ac +(\alpha _{4} +\alpha _{5}+\alpha _{6}) ca)\nonumber\\
&=(\alpha _{1}+\alpha _{2}+\alpha _{3})\Phi (a)\Phi (c)+(\alpha _{4}+\alpha _{5}+\alpha _{6})\Phi (c)\Phi (a).
\end{align}}
Similarly, by Claim \ref{c21} we obtain
{\allowdisplaybreaks\begin{align}\allowdisplaybreaks\label{ident06}
&\Phi ((\alpha _{6}+\alpha _{4}+\alpha _{5}) ac +(\alpha _{2} +\alpha _{3}+\alpha _{1}) ca)\nonumber\\
&=(\alpha _{6}+\alpha _{4}+\alpha _{5})\Phi (a)\Phi (c)+(\alpha _{2}+\alpha _{3}+\alpha _{1})\Phi (c)\Phi (a).
\end{align}}
Adding the identities (\ref{ident05}) and (\ref{ident06}) and using Claims \ref{c210} and \ref{c211}(ii), we arrive at
{\allowdisplaybreaks\begin{align}\allowdisplaybreaks\label{ident07}
&\Phi (ac+ca)=\Phi (a)\Phi (c)+\Phi (c)\Phi (a).
\end{align}}
Thus the result follows in view of Claim \ref{c211}(iii).
\end{proof}
Now, we assume that $\mathcal{B}$ is prime and $\Phi (1_{\mathcal{A}})$ is a projection of $\mathcal{B}.$ From this we can easily deduce the following result.
\begin{claim}\label{c213} $\Phi $ is either a $\ast $-ring isomorphism or a $\ast $-ring anti-isomorphism.
\end{claim}
\begin{proof} The result is a direct consequence of Claim \ref{c212} and \cite[Theorem H]{Herstein}.
\end{proof}
The second main result of this paper reads as follows.
\begin{theorem}\label{thm22} Let $\{\alpha _{k}\}_{k=1}^{6}$ be complex numbers satisfying the condition $\sum _{k=1}^{6} \alpha _{k} \neq 0,$ $\mathcal{A}$ and $\mathcal{B}$ two unital complex $\ast $-algebras with $1_{\mathcal{A}}$ and $1_{\mathcal{B}}$ their multiplicative identities, respectively, and such that $\mathcal{A}$ is prime and has a nontrivial projection. Let $\Phi :\mathcal{A}\rightarrow \mathcal{B}$ be a bijective mapping preserving sum of triple products $\alpha _{1} ab^{*}c+\alpha _{2} acb^{*}+\alpha _{3} b^{*}ac +\alpha _{4} cab^{*}+\alpha _{5} b^{*}ca+\alpha _{6} cb^{*}a$ such that $\Phi (1_{\mathcal{A}})$ is a projection and satisfying at least one of the following conditions: (i) $\sum _{k=1,2,3} \alpha _{k}-\sum _{k=4,5,6} \alpha _{k} \neq 0$ and $\Phi ((\sum _{k=1,2,3} \alpha _{k}-\sum _{k=4,5,6} \alpha _{k})a)=(\sum _{k=1,2,3} \alpha _{k}-\sum _{k=4,5,6} \alpha _{k})\Phi (a),$ for every element $a\in \mathcal{A},$ (ii) $\sum _{k=1,3,5} \alpha _{k}-\sum _{k=2,4,6} \alpha _{k} \neq 0$ and $\Phi ((\sum _{k=1,3,5} \alpha _{k}-\sum _{k=2,4,6} \alpha _{k})a)=(\sum _{k=1,3,5} \alpha _{k}-\sum _{k=2,4,6} \alpha _{k})\Phi (a),$ for every element $a\in \mathcal{A},$ (iii) $\sum _{k=1,2,4} \alpha _{k}-\sum _{k=3,5,6} \alpha _{k} \neq 0$ and $\Phi ((\sum _{k=1,2,4} \alpha _{k}-\sum _{k=3,5,6} \alpha _{k})a)=(\sum _{k=1,2,4} \alpha _{k}-\sum _{k=3,5,6} \alpha _{k})\Phi (a),$ for every element $a\in \mathcal{A}.$
Then $\Phi $ is a $\ast $-ring isomorphism.
\end{theorem}
\begin{proof} To prove the theorem, it is enough to show that $\Phi $ is a multiplicative mapping, in view of Claims \ref{c210} and \ref{c211}(iii). First case: $\sum _{k=1,2,3} \alpha _{k}-\sum _{k=4,5,6} \alpha _{k} \neq 0.$ Subtracting (\ref{ident06}) from (\ref{ident05}), we have
{\allowdisplaybreaks\begin{align*}\allowdisplaybreaks
&\Phi ((\alpha _{1}+\alpha _{2}+\alpha _{3}-\alpha _{4}-\alpha _{5}-\alpha _{6}) ac-(\alpha _{1}+\alpha _{2}+\alpha _{3}-\alpha _{4}-\alpha _{5}-\alpha _{6}) ca)\\
&=(\alpha _{1}+\alpha _{2}+\alpha _{3}-\alpha _{4}-\alpha _{5}-\alpha _{6})\Phi (a)\Phi (c)\\
&\hspace{4.0cm}-(\alpha _{1}+\alpha _{2}+\alpha _{3}-\alpha _{4}-\alpha _{5}-\alpha _{6})\Phi (c)\Phi (a)
\end{align*}}
which leads to
{\allowdisplaybreaks\begin{align}\allowdisplaybreaks\label{ident08}
&\Phi (ac-ca)=\Phi (a)\Phi (c)-\Phi (c)\Phi (a),
\end{align}}
in view of the hypothesis (i). From (\ref{ident07}) and (\ref{ident08}), we arrive at $\Phi (ac)=\Phi (a)\Phi (c),$ for all elements $a,c\in \mathcal{A}.$ Second case: $\sum _{k=1,3,5} \alpha _{k}-\sum _{k=2,4,6} \alpha _{k} \neq 0.$ Replacing $a$ by $1_{\mathcal{A}}$ and $b$ by $b^{*}$ in (\ref{fundident}), we get
{\allowdisplaybreaks\begin{align*}\allowdisplaybreaks
&\Phi (\alpha _{1} 1_{\mathcal{A}}bc+\alpha _{2} 1_{\mathcal{A}}cb+\alpha _{3} b1_{\mathcal{A}}c +\alpha _{4} c1_{\mathcal{A}}b+\alpha _{5} bc1_{\mathcal{A}}+\alpha _{6} cb1_{\mathcal{A}})\nonumber \\
&=\alpha _{1}\Phi (1_{\mathcal{A}})\Phi (b)\Phi (c)+\alpha _{2}\Phi (1_{\mathcal{A}})\Phi (c)\Phi (b)+\alpha _{3}\Phi (b)\Phi (1_{\mathcal{A}})\Phi (c)\nonumber \\
&+\alpha _{4}\Phi (c)\Phi (1_{\mathcal{A}})\Phi (b)+\alpha _{5}\Phi (b)\Phi (c)\Phi (1_{\mathcal{A}})+\alpha _{6}\Phi (c)\Phi (b)\Phi (1_{\mathcal{A}}),
\end{align*}}
which leads to the identity
{\allowdisplaybreaks\begin{align*}\allowdisplaybreaks
&\Phi ((\alpha _{1}+\alpha _{3}+\alpha _{5}) bc +(\alpha _{2} +\alpha _{4}+\alpha _{6}) cb)\\
&=(\alpha _{1}+\alpha _{3}+\alpha _{5})\Phi (b)\Phi (c)+(\alpha _{2}+\alpha _{4}+\alpha _{6})\Phi (c)\Phi (b).
\end{align*}}
Now, replacing $b$ by $c$ and $c$ by $b,$ in the last identity, we get
{\allowdisplaybreaks\begin{align*}\allowdisplaybreaks
&\Phi ((\alpha _{2} +\alpha _{4}+\alpha _{6}) bc+(\alpha _{1}+\alpha _{3}+\alpha _{5}) cb)\\
&=(\alpha _{2}+\alpha _{4}+\alpha _{6})\Phi (b)\Phi (c)+(\alpha _{1}+\alpha _{3}+\alpha _{5})\Phi (c)\Phi (b).
\end{align*}}
Subtracting the last identity from the previous one, we have
{\allowdisplaybreaks\begin{align*}\allowdisplaybreaks
&\Phi ((\alpha _{1}+\alpha _{3}+\alpha _{5}-\alpha _{2}-\alpha _{4}-\alpha _{6}) bc-(\alpha _{1}+\alpha _{3}+\alpha _{5}-\alpha _{2}-\alpha _{4}-\alpha _{6}) cb)\\
&=(\alpha _{1}+\alpha _{3}+\alpha _{5}-\alpha _{2}-\alpha _{4}-\alpha _{6})\Phi (b)\Phi (c)\\
&\hspace{4.0cm} +(\alpha _{1}+\alpha _{3}+\alpha _{5}-\alpha _{2}-\alpha _{4}-\alpha _{6})\Phi (c)\Phi (b)
\end{align*}}
which implies that
{\allowdisplaybreaks\begin{align}\allowdisplaybreaks\label{ident09}
&\Phi (bc-cb)=\Phi (b)\Phi (c)-\Phi (c)\Phi (b),
\end{align}}
for all elements $b,c\in \mathcal{A}.$ The identities (\ref{ident07}) and (\ref{ident09}) show that $\Phi (bc)=\Phi (b)\Phi (c),$ for all elements $b,c\in \mathcal{A}.$ Third case: $\sum _{k=1,2,4} \alpha _{k}-\sum _{k=3,5,6} \alpha _{k} \neq 0.$ Using reasoning similar to the second case, we can conclude that $\Phi (ab)=\Phi (a)\Phi (b),$ for all elements $a,b\in \mathcal{A}.$
From the three cases above, we deduce that $\Phi $ is a multiplicative mapping.
\end{proof}
From Theorems \ref{thm21} and \ref{thm22} we can deduce the following results.
\begin{corollary} Let $\mathcal{A}$ and $\mathcal{B}$ be two unital complex $\ast $-algebras with $1_{\mathcal{A}}$ and $1_{\mathcal{B}}$ their multiplicative identities, respectively, and such that $\mathcal{A}$ is prime and has a nontrivial projection. Then every bijective mapping $\Phi :\mathcal{A}\rightarrow \mathcal{B}$ preserving triple product $a\filledsquare _{\eta }b\filledsquare _{\nu }c$ (resp., preserving mixed product $a\filledsquare _{\eta }b\circ _{\nu }c$), where $\eta $ and $\nu $ are nonzero complex numbers satisfying the conditions $\overline{\eta }\neq -1$ and $\nu \neq -1$ (resp., $\eta \neq -1$ and $\nu \neq -1$), is additive. Moreover, (i) if $\Phi (1_{\mathcal{A}})$ is a projection, then $\Phi $ is a $\ast $-Jordan ring isomorphism and (ii) if $\mathcal{B}$ is prime and $\Phi (1_{\mathcal{A}})$ is a projection of $\mathcal{B},$ then $\Phi $ is either a $\ast $-ring isomorphism or a $\ast $-ring anti-isomorphism.
\end{corollary}
\begin{corollary} Let $\mathcal{A}$ and $\mathcal{B}$ be two unital complex $\ast $-algebras with $1_{\mathcal{A}}$ and $1_{\mathcal{B}}$ their multiplicative identities, respectively, and such that $\mathcal{A}$ is prime and has a nontrivial projection. Let $\Phi :\mathcal{A}\rightarrow \mathcal{B}$ be a bijective mapping preserving triple product $a\filledsquare _{\eta }b\filledsquare _{\nu }c$ (resp., preserving mixed product $a\filledsquare _{\eta }b\circ _{\nu }c$), where $\eta $ and $\nu $ are nonzero complex numbers, such that $\Phi (1_{\mathcal{A}})$ is a projection and satisfying at least one of the following conditions: (i) ($\overline{\eta }\neq -1$ and $\nu \neq \pm 1$) and $\Phi ((\overline{\eta }+1)(1-\nu )a)=(\overline{\eta }+1)(1-\nu )\Phi (a),$ for every element $a\in \mathcal{A},$ or (ii) ($\overline{\eta }\neq \pm 1$ and $\nu \neq -1$) and $\Phi ((\overline{\eta }-1)(1+\nu )a)=(\overline{\eta }-1)(1+\nu )\Phi (a),$ for every element $a\in \mathcal{A}$ (resp., (i) ($\eta \neq -1$ and $|\nu |\neq 1$) and $\Phi ((\eta +1)(1-\nu )a)=(\eta +1)(1-\nu )\Phi (a),$ for every element $a\in \mathcal{A},$ or (ii) ($|\eta |\neq 1$ and $\nu \neq -1$) and $\Phi ((\eta -1)(1+\nu )a)=(\eta -1)(1+\nu )\Phi (a),$ for every element $a\in \mathcal{A}$). Then $\Phi $ is a $\ast $-ring isomorphism.
In particular, if (i) ($\overline{\eta }\neq -1$ and $\nu \neq \pm 1$) and $(\overline{\eta }+1)(1-\nu )$ is a rational number or (ii) ($\overline{\eta }\neq \pm 1$ and $\nu \neq -1$) and $(\overline{\eta }-1)(1+\nu )$ is a rational number (resp., (i) ($\eta \neq -1$ and $|\nu |\neq 1$) and $(\eta +1)(1-\nu )$ is a rational number or (ii) ($|\eta |\neq 1$ and $\nu \neq -1$) and $(\eta -1)(1+\nu )$ is a rational number), then $\Phi $ is a $\ast $-ring isomorphism.
\end{corollary} |
1608.06903 | \section{Introduction}
\setcounter{equation}{0}
\hspace*{0.3in} In reliability optimization and life testing experiments, the tests are often censored or truncated: a failure of a device during the warranty period may not be counted, or items may be replaced after a certain time under a replacement policy. Moreover, the lifetimes of many reliability systems and biological organisms, including the human life span, are bounded above because of test conditions, cost or other constraints. These situations result in data sets which are modeled by distributions with finite range (i.e. with bounded support), viz. the power function density, finite range density, truncated Weibull, beta, Kumaraswamy and so on (see, for example, Ghitany~\cite{gh}, Lai and Jones~\cite{lai1}, Lai and Mukherjee~\cite{lai2}, Moore and Lai~\cite{moo} and Mukherjee and Islam~\cite{muk}).
\\\hspace*{0.3in} Recently, G$\acute{o}$mez et al.~\cite{go1} introduced the log-Lindley (LL) distribution with parameters $(\sigma,\lambda)$, written as LL($\sigma,\lambda$), as an alternative to the beta distribution, with the probability density function given by
\begin{equation}\label{e0}
f(x;\sigma,\lambda)=\frac{\sigma^2}{1+\lambda\sigma}\left(\lambda-\log x\right) x^{\sigma-1};~0<x<1,~\lambda\geq 0,~\sigma>0,
\end{equation}
where $\sigma$ is the shape parameter and $\lambda$ is the scale parameter. This distribution, with a simple expression and nice reliability properties, is derived from the generalized Lindley distribution proposed by Zakerzadeh and Dolati~\cite{za}, which is in turn a generalization of the Lindley distribution proposed by Lindley~\cite{li}. The LL distribution exhibits bath-tub failure rates and has an increasing generalized failure rate (IGFR). This distribution has useful applications in the context of inventory management, pricing and supply chain contracting problems (see, for example, Ziya et al.~\cite{zia}, Lariviere and Porteus~\cite{lar1} and Lariviere~\cite{lar2}), where a demand distribution is required to have the IGFR property. Moreover, it has applications in the actuarial context, where the cumulative distribution function (CDF) of the LL distribution is used to distort the premium principle (G\'{o}mez et al.~\cite{go1}). The LL distribution is also shown to fit rates and proportions data better than the beta distribution (G\'{o}mez et al.~\cite{go1}).
\\\hspace*{0.3in} Order statistics play an important role in reliability optimization, life testing, operations research and many other areas. Parallel and series systems are the building blocks of many complex coherent systems in reliability theory. While the lifetime of a series system corresponds to the smallest order statistic $X_{1:n}$, that of a parallel system is represented by the largest order statistic $X_{n:n}$. Although stochastic comparisons of order statistics from homogeneous populations have been studied in detail in the literature, much less is available for heterogeneous populations, owing to the complicated nature of the resulting expressions. Such comparisons have been carried out for exponential, gamma, Weibull, generalized exponential and Fr\'{e}chet distributed components with unbounded support. One may refer to Dykstra \emph{et al.}~\cite{dkr11}, Misra and Misra~\cite{mm11.1}, Zhao and Balakrishnan~\cite{zb11.2}, Torrado and Kochar~\cite{tr11}, Kundu and Chowdhury~\cite{kun2}, Kundu \emph{et al.}~\cite{kun1}, Gupta \emph{et al.}~\cite{gu} and the references therein. Moreover, not much attention has been paid so far to the stochastic comparison of two systems having finite range distributed components. The notion of majorization (Marshall et al.~\cite{Maol}) is also essential to the understanding of the stochastic inequalities for comparing order statistics. This concept is used in the context of optimal component allocation in parallel-series as well as in series-parallel systems, allocation of standby in series and parallel systems, and so on; see, for instance, El-Neweihi et al.~\cite{el}. It is also used in the context of minimal repair of two-component parallel systems with exponentially distributed lifetimes by Boland and El-Neweihi~\cite{bo}. \\
\hspace*{0.3 in} In this paper our main aim is to compare two parallel systems in terms of the reversed hazard rate order and the likelihood ratio order, with majorized scale and shape parameters separately, when the components are drawn from two heterogeneous LL distributions as well as from the multiple-outlier LL model. The rest of the paper is organized as follows. In Section 2, we give the required notation, definitions and some useful lemmas which are used throughout the paper. Results on the reversed hazard rate ordering and the likelihood ratio ordering between the order statistics $X_{n:n}$ and $Y_{n:n}$ are derived in Section 3.
\\\hspace*{0.3 in} Throughout the paper, the words increasing (resp. decreasing) and nondecreasing (resp. nonincreasing) are used interchangeably, and $\Re$ denotes the set of real numbers $\{x:-\infty<x<\infty\}$. We also write $a\stackrel{sign}{=}b$ to mean that $a$ and $b$ have the same sign.
For any differentiable function $k(\cdot)$, we write $k'(t)$ to denote the first derivative of $k(t)$ with respect to $t$.
\section{Notations, Definitions and Preliminaries}
\hspace*{0.3 in} For an absolutely continuous random variable $X$, we denote the probability density function, the distribution function and the reversed hazard rate function by $f_X(\cdot), F_X(\cdot),$ and $\tilde r_X(\cdot)$ respectively. The survival or reliability
function of the random variable $X$ is written as $\bar F_X(\cdot)=1-F_X(\cdot)$.
\\\hspace*{0.3 in} In order to compare different order statistics, stochastic orders are used for fair and reasonable comparison.
In literature many different kinds of stochastic orders have been developed and studied.
The following well known definitions may be obtained in Shaked and Shanthikumar~\cite{shak1}.
\begin{d1}\label{de1}
Let $X$ and $Y$ be two absolutely continuous random variables with respective supports $(l_X,u_X)$ and $(l_Y,u_Y)$,
where $u_X$ and $u_Y$ may be positive infinity, and $l_X$ and $l_Y$ may be negative infinity.
Then, $X$ is said to be smaller than $Y$ in
\begin{enumerate}
\item[(i)] likelihood ratio (lr) order, denoted as $X\leq_{lr}Y$, if
$$\frac{f_Y(t)}{f_X(t)}\;\text{is increasing in} \,t\in(l_X,u_X)\cup(l_Y,u_Y);$$
\item[(ii)] hazard rate (hr) order, denoted as $X\leq_{hr}Y$, if $$\frac{\bar F_Y(t)}{\bar F_X(t)}\;\text{is increasing in}\, t \in (-\infty,\max(u_X,u_Y)),$$
which can equivalently be written as $r_X(t)\geq r_Y(t)$ for all $t$;
\item[(iii)] reversed hazard rate (rhr) order, denoted as $X\leq_{rhr}Y$, if $$ \frac{F_Y(t)}{ F_X(t)}\;\text{is increasing in}\, t \in(\min(l_X,l_Y),\infty),$$
which can equivalently be written as $\tilde r_X(t)\leq \tilde r_Y(t)$ for all $t$;
\item[(iv)] usual stochastic (st) order, denoted as $X\leq_{st}Y$, if $\bar F_X(t)\leq \bar F_Y(t)$ for all \\$t\in (-\infty,\infty).$
\end{enumerate}
\end{d1}
In the following diagram we present a chain of implications of the stochastic orders, see, for instance, Shaked and Shanthikumar \cite{shak1}, where the definitions and usefulness of these orders can be found.
\[
\begin{array}{ccccc}
 & & X\leq_{hr}Y & & \\
 & \nearrow & & \searrow & \\
X\leq_{lr}Y & & \longrightarrow & & X\leq_{st}Y \\
 & \searrow & & \nearrow & \\
 & & X\leq_{rhr}Y & &
\end{array}
\]
\\\hspace*{0.3 in} It is well known that results on different stochastic orders can be established using majorization order(s). Let $I^n$ denote an $n$-dimensional Euclidean space where $I\subseteq\Re$. Further, let $\mathbf{x}=(x_1,x_2,\dots,x_n)\in I^n$ and $\mathbf{y}=(y_1,y_2,\dots,y_n)\in I^n$ be any two real vectors with $x_{(1)}\le x_{(2)}\le\cdots\le x_{(n)}$ being the increasing arrangement of the components of the vector $\mathbf{x}$. The following definitions may be found in Marshall \emph{et al.}~\cite{Maol}.\\
\begin{d1}
The vector $\mathbf{x} $ is said to majorize the vector $\mathbf{y} $ (written as $\mathbf{x}\stackrel{m}{\succeq}\mathbf{y}$) if
\begin{equation*}
\sum_{i=1}^j x_{(i)}\le\sum_{i=1}^j y_{(i)},\;j=1,\;2,\;\ldots, n-1,\;\;and \;\;\sum_{i=1}^n x_{(i)}=\sum_{i=1}^n y_{(i)}.
\end{equation*}
\end{d1}
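This partial order is straightforward to check numerically. The following Python fragment is a minimal sketch of ours (for illustration only; the function name \texttt{majorizes} does not come from the literature) that tests $\mathbf{x}\stackrel{m}{\succeq}\mathbf{y}$ using the increasing arrangements above.
\begin{verbatim}
import numpy as np

def majorizes(x, y, tol=1e-12):
    # x majorizes y: the partial sums of the smallest components
    # of x are <= those of y, and the totals agree
    xs, ys = np.sort(x), np.sort(y)
    partial = np.all(np.cumsum(xs)[:-1] <= np.cumsum(ys)[:-1] + tol)
    return partial and abs(xs.sum() - ys.sum()) <= tol

# e.g. (1, 1, 5) majorizes (1, 2, 4)
assert majorizes([1, 1, 5], [1, 2, 4])
\end{verbatim}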
\begin{d1}
A function $\psi:I^n\rightarrow\Re$ is said to be Schur-convex (resp. Schur-concave) on $I^n$ if
\begin{equation*}
\mathbf{x}\stackrel{m}{\succeq}\mathbf{y} \;\text{implies}\;\psi\left(\mathbf{x}\right)\ge (\text{resp. }\le)\;\psi\left(\mathbf{y}\right)\;for\;all\;\mathbf{x},\;\mathbf{y}\in I^n.
\end{equation*}
\end{d1}
\begin{n1}
Let us introduce the following notations.
\begin{enumerate}
\item[(i)] $\mathcal{D}_{+}=\left\{\left(x_{1},x_2,\ldots,x_{n}\right):x_{1}\geq x_2\geq\ldots\geq x_{n}> 0\right\}$.
\item[(ii)] $\mathcal{E}_{+}=\left\{\left(x_{1},x_2,\ldots,x_{n}\right):0< x_{1}\leq x_2\leq\ldots\leq x_{n}\right\}$.
\end{enumerate}
\end{n1}
Next, two lemmas are given which will be used to prove our main results. The first one can be obtained by combining Proposition H2 of Marshall \emph{et al.} (\cite{Maol}, p. 132) and Lemma 3.2 of Kundu \emph{et al.} (\cite{kun1}) while the second one is due to Lemma 3.4 of Kundu \emph{et al.} (\cite{kun1}).
\begin{l1}\label{l3}
Let $\varphi({\bf x})=\sum_{i=1}^ng_i(x_i)$ with ${\bf x}\in \mathcal{D}_+$, where $g_i:\mathbb{R}\to\mathbb{R}$ is differentiable, for all $i=1,2,\ldots, n$.
Then $\varphi(\mathbf{x})$ is Schur-convex (Schur-concave) on $\mathcal{D}_+$ if, and only if,
$$g_{i}'(a)\geq\; (resp. \leq)\ g_{i+1}'(b)\;\text{whenever}\;a\geq b,\;\text{for all}\;i=1,2,\ldots,n-1,$$
where $g'(a)=\frac{d g(x)}{dx}\big|_{x=a}$.
\end{l1}
\begin{l1}\label{l4}
Let $\varphi({\bf x})=\sum_{i=1}^ng_i(x_i)$ with ${\bf x}\in \mathcal{E}_+$, where $g_i:\mathbb{R}\to\mathbb{R}$ is differentiable, for all $i=1,2,\ldots, n$.
Then $\varphi(\mathbf{x})$ is Schur-convex (Schur-concave) on $\mathcal{E}_+$ if, and only if,
$$g_{i+1}'(a)\geq\;(resp. \leq)\ g_{i}'(b)\;\text{whenever}\;a\geq b,\;\text{for all}\;i=1,2,\ldots,n-1,$$
where $g'(a)=\frac{d g(x)}{dx}\big|_{x=a}$.
\end{l1}
\section{Main Results}
\setcounter{equation}{0}
\hspace{0.3in} For $i=1,2,\ldots,n$, let $X_i$ (resp. $Y_i$) be $n$ independent nonnegative random variables following LL distribution as given in (\ref{e0}).\\
\hspace*{0.3 in} If $F_{n:n}\left(\cdot\right)$ and $G_{n:n}\left(\cdot\right)$ denote the distribution functions of $X_{n:n}$ and $Y_{n:n}$ respectively, where $\mbox{\boldmath$\sigma$}=\left(\sigma_1,\sigma_2,\ldots,\sigma_n\right)$, $\mbox{\boldmath$\theta$}=\left(\theta_1,\theta_2, \ldots,\theta_n\right)$, $\mbox{\boldmath$\lambda$}=\left(\lambda_1,\lambda_2,\ldots,\lambda_n\right)$ and $\mbox{\boldmath$\delta$}=\left(\delta_1,\delta_2,\ldots,\delta_n\right)$, then
\begin{equation*}
F_{n:n}\left(x\right)=\prod_{i=1}^n \frac{x^{\sigma_{i}}\left(1+\sigma_{i}\left(\lambda_{i}-\log x\right)\right)}{1+\lambda_{i}\sigma_{i}},
\end{equation*}
and
\begin{equation*}
G_{n:n}\left(x\right)=\prod_{i=1}^n \frac{x^{\theta_{i}}\left(1+\theta_{i}\left(\delta_{i}-\log x\right)\right)}{1+\delta_{i}\theta_{i}}.
\end{equation*}
Again, if $\tilde{r}_{n:n}^{X}$ and $\tilde{r}_{n:n}^{Y}$ are the reversed hazard rate functions of $X_{n:n}$ and $Y_{n:n}$ respectively, then
\begin{equation}
\tilde{r}_{n:n}^X\left(x\right)=\sum_{i=1}^n\frac{\sigma_i}{x}\left(1-\frac{1}{1+\sigma_i\left(\lambda_i-\log x\right)}\right)\label{e1},
\end{equation}
and
\begin{equation}
\tilde{r}_{n:n}^Y\left(x\right)=\sum_{i=1}^n\frac{\theta_i}{x}\left(1-\frac{1}{1+\theta_i\left(\delta_i-\log x\right)}\right)\label{e2}.
\end{equation}
\hspace*{0.3 in}The following two theorems show that, under certain conditions on the parameters, a reversed hazard rate ordering holds between $X_{n:n}$ and $Y_{n:n}$.
\begin{t1}\label{th1}
For $i=1,2,\ldots, n$, let $X_i$ and $Y_i$ be two sets of mutually independent random variables with $X_i\sim LL\left(\sigma_i,\lambda_i\right)$ and $Y_i\sim LL\left(\theta_i,\lambda_i\right)$. Further, suppose that
$\mbox{\boldmath $\sigma$}, \mbox{\boldmath $\theta$}, \mbox{\boldmath $\lambda$}\in \mathcal{D}_+$ or $\mbox{\boldmath $\sigma$}, \mbox{\boldmath $\theta$}, \mbox{\boldmath $\lambda$}\in \mathcal{E}_+$.
Then, $$\mbox{\boldmath $\sigma$}\stackrel{m}{\succeq}\mbox{\boldmath $\theta$}\;\text{implies}\; X_{n:n}\ge_{rhr}Y_{n:n}.$$
\end{t1}
{\bf Proof:}
Let $g_{i}(y)=\frac{y}{x}\left(1-\frac{1}{1+y\left(\lambda_i-\log x\right)}\right).$ Differentiating $g_{i}(y)$ with respect to $y$, we get $$g_{i}^{'}(y)=\frac{1}{x}\left(1-\frac{1}{\left(1+y\left(\lambda_i-\log x\right)\right)^2}\right),$$ giving $$g_{i}^{'}(\sigma_i)-g_{i+1}^{'}(\sigma_{i+1})=\frac{\left(1+\sigma_i\left(\lambda_i-\log x\right)\right)^2-\left(1+\sigma_{i+1}\left(\lambda_{i+1}-\log x\right)\right)^2}{x\left(\left(1+\sigma_i\left(\lambda_i-\log x\right)\right)\left(1+\sigma_{i+1}\left(\lambda_{i+1}-\log x\right)\right)\right)^2}.$$
So, if $\mbox{\boldmath $\sigma$}, \mbox{\boldmath $\lambda$}\in \mathcal{D}_+ \left(resp.\ \mathcal{E}_+\right)$, then $g_{i}^{'}(\sigma_i)-g_{i+1}^{'}(\sigma_{i+1})\geq \left(\leq\right)0.$ Then, by Lemma \ref{l3} (Lemma \ref{l4}), $\tilde{r}^X_{n:n}\left(x\right)$ is Schur-convex in $\mbox{\boldmath $\sigma$}$, proving the result. $\hfill\Box$\\
The counterexample given below shows that the common ascending (descending) arrangement of the components of the scale and shape parameter vectors is necessary for the result of Theorem \ref{th1} to hold.
\begin{e1}\label{ce2}
Let $X_i\sim LL\left(\sigma_i, \lambda_i\right)$ and $Y_i\sim LL\left(\theta_i, \lambda_i\right), i=1,2,3.$ Now, if $\left(\sigma_1,\sigma_2, \sigma_3\right)=\left(1, 1,5\right)\in \mathcal{E}_+$, $\left(\theta_1,\theta_2, \theta_3\right)=\left(1,2,4\right)\in \mathcal{E}_+$ and $\left(\lambda_1, \lambda_2, \lambda_3\right)=\left(4, 3, 0.2\right)\in \mathcal{D}_+$ are taken, then from Figure \ref{fig1}, it is clear that $\frac{F_{3:3}(x)}{G_{3:3}(x)}$ is not monotone, giving that $X_{3:3}\ngeq_{rhr}Y_{3:3}$, although $\mbox{\boldmath $\sigma$}\stackrel{m}{\succeq}\mbox{\boldmath $\theta$}$.
\begin{figure}[t]\centering
\includegraphics[height=7 cm]{ll1.pdf}
\caption{\label{fig1} Graph of $\frac{F_{3:3}(x)}{G_{3:3}(x)}$}
\end{figure}
\end{e1}
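The non-monotonicity displayed in Figure \ref{fig1} can be reproduced directly from the expression for $F_{n:n}$. The Python fragment below is a small sketch of ours (not the authors' code; the helper \texttt{F\_parallel} is illustrative) that evaluates the ratio $F_{3:3}(x)/G_{3:3}(x)$ for the parameter choices of Example \ref{ce2}.
\begin{verbatim}
import numpy as np

def F_parallel(x, sigma, lam):
    # distribution function of X_{n:n} for independent
    # LL(sigma_i, lambda_i) components
    F = np.ones_like(x)
    for s, l in zip(sigma, lam):
        F *= x**s * (1 + s * (l - np.log(x))) / (1 + l * s)
    return F

x = np.linspace(1e-3, 1 - 1e-3, 1000)
ratio = (F_parallel(x, [1, 1, 5], [4, 3, 0.2])
         / F_parallel(x, [1, 2, 4], [4, 3, 0.2]))
# the ratio is not monotone in x, so no reversed hazard rate
# ordering holds between X_{3:3} and Y_{3:3}
\end{verbatim}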
\hspace*{0.3in} Theorem \ref{th1} guarantees that, for parallel systems of components having independent LL distributed lifetimes with a common scale parameter vector, the majorized shape parameter vector leads to a larger system lifetime in the sense of the reversed hazard rate ordering. The question then arises: what happens if the scale parameter vector $\mbox{\boldmath $\lambda$}$ majorizes $\mbox{\boldmath $\delta$}$ while the shape parameter vector remains fixed? The theorem given below shows that if the components of the shape and scale parameter vectors are oppositely ordered, then $X_{n:n}$ is smaller than $Y_{n:n}$ in the reversed hazard rate ordering.
\begin{t1}\label{th2}
For $i=1,2,\ldots, n$, let $X_i$ and $Y_i$ be two sets of mutually independent random variables with $X_i\sim LL\left(\sigma_i,\lambda_i\right)$ and $Y_i\sim LL\left(\sigma_i,\delta_i\right)$. Further, suppose that
$\mbox{\boldmath $\sigma$}\in \mathcal{E}_+$, $\mbox{\boldmath $\lambda$}, \mbox{\boldmath $\delta$}\in \mathcal{D}_+$ or $\mbox{\boldmath $\sigma$}\in \mathcal{D}_+$, $\mbox{\boldmath $\lambda$}, \mbox{\boldmath $\delta$}\in \mathcal{E}_+$.
Then, $$\mbox{\boldmath $\lambda$}\stackrel{m}{\succeq}\mbox{\boldmath $\delta$}\;\text{implies}\; X_{n:n}\leq_{rhr}Y_{n:n}.$$
\end{t1}
{\bf Proof:} For $i=1,2,\ldots,n$, let us consider $g_{i}(y)=\frac{\sigma_i}{x}\left(1-\frac{1}{1+\sigma_i\left(y-\log x\right)}\right).$ Differentiating $g_{i}(y)$ with respect to $y$, we get $$g_{i}^{'}(y)=\frac{\sigma_i^2}{x\left(1+\sigma_i\left(y-\log x\right)\right)^2},$$ giving
\begin{equation*}
\begin{split}
g_{i}^{'}(\lambda_i)-g_{i+1}^{'}(\lambda_{i+1})&\stackrel{sign}{=}\left(\sigma_i^2-\sigma_{i+1}^2\right)+\sigma_i^2\sigma_{i+1}^2\left[\left(\lambda_{i+1}-\log x\right)^2-\left(\lambda_{i}-\log x\right)^2\right]\\&\quad+2\sigma_i\sigma_{i+1}\left[\left(\sigma_i\lambda_{i+1}-\sigma_{i+1}\lambda_{i}\right)-\log x\left(\sigma_i-\sigma_{i+1}\right)\right].
\end{split}
\end{equation*}
So, if $\mbox{\boldmath $\lambda$}\in \mathcal{D}_+\left(resp.\ \mathcal{E}_+\right)$ and $\mbox{\boldmath $\sigma$}\in \mathcal{E}_+\left(resp.\ \mathcal{D}_+\right)$, then $g_{i}^{'}(\lambda_i)-g_{i+1}^{'}(\lambda_{i+1})\leq \left(\geq\right)0.$ So, by Lemma \ref{l3} (Lemma \ref{l4}), $\tilde{r}^X_{n:n}\left(x\right)$ is Schur-concave in $\mbox{\boldmath $\lambda$}$, proving the result.$\hfill\Box$\\
Next, a counterexample is provided to show that nothing can be said about the reversed hazard rate ordering between $X_{n:n}$ and $Y_{n:n}$ if $\mbox{\boldmath $\lambda$}$ majorizes $\mbox{\boldmath $\delta$}$ and all of $\mbox{\boldmath $\lambda$}$, $\mbox{\boldmath $\delta$}$ and $\mbox{\boldmath $\sigma$}$ lie either in $\mathcal{E}_+$ or in $\mathcal{D}_+$.
\begin{e1}\label{e3}
Let $X_i\sim LL\left(\sigma_i, \lambda_i\right)$ and $Y_i\sim LL\left(\sigma_i, \delta_i\right), i=1,2,3$. Let $\left(\lambda_1,\lambda_2, \lambda_3\right)=\left(0.1, 0.3, 4.1\right)\in \mathcal{E}_+$ and $\left(\delta_1, \delta_2, \delta_3\right)=\left(0.2, 0.3,4\right)\in \mathcal{E}_+$, giving $\mbox{\boldmath $\lambda$}\stackrel{m}{\succeq}\mbox{\boldmath $\delta$}$. Now, if $\left(\sigma_1,\sigma_2, \sigma_3\right)=\left(0.1, 3,5\right)\in \mathcal{E}_+$ is taken, then Figure \ref{fig2} (a) shows that $\frac{F_{3:3}(x)}{G_{3:3}(x)}$ is increasing in $x$. Again, if $\left(\sigma_1,\sigma_2, \sigma_3\right)=\left(2, 3,5\right)\in \mathcal{E}_+$ is taken, then Figure \ref{fig2} (b) shows that $\frac{F_{3:3}(x)}{G_{3:3}(x)}$ is decreasing in $x$. So it can be concluded that, for $\mbox{\boldmath $\sigma$}, \mbox{\boldmath$\lambda$}, \mbox{\boldmath$\delta$}\in \mathcal{D}_+ (resp.\ \mathcal{E}_+)$, $\mbox{\boldmath $\lambda$}\stackrel{m}{\succeq}\mbox{\boldmath $\delta$}$ does not always imply $X_{3:3}\leq_{rhr}Y_{3:3}$.
\begin{figure}[ht]
\centering
\begin{minipage}[b]{0.48\linewidth}
\includegraphics[height=6.5 cm]{ll2.pdf}
\centering{(a) For $\left(\sigma_1,\sigma_2, \sigma_3\right)=\left(0.1, 3,5\right)$}
\end{minipage}
\quad
\begin{minipage}[b]{0.48\linewidth}
\includegraphics[height=6.5 cm]{ll3.pdf}
\centering{(b) For $\left(\sigma_1,\sigma_2, \sigma_3\right)=\left(2, 3,5\right)$}
\end{minipage}
\caption{\label{fig2}Graph of $\frac{F_{3:3}(x)}{G_{3:3}(x)}$}
\end{figure}
\end{e1}
\hspace*{0.3in} The following theorem shows that depending upon certain conditions, majorization order of the shape parameters implies likelihood ratio ordering between $X_{n:n}$ and $Y_{n:n}$.
\begin{t1}\label{th3}
For $i=1,2,\ldots, n$, let $X_i$ and $Y_i$ be two sets of mutually independent random variables with $X_i\sim LL\left(\sigma_i,\lambda_i\right)$ and $Y_i\sim LL\left(\theta_i,\lambda_i\right)$. Further, suppose that
$\mbox{\boldmath $\sigma$}, \mbox{\boldmath $\theta$}, \mbox{\boldmath $\lambda$}\in \mathcal{D}_+$ or $\mbox{\boldmath $\sigma$}, \mbox{\boldmath $\theta$}, \mbox{\boldmath $\lambda$}\in \mathcal{E}_+$.
Then, if $\lambda_i\sigma_i>1/2$ for all $i=1,2,\ldots,n$, $$\mbox{\boldmath $\sigma$}\stackrel{m}{\succeq}\mbox{\boldmath $\theta$}\;\text{implies}\; X_{n:n}\ge_{lr}Y_{n:n}.$$
\end{t1}
{\bf Proof:}
In view of Theorem \ref{th1} and using (\ref{e1}) and (\ref{e2}), we only have to show that
\begin{eqnarray*}
\frac{\tilde{r}_{n:n}^X\left(x\right)}{\tilde{r}_{n:n}^Y\left(x\right)}&=&\frac{\sum_{k=1}^nu_k\left(\sigma_k,x\right)}{\sum_{k=1}^nu_k\left(\theta_k,x\right)}\\
&=& \eta(x) (say),
\end{eqnarray*} is increasing in $x$, where $u_k(y,x)=\frac{y^{2}\left(\lambda_k-\log x\right)}{1+y\left(\lambda_k-\log x\right)}$. Now, differentiating $\eta(x)$ with respect to $x$,
\begin{eqnarray*}
\eta^{'}(x)&\stackrel{sign}{=}&\sum_{k=1}^n\frac{\partial u_k\left(\sigma_k,x\right)}{\partial x}\sum_{k=1}^nu_k\left(\theta_k,x\right)-\sum_{k=1}^n\frac{\partial u_k\left(\theta_k,x\right)}{\partial x}\sum_{k=1}^nu_k\left(\sigma_k,x\right)\\&=&-h\left(\mbox{\boldmath$\sigma$},x\right)\sum_{k=1}^nu_k\left(\theta_k,x\right)+h\left(\mbox{\boldmath$\theta$},x\right)\sum_{k=1}^nu_k\left(\sigma_k,x\right),
\end{eqnarray*}
where $$ h(\mbox{\boldmath$\sigma$}, x)=-\sum_{k=1}^n\frac{\partial u_k\left(\sigma_k,x\right)}{\partial x}=\frac{1}{x}\sum_{k=1}^n\frac{\sigma_k^2}{\left(1+\sigma_k\left(\lambda_k-\log x\right)\right)^2}
$$
and
$$ h(\mbox{\boldmath$\theta$}, x)=-\sum_{k=1}^n\frac{\partial u_k\left(\theta_k,x\right)}{\partial x}=\frac{1}{x}\sum_{k=1}^n\frac{\theta_k^2}{\left(1+\theta_k\left(\lambda_k-\log x\right)\right)^2}.
$$
Thus, to show that $\eta(x)$ is increasing in $x$, we have only to show that $$\psi\left(\mbox{\boldmath$\sigma$}, x\right)=\frac{h(\mbox{\boldmath$\sigma$}, x)}{\sum_{k=1}^nu_k\left(\sigma_k,x\right)}$$
is Schur-concave in $\mbox{\boldmath$\sigma$}$. \\
Now, as
$$\frac{\partial h(\mbox{\boldmath$\sigma$}, x)}{\partial \sigma_i}=\frac{1}{x}.\frac{2\sigma_i}{\left(1+\sigma_i(\lambda_i-\log x)\right)^3}$$
and
$$\frac{\partial }{\partial \sigma_i}\left[\sum_{k=1}^n u_k\left(\sigma_k,x\right)\right]=1-\frac{1}{\left(1+\sigma_i(\lambda_i-\log x)\right)^2},$$
then
\begin{eqnarray*}
\frac{\partial \psi}{\partial\sigma_i}&\stackrel{sign}{=}&\frac{2\sigma_i}{\left(1+\sigma_i\left(\lambda_i-\log x\right)\right)^3}\sum_{k=1}^n u_k\left(\sigma_k,x\right)-x.h(\mbox{\boldmath$\sigma$}, x)\left(1-\frac{1}{\left(1+\sigma_i(\lambda_i-\log x)\right)^2}\right).
\end{eqnarray*}
So, if $\mbox{\boldmath $\sigma$}, \mbox{\boldmath $\lambda$}\in \mathcal{D}_+ \left(resp.\ \mathcal{E}_+\right)$, i.e., if for $i\leq j$ we have $\sigma_i\geq\sigma_j$ and $\lambda_i\geq\lambda_j$ $\left(resp.\ \sigma_i\leq\sigma_j, \lambda_i\leq\lambda_j\right)$, then, noticing that $\frac{1}{\left(1+\sigma_i\left(\lambda_i-\log x\right)\right)^2}$ is decreasing in $\sigma_i$ as well as in $\lambda_i,$ it can be written that $$\frac{1}{\left(1+\sigma_i\left(\lambda_i-\log x\right)\right)^2}\leq (\geq) \frac{1}{\left(1+\sigma_j\left(\lambda_i-\log x\right)\right)^2}\leq (\geq) \frac{1}{\left(1+\sigma_j\left(\lambda_j-\log x\right)\right)^2}.$$
Again, as $\sigma_i\lambda_i>\frac{1}{2}$ implies $\sigma_i\left(\lambda_i-\log x\right)>\frac{1}{2}$ for all $0<x<1$, we have
$$\frac{\partial}{\partial\sigma_i}\left(\frac{\sigma_i}{\left(1+\sigma_i\left(\lambda_i-\log x\right)\right)^3}\right)=\frac{1-2\sigma_i\left(\lambda_i-\log x\right)}{\left(1+\sigma_i\left(\lambda_i-\log x\right)\right)^4}<0,$$ proving that $\frac{\sigma_i}{\left(1+\sigma_i\left(\lambda_i-\log x\right)\right)^3}$ is decreasing in $\sigma_i$. Again, it is also decreasing in $\lambda_i$. Thus, for all $\sigma_i\geq\sigma_j$ and $\lambda_i\geq\lambda_j$ $\left(\sigma_i\leq\sigma_j, \lambda_i\leq\lambda_j\right)$,
$$\frac{\sigma_i}{\left(1+\sigma_i\left(\lambda_i-\log x\right)\right)^3}\leq (\geq)\frac{\sigma_j}{\left(1+\sigma_j\left(\lambda_i-\log x\right)\right)^3}\leq (\geq)\frac{\sigma_j}{\left(1+\sigma_j\left(\lambda_j-\log x\right)\right)^3}.$$
So, for all $i\leq j$
\begin{equation*}
\begin{split}
\frac{\partial\psi}{\partial\sigma_i}-\frac{\partial\psi}{\partial\sigma_j}&\stackrel{sign}{=}\sum_{k=1}^n\frac{\sigma_k^2\left(\lambda_k-\log x\right)}{1+\sigma_k\left(\lambda_k-\log x\right)}\left[\frac{2\sigma_i}{\left(1+\sigma_i\left(\lambda_i-\log x\right)\right)^3}-\frac{2\sigma_j}{\left(1+\sigma_j\left(\lambda_j-\log x\right)\right)^3}\right]\\
&\quad+\sum_{k=1}^n\frac{\sigma_k^2}{\left(1+\sigma_k\left(\lambda_k-\log x\right)\right)^2}\left[\frac{1}{\left(1+\sigma_i\left(\lambda_i-\log x\right)\right)^2}-\frac{1}{\left(1+\sigma_j\left(\lambda_j-\log x\right)\right)^2}\right]\\
&\leq (\geq) 0.
\end{split}
\end{equation*}
Thus the result follows from Lemma 3.1 (Lemma 3.3) of Kundu \emph{et al.} (\cite{kun1}).$\hfill\Box$\\
\hspace*{0.3in} Although Theorem \ref{th3} holds under a sufficient condition for two $n$-component systems, the next theorem shows that no such condition is required for systems following the multiple-outlier LL model when the scale parameter vectors of the two systems are common.
\begin{t1}\label{th5}
For $i=1,2,\ldots,n$, let $X_i$ and $Y_i$ be two sets of independent random variables each following the multiple-outlier LL model such that $X_i\sim LL\left(\sigma,\lambda\right)$ and $Y_i\sim LL\left(\theta,\lambda\right)$ for $i=1,2,\ldots,n_1$, and
$X_i\sim LL\left(\sigma^*,\lambda^*\right)$ and $Y_i\sim LL\left(\theta^*,\lambda^*\right)$ for $i=n_1+1,n_1+2,\ldots,n_1+n_2(=n)$. If $$(\underbrace{\sigma,\sigma,\ldots,\sigma,}_{n_1} \underbrace{\sigma^*,\sigma^*,\ldots,\sigma^*}_{n_2})\stackrel{m}{\succeq}
(\underbrace{\theta,\theta,\ldots,\theta,}_{n_1} \underbrace{\theta^*,\theta^*,\ldots,\theta^*}_{n_2})$$
and either $\{\sigma\ge\sigma^*, \theta\ge\theta^*, \lambda\ge\lambda^*\}$ or $\{\sigma\leq\sigma^*, \theta\leq\theta^*, \lambda\leq\lambda^*\}$ then $ X_{n:n}\ge_{lr}Y_{n:n}$.
\end{t1}
{\bf Proof:} Following Theorem \ref{th3} and
in view of Theorem \ref{th1}, we have only to show that
$$\psi_{1}(\mbox{\boldmath$\sigma$},x)=\frac{\sum_{k=1}^n\frac{\sigma_k^{2}}{\left(1+\sigma_k(\lambda_k-\log x)\right)^{2}}}{\sum_{k=1}^n\frac{\sigma_k^{2}(\lambda_k-\log x)}{1+\sigma_k(\lambda_k-\log x)}}$$
is Schur-concave in $\mbox{\boldmath$\sigma$}$.\\
\hspace*{0.3 in} Now, three cases may arise:\\
$Case (i)$ If $1\leq i<j\leq n_1$, $i.e.$, if $\sigma_i=\sigma_j=\sigma$ and $\lambda_i=\lambda_j=\lambda$, then $\frac{\partial \psi_{1}}{\partial \sigma_i}-\frac{\partial \psi_{1}}{\partial \sigma_j}=0.$ \\
$Case (ii)$ If $n_1+1\leq i<j\leq n$, $i.e.$, if $\sigma_i=\sigma_j=\sigma^*$ and $\lambda_i=\lambda_j=\lambda^*$, then $\frac{\partial \psi_{1}}{\partial \sigma_i}-\frac{\partial \psi_{1}}{\partial \sigma_j}=0.$ \\
$Case (iii)$ If $1\leq i\leq n_1$ and $n_1+1\leq j\leq n$, then $\sigma_i=\sigma$, $\lambda_i=\lambda$ and $\sigma_j=\sigma^*$, $\lambda_j=\lambda^*$. It can be easily shown that
\begin{equation*}
\begin{split}
\frac{\partial \psi_1}{\partial \sigma_i}-\frac{\partial \psi_1}{\partial \sigma_j}&\stackrel{sign}{=}\left(\frac{n_1\sigma^{2}}{(1+\xi_1)^{2}}+\frac{n_2\sigma^{*2}}{(1+\xi_2)^{2}}\right)\left(\frac{\xi_2^{2}}{(1+\xi_2)^{2}}-\frac{\xi_1^{2}}{(1+\xi_1)^{2}}\right)\\&\quad+\left(\frac{\sigma\xi_2}{(1+\xi_1)}-\frac{\sigma^*\xi_1}{(1+\xi_2)}\right)\left(\frac{2n_1\sigma}{(1+\xi_2)^2(1+\xi_1)}+\frac{2n_2\sigma^*}{(1+\xi_1)^2(1+\xi_2)}\right),
\end{split}
\end{equation*}
where $\xi_1=\sigma(\lambda-\log x)$ and $\xi_2=\sigma^{*}(\lambda^*-\log x)$. Now, as $\sigma \geq (\leq) \sigma^*$ and $\lambda \geq (\leq) \lambda^*$, it follows that $\sigma(\lambda-\log x) \geq (\leq) \sigma^*(\lambda^*-\log x)$, i.e., $\xi_1\geq (\leq)\xi_2$; moreover, since $\frac{\xi}{1+\xi}=1-\frac{1}{1+\xi}$ is increasing in $\xi$, we get $\frac{\xi_2^{2}}{(1+\xi_2)^{2}}\leq (\geq)\frac{\xi_1^{2}}{(1+\xi_1)^{2}}$. Again,
\begin{eqnarray*}
\frac{\sigma\xi_2}{1+\xi_1}-\frac{\sigma^*\xi_1}{1+\xi_2}&=&\frac{\sigma\sigma^*\left\{(\lambda^*-\log x)(1+\sigma^*(\lambda^*-\log x))-(\lambda-\log x)(1+\sigma(\lambda-\log x))\right\}}{\left(1+\xi_1\right)\left(1+\xi_2\right)}\\
&\le(\ge)& 0.
\end{eqnarray*}
So, by Lemma 3.1 (Lemma 3.3) of Kundu \emph{et al.} (\cite{kun1}), the result is proved. $\hfill\Box$\\
\hspace*{0.3in} Theorem \ref{th3} guarantees that, for two $n$-component parallel systems (under a sufficient condition) having independent LL distributed lifetimes with a common scale parameter vector, the majorized shape parameter vector leads to a larger system lifetime in the sense of the likelihood ratio order. The next theorem states that the majorized scale parameter vector leads to a smaller system lifetime in the sense of the likelihood ratio order when the shape parameter vector of the two $n$-component parallel systems is common.
\begin{t1}\label{th6}
For $i=1,2,\ldots, n$, let $X_i$ and $Y_i$ be two sets of mutually independent random variables with $X_i\sim LL\left(\sigma_i,\lambda_i\right)$ and $Y_i\sim LL\left(\sigma_i,\delta_i\right)$. Further, suppose that
$\mbox{\boldmath $\sigma$}\in \mathcal{E}_+$, $\mbox{\boldmath $\lambda$}, \mbox{\boldmath $\delta$}\in \mathcal{D}_+$ or $\mbox{\boldmath $\sigma$}\in \mathcal{D}_+$, $\mbox{\boldmath $\lambda$}, \mbox{\boldmath $\delta$}\in \mathcal{E}_+$.
Then, $$\mbox{\boldmath $\lambda$}\stackrel{m}{\succeq}\mbox{\boldmath $\delta$}\;\text{implies}\; X_{n:n}\leq_{lr}Y_{n:n}.$$
\end{t1}
{\bf Proof:} In view of Theorem \ref{th2} and using (\ref{e1}) and (\ref{e2}), it remains to prove that
\begin{equation*}
\eta_{1}(x)=\frac{\sum_{k=1}^n\frac{\sigma_k}{x}\left(1-\frac{1}{1+\sigma_k(\lambda_k-\log x)}\right)}{\sum_{k=1}^n\frac{\sigma_k}{x}\left(1-\frac{1}{1+\sigma_k(\delta_k-\log x)}\right)}
\end{equation*}
is decreasing in $x$, i.e., to show that
$$\psi_{2}(\mbox{\boldmath$\lambda$},x)=\frac{\sum_{k=1}^n\frac{\sigma_k^2}{\left(1+\sigma_k\left(\lambda_k-\log x\right)\right)^2}}{\sum_{k=1}^n\frac{\sigma_k^2\left(\lambda_k-\log x\right)}{\left(1+\sigma_k\left(\lambda_k-\log x\right)\right)}}$$
is Schur-convex in $\mbox{\boldmath$\lambda$}$. Now,
$$\frac{\partial\psi_2}{\partial\lambda_i}\stackrel{sign}{=}-\frac{2\sigma_i^3}{\left(1+\sigma_i\left(\lambda_i-\log x\right)\right)^3}\sum_{k=1}^n\frac{\sigma_k^2\left(\lambda_k-\log x\right)}{\left(1+\sigma_k\left(\lambda_k-\log x\right)\right)}-\frac{\sigma_i^2}{\left(1+\sigma_i\left(\lambda_i-\log x\right)\right)^2}\sum_{k=1}^n\frac{\sigma_k^2}{\left(1+\sigma_k\left(\lambda_k-\log x\right)\right)^2}.$$
So, noticing that
$$\frac{\partial}{\partial\sigma_i}\left[\frac{\sigma_i}{\left(1+\sigma_i\left(\lambda_i-\log x\right)\right)}\right]=\frac{1}{\left(1+\sigma_i\left(\lambda_i-\log x\right)\right)^2}>0,$$
so that $\frac{\sigma_i}{\left(1+\sigma_i\left(\lambda_i-\log x\right)\right)}$ is increasing in $\sigma_i$, and that $\mbox{\boldmath $\lambda$}\in \mathcal{D}_+ \left(resp.\ \mathcal{E}_+\right)$ and $\mbox{\boldmath $\sigma$}\in \mathcal{E}_+ \left(resp.\ \mathcal{D}_+\right)$, i.e., $\lambda_i\geq(\leq)\lambda_j$ and $\sigma_i\leq(\geq)\sigma_j$ for all $i\le j$, we obtain
$$\frac{\sigma_i^3}{\left(1+\sigma_i\left(\lambda_i-\log x\right)\right)^3}\leq(\geq)\frac{\sigma_j^3}{\left(1+\sigma_j\left(\lambda_i-\log x\right)\right)^3}\leq(\geq)\frac{\sigma_j^3}{\left(1+\sigma_j\left(\lambda_j-\log x\right)\right)^3}$$
and
$$\frac{\sigma_i^2}{\left(1+\sigma_i\left(\lambda_i-\log x\right)\right)^2}\leq(\geq)\frac{\sigma_j^2}{\left(1+\sigma_j\left(\lambda_i-\log x\right)\right)^2}\leq(\geq)\frac{\sigma_j^2}{\left(1+\sigma_j\left(\lambda_j-\log x\right)\right)^2}.$$
So,
$$\frac{\partial\psi_2}{\partial\lambda_i}-\frac{\partial\psi_2}{\partial\lambda_j}\geq(\leq)0.$$
Thus the result follows from Lemma 3.1 (Lemma 3.3) of Kundu \emph{et al.} (\cite{kun1}).$\hfill\Box$\\
\section{Introduction}
McKean--Vlasov or mean-field SDEs are a class of stochastic differential equations where the drift and diffusion depend on the current position along the path and on the current distribution. They were derived to describe propagation of chaos in a system of particles that interact only by their empirical mean in the limit of large number of particles \cite{McKean1966-kb}. We study mean-field SDEs in one dimension and are interested in the following initial-value problem: determine the real-valued process $X^\mu(t)$, $t>0$, such that
\begin{equation}
X^\mu(t)-X^\mu(0)%
=\int_0^t \int_{\real} a(X^\mu(s), y)\, P_s^\mu(dy)\,ds%
+ \int_0^t\int_{\real} b(X^\mu(s),
y)\,P_s^\mu(dy)\,dW(s),%
\label{eq:mf_sde}
\end{equation}
where $P_s^\mu$
denotes the distribution of $X^\mu(s)$ and the initial distribution $X^\mu(0)\sim \mu$ for some prescribed probability measure $\mu$. Here, $a\colon\real^2\to\real$ is the drift, $b\colon\real^2\to\real$ is the diffusion, $W(t)$ is a one-dimensional Brownian motion (independent of $X^\mu(0)$), and we interpret the stochastic integral as an Ito integral. We also write this as
\[
dX^\mu(t)=P^\mu_t (a(X^\mu(t),\cdot))\,dt + P^\mu_t (b(X^\mu(t),\cdot))\,dW(t),\qquad X^\mu(0)\sim \mu,
\]
where $\nu(\phi)\coloneq\int_\real \phi(x)\,\nu(dx)$ for an integrable function $\phi\colon \real\to\real$ and a measure $\nu$ on $\real$. Under the following condition, \cref{eq:mf_sde} has a unique strong solution with a smooth density \cite[Theorem 2.1]{Antonelli2002-er}.
(Though \cref{eq:mf_sde} is well-posed more generally \cite{Gartner1988-xc,sznit,peter2}, \cref{ass1} is close to the ones in our error analysis.)
\begin{assumption}\label[assumption]{ass1}
Suppose that $p$th moments of the initial distribution $\mu$ are finite for all $p\ge 1$ and that the coefficients $a$ and $b$ are smooth with all derivatives uniformly bounded.
\end{assumption}
Several numerical methods have been proposed for \cref{eq:mf_sde} and their convergence behaviour analysed. Early work includes \cite{Bossy1997-fs,Bossy1996-gr}, which show convergence of a method based on Monte Carlo evaluation of the averages and Euler--Maruyama time stepping. The same method was studied using Malliavin calculus in \cite{Antonelli2002-er} and more refined convergence results proved. More recently, \cite{Ricketson2015-xv} has developed the multilevel Monte Carlo method in cases where the drift and diffusion depend on the distribution via the mean of a function of $X^\mu(t)$. Cubature methods have also been developed in \cite{McMurray2015-lv}.
We are interested in numerical approximation of the distribution of $X^\mu(t_n)$ by a probability measure $Q_n$, where $t_n=n\tstep$ for a time step $\tstep>0$. Consider a one-step numerical method that pushes forward the measure $Q_n$ to $Q_{n+1}$. For example, let
\begin{equation}\label{psi}
\Psi(x,\tstep,Q)%
\coloneq x%
+ \tstep\,Q (a(x,\cdot))%
+\sqrt{\tstep}\, Q( b(x,\cdot))\,\xi,%
\end{equation}
for $\xi\sim \Nrm(0,1)$ or a random variable with a nearby distribution, such as the two-point random variable with $\prob{\xi=\pm 1}=1/2$. For the Euler--Maruyama method,
$Q_{n+1}$ is the distribution of $X_{n+1}=\Psi(X_n,\tstep, Q_n)$, assuming $\xi$ is independent of $X_n$ and $X_0\sim \mu$.
In the case that $a,b$ are independent of their second argument,
\[
X_{n+1}=X_n + \tstep \,a(X_n)+ \sqrt{\tstep} \,b(X_n) \,\xi_n,
\]
where $\xi_n$ are \iid copies of $\xi$,
which is the standard Euler--Maruyama method. For ordinary SDEs, it is well-known that first-order weak convergence results if $a,b$ and the test function $\phi\colon\real\to\real$ are sufficiently smooth \cite{Kloeden2011-qd}:
\[
\mean{\phi(X^\mu(1))}- \mean{\phi(X_N)}%
=P^\mu_1(\phi) -Q_N(\phi)%
=\order{\tstep},%
\qquad t_N=1.
\]
This method is of limited practical value for approximating $P^\mu_t(\phi)$.
The support of $Q_N$ is uncountable if Gaussian random variables $\xi$ are used and otherwise finite but very large, and the expectation $Q_N(\phi)$ is usually approximated via a Monte Carlo
method that samples from $Q_N$. For the mean-field SDE, this is more problematic: all the particles must be tracked at the same time, because $Q_n(a(X_n,\cdot))$ and $Q_n(b(X_n,\cdot))$ must be evaluated at each time step.
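To make the cost of the particle approach concrete, a plain particle implementation of Euler--Maruyama for \cref{eq:mf_sde} might read as follows. This is a minimal Python sketch of the baseline just described (ours, not the method proposed in this paper); it assumes the coefficient functions accept array arguments and it uses Gaussian increments.
\begin{verbatim}
import numpy as np

def mf_euler_mc(a, b, x0, M, N, rng=np.random.default_rng(0)):
    # M interacting particles, N uniform time steps on [0, 1]
    dt = 1.0 / N
    X = np.full(M, x0, dtype=float)        # X_0^i ~ delta_{x0}
    for _ in range(N):
        # empirical means Q_n(a(X_i, .)) and Q_n(b(X_i, .));
        # note the O(M^2) work at every step
        A = a(X[:, None], X[None, :]).mean(axis=1)
        B = b(X[:, None], X[None, :]).mean(axis=1)
        X = X + dt * A + np.sqrt(dt) * B * rng.standard_normal(M)
    return X       # samples approximately distributed as P_1^mu
\end{verbatim}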
In this paper, we explore an alternative to Monte Carlo integration and employ instead Gauss quadrature, which provides accurate quadrature rules that converge rapidly in the number of quadrature points under a smoothness criterion on the integrand.
The idea then is to replace $Q_{n}$ by an $m_n$-point Gauss
quadrature and thereby
reduce the number of points that we follow with the time stepping.
That is, we propagate weights $w^i_n$ and quadrature points $x^i_n$ of an
$m_n$-point rule $Q_n$, and approximate
\[
P^\mu_1(\phi)%
\approx Q_N(\phi)%
\coloneq\sum_{i=1}^{m_N} w^i_N \,\phi(x_N^i),%
\qquad t_N=1.
\]
We
derive a choice of $m_{n}$ in \cref{err} that gives first-order convergence for smooth problems. The computation of the
Gauss quadrature rules is very efficient using standard
algorithms \cite{MR0245201,MR2061539,Boley1987-jt}. This leads to very efficient numerical methods for mean-field SDEs: we find methods that require $\order{\abs{\log \epsilon}^{3}/\epsilon}$ work to achieve accuracy $\epsilon$ for mean-field SDEs with smooth coefficients and initial distributions (see \cref{blimey2}). This compares favourably with the $\order{1/\epsilon^2}$ work required for multilevel Monte Carlo methods, as we see in \cref{example:gbm}.
Mean-field SDEs arise as reduced-order models for systems of interacting particles. The drift and diffusion are defined in terms of the distribution of $X(t)$, so that moments of $X(t)$ can be included in their definition. In other words, the interaction with the ensemble of particles is approximated by moments and mean-field SDEs, including one-dimensional mean-field SDEs, are of interest in studying high-dimensional systems. The techniques in this paper apply to mean-field SDEs in one spatial dimension, as Gauss quadrature is most natural for integrals over the real line, where algorithms are readily available to compute the quadrature rule. In principle, the methods and theory extend to higher dimensions, though it would be difficult to compute a suitable cubature rule. It would require a cubature rule that can be easily computed and satisfies Gauss quadrature-type error estimates (see \cref{classic_gq}). These are currently unavailable (see \cite{Xu2015-vr} for a recent discussion of Gaussian cubature).
This paper is organised as follows: \cref{sec:g} reviews key facts about Gauss quadrature and develops preliminary lemmas. \cref{sec:alg} describes the method for Gauss quadrature with Euler--Maruyama time stepping, which we call the GQ1 method. The error analysis for stochastic ODEs is developed in \cref{err}, where we show how to choose the number $m_n$ of Gauss points. In \cref{err2}, we extend the error analysis to mean-field SDEs and modify the choice of $m_n$ for this case. We also discuss a straightforward generalisation of the methodology to the initial-value problem for
\begin{gather}
\begin{split}
X^\mu(t)-X^\mu(0)%
&=\int_0^t A\pp{\int_{\real} a(X^\mu(s), y)\, P_s^\mu(dy)}\,ds%
\\&\qquad + \int_0^t B\pp{\int_{\real} b(X^\mu(s),
y)\,P_s^\mu(dy) }\,dW(s),%
\end{split}\label{eq:mf_sde_ref}
\end{gather}
for smooth functions $A,B\colon \real\to\real$, which allows a nonlinear dependence on the time-$t$ distribution. In \cref{num}, we describe two extensions of GQ1: namely, GQ1e, which uses GQ1 with extrapolation, and GQ2, which uses Gauss quadrature with a second-order time-stepping method. The remainder of the section gives a number of numerical experiments, including a comparison with the multilevel Monte Carlo method for ordinary SDEs.
\subsection{Notation}
For a measure $\mu$ on $\real$ and an integrable function
$\phi\colon\real\to\real$, denote
$\mu(\phi)\coloneq \int_{\real} \phi(x)\,\mu(dx)$. Let $C^k(\real^d)$ denote the space of $k$-times continuously differentiable real-valued functions on $\real^d$ and
$F^{k,\beta}\coloneq \{\phi\in C^k(\real^d)\colon \norm{\phi}_{k,\beta}<\infty\}$, where
\[
\norm{\phi}_{k,\beta}%
\coloneq \max_{0\le \abs{\alpha}\le k} %
\sup_{x\in\real^d}
\frac{\abs{\phi^{(\alpha)}(x)}}%
{1+\abs{x}^\beta},
\]
using the multi-index notation.
Let $C^k_K(\real^d) %
\coloneq\{\phi\in C^k(\real^d)\colon%
\norm{\phi^{(\alpha)} }_\infty%
\le K, \quad %
0\le \abs{\alpha}\le k\}$,
where $\norm{\cdot}_\infty$ denotes the supremum norm. Throughout the paper, we use $c$ as a generic constant that varies from place to place.
\section{Gauss quadrature and error estimates}\label[section]{sec:g}
Before describing the algorithm, we review Gauss quadrature and associated error estimates. Let ${\cal P}_n$ denote the
polynomials up to degree $n$.
\begin{definition}[Gauss quadrature]
We say weights $w^i>0$ and points $x^i\in\real$ for
$i=1,\dots,m$ define an $m$-point Gauss quadrature rule
with respect to a measure $\mu$ on $\real$ if
\[
\int_\real p(x)\,\mu(dx)%
=\sum_{i=1}^m w^i\, p(x^i),\qquad %
\forall \,p\in {\cal P}_{2m-1}.
\]
\end{definition}
The $m$-point Gauss quadrature rule for a discrete measure
\[
\mu%
=\sum_{i=1}^N %
v^i\, \delta_{y^i},
\]
with weights $v^i>0$ and points $y^i$,
can be found via the three-term recurrence
relation for the orthogonal polynomials corresponding to the inner product $\ip{f,g}_\mu\coloneq\int_\real f(x) \,g(x)\, \mu(dx)$. First, form the matrix $A$
with diagonal $[1,y^1,\dots,y^N]$ and first row and column
given by $[1,\sqrt{v^1},\dots,\sqrt{v^N}]$ (all other entries
zero). By applying orthogonal transformations, reduce $A$ to a symmetric tridiagonal matrix with diagonal $[\alpha^0,\alpha^1,\dots,\alpha^N]$ and off-diagonal $[{\beta}^0,{\beta}^1,\dots,\beta^N]$. The $\alpha^i$ and $\beta^i$ define the
three-term recurrence relation. Next define the Jacobi matrix, which is the symmetric tridiagonal matrix with diagonal
$[\alpha^0,\alpha^1,\dots]$ and off-diagonals $[\sqrt{\beta^0},\sqrt{\beta^1},\dots]$. To find the $m$-point Gauss quadrature rule, the leading $m\times m$ submatrix of the Jacobi matrix should be chosen. Its eigenvalues determine the
quadrature points and the first component of the normalised
eigenvectors determine the weights, as given by the
well-known Golub--Welsch algorithm. See \cite{Boley1987-jt,MR0245201,MR2061539}.
Thus, to compute the $m$-point Gauss quadrature rule for an $N$-point discrete measure, we reduce the original $(N+1)\times(N+1)$ matrix $A$
to tridiagonal form using a Lanczos procedure and solve a symmetric
eigenvalue problem for an $m\times m$ matrix. The complexity is $\order{N^2+m^3}$, which becomes burdensome when either $m$ or $N$ is large. It is the rapid convergence properties of Gauss quadrature that enable us to control the problem size.
Let us describe the errors for Gauss quadrature.
For an integrable function $\phi\colon\real\to\real$, denote the
approximation error
\[
E(\phi)=
\int_\real \phi(x)\,\mu(dx)%
-\sum_{i=1}^m w^i \,\phi(x^i).%
\]
\begin{theorem}\label{classic_gq}
Let $\phi\in C^{2m}(\real)$. The error for $m$-point Gauss quadrature is
\[
E(\phi)%
= \frac{\phi^{(2m)}(\xi)}{(2m)!} \,\ip{p_m,p_m}_\mu,\qquad \forall\,\phi\in C^{2m}(\real),
\]
for some $\xi\in\real$, where $\ip{p_m,p_m}_\mu=\int_\real
p_m(x)^2\,\mu(dx)$, $p_m(x)=(x-x^1)\cdots(x-x^m)$, and $x^i$ are the Gauss quadrature points.
\end{theorem}
\begin{proof}
See \cite[Theorem 3.6.24]{Stoer2010-ku}.
\end{proof}
This theorem shows that Gauss quadrature converges rapidly as the number of points $m\to\infty$ for smooth integrands $\phi$. We require the following alternative characterisation of the error in terms of a minimax polynomial. A similar result is available for continuous measures in \cite[Theorem 5.4]{Atkinson1989-zq}.
\begin{theorem}\label{lgqe} Consider a discrete probability measure $\mu=\sum_{i=1}^N v^i \,\delta_{y^i}$ and approximation by the $m$-point Gauss quadrature rule $\sum_{i=1}^m w^i \,\delta_{x^i}$.
The absolute error
\[
\abs{E(\phi)}%
\le
\min_{p\in \mathcal{P}_{2m-1}} %
\bp{\max_{i=1,\dots,N} \abs{p(y^i)-\phi(y^i)}%
+ \max_{i=1,\dots,m}\abs{p(x^i)-\phi(x^i)}}.
\]
\end{theorem}
\begin{proof}
Let $p\in \mathcal{P}_{2m-1}$. As $m$-point Gauss quadrature is exact for $p\in \mathcal{P}_{2m-1}$,
\begin{align*}
E(\phi)&
=E(\phi-p)%
=\sum_{i=1}^N v^i\,(\phi-p)(y^i)%
-\sum_{i=1}^m w^i \,(\phi-p)(x^i),
\end{align*}
so that
\[
\abs{E(\phi)}
\le \sum_{i=1}^N v^i \max_{i=1,\dots,N}\abs{\phi(y^i)-p(y^i)}%
+\sum_{i=1}^m w^i \max_{i=1,\dots,m}\abs{\phi(x^i)-p(x^i)}.
\]
Since $\sum_{i=1}^N v^i=\sum_{i=1}^m w^i=1$, this completes the proof.
\end{proof}
For the numerical solution of SDEs, we are interested in the discrete measure generated by applying Euler--Maruyama with a two-point approximation to the Gaussian increment, which increases the number of points in the support by a factor of two on each step. Using the resulting tree structure, the support can be grouped into points that stem from a smaller set of points. We write down a special error estimate in this setting.
\begin{corollary}\label[corollary]{c1} Let $\mu$ be a discrete measure with support $\{y^1,\dots,y^{N m}\}$ and consider approximation by $m$-point Gauss quadrature. Suppose that there exist $z^1,\dots,z^m$ such that \[
\max_{j=(i-1)N+1,\dots,iN} \abs{ z^i-y^j }%
\le \delta, \quad\text{for $i=1,\dots,m$.}
\]
Then,
\[
\abs{ E(\phi) } %
\le \delta\, (2\,R)^{2m -1}\,%
\frac{1}{(2 m)!}\,%
\sup_{x\in(-R,R)}\abs{\phi^{(2 m)} (x)},\qquad %
\forall \phi\in C^{2 m}(\real),
\]
where $R=\max\{\abs{z^i}, \abs{y^j}\colon i=1,\dots,m,\;j=1,\dots,Nm\}$.
\end{corollary}
%
\begin{proof}
Consider interpolation of $\phi$ by $p\in \mathcal{P}_{2m-1}$ based on the $2m$ interpolation points $z^1,\dots,z^{m},x^1,\dots,x^m$, where $x^i$ denote the Gauss quadrature points. The error at $y^j$ satisfies
\[
p(y^j)-\phi(y^j)%
= \bp{ (y^j-z^1)\cdots(y^j-z^m ) \,(y^j-x^1)\cdots(y^j-x^{m}) }\frac{1}{(2 m)!} \,\phi^{(2 m)}(\xi),
\]
for some $\xi\in(-R,R)$ (by standard error analysis for Lagrange interpolation).
In the product, for each $j$, one factor is bounded by $\delta$, and each of the remaining $2m-1$ factors is bounded by $2\,R$ by the definition of $R$. Hence,
\[
\max_j
\abs{p(y^j)-\phi(y^j)}%
\le \delta\,%
(2 R)^{2m-1} \,%
\frac{1}{(2 m)!} \,%
\sup_{x\in(-R,R)}\abs{\phi^{(2 m)} (x)}.
\]
The polynomial $p$ is exact at $x^i$ and \cref{lgqe} completes the proof.
\end{proof}
\section{GQ1: Gauss quadrature with Euler--Maruyama}\label[section]{sec:alg}
We now explain our method in detail: initialise $Q_0$ with a discrete approximation,
\[
Q_0=\sum_{i=1}^{m_0} w^i_0\, \delta_{x^i_0},
\]
to the initial distribution $\mu$. In the case that $\mu=\delta_x$ or $X^\mu(0)=x$ for some known $x\in\real$, take the
one-point quadrature rule with $x_0^1=x$, weight
$w^1_0=1$, and $m_0=1$.
Suppose that the weights $w^i_n$ and points $x^i_n$ of $Q_n$ are
known at step $n$. To determine $Q_{n+1}$, generate
the Euler--Maruyama points $X_{n+1}^{i\pm}$ defined by
\begin{equation}\label{wales}
X^{i\pm}_{n+1}%
=x_n^i%
+Q_n (a(x_n^i,\cdot))\,\tstep%
\pm Q_n (b(x_n^i,\cdot) ) \,\tstep^{1/2},\qquad %
i=1,\dots,m_n,
\end{equation}
and define the corresponding weights
$W^{i\pm}_{n+1}=w^i_n/2$. Together the points
$X_{n+1}^{i\pm}$ and weights $W_{n+1}^{i\pm}$ define a $2\,m_n$-point quadrature rule, which we denote $Q_{n+1}^\pm$.
If left unchecked, this leads to a $2^n$-factor increase in the size of the quadrature rule, which becomes costly.
At each step, we may continue with $Q_{n+1}=Q_{n+1}^\pm$ (if the number of points is acceptable or the final time is reached) or approximate and reduce the number of points using Gauss quadrature. To approximate, we do the following:
\begin{algorithm}
\begin{enumerate}
\item Choose a support $[-R, R]$.
\item For $\abs{X_{n+1}}\ge R$, generate two points at $\pm R$ with weights $\sum_{\pm X_{n+1}^{j}\ge R} W_{n+1}^{j}$.
\item For $\abs{X_{n+1}} < R$, generate the $m_{n+1}$-point Gauss quadrature rule for the measure $Q_{n+1}^\pm$ restricted to $(-R,R)$ (i.e., for the measure $Q_R(\cdot)=Q_{n+1}^\pm(\cdot \cap (-R, R))$).
\item Combine the points and weights, to define a $(m_{n+1}+2)$-point quadrature rule $Q_{n+1}$.
\end{enumerate}
\label[algorithm]{alg}
\end{algorithm}
The iteration is repeated until the final time is reached.
Following an error analysis in the next sections, we give formulae for the number of points $m_n$ and support radius $R$ in terms of $\tstep$ and $t_n$.
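For concreteness, one step of GQ1 can be sketched in Python as follows (our illustrative code, not the authors' implementation; \texttt{discrete\_gauss\_rule} stands for a routine computing the Gauss rule of a discrete measure, such as the Lanczos/Golub--Welsch sketch of \cref{sec:g}, and $a$, $b$ are assumed vectorised in their second argument).
\begin{verbatim}
import numpy as np

def gq1_step(x, w, a, b, dt, R, m_next):
    # Euler--Maruyama branching with the two-point xi, followed
    # by support reduction to [-R, R] and Gauss compression
    A = np.array([np.dot(w, a(xi, x)) for xi in x])  # Q_n(a(x_i,.))
    B = np.array([np.dot(w, b(xi, x)) for xi in x])  # Q_n(b(x_i,.))
    xp = np.concatenate([x + dt * A + np.sqrt(dt) * B,
                         x + dt * A - np.sqrt(dt) * B])
    wp = np.concatenate([w, w]) / 2.0
    left, right = xp <= -R, xp >= R
    inside = ~(left | right)          # assume some mass remains
    nodes, weights = discrete_gauss_rule(
        xp[inside], wp[inside], min(m_next, inside.sum()))
    nodes = np.concatenate([[-R], nodes, [R]])
    weights = np.concatenate([[wp[left].sum()], weights,
                              [wp[right].sum()]])
    return nodes, weights
\end{verbatim}
Iterating \texttt{gq1\_step} from the initial rule, and skipping the compression on the final step, yields $Q_N$.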
First, we establish conditions for boundedness of moments for $Q_n$.
\begin{lemma}\label[lemma]{mf_mom}
Suppose that $a,b\in C^0_K(\real^2)$ and that $Q_0(e^{\alpha \,x^2})<\infty$ for some $\alpha>0$. Then, for some $c,\lambda>0$ independent of $\tstep$,
\[
Q_n(e^{\lambda \,x^2}) %
\le c\qquad \forall\,t_n\le 1.
\]
\end{lemma}
\begin{proof} Consider $\Psi$ defined in \cref{psi} where $\xi$ is the two-point random variable given by $\prob{\xi=\pm 1}=1/2$.
Let $X_{n+1}=\Psi(X,\tstep,Q_n)$ for a fixed value $X$. Then
\begin{align*}
X_{n+1}^2%
&\le X^2%
+ 2\,\tstep\, Q_n(a(X,\cdot)) \,X%
+ \tstep^2\, Q_n(a(X,\cdot))^2%
+ \tstep\, Q_n(b(X,\cdot))^2\\%
&\qquad +2\,(X+\tstep\, Q_n(a(X,\cdot)))\, Q_n(b(X,\cdot))\, \sqrt{\tstep}\,\xi\\
&\le X^2%
+ \tstep\, Q_n(a(X,\cdot))\, (X^2+1)%
+ \tstep^2\, Q_n(a(X,\cdot))^2%
+ \tstep\, Q_n(b(X,\cdot))^2 \\%
&\qquad+ 2\,(X+\tstep\, Q_n(a(X,\cdot))) \,Q_n(b(X,\cdot))\, \sqrt{\tstep}\,\xi.
\end{align*}
Hence, as $a,b$ are bounded by $K$,
\begin{align*}
X_{n+1}^2 &\le X^2 \,(1 +\tstep\, K)%
+ \tstep\, K%
+ \tstep^2 \,K^2%
+ \tstep\, K^2%
\\&\quad+ 2\, (X+\tstep\, Q_n(a(X,\cdot)))\, Q_n(b(X,\cdot)) \sqrt{\tstep}\,\xi\\
&\le X^2\,(1+K\,\tstep) + c\,\alpha\,\tstep+
2\,(X+\tstep\, Q_n(a(X,\cdot))) \,Q_n(b(X,\cdot))\, \sqrt{\tstep}\,\xi.
\end{align*}
Note that $(e^{x}+e^{-x})/2\le e^{x^2}$ for $x\in\real$ and
\begin{align*}
\mean{e^{\alpha \,X_{n+1}^2} }%
&\le e^{\alpha\, \,X^2\,(1+\tstep \,K) + c\, \alpha\, \tstep }
\,{ e^{4\,\alpha^2 \,(X+\tstep \, Q_n(a(X,\cdot) ))^2\, \tstep\, Q_n(b(X,\cdot))^2}}.
\end{align*}
Now $\abs{(X+\tstep\, Q_n(a(X,\cdot) ))^2\, Q_n(b(X,\cdot))^2} \le 2 K^2\, X^2 + 2\,\tstep ^2\, K^4$. Consequently,
\begin{align}\label{conseq}
\mean{e^{\alpha\, X_{n+1}^2}}%
&\le
{ e^{ \alpha\, X^2\,(1+c\, \tstep +c\,\tstep\, \alpha)%
+c\,\alpha\,\tstep +c\,\tstep ^3 \alpha^2}}.
\end{align}
\cref{alg} is used in the iteration, so that the support is reduced and Gauss quadrature is applied. Note that
\[
Q(e^{\alpha \,x^2})\le \int_\real e^{\alpha\, x^2}\,\mu(dx),\qquad \alpha>0,
\]
where $Q$ is a Gauss quadrature rule for $\mu$ (by applying \cref{classic_gq} and noting that even derivatives of $e^{\alpha \,x^2}$ are non-negative). Similarly, the support reduction moves mass inwards and the resulting integral of $e^{\alpha \,x^2}$ is reduced. Consequently, if $X\sim Q_n$ in \cref{conseq}, we have
\begin{align*}
Q_{n+1}(e^{\alpha\, x^2})%
&\le
Q_n(e^{\alpha\, (1+c \,\tstep +c\,\tstep \,\alpha) x^2})\,%
e^{c\,\alpha \,\tstep( 1+\tstep^2 \,\alpha)}.
\end{align*}
We can iterate this to find a bound on $Q_n(e^{\alpha\, x^2})$ in terms of $Q_0(e^{\alpha\, x^2})$. The value of $\alpha$ changes at each step of the iteration, and
\begin{align*}
Q_{n+1}(e^{\alpha_{0}\, x^2})%
&\le
Q_{n}( e^{ \alpha_1\, x^2} )\,%
e^{c\,\alpha_0 \,\tstep\, (1+\tstep^2 \,\alpha_0)},
\end{align*}
where $\alpha_1=\alpha_{0}\,(1+c \,\tstep )+c\,\tstep\, \alpha_{0}^2$.
Let $\alpha_{n+1}=\alpha_n\,(1+c\,\tstep ) + c\,\tstep\, \alpha_n^2$.
If $\alpha_{n}\le1$, then $\alpha_{n+1}\le \alpha_n\,(1+2\,c\,\tstep )\le \alpha_0\,(1+2\,c\,\tstep )^n\le \alpha_0\, e^{2\,c\,t_n}$. We see that, if $\alpha_0 \le e^{-2\, c}$, then $\alpha_n\le e^{2\,c\, t_n} \alpha_0\le 1$ for $t_n\le 1$. It is now easy to show that
\begin{align*}
Q_{n}( e^{\alpha_{0} \,x^2 } )%
&\le
Q_{n-m} (e^{ \alpha_m\, x^2}) \,e^{2\, c},
\end{align*}
for $t_n\le 1$ and any $\alpha_0\le e^{-2\,c}$. In particular, $Q_n(e^{\lambda \,x^2}) \le Q_0(e^{ \alpha \,x^2})\, e^{2\,c}$ for $\lambda\le e^{-2\,c} \min\{\alpha,1\}$.
\end{proof}%
We examine the error incurred by reducing the support to $[-R,R]$.
\begin{lemma}\label[lemma]{supred}
Let $\mu$ be a probability measure on $\real$ and suppose that $\mu(e^{\lambda \,x^2})<K$, for some $\lambda>0$. For $\tstep>0$, define the measure $\mu_\tstep$ by
\[
\mu_\tstep(A)%
\coloneq \mu(A\cap (-R,R)) %
+ \mu((-\infty,-R])\, \delta_{-R}(A)%
+ \mu([R,\infty)) \,\delta_R(A),
\]
for
$R=\sqrt{(4/\lambda)\,\abs{\log\tstep}}$ and Borel sets $A\subset \real$.
There exists $c>0$, independent of $\tstep$, such that
\[
\abs{\mu(\phi)-\mu_\tstep(\phi) }%
\le c \,\norm{\phi}_{0,\beta}\,%
\tstep^2,\qquad %
\forall \phi\in F^{0,\beta}.
\]
\end{lemma}
\begin{proof} It suffices to consider the two measures $\mu$ and $\mu_\tstep$ on the tail $(-R,R)^c$, as they are equal on $(-R,R)$. First, note that
\[
\abs{\mu(1_{(-R,R)^c} \,\phi)}%
\le e^{-\lambda R^2/2}\,%
\mu( \Phi),\qquad%
\Phi(x)\coloneq e^{\lambda \,x^2} \,e^{-\lambda \,x^2/2} \,1_{(-R,R)^c}(x)\, \abs{\phi(x)},
\]
where $1_{S}$ denotes the indicator function on the set $S$.
As $\phi\in F^{0,\beta}$, $\abs{\phi(x)}\le \norm{\phi}_{0,\beta}\,(1+\abs{x}^\beta)$ and $\abs{e^{-\lambda \,x^2/2}\,\phi(x)}$ is uniformly bounded by $c\,\norm{\phi}_{0,\beta}$ for a constant $c$ independent of $R$ and $\phi$, but dependent on $\beta$ and $\lambda$. Hence,
\[
\abs{\mu(1_{(-R,R)^c} \,\phi) }%
\le c\, \norm{\phi}_{0,\beta}\, e^{-\lambda\,R^2/2}\, \mu(e^{\lambda\, x^2} )%
\le c \, \norm{\phi}_{0,\beta}\,e^{-\lambda\, R^2/2}.
\]
For $R=\sqrt{(4/\lambda)\,\abs{\log \tstep} }$, we see that $e^{-\lambda R^2/2} \le \tstep^2$. Hence, $\abs{\mu(1_{(-R,R)^c} \phi)}$ is bounded by $c\,\norm{\phi}_{0,\beta}\,\tstep^2$. The same applies to $\abs{\mu_{\tstep}(1_{(-R,R)^c}\phi)}$ by a similar argument and the proof is complete.
\end{proof}
Thus, the support reduction with $R=\sqrt{(4/\lambda)\,\abs{\log \tstep}}$ maintains accuracy if $\mu(e^{\lambda x^2})$ is finite and the test function grows polynomially.
Next, we estimate the error for the Gauss quadrature at step $n$.
\begin{lemma}\label[lemma]{bounds} Suppose that $a,b\in C_K^0(\real^2)$. Let $Q_R(\cdot)=Q_{n+1}^\pm(\cdot \cap (-R,R))$ and let $Q$ be the $m_{n+1}$-point Gauss quadrature rule approximating $Q_R$.
If $m_{n+1}\ge m_{n}$, for all $\phi\in C^{2m_n}(\real)$,
\[
\abs{ Q_R(\phi)-Q(\phi)}%
\le K\, (2\,R)^{2m_{n}-1} \,\tstep^{1/2}\,%
\frac1 { (2m_{n})! }\,%
\sup_{x\in(-R,R)}\abs{\phi^{ (2m_{n})}(x) }.
\]
\end{lemma}
\begin{proof} If both $X_{n+1}^{j\pm}$ belong to $(-R,R)$, let $z^j=x^j+Q_n(a(x^j,\cdot))\,\tstep$. Then
\begin{equation}
\abs{ X_{n+1}^{j\pm}-z^j}%
\le \abs{Q_n( b ( x^j,\cdot) ) } \,\tstep^{1/2} %
\le K \,\tstep^{1/2}\label{this}
\end{equation}
and $\abs{z^j} \le R$ (every $z^j$ lies halfway between $X^{j+}_{n+1}$ and $X^{j-}_{n+1}$). If only one $X_{n+1}^{j\pm} \in(-R,R)$, let $z^j$ be that point.
The measure $Q_R$ has at most $2m_n$ points and we apply \cref{c1} with $N=2$ and $\delta=K \,\tstep^{1/2}$. In general, $Q_R$ may have fewer than $2m_n$ points, in which case we trivially extend $Q_R$ to apply \cref{c1} (i.e., extend $Q_R$ to a $2m_{n+1}$-point rule by adding zero-weighted points in $(-R,R)$ consistent with \eqref{this}).
\end{proof}
\begin{corollary} \label[corollary]{corr}%
Let $a,b\in C^0_K(\real)$. Let $Q$ be the $m_{n+k}$-point Gauss quadrature rule for $Q_R(\cdot)=Q_{n+k}^\pm(\cdot \cap (-R,R))$ (i.e., after not performing \cref{alg} $(k-1)$-times). Suppose that $m_{n+k}\ge m_n$. For each $k$, there exists $c>0$ such that,
for all $\phi\in C^{ 2m_n}(\real)$,
\[
\abs{ Q_R(\phi)-Q(\phi)}%
\le c\, (2\,R)^{2 m_{n}-1} \tstep^{1/2}%
\frac1 { (2m_{n})! }\,%
\sup_{x\in(-R,R)}\abs{\phi^{ (2m_{n})}(x) }.
\]
\end{corollary}
\begin{proof}
This is a simple extension of \cref{bounds} using \cref{c1}.
\end{proof}
\section{Error analysis for ordinary SDEs}\label{err}
The proposed algorithm has much in common with those introduced in~\cite{Muller-Gronbach2015-vv}. In that paper, Ito--Taylor methods are developed for a general class of multi-dimensional SDEs that use support-reduction strategies to improve efficiency: the support of the measure is reduced by shrinking its diameter and eliminating points whilst maintaining moment conditions. Along with a non-uniform time-stepping regime, the authors provide detailed error and complexity analyses. The present situation is similar, and effectively we substitute \cref{alg} for their reduction strategies. Using appropriate Gauss quadrature error estimates, much of their analysis applies in the present case.
The estimate in \cref{corr} depends on the radius $R$ of the support. We now choose $R=\sqrt{(4/\lambda)\,\abs{\log\tstep}}$, for $\lambda$ given by \cref{mf_mom}. Fix $k$ (the number of steps between applying \cref{alg}) and $\beta$ (to choose test functions $\phi\in F^{0,\beta}$).
\begin{proposition}\label[proposition]{t}
Let $R=\sqrt{(4/\lambda)\,\abs{\log\tstep}}$ in \cref{alg}. Then,
for all $\phi \in F^{0,\beta}\cap C^{2m_{n}}(\real)$,
\begin{align*}
\abs{ Q_{n+k}^{\pm}(\phi)-Q_{n+k}(\phi)}%
&\le c \,\tstep^{1/2}\, \abs{\frac {16}\lambda\log \tstep}^{\frac{2m_n-1}2}%
\frac1 { (2m_{n})! }\\%
&\qquad \times\sup_{x\in(-R,R)}
\abs{\phi^{ (2m_{n}) } (x) }%
+c \, \norm{\phi}_{0,\beta}\,\tstep^2.
\end{align*}
\end{proposition}
\begin{proof} The error due to the Gauss quadrature on $(-R,R)$ is described by \cref{corr}. Applying \cref{mf_mom} with \cref{supred}, the error due to the support reduction is bounded by $c \,\norm{\phi}_{0,\beta}\,\tstep^2$. Summing the two gives the desired upper bound.
\end{proof}
Given $\tstep>0$ and $k\in\naturals$, we choose the number of points $m_n$ as the smallest non-negative integer such that
\begin{equation}\label[ineq]{eq:MM}
\log \Gamma(2m_n+1)\ge M_1(m_n, \tstep, n+k),\qquad \forall t_{n+k}<1,
\end{equation}
where $\Gamma(x)$ denotes the gamma function and
\begin{equation}\label{eq:M}
M_p(m,\tstep, n)%
\coloneq \pp{p+\frac 12}\,\abs{\log \tstep}%
+\frac {2m-1}2 \, \log \abs{\frac{16}\lambda\log \tstep}
+ \pp{m-2} \,\abs{ \log (1-t_n) }.
\end{equation}
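In code, the defining inequality \cref{eq:MM} can be solved for a given $t_{n+k}$ by direct search over $m$, as in the following Python sketch (ours; \texttt{lgamma} evaluates $\log\Gamma$, and \texttt{lam} is the $\lambda$ from \cref{mf_mom}).
\begin{verbatim}
from math import lgamma, log

def num_points(dt, t, lam, p=1):
    # smallest m with log Gamma(2m+1) >= M_p(m, dt, n+k),
    # where t = t_{n+k} < 1
    L = abs(log(dt))
    m = 0
    while True:
        M = ((p + 0.5) * L
             + 0.5 * (2 * m - 1) * log(abs((16.0 / lam) * log(dt)))
             + (m - 2) * abs(log(1.0 - t)))
        if lgamma(2 * m + 1) >= M:
            return m
        m += 1
\end{verbatim}
The search terminates because $\log\Gamma(2m+1)$ grows like $2m\log m$ while $M_p$ grows only linearly in $m$; the returned $m$ grows like $\max\{\abs{\log\tstep},\sqrt{\abs{\log\tstep}/(1-t)}\}$, consistent with \cref{blimey} below.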
We now describe how fast $m_n$ increases as $\tstep$ decreases. Assuming the Golub--Welsch algorithm takes $\order{m^3}$ operations, $\sum_n m_n^3$ gives the amount of work needed to apply \cref{alg} at every time step and we describe its growth.
\begin{theorem}\label{blimey}
The number of Gauss quadrature points $m_n$ is non-decreasing in $n$ and is also non-decreasing as the time step $\tstep$ decreases. Moreover, $m_n$ satisfies
\[
m_n%
\le 1+\max\Bp{
\frac 34 \,\abs{\log \tstep},
\displaystyle \frac{e^2}{2} \,\sqrt{\frac{16\,\abs{\log{\tstep}} }{\lambda\,(1-t_n)} } }.
\]
In particular, $\sum_{t_n<1} m_n^3 =\order{(\abs{\log \tstep}\,/\,\tstep)^{3/2}}$.
\end{theorem}
\begin{proof}
The function $M_1$ is increasing in $n$ (via $t_n\in(0,1)$) for $m\ge 2$. Hence, $m_n$ is non-decreasing in $n$ ($m_n$ is discrete and may not change as $t_n$ is varied by small amounts).
Also, for fixed $t_n$, $M_1$ is a decreasing function of $\tstep$, and hence $m_n$ is non-decreasing as $\tstep$ decreases.
From \cref{eq:M},
\[
M_1(m,\tstep,n)\le \frac32 \,\abs{\log \tstep}%
+{m}\, \log \frac{16\,\abs{\log \tstep}} {\lambda\,(1-t_n)}.
\]
Stirling's formula \cite[Eq. 6.1.37]{Abram} tells us that
\[
x!%
= \Gamma(x+1)%
= \sqrt{2\pi} \,x^{x+1/2}\,\exp\pp{-x+\frac{\theta}{12x}},\qquad \text{for some $\theta\in(0,1)$,}
\]
and hence $\log\abs{(2m)!}=\log \Gamma(2m+1) \ge \frac 12\, \log(4\,\pi\, m)+ 2\,m\,(\log(2\,m)-1)$. Then,
\[
\log \Gamma(2\,m+1) %
\ge 2\,m+ 2\,m\,\pp{\log(2\,m)-2}%
=2\,m+m \log \frac{4\,m^2}{e^4}.
\]
If $m\ge (3/4)\abs{\log \tstep}$ and $m\ge (e^2/2) \sqrt{16\,\abs{\log \tstep}/(\lambda\,(1-t_n))}$, then $\log \Gamma(2\,m+1)\ge M_1(m,\tstep,n)$. Hence, as $\sum_{k=1}^\infty k^{-3/2}$ is finite,
\[
\sum_{t_n<1} m_n^3%
\le c \,\pp{\frac{16\,\abs{\log \tstep}}{\lambda\,\tstep}}^{3/2}.
\]
\end{proof}
We now give the main convergence theorem for ordinary SDEs. In this case, the coefficients $a(x,y)$ and $b(x,y)$ are independent of the mean-field $y$. We choose the single-point initial distribution $\mu=\delta_x$ and write $X(t)$ for $X^{\mu}(t)$ and $P^x_t$ for $P^{\delta_x}_t$.
\begin{assumption}\label[assumption]{ass}
Suppose that $K\ge 1\ge \lambda>0$ and assume that
$x_0\in[-K,K]$ and $a,b\in C^{4}_K(\real)$ and $b^2(x)\ge
\lambda$ for all $x\in\real$.
\end{assumption}
\begin{theorem} \label{sodemain}Let \cref{ass} hold. Consider
the $m_n$-point Gauss quadrature rule $Q_n$
defined in \cref{alg} with $m_n$ given by \cref{eq:MM} and $R=\sqrt{(4/\lambda)\abs{\log \tstep} }$.
The total error satisfies
\[
\abs{P_1^x(\phi) -Q_N(\phi)}%
\le \begin{cases}
c\, \norm{\phi}_{2,\beta}\,%
(1+\abs{x}^c)\,
\,\tstep\,\abs{\log \tstep}, \qquad \forall \phi\in F^{2,\beta},
\\[0.5em]
c\, \norm{\phi}_{3,\beta}\,%
(1+\abs{x}^c)\,
\,\tstep, \qquad\quad\;\qquad \forall \phi\in F^{3,\beta},
\end{cases}
\]
for a constant $c$ independent of $K$.
\end{theorem}
\begin{proof} Let $g_n(x)\coloneq P^{x}_{1-t_n} (\phi)\equiv \mean{\phi(X(1-t_n))\,|\, X(0)=x}$ for $x\in\real$. Notice that $g_N=\phi$ and $g_0(x)=P^x_1(\phi)$. Let $T_\tstep(\phi)(x)=\mean{\phi(\Psi(x,\tstep,\cdot))}$ for $\Psi$ defined in \cref{psi}.
The total error
\[
P^x_1(\phi)
-Q_N(\phi)%
=\sum_{n=1}^N E_n^T %
+\sum_{n=1}^N E_n^G,%
\]
where $E_n^G=Q_n^\pm (g_n)-Q_n (g_n)$ (the error
due to \cref{alg}) and $E_n^T= Q_{n-1}(g_{n-1})-Q_n^\pm (g_n)= Q_{n-1} (P_\tstep^x (g_n))-Q_{n-1}(T_\tstep(g_n))$ (the bias error due to Euler--Maruyama over time step $\tstep$). We estimate the two sources of error, focusing on the case where $\phi \in F^{2,\beta}$.
Local truncation error: Under \cref{ass}, \cite[Eq. (35) with $\gamma=1$]{Muller-Gronbach2015-vv} shows that
$E_n^T$ satisfies
\[
\abs{E_n^T}%
\le \begin{cases}\displaystyle
c\, \norm{\phi}_{4,\beta}\, \pp{1+\abs{x}^c}\,
\frac{\tstep^2}{1-t_n},& n=1,\dots,N-1,\\[0.7em]
c\, \norm{\phi}_{2,\beta}\, \pp{1+\abs{x}^c}\,
\tstep,&n=N.
\end{cases}
\]
\cref{alg} error: We do not apply \cref{alg} on the final step and so $E_N^G=0$. For $n=1,\dots,N-1,$
\cref{t} gives that
\begin{align*}
\abs{E_n^G}%
=\abs{ Q_{n}^\pm (g_n)-Q_n (g_n) }%
&\le c\, \norm{g^{(2 m_{n-k} ) }_{n}}_\infty %
\frac{1}{(2 m_{n-k})!}\, %
\tstep^{1/2}
\abs{\frac {16}\lambda \log \tstep}^{\frac{2 m_{n-k}-1}2}\\%
&\qquad+c\,\norm{\phi}_{0,\beta}\,\tstep^2.%
\end{align*}
\cite[Lemma 8]{Muller-Gronbach2015-vv} provides that
\[
\norm{g_n}_{k,\beta} %
\le \norm{\phi}_{2,\beta}
\frac{1}{(1-t_n)^{(k-2)/2}},\qquad \forall k\ge 4.
\]
Consequently,
\[
\abs{E_n^G}%
\le c\,%
\norm{\phi}_{2,\beta}\,%
\frac{1}{(1-t_n)^{ m_{n-k} -1 } }\,%
\frac{1}{(2m_{n-k})!}\,
\tstep^{1/2}\,%
\abs{\frac {16}\lambda\log \tstep}^{ \frac{2 m_{n-k}-1}2} %
+c\,\norm{\phi}_{0,\beta}\,\tstep^2.
\]
Notice that
\[
\frac{1}{(1-t_n)^{ m_{n-k}-1 } }%
\frac{1}{(2m_{n-k})!}
\tstep^{1/2 }%
\abs{\frac {16} \lambda \log \tstep}^{ \frac{2 m_{n-k}-1}2}
\le\frac1{(1-t_n)}
\tstep^2,%
\]
if
\begin{align*}
{\Gamma(2m_{n-k} +1)}%
&\ge
\tstep^{-3/2}\,%
{ \abs{\frac {16} \lambda \log \tstep}^{ \frac{2 m_{n-k}-1}2}} %
\frac{1}{(1-t_n)^{m_{n-k}-2 } }.
\end{align*}
This holds as we have chosen $m_{n-k}$ to satisfy
$
\log \Gamma(2m_{n-k}+1) %
\ge M_1(m_{n-k},\tstep, n),
$
for $M_1$ defined in \cref{eq:M}.
Then, $\abs{E^G_n} \le c\,(\norm{\phi}_{2,\beta}+1)\,\tstep^2/ (1-t_n)$.
Summing all the errors and using $\sum_{n=1}^{N-1} \tstep/(1-t_n) \le \log (N)=\abs{\log \tstep}$, we complete the proof. For $\phi\in F^{3,\beta}$, the argument is similar except the $(1-t_n)$ factors do not arise and so the $\abs{\log \tstep}$ term does not appear.
\end{proof}
\section{Error analysis for mean-field SDEs}\label[section]{err2}
We now generalise our error analysis to mean-field SDEs. We wish to show that $Q_n$ approximates $P_{t_n}^\mu$, starting from a good approximation of the initial distribution, $Q_0\approx\mu$. To express the closeness of $Q_0$ to $\mu$, we use the Wasserstein distance. For any probability measures $\mu,\nu$ on $\real$, define the Wasserstein distance
\[
W_{k,\beta}(\mu,\nu)%
\coloneq \sup\Bp{\abs{\mu(\phi)-\nu(\phi)}%
\colon \norm{\phi}_{k,\beta}\le 1}.%
\]
\begin{assumption}\label[assumption]{ass:initial}
The initial measure $Q_0$ satisfies $Q_0(e^{\alpha\,x^2})<\infty$ for some $\alpha>0$ independent of $\tstep$ and approximates $\mu$ in the sense that $W_{2,\beta}(\mu,Q_0)\le c\,\tstep$.
\end{assumption}
Under this assumption, \cref{mf_mom} applies and $Q_n(e^{\lambda \,x^2})$, for $t_n\le 1$, is uniformly bounded for some $\lambda>0$. We choose $R=\sqrt{(4\,/\lambda)\,\abs{\log \tstep}}$ in \cref{alg}.
We introduce a non-autonomous SDE corresponding to the mean-field SDE with $P^\mu_t(a(X,\cdot))$ and $P^\mu_t(b(X,\cdot))$ treated as known functions of $(X,t)$.
Let $X(t;s,x)$ for $t\ge s$ denote the
solution of
\begin{equation}
dX%
=\bar{a}(X,t)\,dt%
+ \bar{b}(X,t) \,dW(t),\qquad
X(s;s,x)=x,\label{eq:mf_sde2}
\end{equation}
for $\bar{a}(X,t)\coloneq P_t^\mu(a(X,\cdot))$ and $\bar{b}(X,t)\coloneq P_t^\mu(b(X,\cdot))$.
Here we fix the initial distribution as a delta measure at
$x$ and keep the same measure $P_t^\mu$ from \cref{eq:mf_sde} for the mean fields.
Note that $\int_{\real} \mean{ \phi( X(t;0,x) ) }\,\mu(dx)=P_t^\mu(\phi)$, so that $P^\mu_t(\phi)=\mu (P_{0,t} (\phi) )$ for $P_{s,t}(\phi)(x)\coloneq \mean{ \phi( X(t;s,x) ) }$. In this notation, we drop the
$\mu$ superscript, even though the non-autonomous SDE depends on $\mu$ via the drift and diffusion.
In the following assumption on the drift and diffusion, the mean-field diffusion $\bar b$ is used to set a non-degeneracy condition.
\begin{assumption}\label[assumption]{ass2}
Suppose that $a,b\in C^4_K(\real^2)$ and, for some $K\ge 1\ge \lambda>0$, that $\bar{b}^2(x,t)\ge
\lambda$ for $x\in\real$ and $t\in[0,1]$.
\end{assumption}
The main theorem for the numerical approximation of mean-field SDEs by GQ1 is the following. The method of selecting the number of Gauss points $m_n$ is modified to approximate the distribution uniformly on the time interval. In this case, $m_n\equiv m$ should be chosen independent of $n$. We choose $m$ as the smallest integer greater than the initial number of points $m_0$ such that $
\log \Gamma(2m+1)\ge M_1(m, \tstep, n+k)$, where $M_1$ is given by either
\begin{equation}\label{eq:MMM}
M^{\text{mf}}_p(m,\tstep, n)%
\coloneq \pp{p+m-\frac32}\,\abs{\log \tstep}%
+\frac {2m-1}2\, \log \abs{\frac {16} \lambda\log \tstep}
\end{equation}
or
\begin{equation}\label{eq:MMM1}
M^{\text{smooth}}_p(m,\tstep, n)%
\coloneq \pp{p+\frac 12}\,\abs{\log \tstep}%
+\frac {2m-1}2\, \log \abs{\frac {16} \lambda\log \tstep}.
\end{equation}
The choice of $M_1$ depends on the regularity of the underlying problem, as described in \cref{thm:main_mf}. The time $t_n$ appears on the right-hand side in neither case, and $m$ is independent of $n$. In the following, the overall work for the time-stepping is dominated by $\sum_{t_n\le 1} m_n^3$ (the work to compute the Gauss quadrature rule at each step). The work to compute the initial measure $Q_0$ is often negligible, for example, if the initial distribution is Gaussian or in other cases where accurate quadrature rules are easily computed.
\begin{theorem}\label{blimey2}
Denote the initial number of points for the rule $Q_0$ by $m_0$. For \cref{eq:MMM},
\[
m%
\le%
\max\Bp{m_0,1+\frac{e^2}{2} \,\sqrt{\frac{16\,\abs{\log{\tstep}} }{\lambda\, \tstep} }}.
\]
If the work to compute $Q_0$ is $\order{\smash{\abs{\log \tstep}^{3/2}/\tstep^{5/2}}}$ and the initial number of points $m_0=\order{\smash{\abs{\log \tstep}^{1/2}/\tstep^{1/2}}}$, then the overall total work is $\order{\smash{\abs{\log \tstep}^{3/2}/\tstep^{5/2}}}$.
For \cref{eq:MMM1}, \[
m%
\le\max\Bp{m_0,1+\frac34 \abs{\log \tstep},1+\frac{e^2}{2} \,\sqrt{\frac{16\,\abs{\log{\tstep}} }{\lambda} }}.
\]
If the work to compute $Q_0$ is $\order{\abs{\log \tstep}^3/\tstep}$ and the initial number of points $m_0=\order{\abs{\log \tstep}}$, then the overall total work is $\order{\abs{\log \tstep}^3/\tstep}$.
\end{theorem}
\begin{proof}
From \cref{eq:MMM},
\[
M_1^{\mathrm{mf}}(m,\tstep,n)\le {m}\, \log \frac{16\,\abs{\log \tstep}} {\lambda\,\tstep}
\]
and
\[
\log \Gamma(2\,m+1) %
\ge 2\,m+m \log \frac{4\,m^2}{e^4}.
\]
If $m\ge (e^2/2) \sqrt{16\,\abs{\log \tstep}/(\lambda\,\tstep)}$, then we have $\log \Gamma(2\,m+1)\ge M_1^{\mathrm{mf}}(m,\tstep,n)$. Similarly, from \cref{eq:MMM1},
\[
M_1^{\mathrm{smooth}}(m,\tstep,n)\le \frac32 \,\abs{\log \tstep}%
+{m}\, \log \frac{16\,\abs{\log \tstep}} {\lambda}.
\] If $m\ge (3/4)\abs{\log \tstep}$ and $m\ge (e^2/2) \sqrt{16\,\abs{\log \tstep}/\lambda}$, then we see $\log \Gamma(2\,m+1)\ge M_1^{\mathrm{smooth}}(m,\tstep,n)$.
The estimate for the total work follows as $\sum_{n=1}^N m^3=m^3/\tstep$.
\end{proof}
In the following, we show upper bounds on the error for smooth and rough problems, where smooth in this case means infinite differentiability, which is much stronger than in \cref{sodemain}. This is because infinite differentiability allows the number of Gauss points $m$ to be reduced to $\order{\abs{\log \tstep}^{1/2}}$ from $\order{(\abs{\log \tstep}/\tstep)^{1/2}}$.
\begin{theorem}\label{thm:main_mf}
Let \cref{ass1,ass:initial,ass2} hold and the number of Gauss points $m$ be given by \cref{eq:MMM}.
For some $c>0$
\[
\max_{t_N\le 1}
\abs{ P^\mu_{t_N}(\phi)%
-Q_N(\phi)}
\le c\,%
\norm{\phi}_{2,\beta}\,%
\tstep\,\abs{\log \tstep}%
,\qquad %
\forall \phi\in F^{2,\beta}.
\]
If in addition to \cref{ass1}, we have $W_{\infty,\beta}(\mu,Q_0)\le c\,\tstep$ and in addition to \cref{ass2}, we have $a,b\in C^\infty_K(\real^2)$, and the number of Gauss points $m$ is given by \cref{eq:MMM1}, then
\[
\max_{t_N\le 1}
\abs{ P^\mu_{t_N}(\phi)%
-Q_N(\phi)}
\le c\,%
\norm{\phi}_{\infty,\beta}\,%
\tstep%
,\qquad %
\forall \phi\in F^{\infty,\beta}.
\]
\end{theorem}
Before the proof, we develop a sequence of lemmas. First, we show that the Euler--Maruyama step depends continuously on the initial measure $\mu$ in terms of the Wasserstein distance.
\begin{lemma}\label[lemma]{lemma:b}
Suppose that $a,b\in C^k_K(\real^{2})$.
There exists $c>0$ such that, for any $x\in\real$,
\begin{equation}\label[ineq]{imp}
\abs{\delta(x)} %
\le%
\begin{cases}
c\,\tstep\, %
\norm{g}_{3,\beta}\, %
(1+\abs{x}^\beta)\,%
W_{k,\beta}(\mu,\nu),& \quad\forall g\in F^{3,\beta},\\[1em]
c\,\tstep\, %
\norm{g}_{2,\beta}\, %
(1+\abs{x}^\beta)\,%
(W_{k,\beta}(\mu,\nu)+1),& \quad\forall g\in F^{2,\beta},
\end{cases}
\end{equation}
where
$
\delta(x)%
\coloneq\mean{g(\Psi(x, \tstep,\mu)) }%
-\mean{g(\Psi(x,\tstep,\nu))}$ and $\Psi$ is defined by \cref{psi}.
\end{lemma}
\begin{proof}
Let $x_{\lambda,\mu}=x%
+\lambda \,\mu (a(x,\cdot))\,\tstep%
+\lambda \,\mu (b(x,\cdot))\,\xi\,\sqrt{\tstep}$ and
\begin{align*}
\phi(\lambda;g)%
&=
g\pp{x_{\lambda,\mu}}-g(x_{\lambda,\nu}).
\end{align*}
Then $\delta=\mean{\phi(1;g)}$ and $\phi(0;g)=0$ and
\begin{align*}
\phi'(\lambda;g)%
&=g'(x_{\lambda,\mu}) \bp{\mu (a(x,\cdot))\,\tstep+ \mu (b(x,\cdot)) \,\sqrt{\tstep}\,\xi}\\
&\qquad-g'(x_{\lambda,\nu})\,\bp{\nu(a(x,\cdot))\,\tstep+ \nu( b(x,\cdot)) \,\sqrt{\tstep}\,\xi}.
\end{align*}
Note that $\mean{\phi'(0;g)}=g'(x) (\mu-\nu) (a(x,\cdot))\tstep$ as $\mean{\xi}=0$.
By Taylor's theorem,
\begin{align*}
\delta&=\mean{\phi(0;g)%
+\phi'(0;g)%
+\int_0^1 \phi''(\lambda;g)\,\lambda\,d\lambda}\\%
&= g^\prime(x) \,\tstep\, %
{(\mu-\nu) (a(x,\cdot)) }%
+\mean{\int_0^1 \phi''(\lambda;g)\, \lambda \,d\lambda}.
\end{align*}
Now,
\begin{align*}
&\abs{\phi''(\lambda;g)}\\%
&\le \abs{g''(x_{\lambda,\mu})} \abs{ \bp{\mu(a(x,\cdot))\tstep+\mu(b(x,\cdot))\sqrt{\tstep}\xi}^2%
-\bp{\nu(a(x,\cdot))\tstep+\nu(b(x,\cdot))\sqrt{\tstep}\xi}^2}\\
&\qquad +\abs{g''(x_{\lambda,\mu})-g''(x_{\lambda,\nu}) } \cdot
\abs{ \nu(a(x,\cdot))\,\tstep+\nu(b(x,\cdot))\,\sqrt{\tstep}\,\xi}^2.
\end{align*}
Hence, as $a,b,\xi$ are all bounded,
\[
\abs{\delta} \le c\,(1+\abs{x}^\beta)\, W_{k,\beta}(\mu,\nu)\,\pp{\norm{g}_{0,\beta} \, \tstep + \norm{g}_{2,\beta}\, \tstep+ \norm{g}_{3,\beta} \,\tstep}.
\]
This now implies the first case in \cref{imp}. The second is similar.
\end{proof}
\begin{lemma} \label[lemma]{bbar}%
Let \cref{ass1,ass2} hold. If $a,b\in C^k_K(\real^2)$, then $\bar{a}$ and $\bar{b}$ belong to $C^k_K(\real^2)$.
\end{lemma}
\begin{proof} Under \cref{ass1}, $P_t^\mu$ has a smooth density and $\bar{a}, \bar{b}$ inherit their smoothness from $a$, $b$, and the density. The argument is given in more detail in \cite[page 431]{Antonelli2002-er}.
\end{proof}
\begin{lemma}\label[lemma]{reg_g}
Let \cref{ass1,ass2} hold and $g_{n,N}\coloneq P_{t_n,t_N}\phi$. Then, for non-negative integers $r,k$,
\begin{equation*}
\norm{g_{n,N}}_{k,\beta}%
\le c\, \norm{\phi}_{r,\beta}\,
\frac{1}{(t_N-t_n)^{(k-\min\{k,r\})/2}},\qquad \forall \phi \in F^{r,\beta}.
\end{equation*}
\end{lemma}
\begin{proof} For the autonomous case, see \cite[Lemma 8]{Muller-Gronbach2015-vv}. In this case, the drift and diffusion are non-autonomous. The argument generalises as \cite[Chapter 9, Theorem 7]{Friedman2013-ud} applies also for time-dependent coefficients with the assumptions given.
\end{proof}
The next lemma states a bound on the local truncation error.
\begin{lemma} \label[lemma]{lemma:a}
Let \cref{ass1,ass2} hold.
There exists $c>0$ such that
\[
\abs{P_{t_{n-1},t_n}(\phi)(x)%
- \mean{\Psi (x, \tstep, P_{t_{n-1}}^\mu) } }%
\le \begin{cases}
c \,%
\norm{\phi}_{4,\beta}\,%
(1+\abs{x}^c)\,%
\tstep^{2},&\qquad%
\forall \phi\in F^{4,\beta},\\[1em]
c \,%
\norm{\phi}_{2,\beta}\,%
(1+\abs{x}^c)\,%
\tstep,&\qquad%
\forall \phi\in F^{2,\beta}.
\end{cases}
\]
\end{lemma}
\begin{proof} When $a,b$ are independent of the second argument, this is implied by
\cite[Lemma 3 with $\gamma=1$]{Muller-Gronbach2015-vv}. In our case, the drift is $\bar a(X, t)$ and diffusion $\bar b(X,t)$, which are smooth functions according to \cref{bbar} and their lemma is easily extended.
\end{proof}
\begin{proof}[Proof of \cref{thm:main_mf}]
Define the measure $e_N=P_{t_N}^\mu-Q_N$ and consider $\phi\in F^{2,\beta}$. Let $g_{n,N}\coloneq P_{t_n,t_N}(\phi)$, so that $g_{n,n}=\phi$. Decompose the error $e_N(\phi)$ for $N\ge 1$ as
\begin{equation}\label{err4}
e_{N}(\phi)%
=\sum_{n=1}^N \pp{E_n^{T_1} + E_n^{T_2} + E_n^G} ,
\end{equation}
where $E_n^{T_1}$ represents the error from the Euler--Maruyama discretisation of the non-autonomous system, $E_n^{T_2}$ represents the error from the mean-field, and $E_n^G$ represents the error from \cref{alg} applied to $g_{n,N}$. In detail, let
\begin{align*}
\mathsf{I}%
&\coloneq Q_{n-1}\, ( P_{t_{n-1},t_n}( P_{t_n,t_N}(\phi)) ) %
=\int_\real P_{t_{n-1},t_n}( g_{n,N})(x)\,Q_{n-1}(dx),\\%
\mathsf{II}%
&\coloneq
\int_\real \mean{g_{n,N}(\Psi(x, \tstep, P_{t_{n-1}}^\mu))}\,Q_{n-1}(dx),\\%
\mathsf{III}%
&\coloneq
Q^\pm_{n} ( P_{t_{n}, t_N} (\phi) )
=\int_\real \mean{g_{n,N}(\Psi(x, \tstep, Q_{n-1}))}\,Q_{n-1}(dx),\\
\mathsf{IV}%
&\coloneq
Q_{n} ( P_{t_{n},t_N} (\phi) ),
\end{align*}
where $\mean{\cdot}$ denotes the expectation over $\xi$ in the definition of $\Psi$ (see \cref{psi}). Consider the telescoping sum
\[
e_{N}(\phi)%
=\sum_{n=1}^N \pp{Q_{n-1}(P_{t_{n-1},t_N}(\phi) ) %
- Q_{n} (P_{t_n,t_N} (\phi) ) \strutB }.
\]
We have \cref{err4} for $E_n^{T_1}=\mathsf{I}-\mathsf{II}$, $E_n^{T_2}=\mathsf{II}-\mathsf{III}$, $E_n^G=\mathsf{III}-\mathsf{IV}$. We estimate the three sources of error in turn. We focus on the rough case (i.e., $\phi\in F^{2,\beta}$) and briefly note the differences with the smooth case.
{Local truncation error for non-autonomous SDE:} From \cref{lemma:a}, with $n<N$,
\begin{align*}
\abs{\mathsf{I}-\mathsf{II}}%
&=
\abs{ Q_{n-1} \pp{ P_{t_{n-1},t_n} ( g_{n,N})(x) %
-\mean{g_{n,N}(\Psi(x, \tstep, P^\mu_{t_{n-1}}))} } }\\
&\le c\,\norm{g_{n,N}}_{4,\beta} %
\bp{1+Q_{n-1}(\abs{x}^c)\strutB} %
\tstep^{ 2 }.
\end{align*}
By \cref{mf_mom}, $Q_n(\abs{x}^c)$ is uniformly bounded and, by \cref{reg_g}, $\norm{g_{n,N}}_{4,\beta}$ is bounded by $c\,\norm{\phi}_{2,\beta}/(t_N-t_n)$. Similarly, for $n=N$, $ \abs{\mathsf{I}-\mathsf{II}} \le c \norm{\phi}_{2,\beta}(1+Q_{n-1}(\abs{x}^c)) \tstep$. Hence, $\sum_{n=1}^N \abs{ \mathsf{I}-\mathsf{II} } \le c\, \norm{\phi}_{2,\beta} \,\tstep \,\abs{\log \tstep}$.
In the smooth case, the estimate is the same, without the $(t_N-t_n)$ singularity and hence without the $\log$ term.
{Mean-field error:} From \cref{lemma:b},
\begin{align*}
\abs{ \mathsf{II}-\mathsf{III} }%
&\le
\abs{ Q_{n-1}\pp{%
\mean{ g_{n,N}( \Psi(x,\tstep, P^\mu_{t_{n-1}} ) ) } %
- \mean{ g_{n,N}(\Psi(x,\tstep ,Q_{n-1})) } } }\\
&\le c%
\pp{1+Q_{n-1}(\abs{x}^\beta) }\,%
\tstep\,%
\norm{g_{n,N}}_{3,\beta}\,%
W_{4,\beta}(P^\mu_{t_{n-1}}, Q_{n-1}).
\end{align*}
By \cref{mf_mom}, $Q_n(\abs{x}^\beta)$ is uniformly bounded and, by \cref{reg_g}, $\norm{g_{n,N}}_{3,\beta}$ is bounded by $c\,\norm{\phi}_{2,\beta}/(t_N-t_n)^{1/2}$ for $n=1,\dots,N-1$. Hence, as $W_{4,\beta}\le W_{2,\beta}$ (the supremum defining $W_{4,\beta}$ is over a smaller class of test functions), \[\abs{\mathsf{II}-\mathsf{III} } \le c\, \tstep\, \norm{\phi}_{2,\beta}\,W_{2,\beta}(P^\mu_{t_{n-1}},Q_{n-1})\frac{1}{(t_N-t_n)^{1/2}}.\] For $n=N$, \[
\abs{\mathsf{II}-\mathsf{III} } \le K \,\pp{ 1+ Q_{N-1}(\abs{x}^\beta) }\, \tstep\, %
\norm{\phi}_{2,\beta}\,%
W_{2,\beta} (P^\mu_{t_{N-1}}, Q_{N-1} ) + \norm{\phi}_{2,\beta}\, K\,\tstep.\]
In the smooth case, $\phi\in F^{\infty,\beta}$ and $a,b\in C^{\infty}_K(\real^2)$, so that $\norm{g_{n,N}}_{3,\beta}$ is uniformly bounded and $\abs{\mathsf{II}-\mathsf{III}} \le c\, \tstep\, \norm{\phi}_{\infty,\beta}\,W_{\infty,\beta}(P^\mu_{t_{n-1}},Q_{n-1})$.
{\cref{alg} error:} We consider the case where \cref{alg} is applied at every step $n=1,\dots,N-1$. Then, for each $n$,
\[
\mathsf{III}-\mathsf{IV}%
= Q^\pm_{n} (g_{n,N})-Q_n(g_{n,N}) .%
\]
Here $Q_{n}$ is the measure given by approximating $Q^\pm_{n}$ by \cref{alg} and the associated error is described by \cref{t}. Thus, recalling that
$R=\sqrt{(4/\lambda)\,\abs{\log\tstep}}$,
\[
\abs{\mathsf{III}-\mathsf{IV}}
\le
c\, (2R)^{2 \,m-1}\, \tstep^{1/2}\,%
\frac1 { (2\,m)! }\,%
\norm{g_{n,N}^{ (2\,m) }} _\infty + c\,\norm{\phi}_{0,\beta}\,\tstep^2.
\]
Applying \cref{reg_g},
\begin{align*}
\abs{\mathsf{III}-\mathsf{IV}}
&\le
c\, (2R)^{2\, m-1}\, %
\tstep^{1/2}\,
\frac1 { (2\,m)! }%
\, \norm{\phi}_{2,\beta}\,
\frac{1}{(t_N-t_n)^{m-1}}%
+ c\norm{\phi}_{0,\beta}\tstep^2\\%
&\le
c\, (2R)^{2\, m-1}\, %
\frac1 { (2\,m)! }%
\, \norm{\phi}_{2,\beta}\,
\frac{1}{\tstep^{m-5/2}} \frac1{t_N-t_n}+ c\norm{\phi}_{0,\beta}\tstep^2.
\end{align*}
This is bounded by $c \norm{\phi}_{2,\beta}\tstep^2/(t_N-t_n)$ if
$\log \Gamma(2\,m+1)\ge M_1^{\mathrm{mf}}(m, \tstep, n+k)$ for $M_1^{\mathrm{mf}}$ defined by \cref{eq:MMM}.
In the smooth case, $\abs{\mathsf{III}-\mathsf{IV}}\le c \tstep^2 \norm{\phi}_{\infty,\beta}$ if
$\log \Gamma(2\,m+1)\ge M_1^{\mathrm{smooth}}(m, \tstep, n+k)$ for $M_1^{\mathrm{smooth}}$ defined by \cref{eq:MMM1}.
Sum the three upper bounds to show that
\begin{align*}
\abs{e_N(\phi)}%
&\le c\,\norm{\phi}_{2,\beta}\,\tstep\,\abs{\log \tstep}+c\,\norm{\phi}_{2,\beta}\,
\sum_{n=1}^{N-1}%
{\frac{\tstep}{(t_N-t_n)^{1/2} } W_{2, \beta}(P^\mu_{t_{n-1}},Q_{n-1})}%
\\&\qquad+ c \,\norm{\phi}_{2,\beta}\, \tstep\,W_{2,\beta}(P^\mu_{t_{N-1}}, Q_{N-1}),\qquad t_N\le 1.
\end{align*}
Take the supremum over $\phi \in F^{2,\beta}$,
\begin{align*}
&W_{2,\beta}(P^{\mu}_{t_{N}}, Q_{N})\\%
&\le
c\,\tstep\,\abs{\log \tstep}+%
c\,\sum_{n=1}^{N-1}{%
\frac{\tstep}{(t_N-t_n)^{1/2}}\, W_{2,\beta}(P^\mu_{t_{n-1}},Q_{n-1})} + c \,\tstep\,W_{2,\beta}(P^\mu_{t_{N-1}}, Q_{N-1}).
\end{align*}
By \cref{ass:initial}, $W_{2,\beta}(P^\mu_0, Q_0)=W_{2,\beta}(\mu,Q_0) \le c\,\tstep$.
Gronwall's inequality completes the proof of the rough case.
In the smooth case, similar arguments show that
\begin{align*}
W_{\infty,\beta}(P^{\mu}_{t_{N}}, Q_{N})%
&\le
c\,\tstep+%
c\,\sum_{n=1}^N{%
{\tstep} \,W_{\infty,\beta}(P^\mu_{t_{n-1}},Q_{n-1})}
\end{align*}
and Gronwall's inequality again gives the result.
\end{proof}
Consider \cref{eq:mf_sde_ref}, where a nonlinear dependence on the time-$t$ distribution is allowed via functions $A,B\colon\real\to\real$. Our numerical method generalises by replacing the definition of $\Psi$ in \cref{psi} with
\begin{equation}
\Psi(x,\tstep,Q)%
\coloneq x%
+ \tstep\,A( Q (a(x,\cdot)))%
+\sqrt{\tstep}\, B( Q( b(x,\cdot)))\,\xi.
\end{equation}
Gauss quadrature can be used in the same way with the same choice of $m_n$ and the same estimates apply as long as $A,B$ have regularity consistent with \cref{lemma:b,lemma:a}. This leads to the following convergence and complexity result.
\begin{corollary}
Let \cref{ass1,ass:initial,ass2} hold and $A,B \in C_K^k(\real)$. Let the number of Gauss points $m$ be given by \cref{eq:MMM} and $P^\mu_t$ be the solution of \cref{eq:mf_sde_ref} with initial distribution $\mu$. Then, for some $c>0$,
\[
\max_{t_N\le 1}
\abs{ P^\mu_{t_N}(\phi)%
-Q_N(\phi)}
\le c\,%
\norm{\phi}_{2,\beta}\,%
\tstep\,\abs{\log \tstep}%
,\qquad %
\forall \phi\in F^{2,\beta}.
\]
If $Q_0$ is cheap to compute (see \cref{thm:main_mf}) and $m_0=\order{(\abs{\log(\tstep)}/\tstep)^{1/2}}$, the total work is $\order{\abs{\log \tstep}^{3/2}/\tstep^{5/2}}$.
If in addition to \cref{ass1}, we have $W_{\infty,\beta}(\mu,Q_0)\le c\,\tstep$ and in addition to \cref{ass2}, we have $a,b\in C^\infty_K(\real^2)$ and $A,B\in C^\infty_K(\real)$, and the number of Gauss points $m$ is given by \cref{eq:MMM1}, then
\[
\max_{t_N\le 1}
\abs{ P^\mu_{t_N}(\phi)%
-Q_N(\phi)}
\le c\,%
\norm{\phi}_{\infty,\beta}\,%
\tstep%
,\qquad %
\forall \phi\in F^{\infty,\beta}.
\]
If $Q_0$ is cheap to compute and $m_0=\order{\abs{\log \tstep}}$, the total work is $\order{\abs{\log \tstep}^{3}/\tstep}$.
\end{corollary}
\section{Numerical experiments}\label{num}
We now present a set of numerical experiments exhibiting the behaviour of GQ1 as described in \cref{sec:alg}. We also test two methods that converge with second order.
\textbf{GQ1e} The Richardson or Talay--Tubaro extrapolation involves taking two first-order approximations $P(\tstep)$ and $P(\tstep/2)$ of a quantity $P$, and computing $\hat P\coloneq 2\,P(\tstep/2)-P(\tstep)$. If $P$ has a second-order Taylor expansion in $\tstep$, $\hat{P}$ is a second-order accurate approximation to $P$. When $P$ is generated by GQ1, this is very simple to implement and is included in the experiments. Thus, we define GQ1e to be the quadrature rule $Q$ defined by $2Q^{\tstep/2}-Q^{\tstep}$, where $Q^\tstep$ is the result of applying GQ1 with time step $\tstep$. The resulting quadrature has some negative weights, which can lead to non-physical results for highly oscillatory $\phi$, and the method should be used with caution.
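Since $Q^{\tstep/2}$ and $Q^{\tstep}$ are both discrete measures, GQ1e can be realised as a single signed rule; a minimal sketch, assuming each rule is stored as lists of points and weights (names hypothetical):
\begin{verbatim}
def gq1e_rule(pts_half, wts_half, pts_full, wts_full):
    # Combine Q^{dt/2} and Q^{dt} into the signed rule 2 Q^{dt/2} - Q^{dt};
    # the negated weights are the source of the caveat above.
    pts = list(pts_half) + list(pts_full)
    wts = [2.0 * w for w in wts_half] + [-w for w in wts_full]
    return pts, wts
\end{verbatim}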
\textbf{GQ2}
Suppose that the mean-field SDE has the following structure
\begin{equation}\label{factor}
dX^\mu(t)%
=a(X^\mu(t),P_t^\mu(r)) \,dt%
+ b(X^\mu(t), P_t^\mu(r) ) \,dW(t)
\end{equation}
for given functions $a,b\colon \real\times \real^d\to\real$ and $r\colon \real\to\real^d$. Mean-field SDEs of this type, involving moments of the solution in the coefficient functions or vectors of monomials $r(x)=[x,x^2,\dots,x^d]$, were introduced in \cite{peter2} for example. By working out the second-order Ito--Taylor expansion, the following generalisation of the Euler--Maruyama-based method GQ1, which we name GQ2, can be derived: let $\Delta W= \sqrt{\tstep}\, \xi$ for $\xi$ given by the three-point distribution with $\prob{\xi=0}=2/3$ and $\prob{\xi=\pm \sqrt 3}=1/6$ (i.e., the three-point Gauss--Hermite rule for $\Nrm(0,1)$). For a given measure $Q_n$, define $Q_{n+1}$ as the distribution of $X_{n+1}$ given by
\begin{align*}
X_{n+1}%
&=X+a \,\tstep+ b\,\Delta W%
+\frac12 \partial_1 b\, b\,(\Delta W^2-\tstep)\\
&\qquad+\frac12\pp{\partial_1 a\, b+\nabla b\cdot \mathcal{L} a+\frac 12 \partial_{11} b \,b^2}\, \Delta W \,\tstep\\%
&\qquad+\frac 12 \pp{\nabla a\cdot \mathcal{L}a+\frac12\partial_{11} a\,b^2}\,\tstep ^2
\end{align*}
for
\[
\mathcal{L}a%
\coloneq\bp{a,%
Q_n\pp{\partial_1 r\, { a} +\frac 12 \partial_{11} r\, b^2},\dots,%
Q_n\pp{\partial_d r\,{ a} +\frac 12 \partial_{dd} r\, b^2}
},
\]
where $X\sim Q_n$ (independent of $\xi$) and all functions $a,b$ are evaluated at $(X, Q_n(r))$.
Here, $\partial_i$ and $\partial_{ii}$ denote the first and second derivatives with respect to the $i$th argument, $\nabla a$ denotes the usual gradient in $\real^{d+1}$, and $\cdot$ the $\real^{d+1}$ inner product.
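Although the GQ2 increment is intricate, the induced update of the measure is mechanical: since $Q_{n+1}$ is the law of $X_{n+1}$ with $X\sim Q_n$ and $\xi$ independent, a discrete $Q_n$ branches each weighted point into three. A minimal sketch, where \texttt{step} stands for the Ito--Taylor update above with the coefficients frozen at $(X, Q_n(r))$:
\begin{verbatim}
import math

XI = (-math.sqrt(3.0), 0.0, math.sqrt(3.0))  # three-point rule for xi
PW = (1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0)       # P(xi = -sqrt(3), 0, sqrt(3))

def branch(points, weights, step):
    # each weighted point (x, w) of Q_n spawns three points of Q_{n+1};
    # step(x, xi) is the one-step map defining X_{n+1}
    new_pts, new_wts = [], []
    for x, w in zip(points, weights):
        for xi, pw in zip(XI, PW):
            new_pts.append(step(x, xi))
            new_wts.append(w * pw)
    return new_pts, new_wts
\end{verbatim}
(In practice, \cref{alg} is then applied to control the growth of the support.)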
Though we do not include it, GQ2 is amenable to the same error-analysis techniques as GQ1. We expect second-order convergence in the Wasserstein distance $W_{4,\beta}$, so that test functions require two extra derivatives compared to GQ1. The equation for the number of Gauss points $m_n$ needs to be adjusted by taking $p=2$ in \eqref{eq:MM}, \eqref{eq:MMM}, or \eqref{eq:MMM1} as appropriate.
The total work for a given accuracy $\varepsilon$ is found by replacing $\tstep$ with $\varepsilon^{1/2}$ in \cref{blimey,blimey2} (and increasing the regularity by two for all coefficients). For smooth mean-field equations, the work is $\order{\abs{\log \varepsilon}^{3}\, \varepsilon^{-1/2}}$.
We expect second-order convergence for both of these methods, and the initial distribution $Q_0$ should be chosen with
$W_{4,\beta}(\mu,Q_0)\le c\,\tstep^2$.
The code for running these experiments is available for download
\cite{sdelab}.
\subsection{Geometric Brownian motion}\label{example:gbm}
We consider the ordinary SDE for geometric Brownian motion given by
\[
dX(t)=\alpha\, X(t)\,dt + \sigma\, X(t) \,dW(t),\qquad %
X(0)=x,
\]
for parameters $\alpha,\sigma$ and initial data $x$.
For $\alpha=-1$, $\sigma=0.5$, and $x=1$, the exact value is $\mean{X(1)}=e^{-1}$. We use this as a test case to compare with the multilevel Monte Carlo (MLMC) method, as in \cite[Example 8.49]{book}. The CPU time is compared against the error, averaging over ten runs of MLMC to reduce the variance. The CPU time for the MLMC Matlab implementation (provided in \cite{book}) is scaled to match GQ1 at the first data point. See \cref{gq_gbm}. The errors for the Gauss quadrature methods decay at a much faster rate as the CPU time increases. Theoretically, for a smooth problem like this, the work to achieve accuracy $\varepsilon$ for GQ1 behaves like $\varepsilon^{-1}\abs{\log \varepsilon}^{3}$, for GQ1e and GQ2 like $\varepsilon^{-1/2}\abs{\log \varepsilon}^{3}$, and for MLMC like $\varepsilon^{-2}$. This is observed in the figure. Notice, however, that the linearly growing coefficients do not satisfy our assumptions.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{eg0/mom1_.pdf}
\caption{Geometric Brownian motion: The green line shows GQ1; the blue-dashed line shows GQ1e; the black-dash-dot line shows GQ2; the red-dotted line shows MLMC. The cpu time for MLMC is scaled to match GQ1 at the first data point. Errors are computed relative to the exact value. The yellow lines indicate reference slopes of $-2$, $-1$, and $-1/2$.} \label{gq_gbm}
\end{figure}
\subsection{Generalised Ornstein--Uhlenbeck process}
Consider the following generalisation of the Ornstein--Uhlenbeck SDE to a linear mean-field SDE:
\[
dX(t)=\bp{\alpha \,X(t)+ \beta \,\mean{X(t)}\strutB}\,dt + \sigma \,dW(t),\qquad X(0)=x,
\]
for parameters $\alpha,\beta,\sigma\in\real$ and initial data $x\in\real$.
By using Ito's formula, its first two moments can easily be calculated as
\begin{equation}\label{gbm_exact}
\mean{X(t)}=x\, e^{(\alpha+\beta)\,t},\qquad
\mean{X(t)^2}=x^2\,e^{2\,(\alpha+\beta)\,t } + \frac{\sigma^2}{2\,\alpha} \bp{e^{2\,\alpha\,t}-1}.
\end{equation}
It is used as a test case in \cite{Ricketson2015-xv},
with $\alpha=-1/2$, $\beta=4/5$, $\sigma^2=1/2$, $x=1$. We use these parameters and the results are shown in \cref{eg2}. First-order convergence is observed for the first and second moments for GQ1, and second-order convergence is observed for both GQ1e and GQ2. The work is proportional to $\varepsilon^{-1}$ and $\varepsilon^{-1/2}$, reflecting the estimates (up to $\log$ terms) for smooth problems in \cref{blimey2}.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{eg2/mom1.pdf}
\includegraphics[width=0.45\textwidth]{eg2/mom2.pdf}\\
\includegraphics[width=0.45\textwidth]{eg2/mom1_.pdf}
\includegraphics[width=0.45\textwidth]{eg2/mom2_.pdf}
\caption{Generalised Ornstein--Uhlenbeck SDE: The green line shows GQ1; the blue-dashed line shows GQ1e; the black-dash-dot line shows GQ2. The yellow lines show reference slopes of $1$ and $2$ (top) and $-1/2$ and $-1$ (bottom). The upper left- (resp., right-) hand plot shows the error in computing the mean (resp., second moment). The error is computed using reference values provided by \cref{gbm_exact}. The bottom plots show the cpu time in seconds. } \label{eg2}
\end{figure}
\subsection{Polynomial drift}
The following mean-field Ito SDE
\begin{equation}\label{polydrift}
dX(t)%
=\bp{\alpha \,X(t)+\mean{X(t) }-X(t)\, \mean{X(t)^2}\strutB}\,dt%
+ X(t)\,dW(t),\qquad X(0)=x,
\end{equation}
for a parameter $\alpha\in\real$,
is considered in \cite{peter1}, where the first two moments of
$X(t)$ are shown to satisfy the system of ODEs
\begin{gather}\label{ode}\begin{split}
\frac{d\mean{X}}{dt}%
&=(\alpha+1)\,\mean{X}-\mean{X}\,\mean{X^2}\\
\frac{d\mean{X^2}}{dt}%
&=(2\,\alpha+1)\,\mean{X^2}+2\,\bp{\mean X}^2 - 2\,\bp{\mean {X^2}}^2,
\end{split}\end{gather}
with initial conditions $\mean{X}=x$ and $\mean{X^2}=x^2$.
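The reference moments used below can be generated by integrating \cref{ode} numerically; a minimal sketch in Python (SciPy assumed; the tolerances are illustrative):
\begin{verbatim}
from scipy.integrate import solve_ivp

def moments(t, m, alpha):
    m1, m2 = m
    return [(alpha + 1) * m1 - m1 * m2,
            (2 * alpha + 1) * m2 + 2 * m1**2 - 2 * m2**2]

# initial conditions m1 = x, m2 = x^2; tight tolerances so that the
# ODE error is negligible relative to the time-stepping error
x, alpha = 1.0, 2.0
sol = solve_ivp(moments, (0.0, 1.0), [x, x**2], args=(alpha,),
                rtol=1e-12, atol=1e-12)
\end{verbatim}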
We use this as a test with $\alpha=2$ and $x=1$; results are shown in \cref{eg1}. Again, first-order (GQ1) and second-order (GQ1e and GQ2) convergence is observed for the first and second moments, and the cpu times behave in line with \cref{blimey2}.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{eg1/mom1.pdf}
\includegraphics[width=0.45\textwidth]{eg1/mom2.pdf}\\
\includegraphics[width=0.45\textwidth]{eg1/mom1_.pdf}
\includegraphics[width=0.45\textwidth]{eg1/mom2_.pdf}
\caption{Polynomial drift: As for \cref{eg2} with the mean-field SDE \eqref{polydrift}. The error is computed by using an accurate numerical solution of \cref{ode} as a reference value.} \label{eg1}
\end{figure}
\subsection{Plane rotator} \label{sseg3}
The following is a model for coupled oscillators \cite{Kostur2002-nv} in the presence of noise:
\begin{equation}\label{plane}
dX^\mu(t)=\bp{ K \int_\real\sin(y-X^\mu(t))\,P^\mu_t(dy)-\sin(X^\mu(t))}\,dt + \sqrt{2\, k_B T}\,dW(t),
\end{equation}
for coupling parameter $K>0$, temperature $k_B T$, and initial condition $X^\mu(0)\sim \mu=\Nrm(\mu_0,\sigma_0^2)$.
In this case, we have a Gaussian initial distribution $\mu$, which can be approximated by Gauss--Hermite quadrature. The associated points and weights can be found tabulated or computed via the three-term recursion for the Hermite polynomials. In the implementation, we take the latter strategy and start with $Q_0$ equal to the $40$-point Gauss--Hermite rule.
The variable $X^\mu(t)$ represents an angle. In place of the diameter reduction step in \cref{alg}, we shift each point modulo $2\pi$ into $[0,2\pi)$. Also, we partition $[0,2\pi)$ into ten sub-intervals and apply Gauss quadrature on sub-intervals of width $L=\pi/5$. This significantly improves performance in experiments.
Following \cite{Ricketson2015-xv}, we choose parameter values
$K=1$, $k_B T=1/8$, initial mean $\mu_0=\pi/4$, and variance $\sigma^2_0=3\pi/4$. Results are shown in \cref{eg3b}, which shows errors for $P^\mu_1(\phi)$ for the test functions $\phi(x)=\sin^2(x)$ and $\phi(x)=\sin(x)$. Errors are computed by taking a reference solution given by GQ2.
First-order convergence is observed for GQ1 and second-order convergence is observed for GQ2. The methods work rapidly and the finest solution has 434 quadrature points.
In \cref{cpdf}, we show the pdf and cdf of the initial and final distribution.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{eg3b/mom1.pdf}
\includegraphics[width=0.45\textwidth]{eg3b/mom2.pdf} \includegraphics[width=0.45\textwidth]{eg3b/mom1_.pdf}
\includegraphics[width=0.45\textwidth]{eg3b/mom2_.pdf}
\caption{Plane rotator: error against time step and cpu time for computing $\mean{\phi(X(1)) }$ for $\phi(x)=\sin^2(x)$ (left) and $=\sin(x)$ (right), via GQ1 (green), QG1e (blue dashed), and GQ2 (black dash-dot) methods for \cref{plane}. The yellow lines in the upper plots show slopes of $1$ and $2$, similar to the theoretical rate. The error is computed by taking a well-resolved GQ2 calculation for the reference value. } \label{eg3b}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{eg3b/cdf.pdf}
\includegraphics[width=0.45\textwidth]{eg3b/pdf.pdf}
\caption{Plane rotator: the pdf and cdf for initial distribution $\Nrm(\pi/4,3\pi/4)$. The plots show initial (black) and final (blue) distributions. The pdf is computed by differentiating a spline approximation to the cdf.} \label{cpdf}
\end{figure}
\subsection{Viscous Burgers equation}
Consider the following mean-field SDE for a parameter $\sigma>0$:
\[
dX^\mu(t)=\int_\real \pp{1- H(X^\mu(t)-y)}\,P^\mu_t(dy) \,dt %
+ \sigma\,dW(t),
\]
where $H$ is the Heaviside step function with $H(x)=0$ for $x< 0$ and $=1$ for $x\ge 0$, and an initial distribution $X^\mu(0)$ is prescribed. The drift term here can also be written as $\bar{a}(X,t)=\prob{X^\mu(t) <X}$. Let $X^\mu(t)$ have cumulative distribution function (cdf) $u(t,x)$; then $V(t,x)=1-u(t,x)$ satisfies the viscous Burgers equation
\[
\frac{\partial V}{\partial t}%
=\frac 12 \,\sigma^2 \,\frac{\partial^2 V}{\partial x^2} - V\,\frac{\partial V}{\partial x},\qquad x\in\real.
\]
In general, the solution of the initial-value problem for viscous Burgers equation can be written as the difference of two cdfs defined by initial-value problems for a mean-field SDE \cite{Bossy1997-fs}.
For $X^\mu(0)$ equal to delta measure at zero, the exact cdf is $u(0,x)=H(x)$ and
\begin{equation}\label{burger_exact}
u(t,x)=\frac{\operatorname{erfc} (-x/\sqrt{2 \,\sigma^2\,t})}%
{\operatorname{erfc}(-x/\sqrt{2\,\sigma^2\,t})+\exp((t-2\,x)/(2\,\sigma^2)) \,\bp{2-\operatorname{erfc}((t-x)/\sqrt{2\,\sigma^2\,t})}},
\end{equation}
where $\operatorname{erfc}$ denotes the complementary error function \cite{Bossy1997-fs}. We see in particular that the solution represents a soliton travelling to the right with speed $1/2$.
For the GQ methods, this problem presents two challenges. First, the mean-field term cannot be factored out as in \cref{factor} and $P_t^\mu (H( \cdot-X^\mu(t)))$ must be evaluated by quadrature for each particle representing $X^\mu(t)$. This increases computation time as $m$ quadratures are needed at each step, instead of one. The lack of structure also means GQ2 cannot be used.
Second, the Heaviside function has a jump discontinuity at $x=0$, and this lack of smoothness is evident in experiments.
We therefore introduce the regularised function
\[
1-H(x)%
\approx \frac 12 \operatorname{erfc}(x/\ell),\qquad x\in\real,
\]
for a length scale $\ell>0$.
The equation
\begin{equation}\label{burger_reg}
dX^\mu(t)=\int_\real \frac 12 \operatorname{erfc}\pp{\frac{X^\mu(t)-y}{\ell}}\,P^\mu_t(dy) \,dt + \sigma\,dW(t)
\end{equation}
has smooth bounded coefficients and the behaviour of the GQ algorithms is shown in \cref{conv_burger_good}. The convergence behaviour is broadly in line with the theory for $\phi(x)=x^2$, though GQ1e loses accuracy for small $\tstep$ when $\ell$ is reduced from $\ell=0.1$ to $\ell=0.001$ and the drift more closely resembles the Heaviside function. GQ1 and GQ1e compute the first moment, which gives the centre of the soliton at $x=1/2$, to high accuracy (the error is $10^{-12}$ even for $\tstep=0.05$ and $\ell=0.001$; not shown in the figures). \cref{cdf_burger} shows a comparison of the cdf of GQ1e using $\ell=0.001$ with the exact cdf for $\tstep=3\times 10^{-4}$ with $74$ quadrature points. The two agree with an $L^1(\real)$ error of approximately $10^{-2}$.
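With the discrete measure $Q_n$ stored as points $y_j$ and weights $w_j$, the regularised drift for each particle can be evaluated directly; a minimal sketch (names hypothetical):
\begin{verbatim}
import math

def drift(x, pts, wts, ell):
    # bar a(x) = Q_n( 0.5 * erfc((x - .)/ell) ); one such quadrature is
    # needed per particle, giving the m-fold cost noted above
    return sum(w * 0.5 * math.erfc((x - y) / ell)
               for y, w in zip(pts, wts))
\end{verbatim}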
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{burger1/cdf_cmp.pdf}
\includegraphics[width=0.45\textwidth]{burger1/cdf_cmp_del.pdf}\\
\caption{Burgers equation for $\frac 12\sigma^2=0.1$: Comparison of the exact cdf at $t=1$ given by \cref{burger_exact} and the numerical approximation by GQ1e of \cref{burger_reg} for $\tstep=3\times 10^{-4}$ and $\ell=10^{-3}$ (using 74 quadrature points).} \label{cdf_burger}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{burger1/mom1newnew.pdf}
\includegraphics[width=0.45\textwidth]{burger1/mom1_new_.pdf}\\
\caption{Burgers equation for $\frac 12\sigma^2=0.1$: The error in approximating the second moment of \cref{burger_reg} for $\ell=0.1$ (left) and $\ell=0.001$ (right). The green line marks GQ1 and the blue dashed-line marks GQ1e.} \label{conv_burger_good}
\end{figure}
\section{Conclusion}\label{conc}
We have derived a time-stepping method based on Gauss quadrature for approximating the probability distribution of the solution of mean-field SDEs at a fixed time. The work per time step is dominated by the eigenvalue problem for determining the Gauss quadrature. The total work required depends on the smoothness of the underlying problem and in the best case is $\order{\varepsilon^{-1/p}\,\abs{\log \varepsilon}^{3}}$ operations when the underlying time-stepping method has $p$th order accuracy.
Though very effective for one-dimensional mean-field SDEs, their dependence on Gauss quadrature means the presented methods are difficult to extend to higher dimensions. The available methods for higher dimensions include \cite{Muller-Gronbach2015-vv,Ricketson2015-xv,McMurray2015-lv} and are not as efficient. One-dimensional mean-field SDEs remain an interesting case due to their use in understanding high-dimensional interacting particle systems and the proposed methods are far more efficient than currently available methods.
The drift $a$ and diffusion $b$ in this paper are assumed to be bounded with bounded derivatives, which is unrealistic for many problems (including those in \cref{num} with polynomial $a$ and $b$). Much work is currently being undertaken to extend the numerical analysis of SDEs to non-Lipschitz problems (for example, \cite{Hutzenthaler2014-ma,Hutzenthaler2015-eg}). Some of this will carry over to the Gauss-quadrature methods and mean-field SDEs, though nice properties such as \cref{mf_mom} (boundedness of exponential moments for Euler--Maruyama) no longer hold in general. Some extensions are presented in
\cite{Muller-Gronbach2015-vv}, who also consider bounded coefficients but allow more general regularity conditions on the test functions than presented here. They also provide a non-uniform time-stepping scheme that allows more efficient approximation of less smooth problems.
\bibliographystyle{siamplain}
\section{Introduction}
\thispagestyle{FirstPage}
Recent advances in data processing and sensor development are accelerating the advent of autonomous vehicles. Perceiving the world is an important task for an autonomous vehicle, and using LiDAR sensors is an appealing choice because of the accurate depth information they provide. Research is now moving towards directly extracting semantic information from the raw LiDAR data. Because deep learning and neural networks have shown high potential in learning to extract meaningful information from data across different modalities, processing LiDAR data with neural networks has also become an active research area. Advances in sensor technology and the release of large-scale LiDAR datasets are nourishing the research area even further.
However, the problem of domain shift in LiDAR data is a significant yet prevalent problem that needs attention from the research community. Domain shift is the problem in which a trained model fails to generalize because the data distribution changes. For natural images, one common situation is that a model trained only on sunny-weather images may fail to generalize to cloudy-weather images. Regarding LiDAR data, differences between LiDAR sensor specs and even subtleties such as sensor displacements can result in data distribution differences. The problem needs to be addressed because, on the one hand, every time a new data sample is obtained, tremendous labeling cost and effort are required. Each LiDAR scan contains tens of thousands of points that need to be labeled. Although sensor fusion with other modalities such as camera or radar can mitigate the difficulty of the labeling process, LiDAR labeling requires supervision from skilled human resources.
On the other hand, the problem needs attention because, due to the severity of domain shift across datasets, the general benefits of the data-driven nature of deep learning do not hold. For example, in tasks that process 2D natural images, it is common to use models that are pre-trained on the ImageNet~\cite{ILSVRC15} dataset. This is because training models with as many and as diverse images as possible contributes to the increased generalization capability and task performance of the model. However, due to the severity of domain shift, such desiderata cannot be expected in LiDAR data. For these reasons, addressing LiDAR domain shift, and furthermore generalization, is a research direction of significant importance. Among such research attempts, unsupervised domain adaptation (UDA) is a research area that, given a labeled source dataset and an unlabeled target dataset from different domains, tries to correctly guess the target labels with access to the source labels only.
Recently, \cite{yi2021complete} came up with a multi-stage method called Complete \& Label that directly tackles UDA between LiDAR datasets. They interpreted the domain shift as resulting from different sampling patterns of the 3D world, and first trained a voxel-based completion network to reconstruct 3D surfaces from an input LiDAR scan. The reconstructed representation is called the canonical domain, implying a dense representation regardless of the sampling pattern. Next, an additional network was trained to segment the scan from the canonical domain.
Although Complete \& Label is a promising baseline in terms of performance, the expandability of the method is limited. Voxel-based methods generally suffer from large memory consumption and long inference time, meaning that they will be difficult to use in real-time applications requiring inference times of less than 100 ms per scan, or in conjunction with other downstream tasks such as Semantic SLAM~\cite{chen2019suma++}. Range image-based methods, which project the LiDAR scans on 2D images, satisfy such conditions but suffer from a low upper bound of performance. This necessitates the development of a method that is comparable in performance but light and fast enough to be used in realistic applications.
In this paper, we propose an effective and efficient method that solves the UDA problem in the LiDAR semantic segmentation task. Because our method is based on 2D range images, it is free from the said complexity issues. Moreover, its performance bound improves largely upon contemporary 2D methods, thanks to our method design. Our method utilizes source data prototypes to pseudo label target pixels, and reduces the domain gap by reducing the difference between source prototypes and pseudo labeled target features. The source prototypes are enhanced with the additional use of encoder features and moving averages, which are design choices tailored to the nature of LiDAR segmentation and UDA. Target pseudo labeling during training is done selectively, by only trusting a small portion of target pixels based on their distance to prototypes, a portion that increases during training. Unlike natural image segmentation, we do not have access to a pre-trained feature extraction model, which decreases the feasibility of prototypical approaches. To overcome this problem, our overall framework adopts a two-stage training strategy that starts with a pre-training step that trains the model with a label-agnostic reconstruction objective, followed by a joint training step utilizing source labels. Benchmark performance and ablation studies support the validity of our model design. Our method exhibits remarkable performance among contemporary 2D and 3D methods.
\section{Related Work}
\subsection{Unsupervised domain adaptation methods}
In the computer vision community, many UDA methods have been developed, especially for the image classification task. \cite{ganin2015unsupervised} trained a network so that it cannot discriminate between the different domains. \cite{long2015learning, kang2019contrastive, zhang2019bridging} defined measures that quantify the divergence between the domains, and trained their networks to reduce these measures. \cite{sun2016deep, morerio2018minimalentropy} tried to reduce the difference of second-order statistics between domains.
In recent years, prototypical methods~\cite{pan2019transferrable, zhang2021prototypical} have emerged, showing impressive performance. The known source labels are used to obtain class prototypes, which are in turn used for finding reliable target pseudo labels. Compared to former prototypical UDA methods, our method tries to tackle an extreme problem in which (i) due to the domain shift and the lack of a pre-trained feature extractor, prototypical pseudo labels are less accurate, (ii) the dataset size limits the use of calculating the source prototypes over the whole dataset, and (iii) the representation size does not allow the use of large-dimensional prototypes. In the forthcoming sections we explain and validate how each of these problems is solved.
\subsection{Unsupervised domain adaptation methods for LiDAR semantic segmentation}
In this section, we introduce UDA methods for LiDAR semantic segmentation. \cite{Wu2019SqueezeSegV2IM} addressed UDA from simulated to real datasets. They proposed a new architecture and an extra network that learns to render the LiDAR intensity values for the simulated dataset. They also used geodesic correlation alignment~\cite{morerio2018minimalentropy} to reduce the domain gap. \cite{jaritz2020xmuda} used both natural images and voxelized point clouds as input, and performed cross-modal training. They experimented on diverse realistic scenarios, including day-to-night, cross-country and cross-dataset scenarios. \cite{langer2020transfer} proposed a method that generates a semi-synthetic dataset, using the method from \cite{morerio2018minimalentropy} mentioned above. \cite{jiang2020lidarnet} used a multi-branch network to train the network by discerning domain-wise private and shared features. Recently, Complete \& Label~\cite{yi2021complete} proposed a new benchmark for LiDAR UDA and a completion-based method to deal with the different sampling patterns of different sensors. \cite{rochan2021unsupervised} came up with a range image-based method that tackles part of the scenario proposed by \cite{yi2021complete} with the help of self-supervised auxiliary tasks. Our method outperforms \cite{yi2021complete, rochan2021unsupervised} while remaining simple and operating at a runtime suited for real-time applications.
\section{Method}
We propose an effective and efficient 2D projection-based method for solving the UDA problem in LiDAR semantic segmentation. Our method can be summarized as reducing the domain gap by reducing the difference between known source prototypes and pseudo labeled target features. First, the preliminaries on the data preparation, our principle for data processing and pre-training step are introduced. Next, we explain how to obtain and update the enhanced version of our prototypes. Lastly, we explain the pixel-wise pseudo labeling process on the target samples, filtering of pseudo labels, and confidence-based weighting criterion.
\subsection{Range Data and the Source First Principle}
We first provide preliminary explanations for converting a 3D LiDAR scan to a 2D range image. Given an input LiDAR scan containing points $p = (x, y, z)$, the points are converted to a 2D range image using the following projection formula:
\begin{align}
\left(\begin{array}{c}
u \\
v
\end{array}\right)=\left(\begin{array}{c}
\frac{1}{2}\left[1-\arctan (y, x) \pi^{-1}\right] w \\
{\left[1-\left(\arcsin \left(z r^{-1}\right)+f_{\mathrm{up}}\right) \mathrm{f}^{-1}\right] h}
\end{array}\right)
\end{align}
\noindent where $u$, $v$ are the image 2D coordinates and $w$, $h$ correspond to the desired width and height resolution of the range image. $h$ is usually selected as the number of LiDAR beams, and $w$ is usually selected in relation to the angular resolution of the sensor specs. Commonly used $(h, w)$ values are $(32, 1024)$ and $(64, 2048)$, and larger image resolutions are usually preferred because of better performance. Note that we only used the point coordinates, and did not use any additional features such as intensity or RGB values.
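A minimal sketch of this projection in Python (NumPy assumed; the vertical field-of-view parameters $f_{\mathrm{up}}$ and $f$ are sensor-specific, and the rounding to pixel indices is an implementation detail):
\begin{verbatim}
import numpy as np

def project(points, h, w, f_up, f):
    # points: (N, 3) array of (x, y, z); f_up and f are the vertical
    # field-of-view parameters of (1), in radians
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    u = 0.5 * (1.0 - np.arctan2(y, x) / np.pi) * w
    v = (1.0 - (np.arcsin(z / r) + f_up) / f) * h
    return np.floor(u).astype(int), np.floor(v).astype(int)
\end{verbatim}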
\begin{figure*}[!t]
\centering
\includegraphics[width=1\textwidth]{figure/3d_method_final3.pdf}
\caption{Overall outline of our method. First, we start with a task-agnostic reconstruction pre-training using the range images from both domains. Next, we use the pre-trained model and source labels to extract source prototypes. Then we utilize the prototypes to pseudo label target pixels, and selectively train a small portion of the target pixel features to become similar to the prototypes corresponding to their pseudo labels. Meanwhile, a task specific layer is directly trained using the source labels. Because the target features are aligned with source prototypes, the difference between domains is reduced. After training has finished, the target data can be segmented using the same task specific layer.}
\label{fig:method}
\vspace{-2.5mm}
\end{figure*}
However, one drawback of 2D range projection is that, if $h$ is larger than the number of input LiDAR beams, empty pixels forming a striped pattern appear in the converted range image. This can be problematic in the LiDAR UDA setup because different LiDAR datasets are taken with different sensors, and if they are unified with a single, incompatible $(h, w)$, unwanted byproducts are guaranteed to appear. Previous work has attempted to circumvent such unwanted byproducts via upsampling or hole-filling algorithms, but such rule-based strategies risk contaminating the data.
\begin{table}[t]
\centering
\renewcommand{\arraystretch}{0.95}
\caption{Details on the Source First Principle. K stands for SemanticKITTI \cite{behley2019semantickitti} and N stands for nuScenes-lidarseg \cite{caesar2020nuscenes}}
\resizebox{\columnwidth}{!}{
\begin{tabular}{c|c|c|c}
\toprule
\multirow{2}{*}{Scenario} & \multirow{2}{*}{\shortstack{Best Input Setup\\ for Source Domain}} & \multirow{2}{*}{Target Processing} & \multirow{2}{*}{Network Channel} \\
& & & \\
\midrule
\midrule
\multirow{2}{*}{K2N} & \multirow{2}{*}{(64, 2048)} & \multirow{2}{*}{\shortstack{Upsampling\\ \& Pooling}} & \multirow{2}{*}{32} \\
& & & \\
\cmidrule{1-4}
\multirow{2}{*}{N2K} & \multirow{2}{*}{(32, 1024)} & \multirow{2}{*}{\shortstack{Project with \\ $(32, 1024)$}} & \multirow{2}{*}{128} \\
& & & \\
\bottomrule
\end{tabular}
}
\label{tab:SFprinciple}
\vspace{-2.5mm}
\end{table}
To minimize such risks as much as possible, we employ a simple strategy named \textbf{the Source First Principle}. As the name implies, we prioritize the source data since it is the only representation in our setup that can be fully trusted. Therefore, we choose the $(h, w)$ configuration that best suits the source domain sensor specs, and upsample or re-project the target data to match that configuration. Details can be found in Table 1, in which we show the applied principle for two major LiDAR datasets, SemanticKITTI \cite{behley2019semantickitti} and nuScenes-lidarseg \cite{caesar2020nuscenes}. In the K2N scenario, we perform a pre-processing step based on nearest neighbor upsampling to upsample the target data representation from $(32, 1024)$ to $(64, 2048)$. During inference on the target data, we perform average pooling and reduce the resolution back to $(32, 1024)$. In the N2K scenario, the procedure is less complicated, as projecting the SemanticKITTI data with (1) at a resolution of $(32, 1024)$ is sufficient. Because the volume of data being processed is different for each scenario, we correspondingly increase the model channel dimension to match the amount of information that is processed in every scenario.
Based on the obtained data representation, we explain the model architecture and pre-training process that is crucial for prototypical learning. Prototypical self-training in natural image segmentation owes much to ImageNet pre-training. Because pre-training equips the model with basic feature extraction capabilities, prototypes conditioned on source labels can act as anchors to discern reliable pseudo labels from the target domain. However, due to the domain shift in LiDAR data, pre-training and prototypical methods altogether have not been attempted in the LiDAR setup. Surprisingly, we found that a simple task-agnostic pre-training step empowers the model enough that cross-domain prototypes can be aligned. For a given number of epochs, we train a model with the following auto-encoder objective:
\begin{align}
\mathcal{L_{\mathrm{rec}}}=\mathbb{E}_{x\sim{R_s}\cup{R_t}}{\|{x} - {F}\left(x\right)\|^2}
\end{align}
where $x$ denotes the sampled range image from the source and target domain range image set, $R_s$ and $R_t$. $F$ denotes a task-agnostic neural autoencoder. Throughout our method, we use the SalsaNext \cite{cortinhal2021salsanext} architecture, which has shown good performance on range image-based LiDAR segmentation. $F$ is obtained by removing the task-specific fully-connected layers from SalsaNext, and the default channel of the architecture is set according to Table 1.
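A minimal sketch of one pre-training step, assuming a PyTorch implementation (names hypothetical):
\begin{verbatim}
import torch.nn.functional as tf

def pretrain_step(model, optimizer, batch):
    # batch: range images sampled from R_s and R_t alike; model is the
    # task-agnostic autoencoder F (SalsaNext without task-specific layers)
    optimizer.zero_grad()
    loss = tf.mse_loss(model(batch), batch)  # the reconstruction loss (2)
    loss.backward()
    optimizer.step()
    return loss.item()
\end{verbatim}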
\subsection{Enhanced and Averaged Prototypes}
We aggregate the source data labels and features to obtain source prototypes, which can be used as anchors for target feature alignment. Similar to concurrent work~\cite{pan2019transferrable, zhang2021prototypical}, the source prototypes $\mu_{c}^{s}$ are calculated by the following equation:
\begin{align}
\mu_{c}^{s} = \sum_{n} \sum_{h} \sum_{w} \mathds{1}_{\mathbf{y}_{n,h,w}^{s} = c}
\frac{F(\mathbf{x}_{n,h,w}^{s} )}
{\| F(\mathbf{x}_{n,h,w}^{s} ) \|}
\end{align}
where $\mathbf{x}$ is a range image, the superscript denotes the domain it is sampled from, and the three subscripts are the sample-wise index over the dataset, and the row and column indices, respectively. $c \in \{0, \ldots, C\}$ denotes the class index and $\mathds{1}$ is an indicator variable denoting $1$ if the subscript holds and $0$ otherwise. This allows us to obtain target domain prototypical pseudo labels $\hat{\mathbf{y}}_{n,h,w}^{t, pr}$ using:
\begin{align}
& sim(F(\mathbf{x}_{n,h,w}^{t} ), \mu_{c}^{s}) = \frac{\langle F(\mathbf{x}_{n,h,w}^{t} ), \mu_{c}^{s} \rangle}
{\| F(\mathbf{x}_{n,h,w}^{t} ) \|\| \mu_{c}^{s} \|}\\
& \hat{\mathbf{y}}_{n,h,w}^{t, pr} =
\arg\max _{c} sim(F(\mathbf{x}_{n,h,w}^{t} ), \mu_{c}^{s})
\end{align}
where $sim$ is the abbreviation for similarity, $pr$ is the abbreviation of prototype and $n$ denotes the sample-wise index over the dataset. Class $0$ corresponds to the ignored class, usually the background and the pixels with no points projected. Note that we do not ignore class $0$ in either the target or the source data, which is different from former work. This is because, if the class $0$ representations and prototypes are not taken into account, the target data class $0$ representations all collapse to the majority class. In the K2N scenario, this would mean that the background and noise pixels of nuScenes, even the easiest ones, would all be classified as the source data majority class, in this case, vegetation. This is a catastrophic situation that we naturally want to avoid.
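A minimal sketch of the prototype computation (3) and prototypical pseudo labeling (4)--(5), assuming a PyTorch implementation (names hypothetical):
\begin{verbatim}
import torch
import torch.nn.functional as tf

def prototypes(feats, labels, num_classes):
    # feats: (B, C, H, W) features F(x); labels: (B, H, W) source labels;
    # class 0 is deliberately kept, as argued above
    f = tf.normalize(feats, dim=1)
    f = f.permute(0, 2, 3, 1).reshape(-1, feats.shape[1])
    mu = torch.zeros(num_classes, feats.shape[1], device=feats.device)
    mu.index_add_(0, labels.reshape(-1), f)  # per-class sum of unit vectors
    return mu

def pseudo_label(feats, mu):
    f = tf.normalize(feats, dim=1)                # (B, C, H, W)
    p = tf.normalize(mu, dim=1)                   # (K, C)
    sim = torch.einsum("bchw,kc->bkhw", f, p)     # cosine similarity (4)
    return sim.max(dim=1)                         # max similarity, label (5)
\end{verbatim}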
Directly adapting the above formulation of prototypes for the segmentation task entails difficulties in the LiDAR segmentation setup. Due to the large size of the input, our model capacity is limited, and the model thus has a low channel dimension. Since our prototypes are obtained by averaging over the row and column indices $h$ and $w$, their size equals the channel dimension, which means that our prototypes are generally small and their representative power is generally weak.
To compensate for this weakness, we pay attention to the general architectural structure of segmentation networks. Such a network usually downsamples and then upsamples a given input. This means that a corresponding feature map $F'(\mathbf{x}_{n,h,w}^{s})$, whose resolution equals that of the final feature map $F(\mathbf{x}_{n,h,w}^{s})$ from which we average and obtain the prototypes, can be extracted from the encoder part of the network. Plugging that feature map into (3), we obtain enhanced prototypes $\widehat{\mu}_{c}^{s}$:
\begin{align}
& \mu_{c}^{'s} = \sum_{n} \sum_{h} \sum_{w} \mathds{1}_{\mathbf{y}_{n,h,w}^{s} = c}
\frac{F'(\mathbf{x}_{n,h,w}^{s} )}
{\| F'(\mathbf{x}_{n,h,w}^{s} ) \|}\\
& \widehat{\mu}_{c}^{s} = [\mu_{c}^{'s};\mu_{c}^{s}]
\end{align}
whose representative power is increased with the help of encoder features. By $[\,\cdot\,;\,\cdot\,]$ we denote concatenation along the channel dimension.
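Under the same assumptions, the enhanced prototypes of (6)--(7) reuse the \texttt{source\_prototypes} sketch above on the encoder feature map and concatenate the two results:
\begin{verbatim}
import torch

# Sketch of Eqs. (6)-(7): prototypes from the encoder feature map F'(x)
# (same resolution as F(x)) are concatenated channel-wise with the
# decoder-feature prototypes, reusing `source_prototypes` from above.
def enhanced_prototypes(enc_feats, dec_feats, labels, num_classes):
    mu_enc = source_prototypes(enc_feats, labels, num_classes)  # mu'^s_c
    mu_dec = source_prototypes(dec_feats, labels, num_classes)  # mu^s_c
    return torch.cat([mu_enc, mu_dec], dim=1)                   # [mu'; mu]
\end{verbatim}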
An additional difficulty is that LiDAR datasets are usually large, and it is expensive to iterate over the whole dataset to obtain class prototypes. An alternative approach would be to sample source and target data at every iteration, obtain the source prototypes on the fly, and obtain the target pseudo labels from those prototypes. However, this approach still fails, as there is no guarantee that the source classes will appear in the target scene, and vice versa. Therefore, we propose to keep exponential moving averages of the prototypes of every class using:
\begin{align}
\begin{split}
\widehat{\mu}_{c}^{s,current} = \sum_{n_{B}} &\sum_{h} \sum_{w} \mathds{1}_{\mathbf{y}_{n_{B},h,w}^{s}=c}\\ &\left[\frac{F'(\mathbf{x}_{n_{B},h,w}^{s} )}
{\| F'(\mathbf{x}_{n_{B},h,w}^{s} ) \|} ; \frac{F(\mathbf{x}_{n_{B},h,w}^{s} )}
{\| F(\mathbf{x}_{n_{B},h,w}^{s} ) \|}\right]
\end{split}\\
\widehat{\mu}_{c}^{s,i} \leftarrow \alpha\widehat{\mu}_{c}^{s,i-1}& + \left(1-\alpha\right)\widehat{\mu}_{c}^{s,current}
\end{align}
where $i$ denotes the training iteration index, and $\widehat{\mu}_{c}^{s,current}$ is calculated by (8) over the sampled source batch, indexed by $n_{B}$. Note that we keep moving averages of the enhanced prototypes. During training, we plug $\widehat{\mu}_{c}^{s,i}$ into (4) and (5) to obtain the target pseudo labels.
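The update (9) amounts to keeping a persistent prototype buffer; a sketch follows, where the guard against classes absent from the current batch is an assumption on our part rather than something spelled out in (9):
\begin{verbatim}
# Sketch of Eq. (9): exponential moving average over per-batch enhanced
# prototypes, with alpha = 0.99 in our experiments.
def update_ema_prototypes(ema, current, alpha=0.99):
    # Rows of `current` for classes absent from the batch are all-zero;
    # we only update rows whose class actually appears (an assumption).
    present = current.norm(dim=1) > 0
    ema[present] = alpha * ema[present] + (1 - alpha) * current[present]
    return ema
\end{verbatim}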
\subsection{Pseudo Label Filtering and Confidence Weighting}
The lack of a pre-trained feature extractor and the severity of the domain shift are factors that degrade the accuracy of prototypical pseudo labeling. Although we have included a pre-training step to mitigate these shortcomings, the quality of the features it provides will still be inadequate compared to the ImageNet pre-trained features used in natural images. This necessitates a more cautious strategy in deploying the pseudo labels for network training, and we propose to filter pseudo labels based on their similarity to prototypes.
For every pixel in the sampled target batch data, we calculate the maximum class similarity using (4). Our network is trained only on the filtered pixels within the top $p_{pc}$ percentile of the class-wise maximum similarities, where $p_{pc}$ is a hyperparameter obtained by multiplying the epoch index by a pre-defined per-epoch portion increment $p_{inc}$. Setting $\tau_{c}$ as the top $p_{pc}$ percentile value of similarities for class $c$, we define a filtering indicator as:
\begin{align}
M_{n,h,w}^{t} =
\begin{cases}
1 & \text{if } \max _{c} sim(F(\mathbf{x}_{n,h,w}^{t} ), \mu_{c}^{s}) > \tau_{c}\\
0 & \text{otherwise}
\end{cases}
\end{align}
Since $\tau_{c}$ is a value ranging from $0$ to $1$, it can also be thought of as a class-wise confidence value for the current predictions. As we do not have accurate information on the target domain, it is best to focus first on the accurate classes, and then progressively train on the harder classes. Therefore, we incorporate the $\tau_{c}$ values into our final loss function as a weighting term.
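A sketch of the filtering rule (10); \texttt{torch.quantile} is used here to obtain the class-wise top-$p_{pc}$ percentile threshold $\tau_{c}$, and all names are illustrative:
\begin{verbatim}
import torch

# Sketch of Eq. (10): per-class percentile filtering of pseudo labels.
# `sim_max`: (B, H, W) max class similarity, `y_hat`: (B, H, W) pseudo
# labels, `p_pc`: kept portion in [0, 1], grown every epoch by p_inc.
def filter_mask(sim_max, y_hat, num_classes, p_pc):
    mask = torch.zeros_like(sim_max, dtype=torch.bool)
    tau = torch.zeros(num_classes, device=sim_max.device)
    for c in range(num_classes):
        sel = y_hat == c
        if sel.any():
            # tau_c: the top-p_pc percentile similarity for class c
            tau[c] = torch.quantile(sim_max[sel].float(), 1.0 - p_pc)
            mask |= sel & (sim_max > tau[c])
    return mask, tau   # tau_c is reused as a class confidence weight
\end{verbatim}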
\subsection{Mask Deactivation and Background Down-weighting}
The datasets used in our adaptation scenarios do not contain the same set of classes, and we therefore have to ignore the non-overlapping classes. We map such classes to class $0$, but then class $0$ becomes a mixture of the classes to ignore, background, noise, and even empty pixels onto which no points are projected. Under such circumstances, ignoring class $0$ during training is an unnatural design choice, as we pointed out in Section B. However, because class $0$ is the majority class, the training dynamics are expected to deteriorate, as it becomes difficult to extract semantic representations from the majority class. To mitigate these issues we propose two solutions: mask deactivation and background down-weighting.
Originally, \cite{milioto2019rangenet} generated range images using (1) and applied a normalization step based on mean subtraction and standard-deviation division. Thereafter, the images were multiplied by a binary mask, which contains $1$ if any point is projected to the corresponding pixel and $0$ if no point is projected. If the original range image before normalization contained pixels with no projection, their values would have become negative due to the mean subtraction, and would thus have stood out against the surrounding range values. The multiplication by the mask can therefore be thought of as dampening the apparent, easy-to-recognize regions of class $0$. To let the model recognize apparent cases of class $0$ with ease, we opt to deactivate the multiplication of binary masks.
In general, class $0$ pixels are easy to learn, and the above mask deactivation makes them even easier. When we obtain class prototypes and target pseudo labels, class $0$ is the most accurate and dominant one, which makes the model focus excessively on class $0$ during training. To counteract this, we enforce a stricter condition on class $0$ by decreasing its filtering percentile value $p_{pc}$. Specifically, we use a $p_{0}$ percentile value for class $0$, obtained by multiplying $p_{pc}$ by a down-weighting term $p_{dw}$.
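The two class-$0$ measures then amount to a few lines (a sketch; the variable names are ours):
\begin{verbatim}
# Sketch of the class-0 handling. Mask deactivation: skip the usual
# multiplication by the binary projection mask after normalization,
#     x = (x - mean) / std      # no `x = x * mask` afterwards.
# Background down-weighting: class 0 keeps a smaller portion of pixels.
def class_portions(epoch, num_classes, p_inc=0.01, p_dw=0.1):
    p_pc = epoch * p_inc           # shared kept portion, classes >= 1
    portions = [p_pc] * num_classes
    portions[0] = p_pc * p_dw      # stricter portion p_0 for class 0
    return portions
\end{verbatim}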
\begin{table}[t!]
\vspace{-4mm}
\begin{algorithm}[H]
\small
\caption{Enhanced Prototypical Learning}
\label{alg:algorithm}
\textbf{Input}: Dataset $\left\{\left(\mathbf{x}_{i,:,:}^{s}, \mathbf{y}_{i,:,:}^{s}\right)\right\}_{i=1}^{n_{s}}$, and $\left\{\left(\mathbf{x}_{i,:,:}^{t}\right)\right\}_{i=1}^{n_{t}}$
\textbf{Parameter}: $max\_epoch, max\_iter, \alpha, \lambda$
\begin{algorithmic}[]
\STATE Let $epoch=1$
\WHILE{$ epoch \leq max\_epoch $}
\STATE Update $p_{pc}$ used in calculating $\tau_{c}$ in (10)
\STATE Let $iter=0$
\WHILE{$ iter < max\_iter $}
\STATE Forward $\mathbf{x}_{i,:,:}^{s}$ and obtain $\widehat{\mu}_{c}^{s,current}$ using $\mathbf{y}_{i,:,:}^{s}$ and (8)
\STATE Use $\widehat{\mu}_{c}^{s,current}$ to obtain $\widehat{\mu}_{c}^{s,iter}$ by updating (9)
\STATE Forward $\mathbf{x}_{i,:,:}^{t}$ and obtain $M_{i,:,:}^{t}$ using $\widehat{\mu}_{c}^{s,iter}$ and (10)
\STATE Train the network with (11), (12)
\STATE $ iter \gets iter + 1$
\ENDWHILE
\STATE $ epoch \gets epoch + 1$
\ENDWHILE
\end{algorithmic}
\textbf{Output}: Domain adapted $F$ and $G$
\end{algorithm}
\vspace{-6mm}
\end{table}
\subsection{Final loss function}
Integrating all the components of our method, the final loss will be as follows:
\begin{align}
&\mathcal{L} = \mathcal{L}_{wce} + \mathcal{L}_{ls} + \lambda \mathcal{L}_{epl}\\
\begin{split}
&\mathcal{L}_{epl} = \sum_{n_{B}} \sum_{h} \sum_{w} - \tau_{\hat{\mathbf{y}}_{n_{B},h,w}^{t, pr}} M_{n_{B},h,w}^{t}\\
&\qquad\qquad\quad\; sim(F(\mathbf{x}_{n_{B},h,w}^{t} ), \mu_{\hat{\mathbf{y}}_{n_{B},h,w}^{t, pr}}^{s})
\end{split}
\end{align}
where $\mathcal{L}_{ls}$ is the Lov\'asz-Softmax loss from \cite{berman2018lovasz, cortinhal2021salsanext}, used to maximize IoU. $\mathcal{L}_{wce}$ is the cross-entropy loss on the source data, whose pixels are weighted by the reciprocal of the frequency of their class labels. $\mathcal{L}_{epl}$ is a loss function that aligns the domains by increasing the similarity between source class prototypes and selectively pseudo-labeled target features. $\lambda$ is a hyperparameter for balancing the loss functions.
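A sketch of $\mathcal{L}_{epl}$ in (12), under the shapes used in the earlier sketches (the sum reduction follows the equation; all names are illustrative):
\begin{verbatim}
import torch

# Sketch of Eq. (12). `sim`: (B, C, H, W) cosine similarities to the
# prototypes, `y_hat`: (B, H, W) pseudo labels, `mask`: (B, H, W) the
# filter from Eq. (10), `tau`: (C,) class-wise confidence weights.
def epl_loss(sim, y_hat, mask, tau):
    # Similarity of each pixel to the prototype of its pseudo label.
    sim_sel = sim.gather(1, y_hat.unsqueeze(1)).squeeze(1)
    weight = tau.to(sim.device)[y_hat]       # tau_{y_hat}, per pixel
    return -(weight * mask.float() * sim_sel).sum()
\end{verbatim}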
\section{Experiments}
We evaluate the performance of our method on the recent adaptation scenarios proposed by \cite{yi2021complete}. The original work validated six adaptation setups across three datasets: SemanticKITTI~\cite{behley2019semantickitti}, nuScenes-lidarseg~\cite{caesar2020nuscenes} and the Waymo Open Dataset~\cite{sun2020waymo}. Among the proposed scenarios, we evaluate our method on the adaptation scenarios between SemanticKITTI and nuScenes-lidarseg, because Waymo does not provide as many classes or dense annotations. Performance is evaluated on the commonly used validation split of each dataset. In tables and experiments, we abbreviate SemanticKITTI as \textbf{K} and nuScenes as \textbf{N}.
\begin{table}[t]
\centering
\caption{Experiment results of UDA on the LiDAR semantic segmentation scenarios proposed by \cite{yi2021complete}.}
\begin{tabular}{c|c|ccc}
\toprule
\multicolumn{2}{c|}{Scenario} & K2N & N2K & mean \\
\midrule
\midrule
\multirow{6}{*}{3D} & Source Only~\cite{yi2021complete} & 27.9 &23.5 & 25.7\\
& FeaDA~\cite{yi2021complete} & 27.2 &21.4 & 24.3\\
& OutDA~\cite{yi2021complete} & 26.5 &22.7 & 24.6\\
& SWD~\cite{yi2021complete} & 27.2 &24.5 & 25.9\\
& 3DGCA~\cite{yi2021complete} & 27.4 &13.4 & 20.4\\
& CnL~\cite{yi2021complete} & 31.6 &33.7 & 32.7\\
\midrule
2D & SQSGV2~\cite{yi2021complete, Wu2019SqueezeSegV2IM} & 10.1 & 23.9 & 17.0\\
\midrule
\midrule
2D & Ours & \textbf{35.8} & \textbf{34.1} & \textbf{35.0}\\
\bottomrule
\end{tabular}
\label{tab:cnluda}
\vspace{-2mm}
\end{table}
\begin{table*}[t]
\footnotesize
\renewcommand{\arraystretch}{0.9}
\centering
\caption{Class-wise experiment results of UDA on the LiDAR semantic segmentation scenarios proposed by \cite{rochan2021unsupervised}. Numbers denote IoU.}
\begin{tabular}{c|c|cccccccccc|c}
\toprule
Method & Scenario & car & bcycl & mcycl & othvhc & pedest & truck & drvbl & sdwlk & trrn & vgtn & mIoU \\
\midrule
\midrule
CORAL+GA \cite{rochan2021unsupervised} & K2N & 51.0 & 0.9 & 6.0 & 4.0 & 25.9 & 29.9 & 82.6 & 27.1 & 27.0 & 55.3 & 31.0 \\
GA \cite{rochan2021unsupervised} & K2N & 54.4 & 3.0 & 1.9 & 7.6 & 27.7 & 15.8 & 82.2 & 29.6 & 34.0 & 57.9 & 31.4 \\
\midrule
\midrule
Ours & K2N &43.7 &1.5 &22.1 &40.6 &17.7 &32.9 &61.6 &29.3 &29.4 &79.4 &\textbf{35.8} \\
\midrule
CORAL+GA \cite{rochan2021unsupervised} & N2K & 47.3 & 10.4 & 6.9 & 5.1 & 10.8 & 0.7 & 24.8 & 13.8 & 31.7 & 58.8 & 21.0 \\
GA \cite{rochan2021unsupervised} & N2K & 49.6 & 4.6 & 6.3 & 2.0 & 12.5 & 1.8 & 25.2 & 25.2 & 42.3 & 43.4 & 21.3 \\
\midrule
\midrule
Ours & N2K &36.4 &9.8 &9.5 &2.3 &4.7 &12.1 &81.7 &55.3 &53.1 &76.0 & \textbf{34.1} \\
\bottomrule
\end{tabular}
\label{tab:gauda}
\vspace{-2mm}
\end{table*}
\begin{table*}[t]
\centering
\footnotesize
\caption{Ablation results of our method.}
\renewcommand{\arraystretch}{0.9}
\begin{tabular}{c|ccccccc|c}
\toprule
\multirow{2}{*}{Scenario} & \multirow{2}{*}{\shortstack{Source\\ First}} & \multirow{2}{*}{\shortstack{Recon\\pre-training}} & \multirow{2}{*}{\shortstack{Enhanced\\Prototypes}} & \multirow{2}{*}{\shortstack{Averaged\\Prototypes}} & \multirow{2}{*}{\shortstack{Deactivate\\Mask}} & \multirow{2}{*}{\shortstack{Confidence\\Weighting}} & \multirow{2}{*}{\shortstack{Background\\Down-weighting}} & \multirow{2}{*}{mIoU} \\
& & & & & & & \\
\midrule
\midrule
K2N & $\checkmark$ & $\cdot$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & \textbf{30.0}\\
K2N & $\checkmark$ & $\checkmark$ & $\cdot$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & \textbf{29.7}\\
K2N & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\cdot$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & 34.4\\
K2N & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\cdot$ & $\checkmark$ & $\checkmark$ & \textbf{27.9}\\
K2N & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\cdot$ & $\checkmark$ & \textbf{30.3}\\
K2N & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\cdot$ & 34.9\\
K2N & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & 35.8\\
\midrule
N2K & $\cdot$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & \textbf{29.3}\\
N2K & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & 34.1\\
\bottomrule
\end{tabular}
\label{tab:ablation}
\vspace{-2mm}
\end{table*}
\subsection{Scenario details}
Since the original paper~\cite{yi2021complete} proposing the scenarios does not provide exact class mappings, we read through the documentation of each dataset and implemented the scenarios ourselves. The scenarios are evaluated on ten classes: \{car, bicycle, motorcycle, other vehicle, pedestrian, truck, drivable surface, sidewalk, terrain, vegetation\}. We elaborate only the class mappings that could be confusing. On the SemanticKITTI dataset, we mapped \{bus, other vehicle\} to \{other vehicle\}, \{road, lane marking\} to \{drivable surface\}, and \{vegetation, trunk\} to \{vegetation\}. The moving classes are mapped in accordance with their non-moving counterparts. On nuScenes, \{adult, child, construction worker, police officer\} were mapped to \{pedestrian\}, and \{bus, construction vehicle, trailer\} were mapped to \{other vehicle\}.
\subsection{Implementation details}
We implemented our method using the PyTorch framework, and four NVIDIA RTX 2080 Ti GPUs were used to train our model. Our model is trained and run for inference in PyTorch's FP16 (half-precision) mode. At every iteration, we sample 4 scans from the source domain and 4 scans from the target domain to train our network. For optimization, we use stochastic gradient descent (SGD) with learning rate 0.01, momentum 0.9, and weight decay 0.0001. For the pre-training stage, we use learning rate warmup, linearly increasing the learning rate from the initial value 0.0001 to 0.01 during the first epoch. After warmup, we use an exponential scheduler, which decays the learning rate every epoch with gamma 0.99. For the joint training stage, we do not use warmup and instead use an inverse scheduler that is widely used in domain adaptation setups \cite{ganin2015unsupervised}. The learning rate is scheduled as $\eta_{p}=\eta_{0}/(1+ap)^{b}$, with $\eta_{0}$ the initial learning rate, $a$ and $b$ hyperparameters set to 10 and 0.75, respectively, and $p$ a value that increases linearly from $0$ to $1$ over the iterations. We pre-train our networks for 50 epochs and perform joint training for 30 epochs. The $\alpha$ in (9) is set to 0.99, the $p_{inc}$ used for calculating $\tau_{c}$ in (10) is set to 0.01, and the $p_{dw}$ used in down-weighting class $0$ is set to 0.1. The $\lambda$ in (11) is set to 1.0, and we did not tune it further.
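The inverse schedule above can be reproduced with a one-line \texttt{LambdaLR} (a sketch; \texttt{make\_inverse\_scheduler} is our name for it):
\begin{verbatim}
from torch.optim.lr_scheduler import LambdaLR

# Sketch of eta_p = eta_0 / (1 + a*p)^b with p = it / max_iters rising
# linearly from 0 to 1 over the joint-training iterations.
def make_inverse_scheduler(optimizer, max_iters, a=10.0, b=0.75):
    return LambdaLR(optimizer,
                    lr_lambda=lambda it: (1.0 + a * it / max_iters) ** (-b))
\end{verbatim}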
Following former projection-based methods \cite{milioto2019rangenet}, we use KNN post-processing, which propagates the 2D predictions to 3D neighbors. Details can be found in \cite{milioto2019rangenet}. The required parameters are $\{S, k, \text{cutoff}, \delta\}$, and we used $\{5, 5, 1.0, 1.0\}$, respectively, following the original setup.
\subsection{Comparison against contemporary methods}
In Table 2 and Table 3 we compare our method against contemporary methods. A direct comparison against the two methods is not exact, as the detailed settings of \cite{yi2021complete} are unknown, and the settings of \cite{rochan2021unsupervised} differ slightly from ours. Still, our method outperforms contemporary methods by a meaningful margin. Regarding \cite{yi2021complete}, the improvement over their method is meaningful because 3D methods usually outperform 2D methods thanks to the richer input representation. Against \cite{rochan2021unsupervised}, the gap on the N2K scenario is noticeable, and it is also meaningful that the performance of our method is balanced across scenarios.
\subsection{Ablation studies on method components}
In Table 4 we show the performance of our model with individual components ablated. The five components whose removal causes a large mIoU decrease are denoted in bold. The N2K experiment without the Source First Principle corresponds to a setup where the nuScenes data have been upsampled via nearest-neighbor interpolation. The large mIoU reduction supports our claim that keeping the source representation as intact as possible is crucial. Mask deactivation, despite its simplicity, shows a large mIoU decrease when ablated. This shows that it is important to mitigate the learning difficulties induced by the inclusion of class $0$. Compared against background down-weighting, mask deactivation is the stronger component for dealing with class $0$. When averaged prototypes are ablated, we obtain the prototypes from the classes in the current source scene only. The small performance decrease of this setup suggests that the performance of our method is mainly maintained by classes that are prevalent in every scene.
\subsection{Runtime evaluation}
We evaluate the inference time of our method on a single RTX 2080 Ti, with CUDA synchronization for accurate time measurement and FP16 activated. On the K2N scenario it operates at 56 FPS (CNN: 16.50 ms, KNN: 1.36 ms per image), and on the N2K scenario at 32 FPS (CNN: 28.16 ms, KNN: 2.63 ms per image), speeds that support real-time applications.
\subsection{Limitations and future directions}
In the scenarios proposed by \cite{yi2021complete} that include the Waymo dataset, it is difficult to produce meaningful results because the number of class $0$ pixels is excessively large. Modifying the model design to attend more to the semantic classes, and discriminating class $0$ pixels more aggressively, could be future research directions. Also, as shown in the ablation experiments, our method focuses on the prevalent classes and could be further improved by attending to the minority classes.
\subsection{Acknowledgements}
This research was supported by the Engineering Research Center Program through the National Research Foundation of Korea (NRF) funded by the Korean Government MSIT (NRF-2018R1A5A1059921) and by the Institute of Information \& communications Technology Planning \& Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2021-0-02068, Artificial Intelligence Innovation Hub).
\section*{Acknowledgments}
Dickerson, McElfresh, and Schumann were supported in part by NSF CAREER Award IIS-1846237, DARPA GARD Award \#HR112020007, DARPA SI3-CMD Award \#S4761, DoD WHS Award \#HQ003420F0035, NIH R01 Award NLM-013039-01, and a Google Faculty Research Award.
We gratefully acknowledge funding support from the NSF (Grants 1844462 and 1844518).
The opinions in this paper are those of the authors and do not necessarily reflect the opinions of any funding sponsor or the United States Government.
\subsection{Decision Scenarios} \label{app:scenario_analysis}
For Study-1{} we designed three decision-making scenarios to test whether the perceived importance or realism of a particular scenario influenced comprehension score. They are as follows:
\begin{itemize} \itemsep=0cm
\item \textbf{Art Project (AP):} distributing awards for art projects to primary school students,
\item \textbf{Employee Awards (EA):} distributing employee awards at a sales company, and
\item \textbf{Hiring (HR):} distributing job offers to applicants.
\end{itemize}
In each scenario the students/employees/applicants are partitioned into two groups (parents' occupation for the first scenario, and binary gender for the other two scenarios).
We use a between-subjects design: participants are randomly partitioned into three conditions, one for each scenario (AP, EA, or HR).
For each condition we define the \emph{fairness rule} in the context of the decision-making scenario (see Appendix~\ref{app:survey} for the full surveys).
Next we describe our main conclusion related to the different decision-making scenarios in Study-1: the scenario does not influence comprehension score.
\subsubsection{Scenario does not Influence Comprehension Scores (RQ4)} \label{results:rq4}
We were concerned that less important and/or realistic scenarios would cause participants to take the survey less seriously, and therefore perform more poorly.
To test this,
participants were randomly assigned to a scenario, resulting in the following distribution: AP = 41, EA = 49, HR = 57.
A K-W test revealed no differences between scenarios in terms of comprehension score (mean comprehension scores: AP = 6.0, EA = 6.74, HR = 5.86%
). However, differences did exist between scenarios in terms of importance (assessed in Q2), measured in hours of effort deemed necessary to make the relevant decision (K-W, %
$p<0.001$). Post-hoc M-WU revealed that participants believed making a decision in the AP scenario merited fewer hours of effort (mean = 3.15hrs) than in the EA (13.52hrs, $p<0.001$)
or HR (15.23hrs, $p<0.001$)
scenarios (corrected $\alpha=0.05/3=0.017$). See Fig. \ref{fig:q2} for distributions of responses.
\begin{figure}[ht]
\centering
\includegraphics[width=0.8\columnwidth]{fig/q2.png}
\vspace{-10pt}
\caption{Importance of a scenario by proxy of hours of effort necessary to make a decision in each scenario. AP merited fewer hours of effort than both EA and HR.}
\label{fig:q2}
\end{figure}
Of note, it is possible that perceived realism, assessed in Q1 on a five-point Likert scale, was also influenced by scenario (K-W, $p=0.051$), but we may need larger sample sizes to confirm this. Regardless, while the nature of a scenario does influence participant perception in terms of importance and (possibly) realism, it does not appear to influence comprehension (at least for the scenarios we chose). For this reason, we chose to test a single scenario (HR) in Study-2{}. %
\section{Introduction}
Research into algorithmic fairness has grown in both importance and volume over the past few years, driven in part by the emergence of a grassroots Fairness, Accountability, Transparency, and Ethics (FATE) in Machine Learning (ML) community. Different metrics and approaches to algorithmic fairness have been proposed, many of which are based on prior legal and philosophical concepts, such as disparate impact and disparate treatment~\cite{feldman2015certifying,chouldechova2017fair,binns2017fairness}. However, definitions of ML fairness do not always fit well within pre-existing legal and moral frameworks. The rapid expansion of this field makes it difficult for professionals to keep up, let alone the general public.
Furthermore, misinformation about notions of fairness can have significant legal implications.\footnote{\url{https://www.cato.org/blog/misleading-veritas-accusation-google-bias-could-result-bad-law}}
Computer scientists have largely focused on developing mathematical notions of fairness and incorporating them into ML systems. A much smaller collection of studies has measured public perception of bias and (un)fairness in algorithmic decision-making.
\newdcm{
However, as both the academic community and society in general continue to discuss issues of ML fairness, it remains unclear whether non-experts--who will be \emph{impacted} by ML-guided decisions--understand various mathematical definitions of fairness sufficiently to provide opinions and critiques.
We emphasize that these technologies are likely to have greater impact on marginalized populations, and those with lower levels of education, as in the case of hiring and criminal justice~\cite{barocas2016big,frey2017future}.
For this reason, we focus on a non-expert audience and a context (hiring) that most people would find relatively familiar.
}
\noindent\textbf{Our Contributions.}
We take a step toward addressing this issue by studying peoples' comprehension and perceptions of three definitions of ML fairness: \emph{demographic parity}, \emph{equal opportunity,} and \emph{equalized odds} \cite{Hardt16:Equality}.
Specifically, we address the following research questions:
\vspace{-9pt}
\begin{itemize}[itemsep=0cm,leftmargin=1cm]
\item[\textbf{RQ1}] When provided with an explanation intended for a non-technical audience, do non-experts comprehend each definition and its implications?
\item[\textbf{RQ2}] What factors play a role in comprehension?
\item[\textbf{RQ3}] How are comprehension and sentiment related?
\item[\textbf{RQ4}] How do the different definitions compare in terms of comprehension?
\end{itemize}
\vspace{-9pt}
We developed two online surveys to address these research questions. We presented participants with a simplified decision-making scenario and an accompanying \emph{fairness rule} expressed in the scenario's context. We asked questions related to the participants' comprehension of and sentiment toward this rule. Tallying the number of correct responses to the comprehension questions gives us a \emph{comprehension score} for each participant.
In Study-1{}, we found that this comprehension score is a consistent and reliable indicator of understanding demographic parity. %
Then, in Study-2{}, we used a similar approach to compare comprehension among all three definitions of interest. We find that (1) education is a significant predictor of rule understanding, (2) the counterintuitive definition of Equal Opportunity with False Negative Rate was significantly harder to understand than other definitions, and (3) participants with low comprehension scores tended to express less negative sentiment toward the fairness rule.
\newdcm{%
This underlines the importance of considering stakeholders before deploying a ``fair'' ML system, because some stakeholders may not understand or agree with an ML-specific notion of fairness.
Our goal is to help designers and adopters of fairness approaches understand whether they are communicating with stakeholders effectively.
}
\section{Related Work}\label{sec:related}
In response to many instances of bias in fielded artificial intelligence (AI) and machine learning (ML) systems, ML fairness has received significant attention from the computer-science community.
Notable examples include gender bias in job-related ads~\cite{datta2015automated}, racial bias in evaluating names on resumes~\cite{caliskan2017semantics}, and racial bias in predicting criminal recidivism~\cite{angwin2016machine}.
To correct biased behavior, researchers have proposed several mathematical and algorithmic notions of fairness.
Most algorithmic fairness definitions found in the literature are motivated by the philosophical notion of individual fairness (e.g., see~\cite{Rawls71a}) and by legal definitions of disparate impact/treatment (e.g., see~\cite{barocas2016big}).
Several ML-specific definitions of fairness have been proposed which claim to uphold these philosophical and legal concepts.
These definitions of ``ML fairness'' fall loosely into two categories (for a review, see~\cite{chouldechova2018frontiers}). \emph{Statistical Parity} posits that in a \emph{fair} outcome, individuals from different protected groups have the same chance of receiving a positive (or negative) outcome.
Similarly, \emph{Predictive Parity}~\cite{Hardt16:Equality} asserts that the predictive accuracy should be similar across different protected groups--often measured by the false positive rate (FPR) or false negative rate (FNR) in binary classification settings.
Myriad other definitions have been proposed, based on concepts such as calibration~\cite{pleiss2017fairness} and causality~\cite{kusner2017counterfactual}.
Of course, all of these definitions make limiting assumptions; no concept of fairness is perfect~\cite{Hardt16:Equality}. The question remains, \emph{which} of these fairness definitions are appropriate, and in \emph{what context?}
There are two important components to answering this question: \emph{communicating} these fairness definitions to a general audience, and \emph{measuring their perception} of these definitions in context.
Communicating ML-related concepts is an active and growing research area.
In particular, \emph{interpretable ML} focuses on communicating the decision-making process and results of ML-based decisions to a general audience~\cite{lipton2018mythos}.
Many tools have been developed to make ML models more interpretable, and many demonstrably improve understanding of ML-based decisions~\cite{ribeiro2016should,Huysmans2011}.
These models often rely on concepts from probability and statistics---teaching these concepts has long been an active area of research.
\citet{batanero2016research} provide an overview of teaching probability and how students learn probability; our surveys use their method of communicating probability, which relies on proportions.
We draw on several other concepts from this literature for our study design; for example avoiding numerical and statistical representations~\cite{gigerenzer2003simple,gigerenzer2007helping}, which can be confusing to a general audience.
Instead we provide relatable descriptions, accompanied by examples and graphics~\cite{hogarth2015providing}.
Effectively communicating ML concepts is necessary to achieve our second goal of understanding peoples' perceptions of these concepts.
One particularly active research area focuses on how people perceive bias in algorithmic systems.
For example, \citet{woodruff2018qualitative} investigated perceptions of algorithmic bias among marginalized populations, using a focus group-style workshop;~\citet{grgic2018human} study the underlying factors causing perceptions of bias, highlighting the importance of selecting appropriate features in algorithmic decision-making; \citet{plane2017exploring} look at perceptions of discrimination of online advertising;~\newdcm{\citet{harrison2020empirical} studies perceptions of fairness in stylized machine learning models;} \dsnew{\citet{srivastava2019mathematical} note that perceived appropriateness of an ML notion of fairness may depend on the domain in which the decision-making system is deployed, but suggest that simpler notions may best capture lay perceptions of fairness.}
A related body of work studied how people perceive algorithmic decision-makers.
\citet{lee2018understanding} studies perceptions of fairness, trust, and emotional response of algorithmic decision-makers --- as compared to human decision-makers.
Similar work studies perception of fairness in the context of splitting goods or tasks, and in loan decisions~\cite{Lee2017,Lee2019,saxena2020fairness}.
\citet{binns2018s} studies how different explanation styles impact perceptions of algorithmic decision-makers.
This substantial body of prior research provided inspiration and guidance for our work.
Prior work has studied both the effective communication of, and perceptions of, ML-related concepts.
We hypothesize that these concepts are in fact related; to that end, we design experiments to simultaneously study peoples' \emph{comprehension} of and \emph{perceptions} of common ML fairness definitions.
\section{Methods}\label{sec:methods}
To study perceptions of ML fairness, we conducted two online surveys where participants were presented with a hypothetical decision-making scenario. Participants were then presented with a ``rule'' for enforcing fairness. We then asked each participant several questions on their comprehension and perceptions of this fairness rule.
We first conducted Study-1{} to validate our methodology; we then conducted the larger and broader Study-2{} to address our main research questions.
Both studies were approved by the University of Maryland Institutional Review Board (IRB).
\subsection{Study-1{}}\label{sec:studyA}
In Study-1{} we tested three different decision-making scenarios based on real-world decision problems: hiring, giving employee awards, and judging a student art project.
However, we observed no difference in participant responses between these scenarios; for this reason,
\dsnew{we focus exclusively on hiring in
Study-2{} (see \ref{sec:studyB}).}
Please see Appendix~\ref{app:survey} for a description of the Study-1{} scenarios, and \Appref{app:scenario_analysis} for relevant survey results.
In Study-1{}, we chose (what we believe is) the simplest definition of ML fairness, namely, demographic parity. In short, this rule requires that the fraction of one group who receives a \emph{positive} outcome (e.g., an award or job offer) is equal for both groups.
\subsubsection{Survey Design}\label{methods:design}
Here we provide a high-level discussion of the survey design; the full text of each survey can be found in Appendix~\ref{app:survey}.
The participant first receives a consent form (see Appendix~\ref{app:consent}). If consent is obtained, the participant sees a short paragraph explaining the decision-making scenario. To make demographic parity accessible to a non-technical audience, and to avoid bias related to algorithmic decision-making, we frame this notion of fairness as a \emph{rule} that the decision-maker must follow to be fair.
In the hiring scenario, we framed this decision rule as follows:
\emph{The fraction of applicants who receive job offers that are female should equal the fraction of applicants that are female. Similarly, the fraction of applicants who receive job offers that are male should equal the fraction of applicants that are male.}
We then ask two questions concerning participant evaluation of the scenario, nine comprehension questions about the fairness rule, two self-report questions on participant understanding and use of the rule, and four free response questions on comprehension and sentiment.
For example, one comprehension question is:
\emph{Is the following statement TRUE OR FALSE: This hiring rule always allows the hiring manager to send offers exclusively to the most qualified applicants}.
Finally, we collect demographic information (age, gender, race/ethnicity, education level, and expertise in a number of relevant fields).
We conducted in-person cognitive interviews ~\cite{harrell2009data} to pilot our survey, leading to several improvements in the question design. Most notably, because some cognitive interview participants appeared to use their own personal notions of fairness rather than our provided rule, we added questions to assess this compliance issue.
\subsubsection{Recruitment and Participants} \label{subsubsec:methods:study1:recruitment}
We recruited participants using the online service Cint \cite{cint}, which let us loosely approximate the 2017 U.S. Census distributions \cite{census07} for ethnicity and education level, ensuring broad representation. %
We required that participants be 18 years of age or older, and fluent in English. Participants were compensated using Cint's rewards system; according to a Cint representative: ``[Participants] can choose to receive their rewards in cash sent to their bank accounts (e.g. via PayPal), online shopping opportunities with one of multiple online merchants, or donations to a charity.''
\dsnew{Data was collected during August 2019.} In total 147 participants were included in the Study-1{} analysis, including 75 men (51.0\%), 71 women (48.3\%), and 1 (0.7\%) preferring not to answer. The average age was 46 years (SD = 16). Ethnicity and educational attainment are summarized in Table~\ref{tab:demo}. %
On average, participants completed the survey in 14 minutes.
\input{demo.tex}
\subsection{Study-2}\label{sec:studyB}
Study-2{} follows a very similar structure to Study-1{} with a few changes. First, we decided to use only the hiring (HR) decision scenario (See \Appref{app:scenario_analysis} for more in-depth discussion). %
Second, we expanded to three definitions of fairness: \emph{demographic parity} (DP), \emph{equal opportunity} (EP), and \emph{equalized odds} (EO) ~\cite{Hardt16:Equality}. Within EP, we tested both False Negative Rate (FNR) and False Positive Rate (FPR), resulting in a total of four conditions.
\subsubsection{Survey Design}
Here we provide a high-level discussion of the differences between Study-2{} and Study-1{}; the full text of each survey can be found in Appendix~\ref{app:survey}. We used a between-subjects design with random assignment among the four conditions (DP, FNR, FPR, EO). Again, we frame each notion of fairness as a \emph{hiring rule} that the decision-maker must follow to be fair. For example, in FPR we define the award rule as follows:
\emph{The fraction of unqualified male candidates who receive job offers should equal the fraction of unqualified female candidates who receive job offers.}
For this version, we added graphical examples to further clarify our explanations (see Fig.~\ref{fig:example_people} for an example).
We used all the same %
questions as in Study-1{} but added two additional Likert-scale questions %
assessing participant sentiment: one asked whether they liked the rule, and the other asked whether they agreed with the rule. One free response question (asking how participants personally would go about the hiring process to ensure it was fair), which did not consistently provide useful responses in Study-1{}, was removed from the Study-2{} survey in an effort to keep the expected completion time similar. %
\begin{figure}[h]
\centering
\includegraphics[height=1in]{illustrations/eo/example_1_POOL.png}
\vspace{10pt}
\includegraphics[height=1in]{illustrations/eo/example_1b_offer.png}
\space\space
\includegraphics[height=1in]{illustrations/eo/example_1b_no_offer.png}
\caption{A graphical example to describe a fair hiring outcome for EO. Yellow people represent females while green people represent males. The darker colors represent qualified individuals while the lighter colors represent unqualified individuals. The gray box represents the original pool of applicants. The green box represents individuals that received job offers while the red box with a dashed border represents individuals that did \emph{not} receive job offers.}
\label{fig:example_people}
\end{figure}
\subsubsection{Recruitment and Participants}
We again used the Cint service to recruit participants. \dsnew{Compensation for participation was handled in the same manner as described in \S\ref{subsubsec:methods:study1:recruitment}.} Because our initial sample (intended to target education, ethnicity, gender and age distributions approximating the U.S. census) skewed more highly educated than we had hoped, we added a second round of recruitment one week
later primarily targeting participants without bachelor's degrees. Hereafter, we report on both samples together.
\dsnew{Data was collected during January and February 2020.} In total 349 participants were included in the Study-2{} analysis, including 142 men (40.7\%), 203 women (58.2\%), 1 other (0.3\%), and 3 (0.9\%) preferring not to answer. The average age was 45 years (SD = 15). Ethnicity and educational attainment are summarized in Table~\ref{tab:demo}. %
On average, participants completed the survey in 16 minutes. %
\subsection{Data Analysis}
Free response questions were qualitatively coded for statistical testing. In Study-1{}, one question was coded by a single researcher for simple correctness (see \Appref{results:1:rq1}), and the other was independently coded by three researchers (resolved to 100\%) to capture sentiment information (see \Appref{results:1:rq3}). In Study-2{}, both questions were independently coded by 2-3 researchers (resolved to 100\%). Participants who provided nonsensical answers, answers not in English, or other non-responsive answers to free response questions were excluded from all analysis.
The following methods were used for all statistical analyses unless otherwise specified. Correlations with nonparametric ordinal data were assessed using Spearman's rho. Omnibus comparisons on nonparametric ordinal data were performed with a Kruskal--Wallis (K-W) test, and relevant post-hoc comparisons with Mann--Whitney U (M-WU) tests. Post-hoc $p$-values were adjusted for multiple comparisons using Bonferroni correction. $\chi^2$ tests were used for comparisons of nominal data.
Boxplots show the median and first and third quartiles; whiskers extend to $1.5 \times \text{IQR}$ (interquartile range), with outliers indicated by points. \dsnew{The full analysis script for both studies can be found on GitHub.\footnote{\url{https://github.com/saharaja/ICML2020-fairness}}}
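For reference, the omnibus-plus-post-hoc procedure can be sketched as follows (a Python sketch assuming \texttt{groups} maps condition names to lists of scores; \texttt{omnibus\_and\_posthoc} is an illustrative name, not from the released script):
\begin{verbatim}
from itertools import combinations
from scipy.stats import kruskal, mannwhitneyu

# Omnibus K-W test followed by post-hoc M-WU tests with a
# Bonferroni-corrected alpha, mirroring the analysis described above.
def omnibus_and_posthoc(groups, alpha=0.05):
    _, p_kw = kruskal(*groups.values())
    print(f"K-W omnibus p = {p_kw:.4f}")
    pairs = list(combinations(groups, 2))
    corrected = alpha / len(pairs)        # Bonferroni correction
    for g1, g2 in pairs:
        _, p = mannwhitneyu(groups[g1], groups[g2],
                            alternative='two-sided')
        sig = 'sig' if p < corrected else 'n.s.'
        print(f"{g1} vs {g2}: p = {p:.4f} ({sig} at {corrected:.4f})")
\end{verbatim}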
\subsection{Limitations}
As with all surveys, our study has certain limitations. We recruited a demographically broad population, but web panels are generally more tech-savvy than the broader population \cite{redmiles2019well}. We consider this acceptable for a first effort. Some participants may
be satisficing rather than answering carefully. We mitigate this by
disqualifying participants with off-topic or non-responsive free-text responses. Further, this limitation can be expected to be consistent across conditions, enabling reasonable comparison. Finally, better or clearer explanations of the fairness definitions we explored are certainly possible; we believe our explanations were sufficient to allow us to investigate our research questions, especially because they were designed to be consistent across conditions.
\section{Results}\label{sec:results}
In this section we first discuss the preliminary findings from Study-1{} (see \S\ref{results:a}). These findings were used as hypotheses for further exploration and testing in Study-2{}; we discuss those results second (see \S\ref{results:b}).
\subsection{Study-1{}} \label{results:a}
We analyze survey responses for Study-1{} and make several observations. We first validate our comprehension score as a measure of participant understanding; we then generate hypotheses for further exploration in Study-2{}.
\subsubsection{Our Survey Effectively Captures Rule Comprehension} \label{results:a:validity}
We find that we can measure comprehension of the fairness rule. The comprehension score was calculated as the total correct responses out of a possible 9. All questions were weighted equally. The relevant questions included 2 multiple choice, 4 true/false, and 3 yes/no questions. The average score was 6.2 (SD=2.3).
We validate our comprehension score using two methods: internal validity testing, and correlation against two self-report and one free response question included in our survey (see \Appref{results:1:rq1} for further details).
\vspace{-5pt}
\paragraph{Internal Validity}
Cronbach's $\alpha$ and item-total correlation were used to assess internal validity of the comprehension score. Both measures met established thresholds \cite{nunnally1978,everitt2010}: Cronbach's $\alpha = 0.71$, and item-total correlation for 8 of the 9 items (all but Q5) $> 0.3$. %
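Both checks are standard and easy to compute; a minimal NumPy sketch, assuming \texttt{items} is a (participants $\times$ questions) 0/1 response matrix:
\begin{verbatim}
import numpy as np

# Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / total variance).
def cronbach_alpha(items):
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

# Corrected item-total correlation: each item vs. the sum of the rest.
def item_total_correlations(items):
    totals = items.sum(axis=1)
    return [np.corrcoef(items[:, j], totals - items[:, j])[0, 1]
            for j in range(items.shape[1])]
\end{verbatim}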
\vspace{-5pt}
\paragraph{Question Correlation}
We find that self-reported rule understanding and use are reflected in comprehension score. First, we compared comprehension score to self-reported rule understanding (Q13): ``I am confident I know how to apply the award rule described above,'' rated on a five-point Likert scale from strongly agree (1) to strongly disagree (5). The median response was ``agree'' ($\text{Q1}=1$, $\text{Q3}=3$). Higher comprehension scores tended to be associated with greater confidence in understanding (Spearman's $\rho = 0.39$, $p<0.001$), supporting the notion that comprehension score is a valid measure of rule comprehension.
Next, we compared comprehension score to a self-report question about the participant's use of the rule (Q14), with the following options: (a) ``I applied the provided award rule only,'' (b) ``I used my own ideas of what the correct award decision should be rather than the provided award rule,'' or (c) ``I used a combination of the provided award rule and my own ideas of what the correct award decision should be.'' We find that participants who claimed to use only the rule scored significantly higher (mean 7.09) than those who used their own notions (4.90) or a combination (4.68) (post-hoc M-WU,
$p<0.001$ for both tests; corrected $\alpha = 0.05/3 = 0.017$). This further corroborates our comprehension score.
Finally, we asked participants to explain the rule in their own words (Q12). Each response was then qualitatively coded as one of five categories -- \textbf{Correct}: describes rule correctly; \textbf{Partially correct}: description has some errors or is somewhat vague; \textbf{Neither}: vague description of purpose of the rule rather than how it works, or pure opinion; \textbf{Incorrect}: incorrect or irrelevant; and \textbf{None}: no answer, or expresses confusion. Participants whose responses were either correct (mean comprehension score = 7.71) or partially correct (7.03) performed significantly better on our survey than those responding with neither (5.13) or incorrect (4.24) (post-hoc M-WU, $p<0.001$ for these four comparisons, corrected $\alpha = 0.05/10 = 0.005$). These findings further validate our comprehension score. Additional details of these results and the associated statistical tests can be found in \Appref{results:1:rq1}.
\subsubsection{Hypotheses Generated} \label{results:a:hypotheses}
We analyzed the data from Study-1{} in an exploratory fashion intended to generate hypotheses that could be tested in Study-2{}.
We highlight here three key hypotheses that emerged from the data.
\paragraph{Education Influences Comprehension}
We used poisson regression models to explore whether various demographic factors were associated with differences in comprehension. We found that a model including education as a regressor had greater explanatory power than a model without (see \Appref{results:1:rq2} for further details).
\paragraph{Disagreement with the Rule is Associated with Higher Comprehension Scores}
We asked participants for their opinion on the presented rule in a free response question (Q15). These responses were qualitatively coded to capture participant sentiment toward the rule in one of five categories -- \textbf{Agree}: generally positive sentiment towards rule; \textbf{Depends}: describes both pros and cons of the given rule; \textbf{Disagree}: generally negative sentiment towards rule; \textbf{Not understood}: expresses confusion about rule; \textbf{None}: no answer, or lacks opinion on appropriateness of the rule. Participants who expressed disagreement with the rule performed better (mean comprehension score = 7.02) than those who expressed agreement (5.50), did not understand the rule (4.44), or provided no response (5.09) to the question (post-hoc M-WU, $p<0.005$ for these three comparisons;
corrected $\alpha=0.05/10=0.005$). \Appref{results:1:rq3} provides further details.
\begin{figure*}[h]
\centering
\begin{subfigure}[t]{0.30\textwidth}
\includegraphics[width=1\linewidth]{fig_studyB/q13.png}
\caption{Grouped by response to Q13}
\label{fig:studyB_q13}
\vspace{-5pt}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.30\textwidth}
\includegraphics[width=1\linewidth]{fig_studyB/q14.png}
\caption{Grouped by response to Q14.}
\label{fig:studyB_q14}
\vspace{-5pt}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.30\textwidth}
\includegraphics[width=1\linewidth]{fig_studyB/q12.png}
\caption{Grouped by coded response to Q12.}
\label{fig:studyB_q12}
\vspace{-5pt}
\end{subfigure}
\caption{Comprehension scores grouped by questions. In (a), self-reported understanding of the rule was not related to comprehension score. X-axis is reversed for figure and correlation test. In (b), rule compliance (leftmost on the x-axis) was associated with higher comprehension scores. One participant who did not provide a response was excluded from this figure and the relevant analysis. Finally, in (c), participants who provided either correct or partially correct responses tended to perform better.}
\end{figure*}
\paragraph{Non-Compliance is Associated with Lack of Understanding} \label{results:a:non-comp}
We were interested in understanding why some participants failed to adhere to the rule, as measured by their self-report of rule usage in Q14. We labeled those who responded with either having used their own personal notions of fairness ($n=29$) or some combination of their personal notions and the rule ($n=28$) as ``non-compliant'' (NC), with the remaining $n=89$ labeled as ``compliant'' (C). One participant who did not provide a response was excluded from this analysis, conducted using $\chi^2$ tests.
Non-compliant participants were less likely to self-report high understanding of the rule in Q13 (see Fig. \ref{fig:q13q14}). Moreover, non-compliance also appears to be associated with a reduced ability to correctly explain the rule in Q12 (see Fig. \ref{fig:q12q14}). This fits with the overall strong relationship we observed among comprehension scores, self-reported understanding, ability to explain the rule, and compliance.
Further, negative participant sentiment towards the rule (Q15) also appears to be associated with greater compliance (see Fig. \ref{fig:q15q14}). %
Thus, non-compliant participants appear to behave this way because they do not \emph{understand} the rule, rather than because they do not \emph{like} it. Refer to \Appref{results:a:non-comp} for further details.
\subsection{Study-2{}} \label{results:b}
We first confirm the validity of our
comprehension score, then compare comprehension across
definitions and examine the hypotheses generated in Study-1{}.
\subsubsection{Score Validation} \label{results:b:validation}
We validated our metric using the same approach used in Study-1{}, i.e., assessing both internal validity and correlation with self-report and free-response questions. We report the results of this assessment here.
\paragraph{Internal Validity}
We again used Cronbach's $\alpha$ and item-total correlation to assess internal validity of the comprehension score. An initial assessment using all 349 responses yielded Cronbach's $\alpha = 0.38$, and item-total correlation $> 0.3$ for only four of the nine comprehension questions. Since both measures performed below established thresholds \cite{nunnally1978,everitt2010}, we investigated further and repeated these measurements individually for each fairness-definition condition (DP, FNR, FPR, EO). This procedure showed stark differences in Cronbach's $\alpha$ based on definition: DP = 0.64, FNR = 0.39, FPR = 0.49, EO = 0.62. Item-total correlations followed a similar pattern: best in DP, worst in FNR. Based on these differences, we iteratively removed problematic questions from the score on a per-definition basis until all remaining questions achieved an item-total correlation of $> 0.3$ \cite{everitt2010}.
By removing poorly performing questions, we
increase our confidence that the measured comprehension scores are meaningful for further analysis. Table \ref{tab:dropped_qs} specifies which questions were retained for analysis in each definition.
\vspace{-10pt}
\begin{table}[ht]
\centering
\caption{\label{tab:dropped_qs} Questions that were used for downstream analysis after iterative removal of questions with poor item-total correlation.}
\vspace{5pt}
{\small
\begin{tabular}{@{}lrrrrrrrrr@{}}
\toprule
& \multicolumn{9}{c}{\textbf{Questions}}\\
\midrule
& Q3 & Q4 & Q5 & Q6 & Q7 & Q8 & Q9 & Q10 & Q11\\
\midrule
DP & X & X & & & X & X & X & X & X \\
FNR & X & X & X & & & X & & & \\
FPR & X & X & X & X & & X & & X & X \\
EO & X & X & X & & X & X & X & X & X \\
\bottomrule
\end{tabular}%
}
\end{table}
Because questions were dropped on a per-definition basis, the maximum of the resulting scores varied from 4 to 8 depending on the definition, rather than being a uniform 9. We normalized this by treating the comprehension score as a percentage of the maximum for each condition rather than as a raw score. %
We report this \textit{adjusted score} in the remainder of \S\ref{results:b}. The average score was 0.53 (SD=0.22).
\paragraph{Question Correlation} \label{qcorr}
As in Study-1{}, we compare comprehension scores with responses to self-report and free response questions included in our survey.
First, we compared comprehension score to self-reported rule understanding (Q13), as described in \S\ref{results:a:validity}.
The median response was ``agree'' ($\text{Q1}=2$, $\text{Q3}=3$). We assess the correlation between these responses and comprehension score using Spearman's rho (appropriate for ordinal data). Unlike in Study-1{}, there was no relationship between self-reported understanding and comprehension score (Fig.~\ref{fig:studyB_q13}).
Next, we compared comprehension score to a self-report question about the participant's use of the rule (Q14), as described in \S\ref{results:a:validity}.
A K-W test revealed a relationship between self-reported rule usage and comprehension score ($p<0.001$). %
We find that participants who claimed to use only the rule tended to score higher (mean comprehension score = 0.58) than those who used a combination of the rule and their own notions of fairness (0.47, $p<0.01$). No other differences were found (post-hoc M-WU; %
corrected $\alpha = 0.05/3 = 0.017$). This suggests that participants are answering at least somewhat honestly: when they try to apply the rule, comprehension scores improve (see Fig. \ref{fig:studyB_q14}).
Finally, we asked participants to explain the rule in their own words (Q12). Each response was then qualitatively coded as one of five categories, as described in \S\ref{results:a:validity}.
These results can be seen in Fig.~\ref{fig:studyB_q12}. A K-W test revealed a relationship between comprehension score and coded responses to Q12 %
($p<0.001$). Correct (mean comprehension score = 0.83) responses were associated with higher comprehension scores than partially correct (0.58), neither (0.44), incorrect (0.52), and none (0.48) responses %
($p<0.001$ for all); partially correct responses were also associated with higher comprehension scores than neither responses
($p<0.001$); and incorrect responses were associated with higher comprehension scores than neither responses
($p<0.005$). No other differences were found (post-hoc M-WU; corrected $\alpha = 0.05/10 = 0.005$). These findings support our claim that our comprehension score is a valid measure of fairness-rule comprehension.
\subsubsection{Education and Definition are Related to Comprehension Score} \label{results:b:edu}
One hypothesis generated by Study-1{} was that comprehension score is positively correlated with education level.
We investigated this hypothesis further in Study-2{} using linear regression models followed by model selection.
\dsnew{We believe this exploratory approach to be appropriate despite the previously formulated hypothesis, given the introduction of a new variable in Study-2{}, i.e. fairness definition.} %
Eleven models were tested, regressing different combinations of demographics (ethnicity, gender, education, and age) and condition (fairness definition). Models were compared using the Akaike information criterion (AIC), a standard method for evaluating model quality and performing model selection \cite{akaike1974}. Comparison by AIC revealed that the model using just education (edu) and fairness definition (def) as regressors was the model of best fit. In this model, having a Bachelor's degree or above resulted in a score increase of 0.14, and the FNR condition resulted in a score decrease of 0.11 ($p < 0.004$ for both; corrected $\alpha = 0.05/11 = 0.0045$). A regression table of the best-fit model can be found in Table \ref{tab:GLM}.
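The selection step can be sketched with \texttt{statsmodels} (the formula strings and dataframe columns are illustrative, not the exact model set we tested):
\begin{verbatim}
import statsmodels.formula.api as smf

# Fit candidate OLS models and pick the one with the lowest AIC.
def select_model(df, formulas):
    fits = {f: smf.ols(f, data=df).fit() for f in formulas}
    best = min(fits, key=lambda f: fits[f].aic)
    return best, fits[best]

# e.g. candidates including the eventual best-fit model:
# formulas = ["score ~ C(edu)", "score ~ C(definition)",
#             "score ~ C(edu) + C(definition)", ...]
\end{verbatim}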
\begin{table}[h]
\centering
\caption{\label{tab:GLM} Regression table for the best fit model, with two covariates: education (baseline: no HS) and definition (baseline: DP). %
Est. = estimate, CI = confidence interval.}
\vspace{5pt}
{\small
\begin{tabular}{@{}lrrc@{}}
\toprule
\textbf{Covariate} & \textbf{Est.} & \textbf{95\% CI} & \textbf{$p$} \\
\midrule
\emph{Education} \\
HS & 0.00 & [-0.10, 0.10] & 0.989 \\ %
Post-secondary, no BS & 0.09 & [-0.01, 0.18] & 0.078\\ %
Bachelor's and above & 0.14 & [0.04, 0.23] & $<0.004$ \\ %
\addlinespace[1.5 ex]
\emph{Definition} \\
EO & -0.08 & [-0.14, 0.01] & 0.020 \\ %
FPR & -0.05 & [-0.11, 0.01] & 0.124 \\ %
FNR & -0.11 & [-0.18, -0.05] & $<0.001$ \\ %
\bottomrule
\end{tabular}%
}
\vspace{-7pt}
\end{table}
AIC results of each of the eleven models, along with the relevant regressors, can be seen in Table \ref{tab:AIC} in \Appref{app:b:model_selection}. Comprehension score as a function of education and fairness definition can be seen in Figs. \ref{fig:studyB_edu} and \ref{fig:studyB_scores}.
\begin{figure}[th]
\centering
\includegraphics[width=0.8\columnwidth]{fig_studyB/edu.png}
\vspace{-15pt}
\caption{Comprehension score grouped by education level. Higher education was associated with higher comprehension scores. Note that two participants who did not report their education level were removed from this figure and the relevant analysis.}
\label{fig:studyB_edu}
\vspace{-5pt}
\end{figure}
\begin{figure}[th]
\centering
\includegraphics[width=0.8\columnwidth]{fig_studyB/scores.png}
\vspace{-15pt}
\caption{Comprehension score grouped by fairness definition. The FNR condition was associated with lower comprehension scores.}
\label{fig:studyB_scores}
\vspace{-10pt}
\end{figure}
\subsubsection{Greater Negative Sentiment Toward the Rule is Associated with Higher Comprehension Scores} \label{results:b:sentiment}
In Study-1{}, we found a relationship between participant sentiment towards the rule and comprehension score. To better interrogate this phenomenon, in Study-2{} we added two questions that address sentiment directly, rather than relying on a free-response question. One (Q15) asks, ``To what extent do you agree with the following statement: I like the hiring rule?"; the other (Q16) asks, ``To what extent do you agree with the following statement: I agree with the hiring rule?" Both are evaluated on a five-point Likert scale from ``strongly agree" (1) to ``strongly disagree" (5).
Using Spearman's rho, we assessed the correlation between responses to these two questions and comprehension score. A weak negative correlation was found between liking the rule and comprehension score, i.e. those who disliked the rule were more likely to have higher comprehension scores ($\rho = -0.11, p < 0.05$; see Fig.~\ref{fig:studyB_q15}).
A similarly weak correlation was found between agreeing with the rule and comprehension score, i.e. disagreement was associated with higher comprehension scores ($\rho = -0.11, p < 0.05$; see Fig.~\ref{fig:studyB_q16}).
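Spearman's rho operates on ranks, which suits ordinal Likert responses; a minimal sketch of the computation (Python/SciPy, with fabricated response vectors for illustration only) is:
\begin{verbatim}
from scipy.stats import spearmanr

# Hypothetical paired observations: each participant's Likert
# response to Q15 (coded 1-5) and their comprehension score.
q15 =   [1,    2,    2,    3,    4,    5,    3,    4,    5,    1]
score = [0.40, 0.50, 0.45, 0.60, 0.70, 0.80, 0.55, 0.60, 0.75, 0.50]

rho, p = spearmanr(q15, score)
print(f"rho={rho:.2f}, p={p:.3f}")
\end{verbatim}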
\begin{figure}[t]
\centering
\includegraphics[width=0.8\columnwidth]{fig_studyB/q15.png}
\vspace{-15pt}
\caption{Comprehension score grouped by response to Q15. Dislike of the rule was associated with higher comprehension scores. X-axis is reversed for figure and correlation test.}
\label{fig:studyB_q15}
\vspace{10pt}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.8\columnwidth]{fig_studyB/q16.png}
\vspace{-15pt}
\caption{Comprehension score grouped by response to Q16. Disagreement with the rule was associated with higher comprehension score. X-axis is reversed for figure and correlation test.}
\label{fig:studyB_q16}
\vspace{-10pt}
\end{figure}
\subsubsection{Non-Compliance is Associated with Lack of Understanding} \label{results:b:non-comp}
A final hypothesis generated in Study-1{} involves non-compliance: i.e., why do participants who report \textit{not} using the rule to answer the comprehension questions behave this way?
In Study-1{}, we found evidence that non-compliant participants behaved this way because they were less able to \textit{understand} the rule, rather than because they did not \textit{like} it.
We also observed this in our results from Study-2{}:
compliant participants exhibited higher self-reported understanding of the rule ($p < 0.001$, Fig. \ref{fig:studyB_nc_q13q14}), were more likely to correctly explain the rule ($p < 0.001$, Fig. \ref{fig:studyB_nc_q12q14}), and were more likely to dislike the rule ($p < 0.05$, Fig. \ref{fig:studyB_nc_q15q14}). We observed no relationship between compliance and agreement with the rule (Fig. \ref{fig:studyB_nc_q16q14}). Refer to \Appref{app:b:compliance} for more details.
\section{Discussion} \label{sec:discussion}
Bias in machine learning is a growing threat to justice; to date, ML bias has been documented in both commercial and government applications, in sectors such as medicine, criminal justice, and employment. In response, ML researchers have proposed various notions of \emph{fairness} to correct these biases. Most ML fairness definitions are purely mathematical, and require some knowledge of machine learning. While they are intended to benefit the general public, it is unclear whether the general public agrees with --- or even understands --- these notions of ML fairness.
We take an initial step to bridge this gap by asking: \emph{do people understand the notions of fairness put forth by ML researchers?} To answer this question, we develop a short questionnaire to assess understanding of three particular notions of ML fairness (demographic parity, equal opportunity, and equalized odds). We find that our comprehension score (with some adjustments for each definition) appears to be a consistent and reliable indicator of understanding the fairness metrics.
The comprehension score demonstrated in this work lays a foundation for many future studies exploring other fairness definitions.
We do find, however, that comprehension is lower for equal opportunity (false negative rate) than for the other definitions.
In general, comprehension scores for equal opportunity (both FNR and FPR) were less internally consistent than other fairness rules, suggesting participant responses were also more ``noisy'' for equal opportunity.
This is somewhat intuitive: equal opportunity is difficult to understand, as it only involves one type of error (FNR or FPR) rather than both.
Furthermore, FNR participants had the lowest comprehension scores \emph{and} the lowest consistency of all conditions.
We believe this finding also matches intuition: FNR is a strange notion in the context of hiring, as it concerns only those qualified applicants who were \emph{not} hired or offered jobs.
Indeed, in free-response questions several participants mentioned that they do not understand why qualified candidates are \emph{not} hired.
We believe many participants fixated on this strange setting, impacting their comprehension scores.
This finding is potentially problematic, as equal opportunity definitions are increasingly used in practice. Indeed, major fairness tools such as Google's What-If Tool \cite{wexler2019if} and IBM's AI Fairness 360 \cite{bellamy2019ai} specifically focus on equal opportunity. Further work should be put into making descriptions of nuanced fairness metrics more accessible.
Our analysis also identified other issues that should be considered when thinking about mathematical notions of fairness.
First, we find that education is a strong
predictor of comprehension. This is especially troubling, as the negative impacts of biased ML are expected to disproportionately impact the most marginalized~\cite{barocas2016big} and displace employment opportunities for those with the least education~\cite{frey2017future}. Lack of understanding may hamper these groups' ability to effectively advocate for themselves. Designing more accessible explanations of fairness should be a top research priority.
Second, we find that those with the weakest comprehension of fairness metrics also express the least negative sentiment toward them. When fairness is a concern, there are always trade-offs---between accuracy and equity, or between different stakeholders, and so on. Balancing these trade-offs is an uncomfortable dilemma often lacking an objectively correct solution. It is possible that those who comprehend this dilemma \emph{also} recognize the precarious trade-off struck by any mathematical definition of fairness, and are therefore dissatisfied with it. From another perspective, this finding is more insidious. If those with the weakest understanding of AI bias are also least likely to protest, then major problems in algorithmic fairness may remain uncorrected.
\input{acknowledgments}
\subsubsection{Scenario description and questions}\label{app:st2_scenarios}
The following is shown to each participant (note that Step 3 is not shown to participants with the DP definition):
It is very important that you read each question carefully and think about your answers. The success of our research relies on our respondents being thoughtful and taking this task seriously.
\begin{itemize}
\vspace{-8pt}
\item[\text{\fboxsep=-.15pt\fbox{\rule{0pt}{1.5ex}\rule{1.5ex}{0pt}}}] I have read the above instructions carefully.
\vspace{-8pt}
\end{itemize}
A company, Sales-a-lot, is reviewing their hiring process. They want to hire applicants who are high performing, and they also want to make sure that their hiring process is fair to their applicants, no matter their gender. To do this, Sales-a-lot employs an external firm, Recruit-a-matic, which keeps track of all applicants. This review will take place over one year.
For clarity, at each stage of the hiring process we use images to represent the hiring pool.
\paragraph{Step 1: Applicant Pool.} At the beginning of the year, Sales-a-lot reviews all job applicants, and sends job offers to some of them. The initial applicant pool is shown with a gray background. For example, the following image shows an applicant pool with 15 female applicants and 25 male applicants:
\includegraphics[height=1in]{illustrations/intro/step_1_green_yellow.png}
\paragraph{Step 2: Sending Job Offers.} Next, Sales-a-lot sends job offers to some of these applicants, using the following criteria:
\begin{itemize} \itemsep=0pt
\vspace{-10pt}
\item Interview scores
\item Quality of recommendation letters
\item Number of years of prior experience in the field
\end{itemize}
Suppose that Sales-a-lot sends offers to 5 female applicants and 8 male applicants (so 10 female and 17 male applicants didn’t receive offers). In the following image, applicants who received a job offer are shown on the left (with a green background) and applicants who didn’t receive a job offer are shown on the right (with a red background):
\includegraphics[height=1in]{illustrations/intro/step_2_green_yellow_OFFER.png}
\space\space\space
\includegraphics[height=1in]{illustrations/intro/step_2_green_yellow_NO_OFFER.png}
\paragraph{Step 3: Applicant Evaluation.} For the rest of the year, Recruit-a-matic (the external firm) keeps track of all applicants in the initial pool, whether they received job offers or not. At the end of the year, Recruit-a-matic finds out which applicants were high performers, i.e. qualified (shown in dark), and which applicants were low performers, i.e. unqualified (shown in light). For example, the following image shows that most of the high performers received job offers, but some did not.
\includegraphics[height=1in]{illustrations/intro/step_3_OFFER.png}
\space\space\space
\includegraphics[height=1in]{illustrations/intro/step_3_NO_OFFER.png}
\begin{tabular}{r|c|c}
& female & male \\
\midrule
qualified & \includegraphics[height=0.2in]{illustrations/intro/qual_female.png} & \includegraphics[height=0.2in]{illustrations/intro/qual_male.png} \\
unqualified & \includegraphics[height=0.2in]{illustrations/intro/uqual_female.png} & \includegraphics[height=0.2in]{illustrations/intro/uqual_male.png} \\
\end{tabular}
\paragraph{Questions}
\begin{enumerate}
\vspace{-5pt}
\item To what extent do you agree with the following statement: a scenario similar to the one described above might occur in real life.
\begin{itemize}
\item Strongly agree
\item Agree
\item Neither agree nor disagree
\item Disagree
\item Strongly disagree
\end{itemize}
\item How much effort, in hours, should Sales-a-lot put in to make sure these decisions were fair? [short answer - number of hours]
\end{enumerate}
\subsubsection{Rule descriptions and questions}\label{app:st2_fairness}
The following sections provide fairness definitions (presented to participants as \emph{rules}) for Demographic Parity, Equal Opportunity (FNR and FPR), and Equalized Odds. Unless otherwise noted, the rule description is shown above each of the questions for reference. Correct answers are noted in \correct{red}.
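For reference, each of the four rules below reduces to an equality of two rates. The following sketch (Python; the count arguments are illustrative and were not shown to participants) encodes the rules as predicates, using exact equality where a practical audit would allow a tolerance:
\begin{verbatim}
def demographic_parity(offers_f, applicants_f, offers_m, applicants_m):
    """Offer rates are equal across genders."""
    return offers_f / applicants_f == offers_m / applicants_m

def equal_opportunity_fnr(no_offer_qual_f, qual_f,
                          no_offer_qual_m, qual_m):
    """Rates of QUALIFIED applicants NOT offered jobs are equal."""
    return no_offer_qual_f / qual_f == no_offer_qual_m / qual_m

def equal_opportunity_fpr(offer_unqual_f, unqual_f,
                          offer_unqual_m, unqual_m):
    """Rates of UNQUALIFIED applicants offered jobs are equal."""
    return offer_unqual_f / unqual_f == offer_unqual_m / unqual_m

def equalized_odds(no_offer_qual_f, qual_f, no_offer_qual_m, qual_m,
                   offer_unqual_f, unqual_f, offer_unqual_m, unqual_m):
    """Both equal-opportunity conditions hold at once."""
    return (equal_opportunity_fnr(no_offer_qual_f, qual_f,
                                  no_offer_qual_m, qual_m) and
            equal_opportunity_fpr(offer_unqual_f, unqual_f,
                                  offer_unqual_m, unqual_m))

# Worked check against Demographic Parity, Example 2 below:
# offers to 10 of 40 female and 15 of 60 male applicants (0.25 each).
assert demographic_parity(10, 40, 15, 60)
\end{verbatim}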
\paragraph{Demographic Parity.}
Recruit-a-matic uses the following rule to determine whether Sales-a-lot’s hiring decisions were fair:
\emph{The fraction of male candidates who receive job offers should equal the fraction of female candidates who receive job offers.}
Example 1: Suppose that over the past year, Recruit-a-matic finds that Sales-a-lot received the following applicants (10 female and 12 male).
\includegraphics[height=1in]{illustrations/demo_parity/example_1_POOL.png}
If Sales-a-lot sent job offers to the following number of applicants (5 female and 6 male), then this would be fair according to the hiring rule (note that there are other possible outcomes that are fair according to the hiring rule).
\includegraphics[height=1in]{illustrations/demo_parity/example_1b_offer.png}
Example 2: Suppose that over the past year, Recruit-a-matic finds that Sales-a-lot reviewed a total of 100 applicants as follows (40 female and 60 male).
\includegraphics[height=2in]{illustrations/demo_parity/example_2_POOL.png}
If Sales-a-lot sent job offers to the following number of applicants (10 female and 15 male), then this would be fair according to the hiring rule (note that there are other possible outcomes that are fair according to the hiring rule).
\includegraphics[height=1in]{illustrations/demo_parity/example_2b_offer.png}
In the next section, we will ask you some questions about the information you have just read. Please note that this is not a test of your abilities. We want to measure the quality of the description you read, not your ability to take tests or answer questions.
\textbf{Please note that we ask you to apply and use ONLY the above hiring rule when answering the following questions. You will have an opportunity to state your opinions and feelings on the rule later in the survey.}
\begin{enumerate}
\setcounter{enumi}{2}
\item Suppose a different company considered applicants for a different job. There were 200 female applicants and 100 male applicants,
\includegraphics[height=1.3in]{illustrations/demo_parity/q3_POOL.png}
and they did send job offers to 90 male applicants.
\includegraphics[height=1.3in]{illustrations/demo_parity/q3_male_offer.png}
Assuming that Recruit-a-matic reviews their decisions using the hiring rule above, how many female applicants should have received job offers?
\begin{enumerate}
\item 190
\includegraphics[height=1.3in]{illustrations/demo_parity/q3_a_offer.png}
\item \correct{180}
\includegraphics[height=1.3in]{illustrations/demo_parity/q3_b_offer.png}
\item 160
\includegraphics[height=1.3in]{illustrations/demo_parity/q3_c_offer.png}
\item 150
\includegraphics[height=1.3in]{illustrations/demo_parity/q3_d_offer.png}
\end{enumerate}
\item Assuming Recruit-a-matic reviews decisions using the hiring rule above, in which of these cases could Sales-a-lot have accepted more qualified female applicants than qualified male applicants?
\begin{enumerate}
\item When there are more qualified female applicants than qualified male applicants (i.e., more women had low net sales at the end of the year).
\item \correct{When there are more female applicants than male applicants.}
\item When female applicants receive worse interview scores than male applicants.
\item This cannot happen under the hiring rule.
\end{enumerate}
\item Consider one male applicant and one female applicant, both of whom are similarly qualified for the job (they achieve about the same net sales at the end of their first year). Is the following statement \correct{TRUE} OR FALSE: The hiring rule above allows Sales-a-lot to make a job offer to one of these applicants and not the other.
\item Consider a situation where all female applicants were unqualified (they all achieve low net sales at the end of their first year), but some of them received job offers. Is the following statement TRUE OR \correct{FALSE}: The hiring rule above requires that some job offers made to male applicants must have been made to unqualified male applicants.
\item Suppose Sales-a-lot received 100 male and 100 female applicants, and eventually made 10 job offers. Is the following statement \correct{TRUE} OR FALSE: The hiring rule above requires that even if all male applicants were unqualified (they all achieve low net sales at the end of their first year), some of the unqualified males must have received job offers.
\item Is the following statement TRUE OR \correct{FALSE}: The hiring rule above always allows Sales-a-lot to send job offers only to the most qualified applicants (those who achieve high net sales at the end of their first year).
\end{enumerate}
Consider a different scenario than the two examples above, with 6 applicants -- 4 female and 2 male, as illustrated below. The next three questions each give a different potential outcome for all 6 applicants (i.e., which of the 6 applicants do receive job offers). Please indicate which of the outcomes follow the hiring rule above.
\includegraphics[height=0.4in]{illustrations/demo_parity/q9-11_POOL.png}
\begin{enumerate}
\setcounter{enumi}{8}
\item Sales-a-lot makes the following hiring decisions.
\includegraphics[height=0.4in]{illustrations/demo_parity/q9_offer.png}
\space\space\space
\includegraphics[height=0.4in]{illustrations/demo_parity/q9_no_offer.png}
Do these decisions obey the hiring rule? \correct{Yes}
\item Sales-a-lot makes the following hiring decisions.
\includegraphics[height=0.4in]{illustrations/demo_parity/q10_offer.png}
\space\space\space
\includegraphics[height=0.4in]{illustrations/demo_parity/q10_no_offer.png}
Do these decisions obey the hiring rule? \correct{No}
\item Sales-a-lot makes the following hiring decisions.
\includegraphics[height=0.4in]{illustrations/demo_parity/q11_offer.png}
\space\space\space
\includegraphics[height=0.4in]{illustrations/demo_parity/q11_no_offer.png}
Do these decisions obey the hiring rule? \correct{No}
\item In your own words, explain the hiring rule. [short answer] [The rule is not shown above this question]
\item To what extent do you agree with the following statement: I am confident I know how to apply the hiring rule described above?
\begin{itemize}
\item Strongly agree
\item Agree
\item Neither agree nor disagree
\item Disagree
\item Strongly Disagree
\end{itemize}
\item Please select the choice that best describes your experience: When I answered the previous questions...
\begin{enumerate}
\item I applied the provided hiring rule only.
\item I used a combination of the provided hiring rule and my own ideas of what the correct hiring rule should be.
\item I used only my own ideas of what the correct hiring decision should be rather than the provided hiring rule.
\end{enumerate}
\item To what extent do you agree with the following statement: I like the hiring rule?
\begin{itemize}
\item Strongly agree
\item Agree
\item Neither agree nor disagree
\item Disagree
\item Strongly Disagree
\end{itemize}
\item To what extent do you agree with the following statement: I agree with the hiring rule?
\begin{itemize}
\item Strongly agree
\item Agree
\item Neither agree nor disagree
\item Disagree
\item Strongly Disagree
\end{itemize}
\item Please explain your opinion on the hiring rule. [short answer]
\item Was there anything about this survey that was hard to understand or answer? [short answer]
\end{enumerate}
\paragraph{Equal Opportunity - FNR.}
Recruit-a-matic uses the following rule to determine whether Sales-a-lot’s hiring decisions were fair:
\emph{The fraction of qualified male candidates who do not receive job offers should equal the fraction of qualified female candidates who do not receive job offers.}
Example 1: Suppose that over the past year, Recruit-a-matic finds that Sales-a-lot received the following qualified applicants (10 female and 12 male).
\includegraphics[height=1in]{illustrations/fnr/example_1_POOL.png}
If Sales-a-lot did not send job offers to the following number of qualified applicants (5 female and 6 male), then this would be fair according to the hiring rule (note that there are other possible outcomes that are fair according to the hiring rule).
\includegraphics[height=1in]{illustrations/fnr/example_1b_no_offer.png}
Example 2: Suppose that over the past year, Recruit-a-matic finds that Sales-a-lot reviewed a total of 100 qualified applicants as follows (40 female and 60 male).
\includegraphics[height=2in]{illustrations/fnr/example_2_POOL.png}
If Sales-a-lot did not send job offers to the following number of qualified applicants (10 female and 15 male), then this would be fair according to the hiring rule (note that there are other possible outcomes that are fair according to the hiring rule).
\includegraphics[height=1in]{illustrations/fnr/example_2b_no_offer.png}
Note that in the above examples the remaining qualified applicants received job offers, but are not displayed here.
In the next section, we will ask you some questions about the information you have just read. Please note that this is not a test of your abilities. We want to measure the quality of the description you read, not your ability to take tests or answer questions.
\textbf{Please note that we ask you to apply and use ONLY the above hiring rule when answering the following questions. You will have an opportunity to state your opinions and feelings on the rule later in the survey.}
\begin{enumerate}
\setcounter{enumi}{2}
\item Suppose a different company considered applicants for a different job. There were 200 qualified female applicants and 100 qualified male applicants,
\includegraphics[height=1.2in]{illustrations/fnr/q3_POOL.png}
and they did not send job offers to 90 qualified male applicants.
\includegraphics[height=1.3in]{illustrations/fnr/q3_male_no_offer.png}
Assuming that Recruit-a-matic reviews their decisions using the hiring rule above, how many qualified female applicants should not have received job offers?
\begin{enumerate}
\item 190
\includegraphics[height=1.3in]{illustrations/fnr/q3_a_no_offer.png}
\item \correct{180}
\includegraphics[height=1.3in]{illustrations/fnr/q3_b_no_offer.png}
\item 160
\includegraphics[height=1.3in]{illustrations/fnr/q3_c_no_offer.png}
\item 150
\includegraphics[height=1.3in]{illustrations/fnr/q3_d_no_offer.png}
\end{enumerate}
\item Assuming Recruit-a-matic reviews decisions using the hiring rule above, in which of these cases could Sales-a-lot have rejected more qualified female applicants than qualified male applicants?
\begin{enumerate}
\item \correct{When there are more qualified female applicants than qualified male applicants (i.e., more women had low net sales at the end of the year).}
\item When there are more female applicants than male applicants.
\item When female applicants receive worse interview scores than male applicants.
\item This cannot happen under the hiring rule.
\end{enumerate}
\item Consider one male applicant and one female applicant, both of whom are similarly qualified for the job (they achieve about the same net sales at the end of their first year). Is the following statement \correct{TRUE} OR FALSE: The hiring rule above allows Sales-a-lot to make a job offer to one of these applicants and not the other.
\item Consider a situation where all female applicants were unqualified (they all achieve low net sales at the end of their first year), but some of them received job offers. Is the following statement TRUE OR \correct{FALSE}: The hiring rule above requires that some job offers made to male applicants must have been made to unqualified male applicants.
\item Suppose Sales-a-lot received 100 male and 100 female applicants, and eventually made 10 job offers. Is the following statement TRUE OR \correct{FALSE}: The hiring rule above requires that even if all male applicants were unqualified (they all achieve low net sales at the end of their first year), some of the unqualified males must have received job offers.
\item Is the following statement \correct{TRUE} OR FALSE: The hiring rule above always allows Sales-a-lot to send job offers only to the most qualified applicants (those who achieve high net sales at the end of their first year).
\end{enumerate}
Consider a different scenario than the two examples above, with 6 qualified applicants -- 4 female and 2 male, as illustrated below. The next three questions each give a different potential outcome for all 6 qualified applicants (i.e., which of the 6 applicants do not receive job offers). Please indicate which of the outcomes follow the hiring rule above.
\includegraphics[height=0.4in]{illustrations/fnr/q9-11_POOL.png}
\begin{enumerate}
\setcounter{enumi}{8}
\item Sales-a-lot makes the following hiring decisions.
\includegraphics[height=0.4in]{illustrations/fnr/q9_offer.png}
\space\space\space
\includegraphics[height=0.4in]{illustrations/fnr/q9_no_offer.png}
Do these decisions obey the hiring rule? \correct{Yes}
\item Sales-a-lot makes the following hiring decisions.
\includegraphics[height=0.4in]{illustrations/fnr/q10_offer.png}
\space\space\space
\includegraphics[height=0.4in]{illustrations/fnr/q10_no_offer.png}
Do these decisions obey the hiring rule? \correct{No}
\item Sales-a-lot makes the following hiring decisions.
\includegraphics[height=0.4in]{illustrations/fnr/q11_offer.png}
\space\space\space
\includegraphics[height=0.4in]{illustrations/fnr/q11_no_offer.png}
Do these decisions obey the hiring rule? \correct{No}
\item In your own words, explain the hiring rule. [short answer] [The rule is not shown above this question]
\item To what extent do you agree with the following statement: I am confident I know how to apply the hiring rule described above?
\begin{itemize}
\item Strongly agree
\item Agree
\item Neither agree nor disagree
\item Disagree
\item Strongly Disagree
\end{itemize}
\item Please select the choice that best describes your experience: When I answered the previous questions...
\begin{enumerate}
\item I applied the provided hiring rule only.
\item I used a combination of the provided hiring rule and my own ideas of what the correct hiring rule should be.
\item I used only my own ideas of what the correct hiring decision should be rather than the provided hiring rule.
\end{enumerate}
\item To what extent do you agree with the following statement: I like the hiring rule?
\begin{itemize}
\item Strongly agree
\item Agree
\item Neither agree nor disagree
\item Disagree
\item Strongly Disagree
\end{itemize}
\item To what extent do you agree with the following statement: I agree with the hiring rule?
\begin{itemize}
\item Strongly agree
\item Agree
\item Neither agree nor disagree
\item Disagree
\item Strongly Disagree
\end{itemize}
\item Please explain your opinion on the hiring rule. [short answer]
\item Was there anything about this survey that was hard to understand or answer? [short answer]
\end{enumerate}
\paragraph{Equal Opportunity - FPR.}
Recruit-a-matic uses the following rule to determine whether Sales-a-lot’s hiring decisions were fair:
\emph{The fraction of unqualified male candidates who receive job offers should equal the fraction of unqualified female candidates who receive job offers.}
Example 1: Suppose that over the past year, Recruit-a-matic finds that Sales-a-lot received the following unqualified applicants (10 female and 12 male).
\includegraphics[height=1in]{illustrations/fpr/example_1_POOL.png}
If Sales-a-lot sent job offers to the following number of unqualified applicants (5 female and 6 male), then this would be fair according to the hiring rule (note that there are other possible outcomes that are fair according to the hiring rule).
\includegraphics[height=1in]{illustrations/fpr/example_1b_offer.png}
Example 2: Suppose that over the past year, Recruit-a-matic finds that Sales-a-lot reviewed a total of 100 unqualified applicants as follows (40 female and 60 male).
\includegraphics[height=2in]{illustrations/fpr/example_2_POOL.png}
If Sales-a-lot sent job offers to the following number of unqualified applicants (10 female and 15 male), then this would be fair according to the hiring rule (note that there are other possible outcomes that are fair according to the hiring rule).
\includegraphics[height=1in]{illustrations/fpr/example_2b_offer.png}
Note that in the above examples the remaining unqualified applicants did not receive job offers, but are not displayed here.
In the next section, we will ask you some questions about the information you have just read. Please note that this is not a test of your abilities. We want to measure the quality of the description you read, not your ability to take tests or answer questions.
\textbf{Please note that we ask you to apply and use ONLY the above hiring rule when answering the following questions. You will have an opportunity to state your opinions and feelings on the rule later in the survey.}
\begin{enumerate}
\setcounter{enumi}{2}
\item Suppose a different company considered applicants for a different job. There were 200 unqualified female applicants and 100 unqualified male applicants,
\includegraphics[height=1.3in]{illustrations/fpr/q3_POOL.png}
and they did send job offers to 10 unqualified male applicants.
\includegraphics[height=1.3in]{illustrations/fpr/q3_male_offer.png}
Assuming that Recruit-a-matic reviews their decisions using the hiring rule above, how many unqualified female applicants should have received job offers?
\begin{enumerate}
\item 10
\includegraphics[height=1.3in]{illustrations/fpr/q3_a_offer.png}
\item \correct{20}
\includegraphics[height=1.3in]{illustrations/fpr/q3_b_offer.png}
\item 40
\includegraphics[height=1.3in]{illustrations/fpr/q3_c_offer.png}
\item 50
\includegraphics[height=1.3in]{illustrations/fpr/q3_d_offer.png}
\end{enumerate}
\item Assuming Recruit-a-matic reviews decisions using the hiring rule above, in which of these cases could Sales-a-lot have accepted more unqualified female applicants than unqualified male applicants?
\begin{enumerate}
\item \correct{When there are more unqualified female applicants than unqualified male applicants (i.e., more women had low net sales at the end of the year).}
\item When there are more female applicants than male applicants.
\item When female applicants receive worse interview scores than male applicants.
\item This cannot happen under the hiring rule.
\end{enumerate}
\item Consider one male applicant and one female applicant, both of whom are similarly qualified for the job (they achieve about the same net sales at the end of their first year). Is the following statement \correct{TRUE} OR FALSE: The hiring rule above allows Sales-a-lot to make a job offer to one of these applicants and not the other.
\item Consider a situation where all female applicants were unqualified (they all achieve low net sales at the end of their first year), but some of them received job offers. Is the following statement \correct{TRUE} OR FALSE: The hiring rule above requires that some job offers made to male applicants must have been made to unqualified male applicants.
\item Suppose Sales-a-lot received 100 male and 100 female applicants, and eventually made 10 job offers. Is the following statement TRUE OR \correct{FALSE}: The hiring rule above requires that even if all male applicants were unqualified (they all achieve low net sales at the end of their first year), some of the unqualified males must have received job offers.
\item Is the following statement \correct{TRUE} OR FALSE: The hiring rule above always allows Sales-a-lot to send job offers only to the most qualified applicants (those who achieve high net sales at the end of their first year).
\end{enumerate}
Consider a different scenario than the two examples above, with 6 unqualified applicants -- 4 female and 2 male, as illustrated below. The next three questions each give a different potential outcome for all 6 applicants (i.e., which of the 6 applicants receive job offers). Please indicate which of the outcomes follow the hiring rule above.
\includegraphics[height=0.4in]{illustrations/fpr/q9-11_POOL.png}
\begin{enumerate}
\setcounter{enumi}{8}
\item Sales-a-lot makes the following hiring decisions.
\includegraphics[height=0.4in]{illustrations/fpr/q9_offer.png}
Do these decisions obey the hiring rule? \correct{Yes}
\item Sales-a-lot makes the following hiring decisions.
\includegraphics[height=0.4in]{illustrations/fpr/q10_offer.png}
Do these decisions obey the hiring rule? \correct{No}
\item Sales-a-lot makes the following hiring decisions.
\includegraphics[height=0.4in]{illustrations/fpr/q11_offer.png}
Do these decisions obey the hiring rule? \correct{No}
\item In your own words, explain the hiring rule. [short answer] [The rule is not shown above this question]
\item To what extent do you agree with the following statement: I am confident I know how to apply the hiring rule described above?
\begin{itemize}
\item Strongly agree
\item Agree
\item Neither agree nor disagree
\item Disagree
\item Strongly Disagree
\end{itemize}
\item Please select the choice that best describes your experience: When I answered the previous questions...
\begin{enumerate}
\item I applied the provided hiring rule only.
\item I used a combination of the provided hiring rule and my own ideas of what the correct hiring rule should be.
\item I used only my own ideas of what the correct hiring decision should be rather than the provided hiring rule.
\end{enumerate}
\item To what extent do you agree with the following statement: I like the hiring rule?
\begin{itemize}
\item Strongly agree
\item Agree
\item Neither agree nor disagree
\item Disagree
\item Strongly Disagree
\end{itemize}
\item To what extent do you agree with the following statement: I agree with the hiring rule?
\begin{itemize}
\item Strongly agree
\item Agree
\item Neither agree nor disagree
\item Disagree
\item Strongly Disagree
\end{itemize}
\item Please explain your opinion on the hiring rule. [short answer]
\item Was there anything about this survey that was hard to understand or answer? [short answer]
\end{enumerate}
\paragraph{Equalized Odds.}
Recruit-a-matic uses the following rule to determine whether Sales-a-lot’s hiring decisions were fair:
\emph{The fraction of qualified male candidates who do not receive job offers should equal the fraction of qualified female candidates who do not receive job offers. Similarly, the fraction of unqualified male candidates who receive job offers should equal the fraction of unqualified female candidates who receive job offers.}
Example 1: Suppose that over the past year, Recruit-a-matic finds that Sales-a-lot received the following qualified applicants (10 female and 12 male) and unqualified applicants (10 female and 12 male).
\includegraphics[height=1in]{illustrations/eo/example_1_POOL.png}
If Sales-a-lot did send offers to the following number of unqualified applicants (left, 5 female and 6 male), and did not send job offers to the following number of qualified applicants (right, 5 female and 6 male), then this would be fair according to the hiring rule (note that there are other possible outcomes that are fair according to the hiring rule).
\includegraphics[height=1in]{illustrations/eo/example_1b_offer.png}
\space\space\space
\includegraphics[height=1in]{illustrations/eo/example_1b_no_offer.png}
Example 2: Suppose that over the past year, Recruit-a-matic finds that Sales-a-lot reviewed a total of 100 qualified applicants (40 female and 60 male) and 100 unqualified applicants (40 female and 60 male).
\includegraphics[height=2in]{illustrations/eo/example_2_POOL.png}
If Sales-a-lot did send offers to the following number of unqualified applicants (left, 10 female and 15 male), and did not send job offers to the following number of qualified applicants (right, 10 female and 15 male), then this would be fair according to the hiring rule (note that there are other possible outcomes that are fair according to the hiring rule).
\includegraphics[height=1in]{illustrations/eo/example_2b_offer.png}
\space\space\space
\includegraphics[height=1in]{illustrations/eo/example_2b_no_offer.png}
Note that in the above examples the remaining unqualified applicants did not receive job offers, but are not displayed here. Similarly, the remaining qualified applicants received job offers, but are not displayed here.
In the next section, we will ask you some questions about the information you have just read. Please note that this is not a test of your abilities. We want to measure the quality of the description you read, not your ability to take tests or answer questions.
\textbf{Please note that we ask you to apply and use ONLY the above hiring rule when answering the following questions. You will have an opportunity to state your opinions and feelings on the rule later in the survey.}
\begin{enumerate}
\setcounter{enumi}{2}
\item Suppose a different company considered applicants for a different job. There were 200 qualified female applicants and 100 qualified male applicants,
\includegraphics[height=1.2in]{illustrations/eo/q3_POOL.png}
and they did not send job offers to 90 qualified male applicants.
\includegraphics[height=1.3in]{illustrations/eo/q3_male_no_offer.png}
Assuming that Recruit-a-matic reviews their decisions using the hiring rule above, how many qualified female applicants should not have received job offers?
\begin{enumerate}
\item 190
\includegraphics[height=1.3in]{illustrations/eo/q3_a_no_offer.png}
\item \correct{180}
\includegraphics[height=1.3in]{illustrations/eo/q3_b_no_offer.png}
\item 160
\includegraphics[height=1.3in]{illustrations/eo/q3_c_no_offer.png}
\item 150
\includegraphics[height=1.3in]{illustrations/eo/q3_d_no_offer.png}
\end{enumerate}
\item Assuming Recruit-a-matic reviews decisions using the hiring rule above, in which of these cases could Sales-a-lot have accepted more unqualified female applicants than unqualified male applicants?
\begin{enumerate}
\item \correct{When there are more unqualified female applicants than unqualified male applicants (i.e., more women had low net sales at the end of the year).}
\item When there are more female applicants than male applicants.
\item When female applicants receive worse interview scores than male applicants.
\item This cannot happen under the hiring rule.
\end{enumerate}
\item Consider one male applicant and one female applicant, both of whom are similarly qualified for the job (they achieve about the same net sales at the end of their first year). Is the following statement \correct{TRUE} OR FALSE: The hiring rule above allows Sales-a-lot to make a job offer to one of these applicants and not the other.
\item Consider a situation where all female applicants were unqualified (they all achieve low net sales at the end of their first year), but some of them received job offers. Is the following statement \correct{TRUE} OR FALSE: The hiring rule above requires that some job offers made to male applicants must have been made to unqualified male applicants.
\item Suppose Sales-a-lot received 100 male and 100 female applicants, and eventually made 10 job offers. Is the following statement TRUE OR \correct{FALSE}: The hiring rule above requires that even if all male applicants were unqualified (they all achieve low net sales at the end of their first year), some of the unqualified males must have received job offers.
\item Is the following statement \correct{TRUE} OR FALSE: The hiring rule above always allows Sales-a-lot to send job offers only to the most qualified applicants (those who achieve high net sales at the end of their first year).
\end{enumerate}
Consider a different scenario than the two examples above, with 6 qualified applicants -- 4 female and 2 male; and 6 unqualified applicants -- 4 female and 2 male. The next three questions each give a different potential outcome for the applicants (i.e., which of the applicants did or did not receive job offers). Please indicate which of the outcomes follow the hiring rule above.
\includegraphics[height=0.4in]{illustrations/eo/q9-11_POOL.png}
\begin{enumerate}
\setcounter{enumi}{8}
\item Sales-a-lot makes the following hiring decisions.
\includegraphics[height=0.4in]{illustrations/eo/q9_offer.png}
\space\space\space
\includegraphics[height=0.4in]{illustrations/eo/q9_no_offer.png}
Do these decisions obey the hiring rule? \correct{Yes}
\item Sales-a-lot makes the following hiring decisions.
\includegraphics[height=0.4in]{illustrations/eo/q10_offer.png}
\space\space\space
\includegraphics[height=0.4in]{illustrations/eo/q10_no_offer.png}
Do these decisions obey the hiring rule? \correct{No}
\item Sales-a-lot makes the following hiring decisions.
\includegraphics[height=0.4in]{illustrations/eo/q11_offer.png}
\space\space\space
\includegraphics[height=0.4in]{illustrations/eo/q11_no_offer.png}
Do these decisions obey the hiring rule? \correct{No}
\item In your own words, explain the hiring rule. [short answer] [The rule is not shown above this question]
\item To what extent do you agree with the following statement: I am confident I know how to apply the hiring rule described above?
\begin{itemize}
\item Strongly agree
\item Agree
\item Neither agree nor disagree
\item Disagree
\item Strongly Disagree
\end{itemize}
\item Please select the choice that best describes your experience: When I answered the previous questions...
\begin{enumerate}
\item I applied the provided hiring rule only.
\item I used a combination of the provided hiring rule and my own ideas of what the correct hiring rule should be.
\item I used only my own ideas of what the correct hiring decision should be rather than the provided hiring rule.
\end{enumerate}
\item To what extent do you agree with the following statement: I like the hiring rule?
\begin{itemize}
\item Strongly agree
\item Agree
\item Neither agree nor disagree
\item Disagree
\item Strongly Disagree
\end{itemize}
\item To what extent do you agree with the following statement: I agree with the hiring rule?
\begin{itemize}
\item Strongly agree
\item Agree
\item Neither agree nor disagree
\item Disagree
\item Strongly Disagree
\end{itemize}
\item Please explain your opinion on the hiring rule. [short answer]
\item Was there anything about this survey that was hard to understand or answer? [short answer]
\end{enumerate}
\subsection{Model Selection} \label{app:b:model_selection}
In \S\ref{results:b:edu} we assessed eleven linear regression models for predicting comprehension scores. The best fit model, determined by model selection via AIC, included only education (edu) and fairness definition (def) as regressors. The results of model selection are below in Table \ref{tab:AIC}.
\begin{table}[bht]
\centering
\caption{\label{tab:AIC} Models tested in \S\ref{results:b:edu}, sorted by best to least fit. The first model in the table (edu + def) is the model of best fit. dAIC = difference from model with lowest AIC value.}
\vspace{7pt}
{\small
\begin{tabular}{@{}lrr@{}}
\toprule
\textbf{Model regressors} & \textbf{AIC} & \textbf{dAIC} \\
\midrule
edu + def & -80.4 & 0 \\
edu & -72.8 & 7.6 \\
gender + edu & -70.3 & 10.1 \\
age + edu & -63.7 & 16.7 \\
gender + age + edu & -61.1 & 19.2 \\
gender + age + eth + edu + def & -61.1 & 19.2 \\
def & -60.8 & 19.6 \\
gender + age + eth + edu & -55.5 & 24.9 \\
gender + age + def & -46.4 & 34.0 \\
gender + age + eth + def & -41.6 & 38.8 \\
gender + age + eth & -37.2 & 43.2 \\
\bottomrule
\end{tabular}%
}
\vspace{-10pt}
\end{table}
\subsection{Non-Compliance} \label{app:b:compliance}
In \S\ref{results:b:non-comp} we sought to further investigate the findings of Study-1{} with regards to compliance (Q14). To do so, we labeled those who responded (in Study-2{}) with either having used their own personal notions of fairness ($n=26$) or some combination of their personal notions and the rule ($n=148$) as ``non-compliant" (NC), with the remaining $n=174$ labeled as ``compliant" (C). One participant who did not provide a response was excluded from this analysis, which was conducted using KW and $\chi^2$ tests.
Non-compliant participants were less likely to self-report high understanding of the rule in Q13 %
(KW test, $p<0.001$, see Fig. \ref{fig:studyB_nc_q13q14}). Moreover, non-compliance also appears to be associated with a reduced ability to correctly explain the rule in Q12 %
($\chi^2$ test, $p<0.001$, see Fig. \ref{fig:studyB_nc_q12q14}). This fits with the overall strong relationship we observed among comprehension scores, %
ability to explain the rule, and compliance.
Further, greater dislike towards the rule (Q15) also appears to be associated with greater compliance %
(KW test, $p<0.05$, see Fig. \ref{fig:studyB_nc_q15q14}). %
However, there was no relationship between disagreement towards the rule (Q16) and compliance (see Fig. \ref{fig:studyB_nc_q16q14}).
These results largely corroborate the notion that non-compliant participants appear to behave this way because they do not \emph{understand} the rule, rather than because they do not \emph{like} it. %
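
A minimal sketch of the $\chi^2$ test on a compliance-by-explanation contingency table (Python/SciPy; the cell counts are fabricated for illustration only) is:
\begin{verbatim}
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table: rows = compliance (C, NC);
# columns = Q12 code (none, incorrect, neither, partial, correct).
table = np.array([
    [10, 15, 12, 60, 77],   # compliant
    [20, 30, 40, 50,  8],   # non-compliant
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.1f}, dof={dof}, p={p:.4f}")
\end{verbatim}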
\begin{figure}[h]
\centering
\includegraphics[width=0.8\columnwidth]{fig_studyB/nc_q13q14.png}
\vspace{-15pt}
\caption{Self-report of understanding (Q13) split by compliance (Q14). NC participants tend to report less confidence in their ability to apply the rule. SD = strongly disagree, D = disagree, N = neither agree nor disagree, A = agree, SA = strongly agree.}
\label{fig:studyB_nc_q13q14}
\vspace{-5pt}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.8\columnwidth]{fig_studyB/nc_q12q14.png}
\vspace{-15pt}
\caption{Correctness of rule explanation (Q12) split by compliance (Q14). NC participants tend to be less able to explain the presented rule in their own words. NA = none, I = incorrect, N = neither, PC = partially correct, C = correct.}
\label{fig:studyB_nc_q12q14}
\vspace{-5pt}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.8\columnwidth]{fig_studyB/nc_q15q14.png}
\vspace{-15pt}
\caption{Participant liking for rule (Q15) split by compliance (Q14). NC participants tend to dislike the rule less than C participants. SD = strongly disagree, D = disagree, N = neither agree nor disagree, A = agree, SA = strongly agree.}
\label{fig:studyB_nc_q15q14}
\vspace{-5pt}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.8\columnwidth]{fig_studyB/nc_q16q14.png}
\vspace{-15pt}
\caption{Participant agreement with rule (Q16) split by compliance (Q14). No differences were found between NC and C participants. SD = strongly disagree, D = disagree, N = neither agree nor disagree, A = agree, SA = strongly agree.}
\label{fig:studyB_nc_q16q14}
\vspace{-5pt}
\end{figure}
\subsection{Our Survey Effectively Captures Rule Comprehension} \label{results:1:rq1}
We find that our survey is internally consistent, and effectively measures participant comprehension of demographic parity. The former we evaluated using Cronbach's $\alpha$ and item-total correlation (discussed in \S\ref{results:a:validity}), and the latter using two self-report measures and one free response question.
See Fig.~\ref{fig:question_breakdown} for participant performance per question.
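For reference, Cronbach's $\alpha$ for $k$ items is $\alpha = \frac{k}{k-1}\bigl(1 - \sum_{i=1}^{k} \sigma^2_{Y_i} / \sigma^2_X\bigr)$, where $\sigma^2_{Y_i}$ is the variance of item $i$ and $\sigma^2_X$ is the variance of participants' total scores. A minimal sketch of the computation (Python/NumPy, on a fabricated item-response matrix; assumes nonzero total-score variance) is:
\begin{verbatim}
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_participants, k_items) matrix of 0/1 item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 6 participants x 5 binary items.
X = np.array([
    [1, 1, 1, 0, 1],
    [1, 0, 1, 1, 1],
    [0, 0, 1, 0, 0],
    [1, 1, 0, 1, 1],
    [0, 1, 0, 0, 1],
    [1, 1, 1, 1, 0],
])
print(f"alpha = {cronbach_alpha(X):.2f}")
\end{verbatim}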
\begin{figure}[h]
\centering
\includegraphics[width=0.8\columnwidth]{fig/question_breakdown.png}
\vspace{-10pt}
\caption{Number of participants answering each question correctly. Each panel contains all 147 participants.
}
\label{fig:question_breakdown}
\vspace{-5pt}
\end{figure}
\subsubsection{Self-reported rule understanding and use are reflected in comprehension score}
First, we compared comprehension score to self-reported rule understanding (Q13). Higher comprehension scores were associated with greater confidence in understanding (Spearman's rho), suggesting that participants were accurately assessing their ability to apply the rule (see Fig. \ref{fig:q13}).
\begin{figure}[ht]
\centering
\includegraphics[width=0.8\columnwidth]{fig/q13.png}
\vspace{-10pt}
\caption{Comprehension score grouped by response to Q13. Self-reported understanding of the rule was associated with higher comprehension scores. X-axis is reversed for figure and correlation test.}
\label{fig:q13}
\end{figure}
Next, we compared comprehension score to a self-report question about the participant's use of the rule (Q14).
Participants who claimed to use only the rule tended to score higher than those who used their own notions of fairness or a combination thereof (K-W test, and post-hoc M-WU), suggesting that participants are answering somewhat honestly: when they try to apply the rule, comprehension scores improve (see Fig.~\ref{fig:q14}).
\begin{figure}[ht]
\centering
\includegraphics[width=0.8\columnwidth]{fig/q14.png}
\vspace{-10pt}
\caption{Comprehension score grouped by response to Q14. Rule compliance (leftmost on the x-axis) was associated with higher comprehension scores. One participant who did not provide a response was excluded from the figure and relevant analysis.}
\label{fig:q14}
\end{figure}
\subsubsection{Participants with higher comprehension scores are better able to explain the rule}
To further validate our comprehension score, we asked participants to explain the rule in their own words (Q12). %
Responses were qualitatively coded as one of five categories: \textbf{correct}, \textbf{partially correct}, \textbf{neither}, \textbf{incorrect}, or \textbf{none} (as discussed in \S\ref{results:a:validity}). The results of this coding can be seen in Fig. \ref{fig:q12}. Participants providing correct explanations of the rule attained higher comprehension scores (K-W test, and post-hoc M-WU), further corroborating our claim that our comprehension score is a valid measure of fairness rule comprehension.
\begin{figure}[ht]
\centering
\includegraphics[width=0.8\columnwidth]{fig/q12.png}
\vspace{-10pt}
\caption{Comprehension score grouped by code assigned to Q12 response. Participants who provided either correct or partially correct responses tended to perform better.}
\label{fig:q12}
\vspace{-10pt}
\end{figure}
\subsection{Education Influences Comprehension} \label{results:1:rq2}
During the cognitive interview phase, we observed a possible trend of comprehension scores being lower for older participants and those with less educational attainment. If true, this would suggest that fairness explanations should be carefully validated to ensure they can be used with diverse populations. We investigated this hypothesis, in an exploratory fashion, using Poisson regression models.
Three models were tested. The first regressed score against all four demographic categories as predictors (gender, age, ethnicity, and education), the second omitted education, and the third tested only education. Models were compared using Akaike information criterion (AIC), a standard method of evaluating model quality and performing model selection \cite{akaike1974}. Comparison by AIC revealed that model 1 (all four categories) was a better predictor for comprehension score than models 2 or 3 (AIC = 643.3, 651.2, and 660.5, respectively; difference = 0.0, 7.9, and 17.1).
In model 1, only education showed correlation with comprehension score (effect size = $1.40$, $p<0.05$). %
Further work is needed to confirm this exploratory result.
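Because each comprehension score can be viewed as a count of correctly answered items, a Poisson regression is a natural model here. The sketch below (Python with pandas and statsmodels; the data values and column names are illustrative, not our survey data) shows the model-fitting and AIC-extraction steps:
\begin{verbatim}
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: score as a count of correct answers.
df = pd.DataFrame({
    "n_correct": [9, 7, 4, 8, 5, 10, 6, 3],
    "edu":       ["BS", "HS", "HS", "BS", "PS", "BS", "PS", "HS"],
    "age":       [25, 40, 63, 31, 52, 28, 45, 59],
})

# Poisson GLM with education (dummy-coded) and age as covariates.
model = smf.poisson("n_correct ~ C(edu) + age", data=df).fit()
print(model.params)
print(f"AIC = {model.aic:.1f}")
\end{verbatim}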
\begin{figure}[ht]
\centering
\includegraphics[width=0.8\columnwidth]{fig/edu.png}
\vspace{-10pt}
\caption{Comprehension score grouped by education level. Higher education level was associated with higher comprehension scores.}
\label{fig:edu}
\vspace{-10pt}
\end{figure}
\subsection{Disagreement with the Rule is Associated with Higher Comprehension Scores} \label{results:1:rq3}
Participants were asked for their opinion on the presented rule in another free response question (Q15). These responses were then qualitatively coded to capture participant sentiment towards the rule as one of five categories: \textbf{agree}, \textbf{depends}, \textbf{disagree}, \textbf{not understood}, or \textbf{none} (as discussed in \S\ref{results:a:hypotheses}).
\begin{figure}[ht]
\centering
\includegraphics[width=0.8\columnwidth]{fig/q15.png}
\caption{Comprehension score grouped by code assigned to Q15 response. Participants who exhibited negative sentiment toward the rule tended to perform better.}
\label{fig:q15}
\vspace{-10pt}
\end{figure}
This question was added based on the cognitive interviews (see \Appref{methods:design:cog}), where perception seemed to influence compliance.
The results of coding Q15 can be seen in Fig. \ref{fig:q15}. Participants who expressed disagreement with the rule performed better than those who expressed agreement, did not understand the rule, or provided no response to the question (K-W test, post-hoc M-WU). Note that this result should not be interpreted as an overall finding on the appropriateness of demographic parity. Instead we anticipate the perceptions of appropriateness of any fairness definition will be highly context-dependent.
\subsection{Non-Compliance is Associated with Lack of Understanding} \label{results:a:non-comp}
We were interested in understanding why some participants failed to adhere to the rule, as measured by their self-report of rule usage in Q14.
After labeling participants as either ``non-compliant" (NC, $n=57$) or ``compliant" (C, $n=89$), we conducted a series of $\chi^2$ tests to investigate this phenomenon.
Non-compliant participants were less likely to self-report high understanding of the rule in Q13 (see Fig. \ref{fig:q13q14}). %
Moreover, non-compliance also appears to be associated with a reduced ability to correctly explain the rule in Q12 (see Fig. \ref{fig:q12q14}). %
Further, negative participant sentiment towards the rule (Q15) also appears to be associated with greater compliance (see Fig. \ref{fig:q15q14}). %
Thus, non-compliant participants appear to behave this way because they do not \emph{understand} the rule, rather than because they do not \emph{like} it.
\begin{figure}[h]
\centering
\includegraphics[width=0.8\columnwidth]{fig/nc_q13q14.png}
\vspace{-15pt}
\caption{Self-report of understanding (Q13) split by compliance (Q14). NC participants tend to report less confidence in their ability to apply the rule. SD = strongly disagree, D = disagree, N = neither agree nor disagree, A = agree, SA = strongly agree.}
\label{fig:q13q14}
\vspace{-5pt}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.8\columnwidth]{fig/nc_q12q14.png}
\vspace{-15pt}
\caption{Correctness of rule explanation (Q12) split by compliance (Q14). NC participants tend to be less able to explain the presented rule in their own words. NA = none, I = incorrect, N = neither, PC = partially correct, C = correct.}
\label{fig:q12q14}
\vspace{-5pt}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.8\columnwidth]{fig/nc_q15q14.png}
\vspace{-15pt}
\caption{Participant agreement with rule (Q15) split by compliance (Q14). NC participants tend to harbor less negative sentiment towards the rule. NA = none, NU = not understood, D = disagree, De = depends, A = agree.}
\label{fig:q15q14}
\vspace{-5pt}
\end{figure}
\section{Methods}
\subsection{Cognitive Interviews} \label{methods:design:cog}
We recruited $9$ participants from the DC Metropolitan area using Craigslist. We required participants to be over 18 years of age and fluent in English. Participants ranged in age from 20 to 66.
These interviews took place on the University of Maryland campus and lasted about $1$ hour. All participants signed a written consent form prior to the interview, and were paid \$30 for their time.
During these interviews, participants completed a preliminary version of the survey used in Study-1{}.
After each survey question, we asked the participants several interview questions related to their comprehension of and feelings toward the survey. We found that some participants tended to use their own personal notions of fairness when answering comprehension questions rather than using the definition we provided. We were concerned that this would limit our ability to effectively measure comprehension. To address this problem, we rewrote several parts of our survey and added two new questions (Q14 and Q15).
\subsection{Non-Expert Verification}
We designed this study to assess \emph{non-expert} understanding and opinions of ML fairness metrics. To this end, we asked respondents to self-rate their level of expertise in a variety of fields, including ML, at the end of the survey (see \Appref{app:demographics}). A number of participants did report having ``expert" level experience in ML ($n = 2$ out of 147 in Study-1{}, and $n = 15$ out of 349 in Study-2{}). We considered removing these participants from the analyses, but ultimately did not, because in both studies there was no relationship between self-reported ML expertise and comprehension score (Spearman's rho).
\section{Study-1{}: Detailed Results} \label{app:studyA_results}
\input{studyA_results}
\input{studyA_scenario_analysis}
\section{Study-2{}: Detailed Results}
\input{studyB_details}
\section{Surveys} \label{app:survey}
\subsection{Study-1{} Survey} \label{app:surveyA}
Each of the surveys is split into four main sections. The first section is the consent form, which can be found in Appendix~\ref{app:consent}. The second section describes the scenario and asks questions about it (\S\ref{app:scenarios}). The third section describes the fairness metric used, referred to as the rule (in this case, demographic parity), and asks specific questions about the metric (\S\ref{app:rules}). Finally, the last section asks for demographic information (\S\ref{app:demographics}).
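For reference, one way to state this rule formally (our notation; participants saw only the prose descriptions below): writing $S$ for the set of selected individuals (award or offer recipients), $G_g$ for the set of individuals in group $g$, and $N$ for the size of the whole pool, demographic parity requires
\[
\frac{|S \cap G_g|}{|S|} \;=\; \frac{|G_g|}{N} \qquad \text{for each group } g.
\]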
\subsubsection{Scenario descriptions and questions}\label{app:scenarios}
The following is shown to each participant:
It is very important that you read each question carefully and think about your answers. The success of our research relies on our respondents being thoughtful and taking this task seriously.
\begin{itemize}
\item[\text{\fboxsep=-.15pt\fbox{\rule{0pt}{1.5ex}\rule{1.5ex}{0pt}}}] I have read the above instructions carefully.
\end{itemize}
We then introduce one of three different decision-making scenarios, described below, followed by two questions. Words that vary across scenarios in the questions are shown as $<$art project, employee awards, hiring$>$.
\paragraph{Art project}
A fourth grade teacher is reviewing 20 student art projects. They will award lollipops to the top 4 students who put the most effort into their projects. The teacher knows that some of the students have artists as parents, who might have helped their children with their art projects. The teacher’s goal is to give out lollipops based only on the amount of effort that the students \emph{themselves} put into their projects.
The teacher uses the following criteria to decide who should get a lollipop:
\begin{itemize}
\item Elaborateness of each project.
\item Creativity of each project.
\end{itemize}
About 50\% of the students have artists as parents, and 50\% do not.
In the past, students with artists as parents typically put more effort into their projects.
In this group of students there is a wide range of project quality (as measured by elaborateness and creativity). However, this range of quality is about the same between students with artists as parents and those without.
The teacher wants to make sure that they award lollipops in a fair way, no matter whether the students’ parents are artists or not.
\paragraph{Employee awards}
A manager at a sales company is deciding which of their 100 employees should receive each of 10 mid-year awards. The manager’s goal is to give awards to employees who \emph{will} have high net sales at the end of the year.
The manager uses the following criteria to decide who should get an award:
\begin{itemize}
\item Recent performance reviews
\item Mid-year net sales
\item Number of years on the job
\end{itemize}
About 50\% of the employees are men, and 50\% are women.
In the past, men have achieved higher end-of-year net sales than women.
In this group of employees, there is a wide range of qualifications (as measured by performance reviews, mid-year net sales, and number of years on the job). However, this range of qualifications is about the same between male and female employees.
The manager wants to make sure that this awards process is fair to the employees, no matter their gender.
\paragraph{Hiring}
A hiring manager at a new sales company is reviewing 100 new job applications. Each applicant has submitted a resume, and has had an interview. The manager will send job offers to 10 out of the 100 applicants. Their goal is to make offers to applicants who will have high net sales after a year on the job.
The manager will use the following to decide which applicants should receive job offers:
\begin{itemize}
\item Interview scores
\item Quality of recommendation letters
\item Number of years of prior experience in the field
\end{itemize}
About 50\% of the applicants are men, and 50\% are women.
In the past, men have achieved higher net sales than women, after one year on the job.
In this applicant pool there is a wide range of applicant quality (as measured by interview scores, recommendation letters, and years of prior experience in the field). However, the range of quality is about the same for both male and female applicants.
The hiring manager wants to make sure that this hiring process is fair to applicants, no matter their gender.
\paragraph{Questions}
\begin{enumerate}
\item To what extent do you agree with the following statement: a scenario similar to the one described above might occur in real life.
\begin{itemize}
\item Strongly agree
\item Agree
\item Neither agree nor disagree
\item Disagree
\item Strongly Disagree
\end{itemize}
\item How much effort should the $<$teacher, manager, hiring manager$>$ put in to make sure this decision is fair? [short answer - number of hours]
\end{enumerate}
\subsubsection{Rule descriptions and questions}\label{app:rules}
Unless otherwise noted, the rule description is shown above each of the questions for reference. Correct answers are noted in \correct{red}.
\paragraph{Art project}
The teacher uses the following award rule to distribute lollipops: \emph{The fraction of students who receive lollipops that have artist parents should equal the fraction of students in the class that have artist parents. Similarly, the fraction of students who receive lollipops that do not have artist parents should equal the fraction of students in the class that do not have artist parents.}
Example 1: If 10 out of the 20 students in the class have artist parents, then 2 out of the 4 lollipops would be awarded to students with artist parents (and the remaining 2 would be awarded to students without artist parents).
Example 2: If 5 out of the 20 students in the class have artist parents, then 1 out of the 4 lollipops would be awarded to students with artist parents (and the remaining 3 would be awarded to students without artist parents).
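As a minimal illustration of the arithmetic behind these two examples (this sketch is ours and was not shown to participants; the function and variable names are invented), the award rule can be checked mechanically:
\begin{verbatim}
# Checks demographic parity: each group's share of lollipops must equal
# its share of the class. Illustrative code only.
from fractions import Fraction

def satisfies_rule(group_sizes, group_awards):
    total = sum(group_sizes.values())
    awards = sum(group_awards.values())
    return all(
        Fraction(group_awards[g], awards) == Fraction(group_sizes[g], total)
        for g in group_sizes
    )

# Example 1: 10 of 20 students have artist parents, 2 of 4 lollipops to them.
print(satisfies_rule({"artist": 10, "non": 10}, {"artist": 2, "non": 2}))  # True

# Example 2: 5 of 20 students have artist parents, 1 of 4 lollipops to them.
print(satisfies_rule({"artist": 5, "non": 15}, {"artist": 1, "non": 3}))   # True
\end{verbatim}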
In the next section, we will ask you some questions about the information you have just read. Please note that this is not a test of your abilities. We want to measure the quality of the description you read, not your ability to take tests or answer questions.
\textbf{Please note that we ask you to apply and use ONLY the above award rule when answering the following questions. You will have an opportunity to state your opinions and feelings on the rule later in the survey.}
\begin{enumerate}
\setcounter{enumi}{2}
\item Suppose a different teacher is considering awarding lollipops to the whole 4th grade. There are 100 students with artist parents, and 200 students without artist parents. The teacher decides to award 10 lollipops to students with artist parents. \textbf{Assuming the teacher is required to use the award rule above}, how many students without artist parents need to receive lollipops?
\begin{enumerate}
\item 10
\item \correct{20}
\item 40
\item 50
\end{enumerate}
\item \textbf{Assuming the teacher is required to use the award rule above}, in which of these cases can a teacher award more lollipops to students without artist parents than to students with artist parents?
\begin{enumerate}
\item When the students without artist parents have higher-quality projects (i.e., more elaborate and more creative) than those with artist parents.
\item \correct{When there are more students without artist parents than those with artist parents.}
\item When students without artist parents have more creative projects than those with artist parents.
\item This cannot happen under the award rule.
\end{enumerate}
\item \textbf{Assuming the teacher is required to use the award rule above}, is the following statement \correct{TRUE} OR FALSE: Even if a student with artist parents has a project that is of the same quality (i.e., equally elaborate and equally creative) as another project by a student without artist parents, they can be treated differently (i.e., only one of the students might get a lollipop).
\item \textbf{Assuming the teacher is required to use the award rule above}, is the following statement TRUE OR \correct{FALSE}: If all students without artist parents have low-quality projects (i.e., low elaborateness and low creativity), but the teacher awards lollipops to some of them, then any lollipops awarded to students with artist parents must be awarded to those who have low-quality projects.
\item \textbf{Assuming the teacher is required to use the award rule above}, is the following statement \correct{TRUE} OR FALSE: Suppose the teacher is distributing 10 lollipops amongst a pool of students that includes students with and without artist parents. Even if all students with artist parents have low-quality (i.e., low elaborateness and low creativity) projects, some of them must still receive lollipops.
\item \textbf{Assuming the teacher is required to use the award rule above}, is the following statement TRUE OR \correct{FALSE}: This award rule always allows the teacher to award lollipops exclusively to the students who have the highest quality (i.e., most elaborate and most creative) projects.
\end{enumerate}
In the two examples above there are 20 students. Consider a different scenario, with \textbf{6 students -- 4 with artist parents and 2 without, as illustrated below}. The next three questions each give a potential outcome for all six students (i.e., which of the 6 students receive awards). Please indicate which of the outcomes follow \textbf{the award rule above}.
\vspace{10pt}
\includegraphics[height=1in]{illustrations/total_students.png}
\begin{enumerate}
\setcounter{enumi}{8}
\item Alternative scenario 1:
\vspace{10pt}
\includegraphics[height=1in]{illustrations/case1_accept_reject_students.png}
Does this distribution of awards obey the \textbf{award rule}? \correct{Yes}
\item Alternative scenario 2:
\vspace{10pt}
\includegraphics[height=1in]{illustrations/case2_accept_case3_reject_students.png}
Does this distribution of awards obey the \textbf{award rule}? \correct{No}
\item Alternative scenario 3:
\vspace{10pt}
\includegraphics[height=1in]{illustrations/case2_reject_case3_accept_students.png}
Does this distribution of awards obey the \textbf{award rule}? \correct{No}
\end{enumerate}
\begin{enumerate}
\setcounter{enumi}{11}
\item In your own words, explain the \textbf{award rule}. [short answer] (The rule is not shown above this question)
\item To what extent do you agree with the following statement: I am confident I know how to \textbf{apply the award rule described above}?
\begin{itemize}
\item Strongly agree
\item Agree
\item Neither agree nor disagree
\item Disagree
\item Strongly Disagree
\end{itemize}
\item Please select the choice that best describes your experience: When I answered the previous questions...
\begin{enumerate}
\item I applied the provided award rule only.
\item I used my own ideas of what the correct award decision should be rather than the provided award rule.
\item I used a combination of the provided award rule and my own ideas of what the correct award decision should be.
\end{enumerate}
\item What is your opinion on the award rule? Please explain why. [short answer]
\item Suppose that you are the teacher whose job it is to distribute lollipops to students based on the criteria listed above (i.e., elaborateness of each project, creativity of each project). How would you ensure that this process is fair? [short answer]
\item Was there anything about this survey that was hard to understand or answer? [short answer]
\end{enumerate}
\paragraph{Employee awards}
The manager uses the following award rule to distribute awards: \emph{The fraction of employees who receive awards that are female should equal the fraction of employees that are female. Similarly, the fraction of employees who receive awards that are male should equal the fraction of employees that are male.}
Example 1: If there are 50 female employees out of 100, then 5 out of the 10 awards would be given to female employees (and the remaining 5 would be given to male employees).
Example 2: If there are 30 female employees out of 100, then 3 out of the 10 awards would be given to female employees (and the remaining 7 would be given to male employees).
In the next section, we will ask you some questions about the information you have just read. Please note that this is not a test of your abilities. We want to measure the quality of the description you read, not your ability to take tests or answer questions.
\textbf{Please note that we ask you to apply and use ONLY the above award rule when answering the following questions. You will have an opportunity to state your opinions and feelings on the rule later in the survey.}
\begin{enumerate}
\setcounter{enumi}{2}
\item Suppose a different manager is considering employees for a different award. There are 100 male employees and 200 female employees, and they decide to give awards to 10 male employees. \textbf{Assuming the manager is required to use the award rule above}, how many female employees do they need to give awards to?
\begin{enumerate}
\item 10
\item \correct{20}
\item 40
\item 50
\end{enumerate}
\item \textbf{Assuming the manager is required to use the award rule above}, in which of these cases can a manager give more awards to female employees than to male employees?
\begin{enumerate}
\item When there are more well-qualified female employees than well-qualified male employees (i.e., more women have better performance reviews, higher mid-year net sales, and more years on the job).
\item \correct{When there are more female employees than male employees.}
\item When female employees receive higher performance reviews than male employees.
\item This cannot happen under the award rule.
\end{enumerate}
\item \textbf{Assuming the manager is required to use the award rule above}, is the following statement \correct{TRUE} OR FALSE: Even if a male employee’s qualifications look similar to a female employee’s (in terms of performance reviews, mid-year net sales, and years on the job), he can be treated differently (i.e., only one of the employees gets an award).
\item \textbf{Assuming the manager is required to use the award rule above}, is the following statement TRUE OR \correct{FALSE}: If all female employees are unqualified (i.e., have low performance reviews, low mid-year net sales, and few years on the job), but you give awards to some of them, then awards given to male employees must be made to unqualified male employees.
\item \textbf{Assuming the manager is required to use the award rule above}, is the following statement \correct{TRUE} OR FALSE: Suppose the manager is distributing 10 awards amongst a pool that includes both male and female employees. Even if all male employees are unqualified for an award (i.e., have low performance reviews, low mid-year net sales, and few years on the job), some of them must still receive awards.
\item \textbf{Assuming the manager is required to use the award rule above}, is the following statement TRUE OR \correct{FALSE}: This award rule always allows the manager to distribute awards exclusively to the most qualified employees (i.e., employees with better performance reviews, high mid-year net sales, and high number of years on the job).
\end{enumerate}
In the two examples above there are 100 employees. Consider a different scenario, with \textbf{6 employees -- 4 female and 2 male, as illustrated below}. The next three questions each give a potential outcome for all six employees (i.e., which of the 6 employees receive awards). Please indicate which of the outcomes follow \textbf{the award rule above}.
\vspace{10pt}
\includegraphics[height=1in]{illustrations/total_employees_applicants.png}
\begin{enumerate}
\setcounter{enumi}{8}
\item Alternative scenario 1:
\vspace{10pt}
\includegraphics[height=1in]{illustrations/case1_accept_reject_employees_applicants.png}
Does this distribution of awards obey the \textbf{award rule}? \correct{Yes}
\item Alternative scenario 2:
\vspace{10pt}
\includegraphics[height=1in]{illustrations/case2_accept_employees_applicants.png}
Does this distribution of awards obey the \textbf{award rule}? \correct{No}
\item Alternative scenario 3:
\vspace{10pt}
\includegraphics[height=1in]{illustrations/case2_reject_case3_accept_employees_applicants.png}
Does this distribution of awards obey the \textbf{award rule}? \correct{No}
\end{enumerate}
\begin{enumerate}
\setcounter{enumi}{11}
\item In your own words, explain the \textbf{award rule}. [short answer] (The rule is not shown above this question)
\item To what extent do you agree with the following statement: I am confident I know how to \textbf{apply the award rule described above}?
\begin{itemize}
\item Strongly agree
\item Agree
\item Neither agree nor disagree
\item Disagree
\item Strongly Disagree
\end{itemize}
\item Please select the choice that best describes your experience: When I answered the previous questions...
\begin{enumerate}
\item I applied the provided award rule only.
\item I used my own ideas of what the correct award decision should be rather than the provided award rule.
\item I used a combination of the provided award rule and my own ideas of what the correct award decision should be.
\end{enumerate}
\item What is your opinion on the award rule? Please explain why. [short answer]
\item Suppose that you are the manager whose job it is to distribute mid-year awards to employees based on the criteria listed above (i.e., recent performance reviews, mid-year net sales, number of years on the job). How would you ensure that this process is fair? [short answer]
\item Was there anything about this survey that was hard to understand or answer? [short answer]
\end{enumerate}
\paragraph{Hiring}
The hiring manager uses the following hiring rule to send out offers: \emph{The fraction of applicants who receive job offers that are female should equal the fraction of applicants that are female. Similarly, the fraction of applicants who receive job offers that are male should equal the fraction of applicants that are male.}
Example 1: If there are 50 female applicants out of the 100 applicants, then 5 out of the 10 offers would be made to female applicants (and the remaining 5 would be made to male applicants).
Example 2: If there are 30 female applicants out of the 100 applicants, then 3 out of the 10 offers would be made to female applicants (and the remaining 7 would be made to male applicants).
In the next section, we will ask you some questions about the information you have just read. Please note that this is not a test of your abilities. We want to measure the quality of the description you read, not your ability to take tests or answer questions.
\textbf{Please note that we ask you to apply and use ONLY the above hiring rule when answering the following questions. You will have an opportunity to state your opinions and feelings on the rule later in the survey.}
\begin{enumerate}
\setcounter{enumi}{2}
\item Suppose a different hiring manager is considering applicants for a different job. There are 100 male applicants and 200 female applicants, and they decide to send offers to 10 male applicants. \textbf{Assuming the hiring manager is required to use the hiring rule above}, how many female applicants do they need to send offers to?
\begin{enumerate}
\item 10
\item \correct{20}
\item 40
\item 50
\end{enumerate}
\item \textbf{Assuming the hiring manager is required to use the hiring rule above}, in which of these cases can a hiring manager make more job offers to female applicants than to male applicants?
\begin{enumerate}
\item When there are more well-qualified female applicants than well-qualified male applicants (i.e., more women have higher interview scores, higher quality recommendation letters, and more years of prior experience in the field).
\item \correct{When there are more female applicants than male applicants.}
\item When female applicants receive better interview scores than male applicants.
\item This cannot happen under the hiring rule.
\end{enumerate}
\item \textbf{Assuming the hiring manager is required to use the hiring rule above}, is the following statement \correct{TRUE} OR FALSE: Even if a male applicant’s qualifications look similar to a female applicant’s (in terms of interview scores, recommendation letters, and years of prior experience in the field), he can be treated differently (i.e., only one of the applicants will receive a job offer).
\item \textbf{Assuming the hiring manager is required to use the hiring rule above}, is the following statement TRUE OR \correct{FALSE}: If all female applicants are unqualified (i.e., have low interview scores, low-quality recommendation letters, and few years of prior experience in the field), but you send job offers to some of them, then any job offers made to male applicants must be made to unqualified male applicants.
\item \textbf{Assuming the hiring manager is required to use the hiring rule above}, is the following statement \correct{TRUE} OR FALSE: Suppose the hiring manager is sending out 10 job offers to a pool that includes male and female applicants. Even if all male applicants are unqualified (i.e., have low interview scores, low-quality recommendation letters, and few years of prior experience in the field), some of them must still receive job offers.
\item \textbf{Assuming the hiring manager is required to use the hiring rule above}, is the following statement TRUE OR \correct{FALSE}: This hiring rule always allows the hiring manager to send offers exclusively to the most qualified applicants (i.e., applicants with high interview scores, high-quality recommendation letters, and a high number of years of prior experience in the field).
\end{enumerate}
In the two examples above there are 100 applicants. Consider a different scenario, with \textbf{6 applicants -- 4 female and 2 male, as illustrated below}. The next three questions each give a potential outcome for all six applicants (i.e., which of the 6 applicants receive job offers). Please indicate which of the outcomes follow \textbf{the hiring rule above}.
\vspace{10pt}
\includegraphics[height=1in]{illustrations/total_employees_applicants.png}
\begin{enumerate}
\setcounter{enumi}{8}
\item Alternative scenario 1:
\vspace{10pt}
\includegraphics[height=1in]{illustrations/case1_accept_reject_employees_applicants.png}
Does this distribution of job offers obey the \textbf{hiring rule}? \correct{Yes}
\item Alternative scenario 2:
\vspace{10pt}
\includegraphics[height=1in]{illustrations/case2_accept_employees_applicants.png}
Does this distribution of job offers obey the \textbf{hiring rule}? \correct{No}
\item Alternative scenario 3:
\vspace{10pt}
\includegraphics[height=1in]{illustrations/case2_reject_case3_accept_employees_applicants.png}
Does this distribution of job offers obey the \textbf{hiring rule}? \correct{No}
\end{enumerate}
\begin{enumerate}
\setcounter{enumi}{11}
\item In your own words, explain the \textbf{hiring rule}. [short answer] (The rule is not shown above this question)
\item To what extent do you agree with the following statement: I am confident I know how to \textbf{apply the hiring rule described above}?
\begin{itemize}
\item Strongly agree
\item Agree
\item Neither agree nor disagree
\item Disagree
\item Strongly Disagree
\end{itemize}
\item Please select the choice that best describes your experience: When I answered the previous questions...
\begin{enumerate}
\item I applied the provided hiring rule only.
\item I used my own ideas of what the correct hiring decision should be rather than the provided hiring rule.
\item I used a combination of the provided hiring rule and my own ideas of what the correct hiring decision should be.
\end{enumerate}
\item What is your opinion on the hiring rule? Please explain why. [short answer]
\item Suppose that you are the hiring manager whose job it is to send job offers to applicants based on the criteria listed above (i.e., interview scores, quality of recommendation letters, number of years of prior experience in the field). How would you ensure that this process is fair? [short answer]
\vspace{-5pt}
\item Was there anything about this survey that was hard to understand or answer? [short answer]
\vspace{-5pt}
\end{enumerate}
\subsection{Study-2{}: Survey} \label{app:surveyB}
\input{studyB_survey}
\subsection{Demographic Information}\label{app:demographics}
\begin{enumerate}
\item Please specify the gender with which you most closely identify:
\begin{itemize}
\item Male
\item Female
\item Other
\item Prefer not to answer
\end{itemize}
\item Please specify your year of birth
\item Please specify your ethnicity (you may select more than one):
\begin{itemize}
\item White
\item Hispanic or Latinx
\item Black or African American
\item American Indian or Alaska Native
\item Asian, Native Hawaiian, or Pacific Islander
\item Other
\end{itemize}
\item Please specify the highest degree or level of school you have completed:
\begin{itemize}
\item Some high school credit, no diploma or equivalent
\item High school graduate, diploma or the equivalent (for example: GED)
\item Some college credit, no degree
\item Trade/technical/vocational training
\item Associate’s degree
\item Bachelor’s degree
\item Master’s degree
\item Professional or doctoral degree (JD, MD, PhD)
\end{itemize}
\item How much experience do you have in each of the following areas? (1 - no experience, 2 - limited experience, 3 - significant experience, 4 - expert)
\begin{enumerate}
\item Human resources (making hiring decisions)
\item Management (of employees)
\item Education (teaching)
\item IT infrastructure/systems administration
\item Computer science/programming
\item Machine learning/data science
\end{enumerate}
\end{enumerate}
\textbf{We will maintain the privacy of the information you have provided here. Your information will only be used for data analysis purposes.}
\section{Consent} \label{app:consent}
\subsection{Online Survey Consent Form} \label{app:survey_consent}
\subsubsection{Project Title}
Fairness Evaluation and Comprehension
\subsubsection{Purpose of the Study}
This research is being conducted by Michelle Mazurek at the University of Maryland, College Park. We are inviting you to participate in this research project because you are above 18. The purpose of this research project is to understand lay comprehension of different fairness metrics.
\subsubsection{Procedures}
The procedures will start with reading a brief description of a decision-making scenario. You will then be asked to answer some comprehension questions about the scenario. The questions will look like the following: What are the pros and cons of the notion of fairness described above?
Finally, you will be asked some demographics questions. The entire survey will take approximately 20 minutes or less.
\subsubsection{Potential Risks and Discomforts}
There are several questions to answer over the course of this study, so you may find yourself growing tired towards the end. Outside of this, there are minimal risks to participating in this research study. All data collected in this study will be maintained securely (see Confidentiality section) and will be deleted at the conclusion of the study.
However, if at any time you feel that you wish to terminate your participation for any reason, you are permitted to do so.
\subsubsection{Potential Benefits}
There are no direct benefits from participating in this research. We hope that, in the future, other people might benefit from this study through improved understanding of fairness metrics and their applications.
\subsubsection{Confidentiality}
Any potential loss of confidentiality will be minimized by storing all data (including information such as MTurk IDs and demographics) securely (a) on a password-protected computer located at the University of Maryland, College Park or (b) using a trusted third party (Qualtrics). Personally identifiable information that is collected (MTurk IDs, IP addresses, cookies) will be deleted upon study completion. All other data gathered will be stored for three years post study completion, after which it will be erased.
The only persons that will have access to the data are the Principal Investigator and the Co-Investigators.
If we write a report or article about this research project, your identity will be protected to the maximum extent possible. Your information may be shared with representatives of the University of Maryland, College Park or governmental authorities if you or someone else is in danger or if we are required to do so by law.
\subsubsection{Compensation}
You will receive \$3. You will be responsible for any taxes assessed on the compensation.
If you will earn \$100 or more as a research participant in this study, you must provide your name, address and SSN to receive compensation.
If you do not earn over \$100, only your name and address will be collected to receive compensation.
\subsubsection{Right to Withdraw and Questions}
Your participation in this research is completely voluntary. You may choose not to take part at all. If you decide to participate in this research, you may stop participating at any time. If you decide not to participate in this study or if you stop participating at any time, you will not be penalized or lose any benefits for which you otherwise qualify.
If you decide to stop taking part in the study, if you have questions, concerns, or complaints, or if you need to report an injury related to the research, please contact the investigator:
{\centering
Michelle Mazurek \\
5236 Iribe Center, \\University of Maryland, College Park 20742\\
mmazurek@cs.umd.edu\\
(301) 405-6463\\}
\subsubsection{Participant Rights}
If you have questions about your rights as a research participant or wish to report a research-related injury, please contact:
{\centering
University of Maryland College Park \\
Institutional Review Board Office\\
1204 Marie Mount Hall \\
College Park, Maryland, 20742 \\
E-mail: irb@umd.edu \\
Telephone: 301-405-0678 \\}
\vspace{5pt}
For more information regarding participant rights, please visit:
\url{https://research.umd.edu/irb-research-participants}
This research has been reviewed according to the University of Maryland, College Park IRB procedures for research involving human subjects.
\subsubsection{Statement of Consent}
By agreeing below you indicate that you are at least 18 years of age; you have read this consent form or have had it read to you; your questions have been answered to your satisfaction and you voluntarily agree to participate in this research study.
Please ensure you have made a copy of the above consent form for your records. A copy of this consent form can be found here [link to digital copy].
\begin{itemize}
\item[\text{\fboxsep=-.15pt\fbox{\rule{0pt}{1.5ex}\rule{1.5ex}{0pt}}}] I am age 18 or older
\item[\text{\fboxsep=-.15pt\fbox{\rule{0pt}{1.5ex}\rule{1.5ex}{0pt}}}] I have read this consent form
\item[\text{\fboxsep=-.15pt\fbox{\rule{0pt}{1.5ex}\rule{1.5ex}{0pt}}}] I voluntarily agree to participate in this research study
\end{itemize}
\subsection{Cognitive Interview Consent Form} \label{app:cognitive_consent}
\subsubsection{Project Title}
Fairness Cognitive Interview
\subsubsection{Purpose of the Study}
This research is being conducted by Michelle Mazurek at the University of Maryland, College Park. We are inviting you to participate in this research project because you are above the age of 18, and fluent in English. The purpose of this research project is to understand lay comprehension of different fairness metrics.
\subsubsection{Procedures}
The procedure involves completing an interview. The full procedure will be approximately 1 hour in duration.
During the interview you will be audio recorded, if you agree to be recorded. You will be asked to first read a brief description of a decision-making scenario. You will then be asked to fill out a survey about the scenario. While answering questions you will be asked verbal questions related to how you reached your answer in the survey.
Sample survey question:
Is the following statement true or false? This hiring rule allows the hiring manager to send offers exclusively to the most qualified applicants.
Sample interview question:
How did you reach your answer to that survey question?
\subsubsection{Potential Risks and Discomforts}
There are several questions to answer over the course of this study, so you may find yourself growing tired towards the end. Outside of this, there are minimal risks to participating in this research study. All data collected in this study will be maintained securely (see Confidentiality section) and will be deleted at the conclusion of the study.
However, if at any time you feel that you wish to terminate your participation for any reason, you are permitted to do so.
\subsubsection{Potential Benefits}
There are no direct benefits from participating in this research. We hope that, in the future, other people might benefit from this study through improved understanding of fairness metrics and their applications.
\subsubsection{Confidentiality}
Any potential loss of confidentiality will be minimized by storing all data (including information such as demographics) securely (a) on a password-protected computer located at the University of Maryland, College Park or (b) using a trusted third party (Qualtrics). Personally identifiable information that is collected will be deleted upon study completion. All other data gathered will be stored for three years post study completion, after which it will be erased. The only persons that will have access to the data are the Principal Investigator and the Co-Investigators.
If we write a report or article about this research project, your identity will be protected to the maximum extent possible. Your information may be shared with representatives of the University of Maryland, College Park or governmental authorities if you or someone else is in danger or if we are required to do so by law.
\subsubsection{Compensation}
You will receive \$30. You will be responsible for any taxes assessed on the compensation.
If you will earn \$100 or more as a research participant in this study, you must provide your name, address and SSN to receive compensation.
If you do not earn over \$100 only your name and address will be collected to receive compensation.
\subsubsection{Right to Withdraw and Questions}
Your participation in this research is completely voluntary. You may choose not to take part at all. If you decide to participate in this research, you may stop participating at any time. If you decide not to participate in this study or if you stop participating at any time, you will not be penalized or lose any benefits for which you otherwise qualify.
If you decide to stop taking part in the study, if you have questions, concerns, or complaints, or if you need to report an injury related to the research, please contact the investigator:
{\centering
Michelle Mazurek \\
5236 Iribe Center, \\University of Maryland, College Park 20742\\
mmazurek@cs.umd.edu\\
(301) 405-6463\\}
\subsubsection{Participant Rights}
If you have questions about your rights as a research participant or wish to report a research-related injury, please contact:
{\centering
University of Maryland College Park \\
Institutional Review Board Office\\
1204 Marie Mount Hall \\
College Park, Maryland, 20742 \\
E-mail: irb@umd.edu \\
Telephone: 301-405-0678 \\}
\vspace{5pt}
For more information regarding participant rights, please visit:
\url{https://research.umd.edu/irb-research-participants}
This research has been reviewed according to the University of Maryland, College Park IRB procedures for research involving human subjects.
\subsubsection{Statement of Consent}
Your signature indicates that you are at least 18 years of age; you have read this consent form or have had it read to you; your questions have been answered to your satisfaction and you voluntarily agree to participate in this research study. You will receive a copy of this signed consent form.
Please initial all that apply (you may choose any number of these statements):
\begin{itemize}
\item[\text{\fboxsep=-.15pt\fbox{\rule{0pt}{1.5ex}\rule{1.5ex}{0pt}}}] I agree to be audio recorded
\item[\text{\fboxsep=-.15pt\fbox{\rule{0pt}{1.5ex}\rule{1.5ex}{0pt}}}] I agree to allow researchers to use my audio recording in research publications and presentations.
\end{itemize}
\begin{itemize}
\item[\text{\fboxsep=-.15pt\fbox{\rule{0pt}{1.5ex}\rule{1.5ex}{0pt}}}] I do not agree to be audio recorded
\end{itemize}
If you agree to participate, please sign your name below.
\section*{Acknowledgments}
Dickerson, McElfresh, and Schumann were supported in part by NSF CAREER Award IIS-1846237, DARPA GARD Award \#HR112020007, DARPA SI3-CMD Award \#S4761, DoD WHS Award \#HQ003420F0035, NIH R01 Award NLM-013039-01, and a Google Faculty Research Award.
We gratefully acknowledge funding support from the NSF (Grants 1844462 and 1844518).
The opinions in this paper are those of the authors and do not necessarily reflect the opinions of any funding sponsor or the United States Government.
\section{Methods}
\subsection{Cognitive Interviews} \label{methods:design:cog}
We recruited $9$ participants from the DC Metropolitan area using Craigslist. We required participants to be over 18 years of age and fluent in English. Participants ranged between the ages of 20 and 66.
These interviews took place on the University of Maryland campus and lasted about $1$ hour. All participants signed a written consent form prior to the interview, and were paid \$30 for their time.
During these interviews, participants completed a preliminary version of the survey used in Study-1{}.
After each survey question, we asked the participants several interview questions related to their comprehension of and feelings toward the survey. We found that some participants tended to use their own personal notions of fairness when answering comprehension questions rather than using the definition we provided. We were concerned that this would limit our ability to effectively measure comprehension. To address this problem, we rewrote several parts of our survey and added two new questions (Q14 and Q15).
\subsection{Non-Expert Verification}
We designed this study to assess \emph{non-expert} understanding and opinions of ML fairness metrics. To this end, we asked respondents to self-rate their level of expertise in a variety of fields, including ML, at the end of the survey (see \Appref{app:demographics}). A number of participants did report having ``expert" level experience in ML ($n = 2$ out of 147 in Study-1{}, and $n = 15$ out of 349 in Study-2{}). We considered removing these participants from the analyses, but ultimately did not because there was no relationship between self-reported ML expertise and comprehension score (Spearman's rho, for both studies).
\section{Study-1{}: Detailed Results} \label{app:studyA_results}
\input{studyA_results}
\input{studyA_scenario_analysis}
\section{Study-2{}: Detailed Results}
\input{studyB_details}
\section{Surveys} \label{app:survey}
\subsection{Study-1{} Survey} \label{app:surveyA}
Each of the surveys are split into four main sections. The first section is the consent form which can be found in Appendix~\ref{app:consent}. The second section describes the scenario and asks questions about the given scenario (\S\ref{app:scenarios}). The third section describes the fairness metric, defined as the rule, used (in this case it is demographic parity) and asks specific questions about the metric (\S\ref{app:rules}). Finally the last section asks for demographic information (\S\ref{app:demographics}).
\subsubsection{Scenario descriptions and questions}\label{app:scenarios}
The following is shown to each participant:
It is very important that you read each question carefully and think about your answers. The success of our research relies on our respondents being thoughtful and taking this task seriously.
\begin{itemize}
\item[\text{\fboxsep=-.15pt\fbox{\rule{0pt}{1.5ex}\rule{1.5ex}{0pt}}}] I have read the above instructions carefully.
\end{itemize}
We then introduce one of three different decision making scenarios, described below, followed by two questions. Words that vary across scenario in the questions are shown as $<$art project, employee awards, hiring$>$.
\paragraph{Art project}
A fourth grade teacher is reviewing 20 student art projects. They will award lollipops to the top 4 students who put the most effort into their projects. The teacher knows that some of the students have artists as parents, who might have helped their children with their art project. The teacher’s goal is to give out lollipops only based on the amount of effort that the student \emph{themselves} put into their projects.
The teacher uses the following criteria to decide who should get a lollipop:
\begin{itemize}
\item Elaborateness of each project.
\item Creativity of each project.
\end{itemize}
About 50\% of the students have artists as parents, and 50\% do not.
In the past, students with artists as parents typically put more effort into their projects.
In this group of students there is a wide range of project quality (as measured by elaborateness and creativity). However, this range of quality is about the same between students with artists as parents and those without.
The teacher wants to make sure that they award lollipops in a fair way, no matter whether the students’ parents are artists or not.
\paragraph{Employee awards}
A manager at a sales company is deciding which of their 100 employees should receive each of 10 mid-year awards. The manager’s goal is to give awards to employees who \emph{will} have high net sales at the end of the year.
The manager uses the following criteria to decide who should get an award:
\begin{itemize}
\item Recent performance reviews
\item Mid-year net sales
\item Number of years on the job
\end{itemize}
About 50\% of the employees are men, and 50\% are women.
In the past, men have achieved higher end-of-year net sales than women.
In this group of employees, there is a wide range of qualifications (as measured by performance reviews, mid-year net sales, and number of years on the job). However, this range of qualifications is about the same between male and female employees.
The manager wants to make sure that this awards process is fair to the employees, no matter their gender.
\paragraph{Hiring}
A hiring manager at a new sales company is reviewing 100 new job applications. Each applicant has submitted a resume, and has had an interview. The manager will send job offers to 10 out of the 100 applicants. Their goal is to make offers to applicants who will have high net sales after a year on the job.
The manager will use the following to decide which applicants should receive job offers:
\begin{itemize}
\item Interview scores
\item Quality of recommendation letters
\item Number of years of prior experience in the field
\end{itemize}
About 50\% of the applicants are men, and 50\% are women.
In the past, men have achieved higher net sales than women, after one year on the job.
In this applicant pool there is a wide range of applicant quality (as measured by interview scores, recommendation letters, and years of prior experience in the field). However, the range of quality is about the same for both male and female applicants.
The hiring manager wants to make sure that this hiring process is fair to applicants, no matter their gender.
\paragraph{Questions}
\begin{enumerate}
\item To what extent do you agree with the following statement: a scenario similar to the one described above might occur in real life.
\begin{itemize}
\item Strongly agree
\item Agree
\item Neither agree nor disagree
\item Disagree
\item Strongly Disagree
\end{itemize}
\item How much effort should the $<$teacher, manager, hiring manager$>$ put in to make sure this decision is fair? [short answer - number of hours]
\end{enumerate}
\subsubsection{Rule descriptions and questions}\label{app:rules}
Unless otherwise noted the rule description is shown above each of the questions for reference. Correct answers are noted in \correct{red}.
\paragraph{Art project}
The teacher uses the following award rule to distribute lollipops: \emph{The fraction of students who receive lollipops that have artist parents should equal the fraction of students in the class that have artist parents. Similarly, the fraction of students who receive lollipops that do not have artist parents should equal the fraction of students in the class that do not have artist parents.}
Example 1: If 10 out of the 20 students in the class have artist parents, then 2 out of the 4 lollipops would be awarded to students with artist parents (and the remaining 2 would be awarded to students without artist parents).
Example 2: If 5 out of the 20 students in the class have artist parents, then 1 out of the 4 lollipops would be awarded to students with artist parents (and the remaining 3 would be awarded to students without artist parents).
In the next section, we will ask you some questions about the information you have just read. Please note that this is not a test of your abilities. We want to measure the quality of the description you read, not your ability to take tests or answer questions.
\textbf{Please note that we ask you to apply and use ONLY the above award rule when answering the following questions. You will have an opportunity to state your opinions and feelings on the rule later in the survey.}
\begin{enumerate}
\setcounter{enumi}{2}
\item Suppose a different teacher is considering awarding lollipops to the whole 4th grade. There are 100 students with artist parents, and 200 students without artist parents. The teacher decides to award 10 lollipops to students with artist parents. \textbf{Assuming the teacher is required to use the award rule above}, how many students without artist parents need to receive lollipops?
\begin{enumerate}
\item 10
\item \correct{20}
\item 40
\item 50
\end{enumerate}
\item \textbf{Assuming the teacher is required to use the award rule above}, in which of these cases can a teacher award more lollipops to students without artist parents than to students with artist parents?
\begin{enumerate}
\item When the students without artist parents have higher-quality projects (i.e., more elaborate and more creative) than those with artist parents.
\item \correct{When there are more students without artist parents than those with artist parents.}
\item When students without artist parents have more creative projects than those with artist parents.
\item This cannot happen under the award rule.
\end{enumerate}
\item \textbf{Assuming the teacher is required to use the award rule above}, is the following statement \correct{TRUE} OR FALSE: Even if a student with artist parents has a project that is of the same quality (i.e., equally elaborate and equally creative) as another project by a student without artist parents, they can be treated differently (ie., only one of the students might get a lollipop).
\item \textbf{Assuming the teacher is required to use the award rule above}, is the following statement TRUE OR \correct{FALSE}: If all students without artist parents have low-quality projects (i.e., low elaborateness and low creativity), but the teacher awards lollipops to some of them, then any lollipops awarded to students with artist parents must be awarded to those who have low-quality projects.
\item \textbf{Assuming the teacher is required to use the award rule above}, is the following statement \correct{TRUE} OR FALSE: Suppose the teacher is distributing 10 lollipops amongst a pool of students that includes students with and without artist parents. Even if all students with artist parents have low-quality (i.e., low elaborateness and low creativity) projects, some of them must still receive lollipops.
\item \textbf{Assuming the teacher is required to use the award rule above}, is the following statement TRUE OR \correct{FALSE}: This award rule always allows the teacher to award lollipops exclusively to the students who have the highest quality (i.e., most elaborate and most creative) projects.
\end{enumerate}
In the two examples above there are 20 students. Consider a different scenario, with \textbf{6 students -- 4 with artist parents and 2 without, as illustrated below}. The next three questions each give a potential outcome for all six students (i.e., which of the 6 students receive awards). Please indicate which of the outcomes follow \textbf{the award rule above}.
\vspace{10pt}
\includegraphics[height=1in]{illustrations/total_students.png}
\begin{enumerate}
\setcounter{enumi}{8}
\item Alternative scenario 1:
\vspace{10pt}
\includegraphics[height=1in]{illustrations/case1_accept_reject_students.png}
Does this distribution of awards obey the \textbf{award rule}? \correct{Yes}
\item Alternative scenario 2:
\vspace{10pt}
\includegraphics[height=1in]{illustrations/case2_accept_case3_reject_students.png}
Does this distribution of awards obey the \textbf{award rule}? \correct{No}
\item Alternative scenario 3:
\vspace{10pt}
\includegraphics[height=1in]{illustrations/case2_reject_case3_accept_students.png}
Does this distribution of awards obey the \textbf{award rule}? \correct{No}
\end{enumerate}
\begin{enumerate}
\setcounter{enumi}{11}
\item In your own words, explain the \textbf{award rule}. [short answer] (The rule is not shown above this question)
\item To what extent do you agree with the following statement: I am confident I know how to \textbf{apply the award rule described above}?
\begin{itemize}
\item Strongly agree
\item Agree
\item Neither agree nor disagree
\item Disagree
\item Strongly Disagree
\end{itemize}
\item Please select the choice that best describes your experience: When I answered the previous questions...
\begin{enumerate}
\item I applied the provided award rule only.
\item I used my own ideas of what the correct award decision should be rather than the provided award rule.
\item I used a combination of the provided award rule and my own ideas of what the correct award decision should be.
\end{enumerate}
\item What is your opinion on the award rule? Please explain why. [short answer]
\item Suppose that you are the teacher whose job it is to distribute lollipops to students based on the criteria listed above (i.e., elaborateness of each project, creativity of each project). How would you ensure that this process is fair? [short answer]
\item Was there anything about this survey that was hard to understand or answer? [short answer]
\end{enumerate}
\paragraph{Employee awards}
The manager uses the following award rule to distribute awards: \emph{The fraction of employees who receive awards that are female should equal the fraction of employees that are female. Similarly, fraction of employees who receive awards that are male should equal the fraction of employees that are male.}
Example 1: If there are 50 female employees out of 100, then 5 out of the 10 awards should be awarded to female employees (and the remaining 5 would be made to male employees).
Example 2: If there are 30 female employees out of 100, then 3 out of the 10 awards should be awarded to female employees (and the remaining 7 would be made to male employees).
In the next section, we will ask you some questions about the information you have just read. Please note that this is not a test of your abilities. We want to measure the quality of the description you read, not your ability to take tests or answer questions.
\textbf{Please note that we ask you to apply and use ONLY the above award rule when answering the following questions. You will have an opportunity to state your opinions and feelings on the rule later in the survey.}
\begin{enumerate}
\setcounter{enumi}{2}
\item Suppose a different manager is considering employees for a different award. There are 100 male employees and 200 female employees, and they decide to give awards to 10 male employees. \textbf{Assuming the manager is required to use the award rule above}, how many female employees do they need to give awards to?
\begin{enumerate}
\item 10
\item \correct{20}
\item 40
\item 50
\end{enumerate}
\item \textbf{Assuming the manager is required to use the award rule above}, in which of these cases can a manager give more awards to female employees than to male employees?
\begin{enumerate}
\item When there are more well-qualified female employees than well-qualified male employees (i.e., more women have better performance reviews, higher mid-year net sales, and more years on the job).
\item \correct{When there are more female employees than male employees.}
\item When female employees receive higher performance reviews than male employees.
\item This cannot happen under the award rule.
\end{enumerate}
\item \textbf{Assuming the manager is required to use the award rule above}, is the following statement \correct{TRUE} OR FALSE: Even if a male employee’s qualifications look similar to a female employee’s (in terms of performance reviews, mid-year net sales, and years on the job), he can be treated differently (i.e., only one of the employees gets an award).
\item \textbf{Assuming the manager is required to use the award rule above}, is the following statement TRUE OR \correct{FALSE}: If all female employees are unqualified (i.e., have low performance reviews, low mid-year net sales, and few years on the job), but you give awards to some of them, then awards given to male employees must be made to unqualified male employees.
\item \textbf{Assuming the manager is required to use the award rule above}, is the following statement \correct{TRUE} OR FALSE: Suppose the manager is distributing 10 awards amongst a pool that includes both male and female employees. Even if all male employees are unqualified for an award (i.e., have low performance reviews, low mid-year net sales, and few years on the job), some of them must still receive awards.
\item \textbf{Assuming the manager is required to use the award rule above}, is the following statement TRUE OR \correct{FALSE}: This award rule always allows the manager to distribute awards exclusively to the most qualified employees (i.e., employees with better performance reviews, high mid-year net sales, and high number of years on the job).
\end{enumerate}
In the two examples above there are 100 employees. Consider a different scenario, with \textbf{6 employees-- 4 female and 2 male, as illustrated below}. The next three questions each give a potential outcome for all six employees (i.e., which of the 6 employees receive awards). Please indicate which of the outcomes follow \textbf{the award rule above}.
\vspace{10pt}
\includegraphics[height=1in]{illustrations/total_employees_applicants.png}
\begin{enumerate}
\setcounter{enumi}{8}
\item Alternative scenario 1:
\vspace{10pt}
\includegraphics[height=1in]{illustrations/case1_accept_reject_employees_applicants.png}
Does this distribution of awards obey the \textbf{award rule}? \correct{Yes}
\item Alternative scenario 2:
\vspace{10pt}
\includegraphics[height=1in]{illustrations/case2_accept_employees_applicants.png}
Does this distribution of awards obey the \textbf{award rule}? \correct{No}
\item Alternative scenario 3:
\vspace{10pt}
\includegraphics[height=1in]{illustrations/case2_reject_case3_accept_employees_applicants.png}
Does this distribution of awards obey the \textbf{award rule}? \correct{No}
\end{enumerate}
\begin{enumerate}
\setcounter{enumi}{11}
\item In your own words, explain the \textbf{award rule}. [short answer] (The rule is not shown above this question)
\item To what extent do you agree with the following statement: I am confident I know how to \textbf{apply the award rule described above}?
\begin{itemize}
\item Strongly agree
\item Agree
\item Neither agree nor disagree
\item Disagree
\item Strongly Disagree
\end{itemize}
\item Please select the choice that best describes your experience: When I answered the previous questions...
\begin{enumerate}
\item I applied the provided award rule only.
\item I used my own ideas of what the correct award decision should be rather than the provided award rule.
\item I used a combination of the provided award rule and my own ideas of what the correct award decision should be.
\end{enumerate}
\item What is your opinion on the award rule? Please explain why. [short answer]
\item Suppose that you are the manager whose job it is to distribute mid-year awards to employees based on the criteria listed above (i.e., recent performance reviews, mid-year net sales, number of years on the job). How would you ensure that this process is fair? [short answer]
\item Was there anything about this survey that was hard to understand or answer? [short answer]
\end{enumerate}
\paragraph{Hiring}
The hiring manager uses the following hiring rule to send out offers: \emph{The fraction of applicants who receive job offers that are female should equal the fraction of applicants that are female. Similarly, the fraction of applicants who receive job offers that are male should equal the fraction of applicants that are male.}
Example 1: If there are 50 female applicants out of the 100 applicants, then 5 out of the 10 offers would be made to female applicants (and the remaining 5 would be made to male applicants).
Example 2: If there are 30 female applicants out of the 100 applicants, then 3 out of the 10 offers would be made to female applicants (and the remaining 7 would be made to male applicants).
In the next section, we will ask you some questions about the information you have just read. Please note that this is not a test of your abilities. We want to measure the quality of the description you read, not your ability to take tests or answer questions.
\textbf{Please note that we ask you to apply and use ONLY the above hiring rule when answering the following questions. You will have an opportunity to state your opinions and feelings on the rule later in the survey.}
\begin{enumerate}
\setcounter{enumi}{2}
\item Suppose a different hiring manager is considering applicants for a different job. There are 100 male applicants and 200 female applicants, and they decide to send offers to 10 male applicants. \textbf{Assuming the hiring manager is required to use the hiring rule above}, how many female applicants do they need to send offers to?
\begin{enumerate}
\item 10
\item \correct{20}
\item 40
\item 50
\end{enumerate}
\item \textbf{Assuming the hiring manager is required to use the hiring rule above}, in which of these cases can a hiring manager make more job offers to female applicants than to male applicants?
\begin{enumerate}
\item When there are more well-qualified female applicants than well-qualified male applicants (i.e., more women have higher interview scores, higher quality recommendation letters, and more years of prior experience in the field).
\item \correct{When there are more female applicants than male applicants.}
\item When female applicants receive better interview scores than male applicants.
\item This cannot happen under the hiring rule.
\end{enumerate}
\item \textbf{Assuming the hiring manager is required to use the hiring rule above}, is the following statement \correct{TRUE} OR FALSE: Even if a male applicant’s qualifications look similar to a female applicant’s (in terms of interview scores, recommendation letters, and years of prior experience in the field), he can be treated differently (i.e., only one of the applicants will receive a job offer).
\item \textbf{Assuming the hiring manager is required to use the hiring rule above}, is the following statement TRUE OR \correct{FALSE}: If all female applicants are unqualified (i.e., have low interview scores, low-quality recommendation letters, and few years of prior experience in the field), but you send job offers to some of them, then any job offers made to male applicants must be made to unqualified male applicants.
\item \textbf{Assuming the hiring manager is required to use the hiring rule above}, is the following statement \correct{TRUE} OR FALSE: Suppose the hiring manager is sending out 10 job offers to a pool that includes male and female applicants. Even if all male applicants are unqualified (i.e., have low interview scores, low-quality recommendation letters, and few years of prior experience in the field), some of them must still receive job offers.
\item \textbf{Assuming the hiring manager is required to use the hiring rule above}, is the following statement TRUE OR \correct{FALSE}: This hiring rule always allows the hiring manager to send offers exclusively to the most qualified applicants (i.e., applicants with high interview scores, high-quality recommendation letters, and a high number of years of prior experience in the field).
\end{enumerate}
In the two examples above there are 100 applicants. Consider a different scenario, with \textbf{6 applicants -- 4 female and 2 male, as illustrated below}. The next three questions each give a potential outcome for all 6 applicants (i.e., which of the 6 applicants receive job offers). Please indicate which of the outcomes follow \textbf{the hiring rule above}.
\vspace{10pt}
\includegraphics[height=1in]{illustrations/total_employees_applicants.png}
\begin{enumerate}
\setcounter{enumi}{8}
\item Alternative scenario 1:
\vspace{10pt}
\includegraphics[height=1in]{illustrations/case1_accept_reject_employees_applicants.png}
Does this distribution of job offers obey the \textbf{hiring rule}? \correct{Yes}
\item Alternative scenario 2:
\vspace{10pt}
\includegraphics[height=1in]{illustrations/case2_accept_employees_applicants.png}
Does this distribution of job offers obey the \textbf{hiring rule}? \correct{No}
\item Alternative scenario 3:
\vspace{10pt}
\includegraphics[height=1in]{illustrations/case2_reject_case3_accept_employees_applicants.png}
Does this distribution of job offers obey the \textbf{hiring rule}? \correct{No}
\end{enumerate}
\begin{enumerate}
\setcounter{enumi}{11}
\item In your own words, explain the \textbf{hiring rule}. [short answer] (The rule is not shown above this question)
\item To what extent do you agree with the following statement: I am confident I know how to \textbf{apply the hiring rule described above}?
\begin{itemize}
\item Strongly agree
\item Agree
\item Neither agree nor disagree
\item Disagree
\item Strongly Disagree
\end{itemize}
\item Please select the choice that best describes your experience: When I answered the previous questions...
\begin{enumerate}
\item I applied the provided hiring rule only.
\item I used my own ideas of what the correct hiring decision should be rather than the provided hiring rule.
\item I used a combination of the provided hiring rule and my own ideas of what the correct hiring decision should be.
\end{enumerate}
\item What is your opinion on the hiring rule? Please explain why. [short answer]
\item Suppose that you are the hiring manager whose job it is to send job offers to applicants based on the criteria listed above (i.e., interview scores, quality of recommendation letters, number of years of prior experience in the field). How would you ensure that this process is fair? [short answer]
\vspace{-5pt}
\item Was there anything about this survey that was hard to understand or answer? [short answer]
\vspace{-5pt}
\end{enumerate}
\subsection{Study-2{}: Survey} \label{app:surveyB}
\input{studyB_survey}
\subsection{Demographic Information}\label{app:demographics}
\begin{enumerate}
\item Please specify the gender with which you most closely identify:
\begin{itemize}
\item Male
\item Female
\item Other
\item Prefer not to answer
\end{itemize}
\item Please specify your year of birth
\item Please specify your ethnicity (you may select more than one):
\begin{itemize}
\item White
\item Hispanic or Latinx
\item Black or African American
\item American Indian or Alaska Native
\item Asian, Native Hawaiian, or Pacific Islander
\item Other
\end{itemize}
\item Please specify the highest degree or level of school you have completed:
\begin{itemize}
\item Some high school credit, no diploma or equivalent
\item High school graduate, diploma or the equivalent (for example: GED)
\item Some college credit, no degree
\item Trade/technical/vocational training
\item Associate’s degree
\item Bachelor’s degree
\item Master’s degree
\item Professional or doctoral degree (JD, MD, PhD)
\end{itemize}
\item How much experience do you have in each of the following areas? (1 - no experience, 2 - limited experience, 3 - significant experience, 4 - expert)
\begin{enumerate}
\item Human resources (making hiring decisions)
\item Management (of employees)
\item Education (teaching)
\item IT infrastructure/systems administration
\item Computer science/programming
\item Machine learning/data science
\end{enumerate}
\end{enumerate}
\textbf{We will maintain privacy of the information you have provided here. Your information will only be used for data analysis purposes.}
\section{Consent} \label{app:consent}
\subsection{Online Survey Consent Form} \label{app:survey_consent}
\subsubsection{Project Title}
Fairness Evaluation and Comprehension
\subsubsection{Purpose of the Study}
This research is being conducted by Michelle Mazurek at the University of Maryland, College Park. We are inviting you to participate in this research project because you are above 18. The purpose of this research project is to understand lay comprehension of different fairness metrics.
\subsubsection{Procedures}
The procedures will start with reading a brief description of a decision-making scenario. You will then be asked to answer some comprehension questions about the scenario. The questions will look like the following: What are the pros and cons of the notion of fairness described above?
Finally, you will be asked some demographics questions. The entire survey will take approximately 20 minutes or less.
\subsubsection{Potential Risks and Discomforts}
There are several questions to answer over the course of this study, so you may find yourself growing tired towards the end. Outside of this, there are minimal risks to participating in this research study. All data collected in this study will be maintained securely (see Confidentiality section) and will be deleted at the conclusion of the study.
However, if at any time you feel that you wish to terminate your participation for any reason, you are permitted to do so.
\subsubsection{Potential Benefits}
There are no direct benefits from participating in this research. We hope that, in the future, other people might benefit from this study through improved understanding of fairness metrics and their applications.
\subsubsection{Confidentiality}
Any potential loss of confidentiality will be minimized by storing all data (including information such as MTurk IDs and demographics) securely, either (a) on a password-protected computer located at the University of Maryland, College Park or (b) using a trusted third party (Qualtrics). Personally identifiable information that is collected (MTurk IDs, IP addresses, cookies) will be deleted upon study completion. All other data gathered will be stored for three years post study completion, after which it will be erased.
The only persons that will have access to the data are the Principal Investigator and the Co-Investigators.
If we write a report or article about this research project, your identity will be protected to the maximum extent possible. Your information may be shared with representatives of the University of Maryland, College Park or governmental authorities if you or someone else is in danger or if we are required to do so by law.
\subsubsection{Compensation}
You will receive \$3. You will be responsible for any taxes assessed on the compensation.
If you will earn \$100 or more as a research participant in this study, you must provide your name, address, and SSN to receive compensation.
If you do not earn over \$100, only your name and address will be collected to receive compensation.
\subsubsection{Right to Withdraw and Questions}
Your participation in this research is completely voluntary. You may choose not to take part at all. If you decide to participate in this research, you may stop participating at any time. If you decide not to participate in this study or if you stop participating at any time, you will not be penalized or lose any benefits for which you otherwise qualify.
If you decide to stop taking part in the study, if you have questions, concerns, or complaints, or if you need to report an injury related to the research, please contact the investigator:
{\centering
Michelle Mazurek \\
5236 Iribe Center, \\University of Maryland, College Park 20742\\
mmazurek@cs.umd.edu\\
(301) 405-6463\\}
\subsubsection{Participant Rights}
If you have questions about your rights as a research participant or wish to report a research-related injury, please contact:
{\centering
University of Maryland College Park \\
Institutional Review Board Office\\
1204 Marie Mount Hall \\
College Park, Maryland, 20742 \\
E-mail: irb@umd.edu \\
Telephone: 301-405-0678 \\}
\vspace{5pt}
For more information regarding participant rights, please visit:
\url{https://research.umd.edu/irb-research-participants}
This research has been reviewed according to the University of Maryland, College Park IRB procedures for research involving human subjects.
\subsubsection{Statement of Consent}
By agreeing below you indicate that you are at least 18 years of age; you have read this consent form or have had it read to you; your questions have been answered to your satisfaction and you voluntarily agree to participate in this research study.
Please ensure you have made a copy of the above consent form for your records. A copy of this consent form can be found here [link to digital copy].
\begin{itemize}
\item[\text{\fboxsep=-.15pt\fbox{\rule{0pt}{1.5ex}\rule{1.5ex}{0pt}}}] I am age 18 or older
\item[\text{\fboxsep=-.15pt\fbox{\rule{0pt}{1.5ex}\rule{1.5ex}{0pt}}}] I have read this consent form
\item[\text{\fboxsep=-.15pt\fbox{\rule{0pt}{1.5ex}\rule{1.5ex}{0pt}}}] I voluntarily agree to participate in this research study
\end{itemize}
\subsection{Cognitive Interview Consent Form} \label{app:cognitive_consent}
\subsubsection{Project Title}
Fairness Cognitive Interview
\subsubsection{Purpose of the Study}
This research is being conducted by Michelle Mazurek at the University of Maryland, College Park. We are inviting you to participate in this research project because you are above the age of 18, and fluent in English. The purpose of this research project is to understand lay comprehension of different fairness metrics.
\subsubsection{Procedures}
The procedure involves completing an interview. The full procedure will be approximately 1 hour in duration.
During the interview you will be audio recorded, if you agree to be recorded. You will be asked to first read a brief description of a decision-making scenario. You will then be asked to fill out a survey about the scenario. While answering the survey, you will be asked verbal questions about how you reached your answers.
Sample survey question:
Is the following statement true or false? This hiring rule allows the hiring manager to send offers exclusively to the most qualified applicants.
Sample interview question:
How did you reach your answer to that survey question?
\subsubsection{Potential Risks and Discomforts}
There are several questions to answer over the course of this study, so you may find yourself growing tired towards the end. Outside of this, there are minimal risks to participating in this research study. All data collected in this study will be maintained securely (see Confidentiality section) and will be deleted at the conclusion of the study.
However, if at any time you feel that you wish to terminate your participation for any reason, you are permitted to do so.
\subsubsection{Potential Benefits}
There are no direct benefits from participating in this research. We hope that, in the future, other people might benefit from this study through improved understanding of fairness metrics and their applications.
\subsubsection{Confidentiality}
Any potential loss of confidentiality will be minimized by storing all data (including information such as demographics) securely (a) on a password-protected computer located at the University of Maryland, College Park or (b) using a trusted third party (Qualtrics). Personally identifiable information that is collected will be deleted upon study completion. All other data gathered will be stored for three years post study completion, after which it will be erased. The only persons that will have access to the data are the Principal Investigator and the Co-Investigators.
If we write a report or article about this research project, your identity will be protected to the maximum extent possible. Your information may be shared with representatives of the University of Maryland, College Park or governmental authorities if you or someone else is in danger or if we are required to do so by law.
\subsubsection{Compensation}
You will receive \$30. You will be responsible for any taxes assessed on the compensation.
If you will earn \$100 or more as a research participant in this study, you must provide your name, address, and SSN to receive compensation.
If you do not earn over \$100, only your name and address will be collected to receive compensation.
\subsubsection{Right to Withdraw and Questions}
Your participation in this research is completely voluntary. You may choose not to take part at all. If you decide to participate in this research, you may stop participating at any time. If you decide not to participate in this study or if you stop participating at any time, you will not be penalized or lose any benefits for which you otherwise qualify.
If you decide to stop taking part in the study, if you have questions, concerns, or complaints, or if you need to report an injury related to the research, please contact the investigator:
{\centering
Michelle Mazurek \\
5236 Iribe Center, \\University of Maryland, College Park 20742\\
mmazurek@cs.umd.edu\\
(301) 405-6463\\}
\subsubsection{Participant Rights}
If you have questions about your rights as a research participant or wish to report a research-related injury, please contact:
{\centering
University of Maryland College Park \\
Institutional Review Board Office\\
1204 Marie Mount Hall \\
College Park, Maryland, 20742 \\
E-mail: irb@umd.edu \\
Telephone: 301-405-0678 \\}
\vspace{5pt}
For more information regarding participant rights, please visit:
\url{https://research.umd.edu/irb-research-participants}
This research has been reviewed according to the University of Maryland, College Park IRB procedures for research involving human subjects.
\subsubsection{Statement of Consent}
Your signature indicates that you are at least 18 years of age; you have read this consent form or have had it read to you; your questions have been answered to your satisfaction and you voluntarily agree to participate in this research study. You will receive a copy of this signed consent form.
Please initial all that apply (you may choose any number of these statements):
\begin{itemize}
\item[\text{\fboxsep=-.15pt\fbox{\rule{0pt}{1.5ex}\rule{1.5ex}{0pt}}}] I agree to be audio recorded
\item[\text{\fboxsep=-.15pt\fbox{\rule{0pt}{1.5ex}\rule{1.5ex}{0pt}}}] I agree to allow researchers to use my audio recording in research publications and presentations.
\end{itemize}
\begin{itemize}
\item[\text{\fboxsep=-.15pt\fbox{\rule{0pt}{1.5ex}\rule{1.5ex}{0pt}}}] I do not agree to be audio recorded
\end{itemize}
If you agree to participate, please sign your name below.
\section{Introduction}
Research into algorithmic fairness has grown in both importance and volume over the past few years, driven in part by the emergence of a grassroots Fairness, Accountability, Transparency, and Ethics (FATE) in Machine Learning (ML) community. Different metrics and approaches to algorithmic fairness have been proposed, many of which are based on prior legal and philosophical concepts, such as disparate impact and disparate treatment~\cite{feldman2015certifying,chouldechova2017fair,binns2017fairness}. However, definitions of ML fairness do not always fit well within pre-existing legal and moral frameworks. The rapid expansion of this field makes it difficult for professionals to keep up, let alone the general public.
Furthermore, misinformation about notions of fairness can have significant legal implications.\footnote{\url{https://www.cato.org/blog/misleading-veritas-accusation-google-bias-could-result-bad-law}}
Computer scientists have largely focused on developing mathematical notions of fairness and incorporating them into ML systems. A much smaller collection of studies has measured public perception of bias and (un)fairness in algorithmic decision-making.
\newdcm{
However, as both the academic community and society in general continue to discuss issues of ML fairness, it remains unclear whether non-experts--who will be \emph{impacted} by ML-guided decisions--understand various mathematical definitions of fairness sufficiently to provide opinions and critiques.
We emphasize that these technologies are likely to have greater impact on marginalized populations, and those with lower levels of education, as in the case of hiring and criminal justice~\cite{barocas2016big,frey2017future}.
For this reason, we focus on a non-expert audience and a context (hiring) that most people would find relatively familiar.
}
\noindent\textbf{Our Contributions.}
We take a step toward addressing this issue by studying people's comprehension and perceptions of three definitions of ML fairness: \emph{demographic parity}, \emph{equal opportunity,} and \emph{equalized odds} \cite{Hardt16:Equality}.
Specifically, we address the following research questions:
\vspace{-9pt}
\begin{itemize}[itemsep=0cm,leftmargin=1cm]
\item[\textbf{RQ1}] When provided with an explanation intended for a non-technical audience, do non-experts comprehend each definition and its implications?
\item[\textbf{RQ2}] What factors play a role in comprehension?
\item[\textbf{RQ3}] How are comprehension and sentiment related?
\item[\textbf{RQ4}] How do the different definitions compare in terms of comprehension?
\end{itemize}
\vspace{-9pt}
We developed two online surveys to address these research questions. We presented participants with a simplified decision-making scenario and an accompanying \emph{fairness rule} expressed in the scenario's context. We asked questions related to the participants' comprehension of and sentiment toward this rule. Tallying the number of correct responses to the comprehension questions gives us a \emph{comprehension score} for each participant.
In Study-1{}, we found that this comprehension score is a consistent and reliable indicator of understanding demographic parity. %
Then, in Study-2{}, we used a similar approach to compare comprehension among all three definitions of interest. We find that (1) education is a significant predictor of rule understanding, (2) the counterintuitive definition of Equal Opportunity with False Negative Rate was significantly harder to understand than other definitions, and (3) participants with low comprehension scores tended to express less negative sentiment toward the fairness rule.
\newdcm{%
This underlines the importance of considering stakeholders before deploying a ``fair'' ML system, because some stakeholders may not understand or agree with an ML-specific notion of fairness.
Our goal is to help designers and adopters of fairness approaches understand whether they are communicating with stakeholders effectively.
}
\section{Related Work}\label{sec:related}
In response to many instances of bias in fielded artificial intelligence (AI) and machine learning (ML) systems, ML fairness has received significant attention from the computer-science community.
Notable examples include gender bias in job-related ads~\cite{datta2015automated}, racial bias in evaluating names on resumes~\cite{caliskan2017semantics}, and racial bias in predicting criminal recidivism~\cite{angwin2016machine}.
To correct biased behavior, researchers have proposed several mathematical and algorithmic notions of fairness.
Most algorithmic fairness definitions found in the literature are motivated by the philosophical notion of individual fairness (e.g., see~\cite{Rawls71a}) and legal definitions of disparate impact/treatment (e.g., see~\cite{barocas2016big}).
Several ML-specific definitions of fairness have been proposed which claim to uphold these philosophical and legal concepts.
These definitions of ``ML fairness'' fall loosely into two categories (for a review, see~\cite{chouldechova2018frontiers}). \emph{Statistical Parity} posits that in a \emph{fair} outcome, individuals from different protected groups have the same chance of receiving a positive (or negative) outcome.
Similarly, \emph{Predictive Parity}~\cite{Hardt16:Equality} asserts that the predictive accuracy should be similar across different protected groups--often measured by the false positive rate (FPR) or false negative rate (FNR) in binary classification settings.
Myriad other definitions have been proposed, based on concepts such as calibration~\cite{pleiss2017fairness} and causality~\cite{kusner2017counterfactual}.
Of course, all of these definitions make limiting assumptions; no concept of fairness is perfect~\cite{Hardt16:Equality}. The question remains, \emph{which} of these fairness definitions are appropriate, and in \emph{what context?}
There are two important components to answering this question: \emph{communicating} these fairness definitions to a general audience, and \emph{measuring their perception} of these definitions in context.
Communicating ML-related concepts is an active and growing research area.
In particular, \emph{interpretable ML} focuses on communicating the decision-making process and results of ML-based decisions to a general audience~\cite{lipton2018mythos}.
Many tools have been developed to make ML models more interpretable, and many demonstrably improve understanding of ML-based decisions~\cite{ribeiro2016should,Huysmans2011}.
These models often rely on concepts from probability and statistics---teaching these concepts has long been an active area of research.
\citet{batanero2016research} provide an overview of teaching probability and how students learn probability; our surveys use their method of communicating probability, which relies on proportions.
We draw on several other concepts from this literature for our study design; for example, avoiding numerical and statistical representations~\cite{gigerenzer2003simple,gigerenzer2007helping}, which can be confusing to a general audience.
Instead, we provide relatable scenarios, accompanied by concrete examples and graphics~\cite{hogarth2015providing}.
Effectively communicating ML concepts is necessary to achieve our second goal of understanding people's perceptions of these concepts.
One particularly active research area focuses on how people perceive bias in algorithmic systems.
For example, \citet{woodruff2018qualitative} investigated perceptions of algorithmic bias among marginalized populations, using a focus group-style workshop;~\citet{grgic2018human} study the underlying factors causing perceptions of bias, highlighting the importance of selecting appropriate features in algorithmic decision-making; \citet{plane2017exploring} look at perceptions of discrimination in online advertising;~\newdcm{\citet{harrison2020empirical} study perceptions of fairness in stylized machine learning models;} \dsnew{\citet{srivastava2019mathematical} note that perceived appropriateness of an ML notion of fairness may depend on the domain in which the decision-making system is deployed, but suggest that simpler notions may best capture lay perceptions of fairness.}
A related body of work studied how people perceive algorithmic decision-makers.
\citet{lee2018understanding} studies perceptions of fairness, trust, and emotional response toward algorithmic decision-makers --- as compared to human decision-makers.
Similar work studies perception of fairness in the context of splitting goods or tasks, and in loan decisions~\cite{Lee2017,Lee2019,saxena2020fairness}.
\citet{binns2018s} studies how different explanation styles impact perceptions of algorithmic decision-makers.
This substantial body of prior research provided inspiration and guidance for our work.
Prior work has studied both the effective communication of, and perceptions of, ML-related concepts.
We hypothesize that these concepts are in fact related; to that end, we design experiments to simultaneously study people's \emph{comprehension} and \emph{perceptions} of common ML fairness definitions.
\section{Methods}\label{sec:methods}
To study perceptions of ML fairness, we conducted two online surveys where participants were presented with a hypothetical decision-making scenario. Participants were then presented with a ``rule'' for enforcing fairness. We then asked each participant several questions on their comprehension and perceptions of this fairness rule.
We first conducted Study-1{} to validate our methodology; we then conducted the larger and broader Study-2{} to address our main research questions.
Both studies were approved by the University of Maryland Institutional Review Board (IRB).
\subsection{Study-1{}}\label{sec:studyA}
In Study-1{} we tested three different decision-making scenarios based on real-world decision problems: hiring, giving employee awards, and judging a student art project.
However, we observed no difference in participant responses between these scenarios; for this reason,
\dsnew{we focus exclusively on hiring in
Study-2{} (see \S\ref{sec:studyB}).}
Please see Appendix~\ref{app:survey} for a description of the Study-1{} scenarios, and \Appref{app:scenario_analysis} for relevant survey results.
In Study-1{}, we chose (what we believe is) the simplest definition of ML fairness, namely, demographic parity. In short, this rule requires that the fraction of each group that receives a \emph{positive} outcome (e.g., an award or job offer) be the same for both groups.
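For reference, this rule can be stated compactly in the usual notation (our formalization; $\hat{Y}=1$ denotes a positive decision and $A$ the group attribute):
\[
\Pr[\hat{Y}=1 \mid A=\text{female}] \;=\; \Pr[\hat{Y}=1 \mid A=\text{male}].
\]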
\subsubsection{Survey Design}\label{methods:design}
Here we provide a high-level discussion of the survey design; the full text of each survey can be found in Appendix~\ref{app:survey}.
The participant first receives a consent form (see Appendix~\ref{app:consent}). If consent is obtained, the participant sees a short paragraph explaining the decision-making scenario. To make demographic parity accessible to a non-technical audience, and to avoid bias related to algorithmic decision-making, we frame this notion of fairness as a \emph{rule} that the decision-maker must follow to be fair.
In the hiring scenario, we framed this decision rule as follows:
\emph{The fraction of applicants who receive job offers that are female should equal the fraction of applicants that are female. Similarly, the fraction of applicants who receive job offers that are male should equal the fraction of applicants that are male.}
We then ask two questions concerning participant evaluation of the scenario, nine comprehension questions about the fairness rule, two self-report questions on participant understanding and use of the rule, and four free response questions on comprehension and sentiment.
For example, one comprehension question is:
\emph{Is the following statement TRUE OR FALSE: This hiring rule always allows the hiring manager to send offers exclusively to the most qualified applicants}.
Finally, we collect demographic information (age, gender, race/ethnicity, education level, and expertise in a number of relevant fields).
We conducted in-person cognitive interviews~\cite{harrell2009data} to pilot our survey, leading to several improvements in the question design. Most notably, because some cognitive interview participants appeared to use their own personal notions of fairness rather than our provided rule, we added questions to assess this compliance issue.
\subsubsection{Recruitment and Participants} \label{subsubsec:methods:study1:recruitment}
We recruited participants using the online service Cint \cite{cint}, which allowed us to loosely approximate the 2017 U.S. Census distributions \cite{census07} for ethnicity and education level, providing broad representation. %
We required that participants be 18 years of age or older, and fluent in English. Participants were compensated using Cint's rewards system; according to a Cint representative: ``[Participants] can choose to receive their rewards in cash sent to their bank accounts (e.g., via PayPal), online shopping opportunities with one of multiple online merchants, or donations to a charity.''
\dsnew{Data was collected during August 2019.} In total 147 participants were included in the Study-1{} analysis, including 75 men (51.0\%), 71 women (48.3\%), and 1 (0.7\%) preferring not to answer. The average age was 46 years (SD = 16). Ethnicity and educational attainment are summarized in Table~\ref{tab:demo}. %
On average, participants completed the survey in 14 minutes.
\input{demo.tex}
\subsection{Study-2}\label{sec:studyB}
Study-2{} follows a very similar structure to Study-1{} with a few changes. First, we decided to use only the hiring (HR) decision scenario (see \Appref{app:scenario_analysis} for more in-depth discussion). %
Second, we expanded to three definitions of fairness: \emph{demographic parity} (DP), \emph{equal opportunity} (EP), and \emph{equalized odds} (EO)~\cite{Hardt16:Equality}. Within EP, we tested both False Negative Rate (FNR) and False Positive Rate (FPR), resulting in a total of four conditions.
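For reference, writing $Y=1$ for a truly qualified applicant, $\hat{Y}=1$ for a job offer, and $A$ for the group attribute, the tested conditions can be stated in our notation (following~\cite{Hardt16:Equality}) as
\[
\begin{aligned}
\text{FNR:}\quad & \Pr[\hat{Y}=0 \mid Y=1, A=\text{female}] = \Pr[\hat{Y}=0 \mid Y=1, A=\text{male}],\\
\text{FPR:}\quad & \Pr[\hat{Y}=1 \mid Y=0, A=\text{female}] = \Pr[\hat{Y}=1 \mid Y=0, A=\text{male}],
\end{aligned}
\]
with EO requiring both conditions to hold simultaneously, and DP dropping the conditioning on $Y$ altogether.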
\subsubsection{Survey Design}
Here we provide a high-level discussion of the differences between Study-2{} and Study-1{}; the full text of each survey can be found in Appendix~\ref{app:survey}. We used a between-subjects design with random assignment among the four conditions (DP, FNR, FPR, EO). Again, we frame each notion of fairness as a \emph{hiring rule} that the decision-maker must follow to be fair. For example, in FPR we define the hiring rule as follows:
\emph{The fraction of unqualified male candidates who receive job offers should equal the fraction of unqualified female candidates who receive job offers.}
For this version, we added graphical examples to further clarify our explanations (see Fig.~\ref{fig:example_people} for an example).
We used all the same %
questions as in Study-1{} but added two additional Likert-scale questions %
assessing participant sentiment: one asked whether they liked the rule, and the other asked whether they agreed with the rule. One free response question (asking how participants personally would go about the hiring process to ensure it was fair), which did not consistently provide useful responses in Study-1{}, was removed from the Study-2{} survey in an effort to keep the expected completion time similar. %
\begin{figure}[h]
\centering
\includegraphics[height=1in]{illustrations/eo/example_1_POOL.png}
\vspace{10pt}
\includegraphics[height=1in]{illustrations/eo/example_1b_offer.png}
\space\space
\includegraphics[height=1in]{illustrations/eo/example_1b_no_offer.png}
\caption{A graphical example to describe a fair hiring outcome for EO. Yellow people represent females, while green people represent males. The darker colors represent qualified individuals, while the lighter colors represent unqualified individuals. The gray box represents the original pool of applicants. The green box represents individuals that received job offers, while the red box with a dashed border represents individuals that did \emph{not} receive job offers.}
\label{fig:example_people}
\end{figure}
\subsubsection{Recruitment and Participants}
We again used the Cint service to recruit participants. \dsnew{Compensation for participation was handled in the same manner as described in \S\ref{subsubsec:methods:study1:recruitment}.} Because our initial sample (intended to target education, ethnicity, gender and age distributions approximating the U.S. census) skewed more highly educated than we had hoped, we added a second round of recruitment one week
later primarily targeting participants without bachelor's degrees. Hereafter, we report on both samples together.
\dsnew{Data was collected during January and February 2020.} In total 349 participants were included in the Study-2{} analysis, including 142 men (40.7\%), 203 women (58.2\%), 1 other (0.3\%), and 3 (0.9\%) preferring not to answer. The average age was 45 years (SD = 15). Ethnicity and educational attainment are summarized in Table~\ref{tab:demo}. %
On average, participants completed the survey in 16 minutes. %
\subsection{Data Analysis}
Free response questions were qualitatively coded for statistical testing. In Study-1{}, one question was coded by a single researcher for simple correctness (see \Appref{results:1:rq1}), and the other was independently coded by three researchers (resolved to 100\%) to capture sentiment information (see \Appref{results:1:rq3}). In Study-2{}, both questions were independently coded by 2-3 researchers (resolved to 100\%). Participants who provided nonsensical answers, answers not in English, or other non-responsive answers to free response questions were excluded from all analysis.
The following methods were used for all statistical analyses unless otherwise specified. Correlations involving ordinal data were assessed using Spearman's rho. Omnibus comparisons on nonparametric ordinal data were performed with a Kruskal--Wallis (K-W) test, and relevant post-hoc comparisons with Mann--Whitney U (M-WU) tests. Post-hoc $p$-values were adjusted for multiple comparisons using Bonferroni correction. $\chi^2$ tests were used for comparisons of nominal data.
Boxplots show median and first and third quartiles; whiskers extend to $1.5 \times \text{IQR}$ (interquartile range), with outliers indicated by points. \dsnew{The full analysis script for both studies can be found on GitHub.\footnote{\url{https://github.com/saharaja/ICML2020-fairness}}}
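For concreteness, the sketch below shows how these tests map onto standard \texttt{scipy.stats} calls. The data and variable names are hypothetical and are not taken from our analysis script.
\begin{verbatim}
# Minimal sketch of the statistical tests described above,
# run on synthetic stand-in data.
import numpy as np
from scipy.stats import spearmanr, kruskal, mannwhitneyu

rng = np.random.default_rng(0)
score = rng.integers(0, 10, size=147)   # comprehension scores, 0-9
likert = rng.integers(1, 6, size=147)   # a 5-point Likert response

# Correlation between ordinal variables: Spearman's rho.
rho, p_rho = spearmanr(score, likert)

# Omnibus comparison across the 5 response groups: Kruskal-Wallis.
groups = [score[likert == v] for v in range(1, 6)]
h, p_kw = kruskal(*groups)

# Post-hoc pairwise comparison: Mann-Whitney U, with Bonferroni
# correction (alpha divided by the number of comparisons, here 10).
alpha = 0.05 / 10
u, p_mwu = mannwhitneyu(groups[0], groups[4], alternative="two-sided")
print(rho, p_kw, p_mwu < alpha)
\end{verbatim}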
\subsection{Limitations}
As with all surveys, our study has certain limitations. We recruited a demographically broad population, but web panels are generally more tech-savvy than the broader population \cite{redmiles2019well}. We consider this acceptable for a first effort. Some participants may
be satisficing rather than answering carefully. We mitigate this by
disqualifying participants with off-topic or non-responsive free-text responses. Further, this limitation can be expected to be consistent across conditions, enabling reasonable comparison. Finally, better or clearer explanations of the fairness definitions we explored are certainly possible; we believe our explanations were sufficient to allow us to investigate our research questions, especially because they were designed to be consistent across conditions.
\section{Results}\label{sec:results}
In this section we first discuss the preliminary findings from Study-1{} (see \S\ref{results:a}). These findings were used as hypotheses for further exploration and testing in Study-2{}; we discuss those results second (see \S\ref{results:b}).
\subsection{Study-1{}} \label{results:a}
We analyze survey responses for Study-1{} and make several observations. We first validate our comprehension score as a measure of participant understanding; we then generate hypotheses for further exploration in Study-2{}.
\subsubsection{Our Survey Effectively Captures Rule Comprehension} \label{results:a:validity}
We find that we can measure comprehension of the fairness rule. The comprehension score was calculated as the total number of correct responses out of a possible 9. All questions were weighted equally. The relevant questions included 2 multiple choice, 4 true/false, and 3 yes/no questions. The average score was 6.2 (SD=2.3).
We validate our comprehension score using two methods: internal validity testing, and correlation against two self-report and one free response question included in our survey (see \Appref{results:1:rq1} for further details).
\vspace{-5pt}
\paragraph{Internal Validity}
Cronbach's $\alpha$ and item-total correlation were used to assess internal validity of the comprehension score. Both measures met established thresholds \cite{nunnally1978,everitt2010}: Cronbach's $\alpha = 0.71$, and item-total correlation for 8 of the 9 items (all but Q5) $> 0.3$. %
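As an illustration, both checks can be computed directly from a 0/1 item-response matrix. The sketch below uses synthetic data in place of our responses and assumes the corrected form of the item-total correlation (each item against the sum of the remaining items).
\begin{verbatim}
# Minimal sketch: internal-validity checks on a binary
# item-response matrix (rows = participants, columns = items).
import numpy as np

def cronbach_alpha(items):
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

def corrected_item_total(items):
    # Correlation of each item with the sum of the other items.
    return np.array([
        np.corrcoef(items[:, j],
                    np.delete(items, j, axis=1).sum(axis=1))[0, 1]
        for j in range(items.shape[1])
    ])

rng = np.random.default_rng(1)
responses = (rng.random((147, 9)) < 0.69).astype(int)  # synthetic
print(cronbach_alpha(responses))              # vs. 0.7 threshold
print(corrected_item_total(responses) > 0.3)  # per-item threshold
\end{verbatim}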
\vspace{-5pt}
\paragraph{Question Correlation}
We find that self-reported rule understanding and use are reflected in comprehension score. First, we compared comprehension score to self-reported rule understanding (Q13): ``I am confident I know how to apply the award rule described above,'' rated on a five-point Likert scale from strongly agree (1) to strongly disagree (5). The median response was ``agree'' ($\text{Q1}=1$, $\text{Q3}=3$). Higher comprehension scores tended to be associated with greater confidence in understanding (Spearman's $\rho = 0.39$, $p<0.001$), supporting the notion that comprehension score is a valid measure of rule comprehension.
Next, we compared comprehension score to a self-report question about the participant's use of the rule (Q14), with the following options: (a) ``I applied the provided award rule only,'' (b) ``I used my own ideas of what the correct award decision should be rather than the provided award rule,'' or (c) ``I used a combination of the provided award rule and my own ideas of what the correct award decision should be.'' We find that participants who claimed to use only the rule scored significantly higher (mean 7.09) than those who used their own notions (4.90) or a combination (4.68) (post-hoc M-WU,
$p<0.001$ for both tests; corrected $\alpha = 0.05/3 = 0.017$). This further corroborates our comprehension score.
Finally, we asked participants to explain the rule in their own words (Q12). Each response was then qualitatively coded as one of five categories -- \textbf{Correct}: describes rule correctly; \textbf{Partially correct}: description has some errors or is somewhat vague; \textbf{Neither}: vague description of purpose of the rule rather than how it works, or pure opinion; \textbf{Incorrect}: incorrect or irrelevant; and \textbf{None}: no answer, or expresses confusion. Participants whose responses were either correct (mean comprehension score = 7.71) or partially correct (7.03) performed significantly better on our survey than those responding with neither (5.13) or incorrect (4.24) (post-hoc M-WU, $p<0.001$ for these four comparisons, corrected $\alpha = 0.05/10 = 0.005$). These findings further validate our comprehension score. Additional details of these results and the associated statistical tests can be found in \Appref{results:1:rq1}.
\subsubsection{Hypotheses Generated} \label{results:a:hypotheses}
We analyzed the data from Study-1{} in an exploratory fashion intended to generate hypotheses that could be tested in Study-2{}.
We highlight here three key hypotheses that emerged from the data.
\paragraph{Education Influences Comprehension}
We used Poisson regression models to explore whether various demographic factors were associated with differences in comprehension. We found that a model including education as a regressor had greater explanatory power than a model without (see \Appref{results:1:rq2} for further details).
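A minimal sketch of such a comparison is below; the data frame, column names, and the specific pair of models are illustrative assumptions rather than the exact models we fit.
\begin{verbatim}
# Sketch: compare Poisson models with and without education.
import numpy as np, pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "score": rng.integers(0, 10, 147),     # 0-9 correct answers
    "education": rng.choice(["noHS", "HS", "BS_plus"], 147),
    "age": rng.integers(18, 80, 147),
})

base = smf.poisson("score ~ age", data=df).fit(disp=0)
with_edu = smf.poisson("score ~ age + C(education)",
                       data=df).fit(disp=0)
# A lower AIC for the education model indicates greater
# explanatory power after penalizing for added parameters.
print(base.aic, with_edu.aic)
\end{verbatim}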
\paragraph{Disagreement with the Rule is Associated with Higher Comprehension Scores}
We asked participants for their opinion on the presented rule in a free response question (Q15). These responses were qualitatively coded to capture participant sentiment toward the rule in one of five categories -- \textbf{Agree}: generally positive sentiment towards rule; \textbf{Depends}: describes both pros and cons of the given rule; \textbf{Disagree}: generally negative sentiment towards rule; \textbf{Not understood}: expresses confusion about rule; \textbf{None}: no answer, or lacks opinion on appropriateness of the rule. Participants who expressed disagreement with the rule performed better (mean comprehension score = 7.02) than those who expressed agreement (5.50), did not understand the rule (4.44), or provided no response (5.09) to the question (post-hoc M-WU, $p<0.005$ for these three comparisons;
corrected $\alpha=0.05/10=0.005$). \Appref{results:1:rq3} provides further details.
\begin{figure*}[h]
\centering
\begin{subfigure}[t]{0.30\textwidth}
\includegraphics[width=1\linewidth]{fig_studyB/q13.png}
\caption{Grouped by response to Q13}
\label{fig:studyB_q13}
\vspace{-5pt}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.30\textwidth}
\includegraphics[width=1\linewidth]{fig_studyB/q14.png}
\caption{Grouped by response to Q14.}
\label{fig:studyB_q14}
\vspace{-5pt}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.30\textwidth}
\includegraphics[width=1\linewidth]{fig_studyB/q12.png}
\caption{Grouped by coded response to Q12.}
\label{fig:studyB_q12}
\vspace{-5pt}
\end{subfigure}
\caption{Study-2{} comprehension scores grouped by responses to Q13, Q14, and Q12. In (a), self-reported understanding of the rule was not related to comprehension score; the x-axis is reversed for the figure and correlation test. In (b), rule compliance (leftmost on the x-axis) was associated with higher comprehension scores; one participant who did not provide a response was excluded from this figure and the relevant analysis. Finally, in (c), participants who provided either correct or partially correct responses tended to perform better.}
\end{figure*}
\paragraph{Non-Compliance is Associated with Lack of Understanding} \label{results:a:non-comp}
We were interested in understanding why some participants failed to adhere to the rule, as measured by their self-report of rule usage in Q14. We labeled those who responded with either having used their own personal notions of fairness ($n=29$) or some combination of their personal notions and the rule ($n=28$) as ``non-compliant'' (NC), with the remaining $n=89$ labeled as ``compliant'' (C). One participant who did not provide a response was excluded from this analysis, conducted using $\chi^2$ tests.
Non-compliant participants were less likely to self-report high understanding of the rule in Q13 (see Fig. \ref{fig:q13q14}). Moreover, non-compliance also appears to be associated with a reduced ability to correctly explain the rule in Q12 (see Fig. \ref{fig:q12q14}). This fits with the overall strong relationship we observed among comprehension scores, self-reported understanding, ability to explain the rule, and compliance.
Further, negative participant sentiment towards the rule (Q15) also appears to be associated with greater compliance (see Fig. \ref{fig:q15q14}). %
Thus, non-compliant participants appear to behave this way because they do not \emph{understand} the rule, rather than because they do not \emph{like} it. Refer to \Appref{results:a:non-comp} for further details.
\subsection{Study-2{}} \label{results:b}
We first confirm the validity of our
comprehension score, then compare comprehension across
definitions and examine the hypotheses generated in Study-1{}.
\subsubsection{Score Validation} \label{results:b:validation}
We validated our metric using the same approach used in Study-1{}, i.e., assessing both internal validity and correlation with self-report and free-response questions. We report the results of this assessment here.
\paragraph{Internal Validity}
We again used Cronbach's $\alpha$ and item-total correlation to assess internal validity of the comprehension score. An initial assessment using all 349 responses yielded Cronbach's $\alpha = 0.38$, and item-total correlation $> 0.3$ for only four of the nine comprehension questions. Since both measures performed below established thresholds \cite{nunnally1978,everitt2010}, we investigated further and repeated these measurements individually for each fairness-definition condition (DP, FNR, FPR, EO). This procedure showed stark differences in Cronbach's $\alpha$ based on definition: DP = 0.64, FNR = 0.39, FPR = 0.49, EO = 0.62. Item-total correlations followed a similar pattern: best in DP, worst in FNR. Based on these differences, we iteratively removed problematic questions from the score on a per-definition basis until all remaining questions achieved an item-total correlation of $> 0.3$ \cite{everitt2010}.
By removing poorly performing questions, we
increase our confidence that the measured comprehension scores are meaningful for further analysis. Table \ref{tab:dropped_qs} specifies which questions were retained for analysis under each definition.
\vspace{-10pt}
\begin{table}[ht]
\centering
\caption{\label{tab:dropped_qs} Questions that were used for downstream analysis after iterative removal of questions with poor item-total correlation.}
\vspace{5pt}
{\small
\begin{tabular}{@{}lrrrrrrrrr@{}}
\toprule
& \multicolumn{9}{c}{\textbf{Questions}}\\
\midrule
& Q3 & Q4 & Q5 & Q6 & Q7 & Q8 & Q9 & Q10 & Q11\\
\midrule
DP & X & X & & & X & X & X & X & X \\
FNR & X & X & X & & & X & & & \\
FPR & X & X & X & X & & X & & X & X \\
EO & X & X & X & & X & X & X & X & X \\
\bottomrule
\end{tabular}%
}
\end{table}
Because questions were dropped on a per-definition basis, the maximum of the resulting scores varied from 4 to 8 depending on the definition, rather than being a uniform 9. We normalized this by treating the comprehension score as a fraction of the maximum for each condition rather than as a raw score. %
We report this \textit{adjusted score} in the remainder of \S\ref{results:b}. The average score was 0.53 (SD=0.22).
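The pruning and normalization steps can be sketched as follows; the corrected item-total correlation and the synthetic data are illustrative assumptions.
\begin{verbatim}
# Sketch: drop the worst item until all item-total correlations
# exceed 0.3, then score participants over the retained items.
import numpy as np

def corrected_item_total(items):
    return np.array([
        np.corrcoef(items[:, j],
                    np.delete(items, j, axis=1).sum(axis=1))[0, 1]
        for j in range(items.shape[1])
    ])

def prune_items(items, threshold=0.3):
    kept = list(range(items.shape[1]))
    while len(kept) > 2:
        r = corrected_item_total(items[:, kept])
        worst = int(r.argmin())
        if r[worst] > threshold:
            break
        kept.pop(worst)
    return kept

rng = np.random.default_rng(3)
responses = (rng.random((349, 9)) < 0.53).astype(int)  # synthetic
kept = prune_items(responses)
# Adjusted score: fraction correct over retained items, so scores
# are comparable across conditions with different maxima.
adjusted = responses[:, kept].mean(axis=1)
\end{verbatim}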
\paragraph{Question Correlation} \label{qcorr}
As in Study-1{}, we compare comprehension scores with responses to self-report and free response questions included in our survey.
First, we compared comprehension score to self-reported rule understanding (Q13), as described in \S\ref{results:a:validity}.
The median response was ``agree'' ($\text{Q1}=2$, $\text{Q3}=3$). We assess the correlation between these responses and comprehension score using Spearman's rho (appropriate for ordinal data). Unlike in Study-1{}, there was no relationship between self-reported understanding and comprehension score (Fig.~\ref{fig:studyB_q13}).
Next, we compared comprehension score to a self-report question about the participant's use of the rule (Q14), as described in \S\ref{results:a:validity}.
A K-W test revealed a relationship between self-reported rule usage and comprehension score ($p<0.001$). %
We find that participants who claimed to use only the rule tended to score higher (mean comprehension score = 0.58) than those who used a combination of the rule and their own notions of fairness (0.47, $p<0.01$). No other differences were found (post-hoc M-WU; %
corrected $\alpha = 0.05/3 = 0.017$). This suggests that participants are answering at least somewhat honestly: when they try to apply the rule, comprehension scores improve (see Fig. \ref{fig:studyB_q14}).
Finally, we asked participants to explain the rule in their own words (Q12). Each response was then qualitatively coded as one of five categories, as described in \S\ref{results:a:validity}.
These results can be seen in Fig.~\ref{fig:studyB_q12}. A K-W test revealed a relationship between comprehension score and coded responses to Q12 %
($p<0.001$). Correct (mean comprehension score = 0.83) responses were associated with higher comprehension scores than partially correct (0.58), neither (0.44), incorrect (0.52), and none (0.48) responses %
($p<0.001$ for all); partially correct responses were also associated with higher comprehension scores than neither responses
($p<0.001$); and incorrect responses were associated with higher comprehension scores that neither responses
($p<0.005$). No other differences were found (post-hoc M-WU; corrected $\alpha = 0.05/10 = 0.005$). These findings support our claim that our comprehension score is a valid measure of fairness-rule comprehension.
\subsubsection{Education and Definition are Related to Comprehension Score} \label{results:b:edu}
One hypothesis generated by Study-1{} was that comprehension score is positively correlated with education level.
We investigated this hypothesis further in Study-2{} using linear regression models followed by model selection.
\dsnew{We believe this exploratory approach to be appropriate despite the previously formulated hypothesis, given the introduction of a new variable in Study-2{}, i.e., fairness definition.} %
Eleven models were tested, regressing comprehension score on different combinations of demographics (ethnicity, gender, education, and age) and condition (fairness definition). Models were compared using the Akaike information criterion (AIC), a standard method of evaluating model quality and performing model selection \cite{akaike1974}. Comparison by AIC revealed that the model using just education (edu) and fairness definition (def) as regressors was the model of best fit. In this model, having a Bachelor's degree or above was associated with a score increase of 0.14, and the FNR condition with a score decrease of 0.11 ($p < 0.004$ for both; corrected $\alpha = 0.05/11 = 0.0045$). A regression table of the best fit model can be found in Table \ref{tab:GLM}.
\begin{table}[h]
\centering
\caption{\label{tab:GLM} Regression table for the best fit model, with two covariates: education (baseline: no HS) and definition (baseline: DP). %
Est. = estimate, CI = confidence interval.}
\vspace{5pt}
{\small
\begin{tabular}{@{}lrrc@{}}
\toprule
\textbf{Covariate} & \textbf{Est.} & \textbf{95\% CI} & \textbf{$p$} \\
\midrule
\emph{Education} \\
HS & 0.00 & [-0.10, 0.10] & 0.989 \\ %
Post-secondary, no BS & 0.09 & [-0.01, 0.18] & 0.078\\ %
Bachelor's and above & 0.14 & [0.04, 0.23] & $<0.004$ \\ %
\addlinespace[1.5 ex]
\emph{Definition} \\
EO & -0.08 & [-0.14, 0.01] & 0.020 \\ %
FPR & -0.05 & [-0.11, 0.01] & 0.124 \\ %
FNR & -0.11 & [-0.18, -0.05] & $<0.001$ \\ %
\bottomrule
\end{tabular}%
}
\vspace{-7pt}
\end{table}
AIC results of each of the eleven models, along with the relevant regressors, can be seen in Table \ref{tab:AIC} in \Appref{app:b:model_selection}. Comprehension score as a function of education and fairness definition can be seen in Figs. \ref{fig:studyB_edu} and \ref{fig:studyB_scores}.
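The selection step itself is mechanical, as in the sketch below; the candidate formulas and column names are illustrative rather than the exact eleven models.
\begin{verbatim}
# Sketch: AIC-based selection over candidate OLS models.
import numpy as np, pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
df = pd.DataFrame({
    "adj_score": rng.random(349),
    "education": rng.choice(["noHS", "HS", "BS_plus"], 349),
    "definition": rng.choice(["DP", "FNR", "FPR", "EO"], 349),
    "age": rng.integers(18, 80, 349),
})

formulas = [
    "adj_score ~ C(education)",
    "adj_score ~ C(definition)",
    "adj_score ~ C(education) + C(definition)",
    "adj_score ~ C(education) + C(definition) + age",
    # ...remaining candidate models
]

fits = {f: smf.ols(f, data=df).fit() for f in formulas}
best = min(fits, key=lambda f: fits[f].aic)  # lowest AIC wins
print(best)
print(fits[best].summary())
\end{verbatim}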
\begin{figure}[th]
\centering
\includegraphics[width=0.8\columnwidth]{fig_studyB/edu.png}
\vspace{-15pt}
\caption{Comprehension score grouped by education level. Higher education was associated with higher comprehension scores. Note that two participants who did not report their education level were removed from this figure and the relevant analysis.}
\label{fig:studyB_edu}
\vspace{-5pt}
\end{figure}
\begin{figure}[th]
\centering
\includegraphics[width=0.8\columnwidth]{fig_studyB/scores.png}
\vspace{-15pt}
\caption{Comprehension score grouped by fairness definition. The FNR condition was associated with lower comprehension scores.}
\label{fig:studyB_scores}
\vspace{-10pt}
\end{figure}
\subsubsection{Greater Negative Sentiment Toward the Rule is Associated with Higher Comprehension Scores} \label{results:b:sentiment}
In Study-1{}, we found a relationship between participant sentiment towards the rule and comprehension score. To better interrogate this phenomenon, in Study-2{} we added two more questions to the survey to directly address the issue of sentiment, rather than relying on a free-response question. One (Q15) asks, ``To what extent do you agree with the following statement: I like the hiring rule?'' The other (Q16) asks, ``To what extent do you agree with the following statement: I agree with the hiring rule?'' Both are evaluated on a five-point Likert scale from ``strongly agree'' (1) to ``strongly disagree'' (5).
Using Spearman's rho, we assessed the correlation between responses to these two questions and comprehension score. A slight correlation was found between liking the rule and comprehension score, i.e., those who disliked the rule were more likely to have higher comprehension scores ($\rho = -0.11, p < 0.05$; see Fig.~\ref{fig:studyB_q15}).
A slight correlation was also found between agreeing with the rule and comprehension score, i.e., disagreement was associated with higher comprehension scores ($\rho = -0.11, p < 0.05$; see Fig.~\ref{fig:studyB_q16}).
\begin{figure}[t]
\centering
\includegraphics[width=0.8\columnwidth]{fig_studyB/q15.png}
\vspace{-15pt}
\caption{Comprehension score grouped by response to Q15. Dislike of the rule was associated with higher comprehension scores. X-axis is reversed for figure and correlation test.}
\label{fig:studyB_q15}
\vspace{10pt}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.8\columnwidth]{fig_studyB/q16.png}
\vspace{-15pt}
\caption{Comprehension score grouped by response to Q16. Disagreement with the rule was associated with higher comprehension score. X-axis is reversed for figure and correlation test.}
\label{fig:studyB_q16}
\vspace{-10pt}
\end{figure}
\subsubsection{Non-Compliance is Associated with Lack of Understanding} \label{results:b:non-comp}
A final hypothesis generated in Study-1{} involves non-compliance: i.e., why do participants who report \textit{not} using the rule to answer the comprehension questions behave this way?
In Study-1{}, we found that this was because non-compliant participants were less able to \textit{understand} the rule, not because they did not \textit{like} it.
We also observed this in our results from Study-2{}:
compliant participants exhibited higher self-reported understanding of the rule ($p < 0.001$, Fig. \ref{fig:studyB_nc_q13q14}), were more likely to correctly explain the rule ($p < 0.001$, Fig. \ref{fig:studyB_nc_q12q14}), and were more likely to dislike the rule ($p < 0.05$, Fig. \ref{fig:studyB_nc_q15q14}). We observed no relationship between compliance and agreement with the rule (Fig. \ref{fig:studyB_nc_q16q14}). Refer to \Appref{app:b:compliance} for more details.
\section{Discussion} \label{sec:discussion}
Bias in machine learning is a growing threat to justice; to date, ML bias has been documented in both commercial and government applications, in sectors such as medicine, criminal justice, and employment. In response, ML researchers have proposed various notions of \emph{fairness} to correct these biases. Most ML fairness definitions are purely mathematical, and require some knowledge of machine learning. While they are intended to benefit the general public, it is unclear whether the general public agrees with --- or even understands --- these notions of ML fairness.
We take an initial step to bridge this gap by asking \emph{do people understand the notions of fairness put forth by ML researchers?} To answer this question we develop a short questionnaire to assess understanding of three particular notions of ML fairness (demographic parity, equal opportunity, and equalized odds). We find that our comprehension score (with some adjustments for each definition) appears to be a consistent and reliable indicator of understanding the fairness metrics.
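To make the three notions concrete, the sketch below (our illustration, not code from the studies) checks each rule on per-group confusion-matrix counts; the function names and the exact-equality tolerance are our own choices, and \texttt{(tp, fn, fp, tn)} denote hypothetical counts of qualified-hired, qualified-not-hired, unqualified-hired, and unqualified-not-hired applicants.
\begin{verbatim}
# Illustrative check of demographic parity, equal opportunity (FNR/FPR),
# and equalized odds on per-group confusion-matrix counts.
def rates(tp, fn, fp, tn):
    selection = (tp + fp) / (tp + fn + fp + tn)  # fraction receiving offers
    fnr = fn / (tp + fn)   # qualified applicants not offered jobs
    fpr = fp / (fp + tn)   # unqualified applicants offered jobs
    return selection, fnr, fpr

def check_fairness(group_a, group_b, tol=1e-9):
    sel_a, fnr_a, fpr_a = rates(*group_a)
    sel_b, fnr_b, fpr_b = rates(*group_b)
    return {
        "demographic parity": abs(sel_a - sel_b) <= tol,
        "equal opportunity (FNR)": abs(fnr_a - fnr_b) <= tol,
        "equal opportunity (FPR)": abs(fpr_a - fpr_b) <= tol,
        "equalized odds": abs(fnr_a - fnr_b) <= tol
                          and abs(fpr_a - fpr_b) <= tol,
    }

# (tp, fn, fp, tn) per group, e.g. female vs. male applicants
print(check_fairness((8, 2, 2, 8), (12, 3, 3, 12)))
\end{verbatim}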
The comprehension score demonstrated in this work lays a foundation for many future studies exploring other fairness definitions.
We do find, however, that comprehension is lower for the equal opportunity (false negative rate) condition than for the other definitions.
In general, comprehension scores for equal opportunity (both FNR and FPR) were less internally consistent than other fairness rules, suggesting participant responses were also more ``noisy'' for equal opportunity.
This is somewhat intuitive: equal opportunity is difficult to understand, as it only involves one type of error (FNR or FPR) rather than both.
Furthermore, FNR participants had the lowest comprehension scores \emph{and} the lowest consistency of all conditions.
We believe this finding also matches intuition: FNR is a strange notion in the context of hiring, as it concerns only those qualified applicants who were \emph{not} hired or offered jobs.
Indeed, in free-response questions several participants mentioned that they do not understand why qualified candidates are \emph{not} hired.
We believe many participants fixated on this strange setting, impacting their comprehension scores.
This finding is potentially problematic, as equal opportunity definitions are increasingly used in practice. Indeed, major fairness tools such as the Google What-If Tool \cite{wexler2019if} and IBM AI Fairness 360 \cite{bellamy2019ai} specifically focus on equal opportunity. Further work should be put into making descriptions of nuanced fairness metrics more accessible.
Our analysis also identified other issues that should be considered when thinking about mathematical notions of fairness.
First, we find that education is a strong predictor of comprehension. This is especially troubling, as the negative impacts of biased ML are expected to fall disproportionately on the most marginalized~\cite{barocas2016big} and to displace employment opportunities for those with the least education~\cite{frey2017future}. Lack of understanding may hamper these groups' ability to effectively advocate for themselves. Designing more accessible explanations of fairness should be a top research priority.
Second, we find that those with the weakest comprehension of fairness metrics also express the least negative sentiment toward them. When fairness is a concern, there are always trade-offs---between accuracy and equity, or between different stakeholders, and so on. Balancing these trade-offs is an uncomfortable dilemma often lacking an objectively correct solution. It is possible that those who comprehend this dilemma \emph{also} recognize the precarious trade-off struck by any mathematical definition of fairness, and are therefore dissatisfied with it. From another perspective, this finding is more insidious. If those with the weakest understanding of AI bias are also least likely to protest, then major problems in algorithmic fairness may remain uncorrected.
\input{acknowledgments}
\subsection{Our Survey Effectively Captures Rule Comprehension} \label{results:1:rq1}
We find that our survey is internally consistent, and effectively measures participant comprehension of demographic parity. The former we evaluated using Cronbach's $\alpha$ and item-total correlation (discussed in \S\ref{results:a:validity}), and the latter using two self-report measures and one free response question.
See Fig.~\ref{fig:question_breakdown} for participant performance per question.
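As a reference for the internal-consistency analysis, a minimal Cronbach's $\alpha$ computation is sketched below; the participants-by-questions matrix of 0/1 correctness indicators is randomly generated placeholder data, not our responses.
\begin{verbatim}
# Minimal sketch of Cronbach's alpha over a participants-by-items
# matrix of 0/1 correctness indicators (placeholder data).
import numpy as np

rng = np.random.default_rng(1)
answers = rng.integers(0, 2, size=(147, 9)).astype(float)

k = answers.shape[1]
item_var = answers.var(axis=0, ddof=1).sum()  # sum of per-item variances
total_var = answers.sum(axis=1).var(ddof=1)   # variance of total scores
alpha = (k / (k - 1)) * (1 - item_var / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
\end{verbatim}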
\begin{figure}[h]
\centering
\includegraphics[width=0.8\columnwidth]{fig/question_breakdown.png}
\vspace{-10pt}
\caption{Number of participants answering each question correctly. Each panel contains all 147 participants.
}
\label{fig:question_breakdown}
\vspace{-5pt}
\end{figure}
\subsubsection{Self-reported rule understanding and use are reflected in comprehension score}
First, we compared comprehension score to self-reported rule understanding (Q13). Higher comprehension scores were associated with greater confidence in understanding (Spearman's rho), suggesting that participants were accurately assessing their ability to apply the rule (see Fig. \ref{fig:q13}).
\begin{figure}[ht]
\centering
\includegraphics[width=0.8\columnwidth]{fig/q13.png}
\vspace{-10pt}
\caption{Comprehension score grouped by response to Q13. Self-reported understanding of the rule was associated with higher comprehension scores. X-axis is reversed for figure and correlation test.}
\label{fig:q13}
\end{figure}
Next, we compared comprehension score to a self-report question about the participant's use of the rule (Q14).
Participants who claimed to use only the rule tended to score higher than those who used their own notions of fairness or a combination thereof (K-W test, and post-hoc M-WU), suggesting that participants are answering somewhat honestly: when they try to apply the rule, comprehension scores improve (see Fig.~\ref{fig:q14}).
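The omnibus-plus-post-hoc pattern used here and below (Kruskal-Wallis followed by pairwise Mann-Whitney U tests) can be sketched as follows; the three groups and their sizes are hypothetical stand-ins for the Q14 response options.
\begin{verbatim}
# Sketch of a Kruskal-Wallis omnibus test with post-hoc pairwise
# Mann-Whitney U comparisons (hypothetical group data).
from itertools import combinations
import numpy as np
from scipy.stats import kruskal, mannwhitneyu

rng = np.random.default_rng(2)
groups = {
    "rule only": rng.integers(4, 10, size=89),
    "combination": rng.integers(2, 9, size=40),
    "own ideas": rng.integers(1, 8, size=17),
}

h, p = kruskal(*groups.values())
print(f"K-W: H = {h:.2f}, p = {p:.4f}")
if p < 0.05:
    for (name_a, a), (name_b, b) in combinations(groups.items(), 2):
        u, p_ab = mannwhitneyu(a, b, alternative="two-sided")
        print(f"  {name_a} vs {name_b}: U = {u:.1f}, p = {p_ab:.4f}")
\end{verbatim}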
\begin{figure}[ht]
\centering
\includegraphics[width=0.8\columnwidth]{fig/q14.png}
\vspace{-10pt}
\caption{Comprehension score grouped by response to Q14. Rule compliance (leftmost on the x-axis) was associated with higher comprehension scores. One participant who did not provide a response was excluded from the figure and relevant analysis.}
\label{fig:q14}
\end{figure}
\subsubsection{Participants with higher comprehension scores are better able to explain the rule}
To further validate our comprehension score, we asked participants to explain the rule in their own words (Q12). %
Responses were qualitatively coded as one of five categories: \textbf{correct}, \textbf{partially correct}, \textbf{neither}, \textbf{incorrect}, or \textbf{none} (as discussed in \S\ref{results:a:validity}). The results of this coding can be seen in Fig. \ref{fig:q12}. Participants providing correct explanations of the rule attained higher comprehension scores (K-W test, and post-hoc M-WU), further corroborating our claim that our comprehension score is a valid measure of fairness rule comprehension.
\begin{figure}[ht]
\centering
\includegraphics[width=0.8\columnwidth]{fig/q12.png}
\vspace{-10pt}
\caption{Comprehension score grouped by code assigned to Q12 response. Participants who provided either correct or partially correct responses tended to perform better.}
\label{fig:q12}
\vspace{-10pt}
\end{figure}
\subsection{Education Influences Comprehension} \label{results:1:rq2}
During the cognitive interview phase, we observed a possible trend of comprehension scores being lower for older participants and those with less educational attainment. If true, this would suggest that fairness explanations should be carefully validated to ensure they can be used with diverse populations. We investigated this hypothesis, in an exploratory fashion, using Poisson regression models.
Three models were tested. The first regressed score against all four demographic categories as predictors (gender, age, ethnicity, and education), the second omitted education, and the third tested only education. Models were compared using the Akaike information criterion (AIC), a standard method of evaluating model quality and performing model selection \cite{akaike1974}. Comparison by AIC revealed that model 1 (all four categories) was a better predictor of comprehension score than models 2 or 3 (AIC = 643.3, 651.2, and 660.5, respectively; difference = 0.0, 7.9, and 17.1).
In model 1, only education showed correlation with comprehension score (effect size = $1.40$, $p<0.05$). %
Further work is needed to confirm this exploratory result.
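The exploratory regression comparison can be reproduced along the lines below; the data frame, its column names, and the generated values are hypothetical placeholders rather than our data set.
\begin{verbatim}
# Sketch of Poisson-regression model comparison by AIC
# (hypothetical demographic data).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "score": rng.poisson(6, size=147),
    "gender": rng.choice(["f", "m"], size=147),
    "age": rng.integers(18, 70, size=147),
    "ethnicity": rng.choice(["a", "b", "c"], size=147),
    "education": rng.integers(1, 6, size=147),  # ordinal attainment
})

formulas = {
    "model 1": "score ~ C(gender) + age + C(ethnicity) + education",
    "model 2": "score ~ C(gender) + age + C(ethnicity)",
    "model 3": "score ~ education",
}
for name, f in formulas.items():
    fit = smf.glm(f, data=df, family=sm.families.Poisson()).fit()
    print(f"{name}: AIC = {fit.aic:.1f}")
\end{verbatim}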
\begin{figure}[ht]
\centering
\includegraphics[width=0.8\columnwidth]{fig/edu.png}
\vspace{-10pt}
\caption{Comprehension score grouped by education level. Higher education level was associated with higher comprehension scores.}
\label{fig:edu}
\vspace{-10pt}
\end{figure}
\subsection{Disagreement with the Rule is Associated with Higher Comprehension Scores} \label{results:1:rq3}
Participants were asked for their opinion on the presented rule in another free response question (Q15). These responses were then qualitatively coded to capture participant sentiment towards the rule as one of five categories: \textbf{agree}, \textbf{depends}, \textbf{disagree}, \textbf{not understood}, or \textbf{none} (as discussed in \S\ref{results:a:hypotheses}).
\begin{figure}[ht]
\centering
\includegraphics[width=0.8\columnwidth]{fig/q15.png}
\caption{Comprehension score grouped by code assigned to Q15 response. Participants who exhibited negative sentiment toward the rule tended to perform better.}
\label{fig:q15}
\vspace{-10pt}
\end{figure}
This question was added based on the cognitive interviews (see \Appref{methods:design:cog}), where perception seemed to influence compliance.
The results of coding Q15 can be seen in Fig. \ref{fig:q15}. Participants who expressed disagreement with the rule performed better than those who expressed agreement, did not understand the rule, or provided no response to the question (K-W test, post-hoc M-WU). Note that this result should not be interpreted as an overall finding on the appropriateness of demographic parity. Instead we anticipate the perceptions of appropriateness of any fairness definition will be highly context-dependent.
\subsection{Non-Compliance is Associated with Lack of Understanding} \label{results:a:non-comp}
We were interested in understanding why some participants failed to adhere to the rule, as measured by their self-report of rule usage in Q14.
After labeling participants as either ``non-compliant" (NC, $n=57$) or ``compliant" (C, $n=89$), we conducted a series of $\chi^2$ tests to investigate this phenomenon.
Non-compliant participants were less likely to self-report high understanding of the rule in Q13 (see Fig. \ref{fig:q13q14}). %
Moreover, non-compliance also appears to be associated with a reduced ability to correctly explain the rule in Q12 (see Fig. \ref{fig:q12q14}). %
Further, negative participant sentiment towards the rule (Q15) also appears to be associated with greater compliance (see Fig. \ref{fig:q15q14}). %
Thus, non-compliant participants appear to behave this way because they do not \emph{understand} the rule, rather than because they do not \emph{like} it.
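A minimal version of one such $\chi^2$ test is sketched below; the contingency table of compliance against binned Q13 responses is invented for illustration (only the row totals match our group sizes).
\begin{verbatim}
# Sketch of a chi-squared test of independence between compliance
# (rows) and a coded survey response (columns); counts are made up.
import numpy as np
from scipy.stats import chi2_contingency

# rows: compliant (n=89), non-compliant (n=57)
# columns: Q13 responses from SD to SA
table = np.array([
    [2, 5, 10, 40, 32],
    [6, 12, 15, 18, 6],
])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
\end{verbatim}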
\begin{figure}[h]
\centering
\includegraphics[width=0.8\columnwidth]{fig/nc_q13q14.png}
\vspace{-15pt}
\caption{Self-report of understanding (Q13) split by compliance (Q14). NC participants tend to report less confidence in their ability to apply the rule. SD = strongly disagree, D = disagree, N = neither agree nor disagree, A = agree, SA = strongly agree.}
\label{fig:q13q14}
\vspace{-5pt}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.8\columnwidth]{fig/nc_q12q14.png}
\vspace{-15pt}
\caption{Correctness of rule explanation (Q12) split by compliance (Q14). NC participants tend to be less able to explain the presented rule in their own words. NA = none, I = incorrect, N = neither, PC = partially correct, C = correct.}
\label{fig:q12q14}
\vspace{-5pt}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.8\columnwidth]{fig/nc_q15q14.png}
\vspace{-15pt}
\caption{Participant agreement with rule (Q15) split by compliance (Q14). NC participants tend to harbor less negative sentiment towards the rule. NA = none, NU = not understood, D = disagree, De = depends, A = agree.}
\label{fig:q15q14}
\vspace{-5pt}
\end{figure}
\subsection{Decision Scenarios} \label{app:scenario_analysis}
For Study-1{} we designed three decision-making scenarios to test whether the perceived importance or realism of a particular scenario influenced comprehension score. They are as follows:
\begin{itemize} \itemsep=0cm
\item \textbf{Art Project (AP):} distributing awards for art projects to primary school students,
\item \textbf{Employee Awards (EA):} distributing employee awards at a sales company, and
\item \textbf{Hiring (HR):} distributing job offers to applicants.
\end{itemize}
In each scenario the students/employees/applicants are partitioned into two groups (parents' occupation for the first scenario, and binary gender for the other two scenarios).
We use a between-subjects design: participants are randomly partitioned into three conditions, one for each scenario (AP, EA, or HR).
For each condition we define the \emph{fairness rule} in the context of the decision-making scenario (see Appendix~\ref{app:survey} for the full surveys).
Next we describe our main conclusion related to the different decision-making scenarios in Study-1: the scenario does not influence comprehension score.
\subsubsection{Scenario does not Influence Comprehension Scores (RQ4)} \label{results:rq4}
We were concerned that less important and/or realistic scenarios would cause participants to take the survey less seriously, and therefore perform more poorly.
To test this,
participants were randomly assigned to a scenario, resulting in the following distribution: AP = 41, EA = 49, HR = 57.
A K-W test revealed no differences between scenarios in terms of comprehension score (mean comprehension scores: AP = 6.0, EA = 6.74, HR = 5.86%
). However, differences did exist between scenarios in terms of importance (assessed in Q2), measured in hours of effort deemed necessary to make the relevant decision (K-W, %
$p<0.001$). Post-hoc M-WU revealed that participants believed making a decision in the AP scenario merited fewer hours of effort (mean = 3.15 hrs) than in the EA (13.52 hrs, $p<0.001$)
or HR (15.23 hrs, $p<0.001$)
scenarios (corrected $\alpha=0.05/3=0.017$). See Fig. \ref{fig:q2} for distributions of responses.
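The Bonferroni-style correction applied here is simple to state in code; the per-pair p-values below are placeholders, not our test results.
\begin{verbatim}
# Sketch of the Bonferroni-corrected threshold for the three
# post-hoc scenario comparisons (placeholder p-values).
pairwise_p = {"AP vs EA": 0.0003, "AP vs HR": 0.0001, "EA vs HR": 0.21}

alpha = 0.05 / len(pairwise_p)  # 0.05 / 3 ~ 0.017
for pair, p in pairwise_p.items():
    verdict = "significant" if p < alpha else "not significant"
    print(f"{pair}: p = {p}, {verdict} at alpha = {alpha:.3f}")
\end{verbatim}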
\begin{figure}[ht]
\centering
\includegraphics[width=0.8\columnwidth]{fig/q2.png}
\vspace{-10pt}
\caption{Importance of a scenario by proxy of hours of effort necessary to make a decision in each scenario. AP merited fewer hours of effort than both EA and HR.}
\label{fig:q2}
\end{figure}
Of note, it is possible that perceived realism, assessed in Q1 on a five-point Likert scale, was also influenced by scenario (K-W, $p=0.051$), but we may need larger sample sizes to confirm this. Regardless, while the nature of a scenario does influence participant perception in terms of importance and (possibly) realism, it does not appear to influence comprehension (at least for the scenarios we chose). For this reason, we chose to test a single scenario (HR) in Study-2{}. %
\subsection{Model Selection} \label{app:b:model_selection}
In \S\ref{results:b:edu} we assessed eleven linear regression models for predicting comprehension scores. The best fit model, determined by model selection via AIC, included only education (edu) and fairness definition (def) as regressors. The results of model selection are below in Table \ref{tab:AIC}.
\begin{table}[bht]
\centering
\caption{\label{tab:AIC} Models tested in \S\ref{results:b:edu}, sorted from best to worst fit. The first model in the table (edu + def) is the model of best fit. dAIC = difference from the model with the lowest AIC value.}
\vspace{7pt}
{\small
\begin{tabular}{@{}lrr@{}}
\toprule
\textbf{Model regressors} & \textbf{AIC} & \textbf{dAIC} \\
\midrule
edu + def & -80.4 & 0 \\
edu & -72.8 & 7.6 \\
gender + edu & -70.3 & 10.1 \\
age + edu & -63.7 & 16.7 \\
gender + age + edu & -61.1 & 19.2 \\
gender + age + eth + edu + def & -61.1 & 19.2 \\
def & -60.8 & 19.6 \\
gender + age + eth + edu & -55.5 & 24.9 \\
gender + age + def & -46.4 & 34 \\
gender + age + eth + def & -41.6 & 38.8 \\
gender + age + eth & -37.2 & 43.2 \\
\bottomrule
\end{tabular}%
}
\vspace{-10pt}
\end{table}
\subsection{Non-Compliance} \label{app:b:compliance}
In \S\ref{results:b:non-comp} we sought to further investigate the findings of Study-1{} with regards to compliance (Q14). To do so, we labeled those who responded (in Study-2{}) with either having used their own personal notions of fairness ($n=26$) or some combination of their personal notions and the rule ($n=148$) as ``non-compliant" (NC), with the remaining $n=174$ labeled as ``compliant" (C). One participant who did not provide a response was excluded from this analysis, conducted using K-W and $\chi^2$ tests.
Non-compliant participants were less likely to self-report high understanding of the rule in Q13 %
(K-W test, $p<0.001$, see Fig. \ref{fig:studyB_nc_q13q14}). Moreover, non-compliance also appears to be associated with a reduced ability to correctly explain the rule in Q12 %
($\chi^2$ test, $p<0.001$, see Fig. \ref{fig:studyB_nc_q12q14}). This fits with the overall strong relationship we observed among comprehension scores, %
ability to explain the rule, and compliance.
Further, greater dislike towards the rule (Q15) also appears to be associated with greater compliance %
(K-W test, $p<0.05$, see Fig. \ref{fig:studyB_nc_q15q14}). %
However, there was no relationship between disagreement towards the rule (Q16) and compliance (see Fig. \ref{fig:studyB_nc_q16q14}).
These results largely corroborate the notion that non-compliant participants appear to behave this way because they do not \emph{understand} the rule, rather than because they do not \emph{like} it. %
\begin{figure}[h]
\centering
\includegraphics[width=0.8\columnwidth]{fig_studyB/nc_q13q14.png}
\vspace{-15pt}
\caption{Self-report of understanding (Q13) split by compliance (Q14). NC participants tend to report less confidence in their ability to apply the rule. SD = strongly disagree, D = disagree, N = neither agree nor disagree, A = agree, SA = strongly agree.}
\label{fig:studyB_nc_q13q14}
\vspace{-5pt}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.8\columnwidth]{fig_studyB/nc_q12q14.png}
\vspace{-15pt}
\caption{Correctness of rule explanation (Q12) split by compliance (Q14). NC participants tend to be less able to explain the presented rule in their own words. NA = none, I = incorrect, N = neither, PC = partially correct, C = correct.}
\label{fig:studyB_nc_q12q14}
\vspace{-5pt}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.8\columnwidth]{fig_studyB/nc_q15q14.png}
\vspace{-15pt}
\caption{Participant liking for rule (Q15) split by compliance (Q14). NC participants tend to dislike the rule less than C participants. SD = strongly disagree, D = disagree, N = neither agree nor disagree, A = agree, SA = strongly agree.}
\label{fig:studyB_nc_q15q14}
\vspace{-5pt}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.8\columnwidth]{fig_studyB/nc_q16q14.png}
\vspace{-15pt}
\caption{Participant agreement with rule (Q16) split by compliance (Q14). No differences were found between NC and C participants. SD = strongly disagree, D = disagree, N = neither agree nor disagree, A = agree, SA = strongly agree.}
\label{fig:studyB_nc_q16q14}
\vspace{-5pt}
\end{figure}
\subsubsection{Scenario description and questions}\label{app:st2_scenarios}
The following is shown to each participant (note that Step 3 is not shown to participants with the DP definition):
It is very important that you read each question carefully and think about your answers. The success of our research relies on our respondents being thoughtful and taking this task seriously.
\begin{itemize}
\vspace{-8pt}
\item[\text{\fboxsep=-.15pt\fbox{\rule{0pt}{1.5ex}\rule{1.5ex}{0pt}}}] I have read the above instructions carefully.
\vspace{-8pt}
\end{itemize}
A company, Sales-a-lot, is reviewing their hiring process. They want to hire applicants who are high performing, and they also want to make sure that their hiring process is fair to their applicants, no matter their gender. To do this, Sales-a-lot employs an external firm, Recruit-a-matic, which keeps track of all applicants. This review will take place over one year.
For clarity at each stage of the hiring process we use images to represent the hiring pool.
\paragraph{Step 1: Applicant Pool.} At the beginning of the year, Sales-a-lot reviews all job applicants, and sends job offers to some of them. The initial applicant pool is shown with a gray background. For example, the following image shows an applicant pool with 15 female applicants and 25 male applicants:
\includegraphics[height=1in]{illustrations/intro/step_1_green_yellow.png}
\paragraph{Step 2: Sending Job Offers.} Next, Sales-a-lot sends job offers to some of these applicants, using the following criteria:
\begin{itemize} \itemsep=0pt
\vspace{-10pt}
\item Interview scores
\item Quality of recommendation letters
\item Number of years of prior experience in the field
\end{itemize}
Suppose that Sales-a-lot sends offers to 5 female applicants and 8 male applicants (so 10 female and 17 male applicants didn’t receive offers). In the following image, applicants who received a job offer are shown on the left (with a green background) and applicants who didn’t receive a job offer are shown on the right (with a red background):
\includegraphics[height=1in]{illustrations/intro/step_2_green_yellow_OFFER.png}
\space\space\space
\includegraphics[height=1in]{illustrations/intro/step_2_green_yellow_NO_OFFER.png}
\paragraph{Step 3: Applicant Evaluation.} For the rest of the year, Recruit-a-matic (the external firm) keeps track of all applicants in the initial pool, whether they received job offers or not. At the end of the year, Recruit-a-matic finds out which applicants were high performers, i.e. qualified (shown in dark), and which applicants were low performers, i.e. unqualified (shown in light). For example, the following image shows that most of the high performers received job offers, but some did not.
\includegraphics[height=1in]{illustrations/intro/step_3_OFFER.png}
\space\space\space
\includegraphics[height=1in]{illustrations/intro/step_3_NO_OFFER.png}
\begin{tabular}{r|c|c}
& female & male \\
\midrule
qualified & \includegraphics[height=0.2in]{illustrations/intro/qual_female.png} & \includegraphics[height=0.2in]{illustrations/intro/qual_male.png} \\
unqualified & \includegraphics[height=0.2in]{illustrations/intro/uqual_female.png} & \includegraphics[height=0.2in]{illustrations/intro/uqual_male.png} \\
\end{tabular}
\paragraph{Questions}
\begin{enumerate}
\vspace{-5pt}
\item To what extent do you agree with the following statement: a scenario similar to the one described above might occur in real life.
\begin{itemize}
\item Strongly agree
\item Agree
\item Neither agree nor disagree
\item Disagree
\item Strongly disagree
\end{itemize}
\item How much effort, in hours, should Sales-a-lot put in to make sure these decisions were fair? [short answer - number of hours]
\end{enumerate}
\subsubsection{Rule descriptions and questions}\label{app:st2_fairness}
The following sections provide fairness definitions (presented to participants as \emph{rules}) for Demographic Parity, Equal Opportunity (FNR and FPR), and Equalized Odds. Unless otherwise noted the rule description is shown above each of the questions for reference. Correct answers are noted in \correct{red}.
\paragraph{Demographic Parity.}
Recruit-a-matic uses the following rule to determine whether Sales-a-lot’s hiring decisions were fair:
\emph{The fraction of male candidates who receive job offers should equal the fraction of female candidates who receive job offers.}
Example 1: Suppose that over the past year, Recruit-a-matic finds that Sales-a-lot received the following applicants (10 female and 12 male).
\includegraphics[height=1in]{illustrations/demo_parity/example_1_POOL.png}
If Sales-a-lot sent job offers to the following number of applicants (5 female and 6 male), then this would be fair according to the hiring rule (note that there are other possible outcomes that are fair according to the hiring rule).
\includegraphics[height=1in]{illustrations/demo_parity/example_1b_offer.png}
Example 2: Suppose that over the past year, Recruit-a-matic finds that Sales-a-lot reviewed a total of 100 applicants as follows (40 female and 60 male).
\includegraphics[height=2in]{illustrations/demo_parity/example_2_POOL.png}
If Sales-a-lot sent job offers to the following number of applicants (10 female and 15 male), then this would be fair according to the hiring rule (note that there are other possible outcomes that are fair according to the hiring rule).
\includegraphics[height=1in]{illustrations/demo_parity/example_2b_offer.png}
In the next section, we will ask you some questions about the information you have just read. Please note that this is not a test of your abilities. We want to measure the quality of the description you read, not your ability to take tests or answer questions.
\textbf{Please note that we ask you to apply and use ONLY the above hiring rule when answering the following questions. You will have an opportunity to state your opinions and feelings on the rule later in the survey.}
\begin{enumerate}
\setcounter{enumi}{2}
\item Suppose a different company considered applicants for a different job. There were 200 female applicants and 100 male applicants,
\includegraphics[height=1.3in]{illustrations/demo_parity/q3_POOL.png}
and they did send job offers to 90 male applicants.
\includegraphics[height=1.3in]{illustrations/demo_parity/q3_male_offer.png}
Assuming that Recruit-a-matic reviews their decisions using the hiring rule above, how many female applicants should have received job offers?
\begin{enumerate}
\item 190
\includegraphics[height=1.3in]{illustrations/demo_parity/q3_a_offer.png}
\item \correct{180}
\includegraphics[height=1.3in]{illustrations/demo_parity/q3_b_offer.png}
\item 160
\includegraphics[height=1.3in]{illustrations/demo_parity/q3_c_offer.png}
\item 150
\includegraphics[height=1.3in]{illustrations/demo_parity/q3_d_offer.png}
\end{enumerate}
\item Assuming Recruit-a-matic reviews decisions using the hiring rule above, in which of these cases could Sales-a-lot have accepted more qualified female applicants than qualified male applicants?
\begin{enumerate}
\item When there are more qualified female applicants than qualified male applicants (i.e., more women had low net sales at the end of the year).
\item \correct{When there are more female applicants than male applicants.}
\item When female applicants receive worse interview scores than male applicants.
\item This cannot happen under the hiring rule.
\end{enumerate}
\item Consider one male applicant and one female applicant, both of whom are similarly qualified for the job (they achieve about the same net sales at the end of their first year). Is the following statement \correct{TRUE} OR FALSE: The hiring rule above allows Sales-a-lot to make a job offer to one of these applicants and not the other.
\item Consider a situation where all female applicants were unqualified (they all achieve low net sales at the end of their first year), but some of them received job offers. Is the following statement TRUE OR \correct{FALSE}: The hiring rule above requires that some job offers made to male applicants must have been made to unqualified male applicants.
\item Suppose Sales-a-lot received 100 male and 100 female applicants, and eventually made 10 job offers. Is the following statement \correct{TRUE} OR FALSE: The hiring rule above requires that even if all male applicants were unqualified (they all achieve low net sales at the end of their first year), some of the unqualified males must have received job offers.
\item Is the following statement TRUE OR \correct{FALSE}: The hiring rule above always allows Sales-a-lot to send job offers only to the most qualified applicants (those who achieve high net sales at the end of their first year).
\end{enumerate}
Consider a different scenario than the two examples above, with 6 applicants -- 4 female and 2 male, as illustrated below. The next three questions each give a different potential outcome for all 6 applicants (i.e., which of the 6 applicants do receive job offers). Please indicate which of the outcomes follow the hiring rule above.
\includegraphics[height=0.4in]{illustrations/demo_parity/q9-11_POOL.png}
\begin{enumerate}
\setcounter{enumi}{8}
\item Sales-a-lot makes the following hiring decisions.
\includegraphics[height=0.4in]{illustrations/demo_parity/q9_offer.png}
\space\space\space
\includegraphics[height=0.4in]{illustrations/demo_parity/q9_no_offer.png}
Do these decisions obey the hiring rule? \correct{Yes}
\item Sales-a-lot makes the following hiring decisions.
\includegraphics[height=0.4in]{illustrations/demo_parity/q10_offer.png}
\space\space\space
\includegraphics[height=0.4in]{illustrations/demo_parity/q10_no_offer.png}
Do these decisions obey the hiring rule? \correct{No}
\item Sales-a-lot makes the following hiring decisions.
\includegraphics[height=0.4in]{illustrations/demo_parity/q11_offer.png}
\space\space\space
\includegraphics[height=0.4in]{illustrations/demo_parity/q11_no_offer.png}
Do these decisions obey the hiring rule? \correct{No}
\item In your own words, explain the hiring rule. [short answer] [The rule is not shown above this question]
\item To what extent do you agree with the following statement: I am confident I know how to apply the hiring rule described above?
\begin{itemize}
\item Strongly agree
\item Agree
\item Neither agree nor disagree
\item Disagree
\item Strongly Disagree
\end{itemize}
\item Please select the choice that best describes your experience: When I answered the previous questions...
\begin{enumerate}
\item I applied the provided hiring rule only.
\item I used a combination of the provided hiring rule and my own ideas of what the correct hiring rule should be.
\item I used only my own ideas of what the correct hiring decision should be rather than the provided hiring rule.
\end{enumerate}
\item To what extent do you agree with the following statement: I like the hiring rule?
\begin{itemize}
\item Strongly agree
\item Agree
\item Neither agree nor disagree
\item Disagree
\item Strongly Disagree
\end{itemize}
\item To what extent do you agree with the following statement: I agree with the hiring rule?
\begin{itemize}
\item Strongly agree
\item Agree
\item Neither agree nor disagree
\item Disagree
\item Strongly Disagree
\end{itemize}
\item Please explain your opinion on the hiring rule. [short answer]
\item Was there anything about this survey that was hard to understand or answer? [short answer]
\end{enumerate}
\paragraph{Equal Opportunity - FNR.}
Recruit-a-matic uses the following rule to determine whether Sales-a-lot’s hiring decisions were fair:
\emph{The fraction of qualified male candidates who do not receive job offers should equal the fraction of qualified female candidates who do not receive job offers.}
Example 1: Suppose that over the past year, Recruit-a-matic finds that Sales-a-lot received the following qualified applicants (10 female and 12 male).
\includegraphics[height=1in]{illustrations/fnr/example_1_POOL.png}
If Sales-a-lot did not send job offers to the following number of qualified applicants (5 female and 6 male), then this would be fair according to the hiring rule (note that there are other possible outcomes that are fair according to the hiring rule).
\includegraphics[height=1in]{illustrations/fnr/example_1b_no_offer.png}
Example 2: Suppose that over the past year, Recruit-a-matic finds that Sales-a-lot reviewed a total of 100 qualified applicants as follows (40 female and 60 male).
\includegraphics[height=2in]{illustrations/fnr/example_2_POOL.png}
If Sales-a-lot did not send job offers to the following number of qualified applicants (10 female and 15 male), then this would be fair according to the hiring rule (note that there are other possible outcomes that are fair according to the hiring rule).
\includegraphics[height=1in]{illustrations/fnr/example_2b_no_offer.png}
Note that in the above examples the remaining qualified applicants received job offers, but are not displayed here.
In the next section, we will ask you some questions about the information you have just read. Please note that this is not a test of your abilities. We want to measure the quality of the description you read, not your ability to take tests or answer questions.
\textbf{Please note that we ask you to apply and use ONLY the above hiring rule when answering the following questions. You will have an opportunity to state your opinions and feelings on the rule later in the survey.}
\begin{enumerate}
\setcounter{enumi}{2}
\item Suppose a different company considered applicants for a different job. There were 200 qualified female applicants and 100 qualified male applicants,
\includegraphics[height=1.2in]{illustrations/fnr/q3_POOL.png}
and they did not send job offers to 90 qualified male applicants.
\includegraphics[height=1.3in]{illustrations/fnr/q3_male_no_offer.png}
Assuming that Recruit-a-matic reviews their decisions using the hiring rule above, how many qualified female applicants should not have received job offers?
\begin{enumerate}
\item 190
\includegraphics[height=1.3in]{illustrations/fnr/q3_a_no_offer.png}
\item \correct{180}
\includegraphics[height=1.3in]{illustrations/fnr/q3_b_no_offer.png}
\item 160
\includegraphics[height=1.3in]{illustrations/fnr/q3_c_no_offer.png}
\item 150
\includegraphics[height=1.3in]{illustrations/fnr/q3_d_no_offer.png}
\end{enumerate}
\item Assuming Recruit-a-matic reviews decisions using the hiring rule above, in which of these cases could Sales-a-lot have rejected more qualified female applicants than qualified male applicants?
\begin{enumerate}
\item \correct{When there are more qualified female applicants than qualified male applicants (i.e., more women had low net sales at the end of the year).}
\item When there are more female applicants than male applicants.
\item When female applicants receive worse interview scores than male applicants.
\item This cannot happen under the hiring rule.
\end{enumerate}
\item Consider one male applicant and one female applicant, both of whom are similarly qualified for the job (they achieve about the same net sales at the end of their first year). Is the following statement \correct{TRUE} OR FALSE: The hiring rule above allows Sales-a-lot to make a job offer to one of these applicants and not the other.
\item Consider a situation where all female applicants were unqualified (they all achieve low net sales at the end of their first year), but some of them received job offers. Is the following statement TRUE OR \correct{FALSE}: The hiring rule above requires that some job offers made to male applicants must have been made to unqualified male applicants.
\item Suppose Sales-a-lot received 100 male and 100 female applicants, and eventually made 10 job offers. Is the following statement TRUE OR \correct{FALSE}: The hiring rule above requires that even if all male applicants were unqualified (they all achieve low net sales at the end of their first year), some of the unqualified males must have received job offers.
\item Is the following statement \correct{TRUE} OR FALSE: The hiring rule above always allows Sales-a-lot to send job offers only to the most qualified applicants (those who achieve high net sales at the end of their first year).
\end{enumerate}
Consider a different scenario than the two examples above, with 6 qualified applicants -- 4 female and 2 male, as illustrated below. The next three questions each give a different potential outcome for all 6 qualified applicants (i.e., which of the 6 applicants do not receive job offers). Please indicate which of the outcomes follow the hiring rule above.
\includegraphics[height=0.4in]{illustrations/fnr/q9-11_POOL.png}
\begin{enumerate}
\setcounter{enumi}{8}
\item Sales-a-lot makes the following hiring decisions.
\includegraphics[height=0.4in]{illustrations/fnr/q9_offer.png}
\space\space\space
\includegraphics[height=0.4in]{illustrations/fnr/q9_no_offer.png}
Do these decisions obey the hiring rule? \correct{Yes}
\item Sales-a-lot makes the following hiring decisions.
\includegraphics[height=0.4in]{illustrations/fnr/q10_offer.png}
\space\space\space
\includegraphics[height=0.4in]{illustrations/fnr/q10_no_offer.png}
Do these decisions obey the hiring rule? \correct{No}
\item Sales-a-lot makes the following hiring decisions.
\includegraphics[height=0.4in]{illustrations/fnr/q11_offer.png}
\space\space\space
\includegraphics[height=0.4in]{illustrations/fnr/q11_no_offer.png}
Do these decisions obey the hiring rule? \correct{No}
\item In your own words, explain the hiring rule. [short answer] [The rule is not shown above this question]
\item To what extent do you agree with the following statement: I am confident I know how to apply the hiring rule described above?
\begin{itemize}
\item Strongly agree
\item Agree
\item Neither agree nor disagree
\item Disagree
\item Strongly Disagree
\end{itemize}
\item Please select the choice that best describes your experience: When I answered the previous questions...
\begin{enumerate}
\item I applied the provided hiring rule only.
\item I used a combination of the provided hiring rule and my own ideas of what the correct hiring rule should be.
\item I used only my own ideas of what the correct hiring decision should be rather than the provided hiring rule.
\end{enumerate}
\item To what extent do you agree with the following statement: I like the hiring rule?
\begin{itemize}
\item Strongly agree
\item Agree
\item Neither agree nor disagree
\item Disagree
\item Strongly Disagree
\end{itemize}
\item To what extent do you agree with the following statement: I agree with the hiring rule?
\begin{itemize}
\item Strongly agree
\item Agree
\item Neither agree nor disagree
\item Disagree
\item Strongly Disagree
\end{itemize}
\item Please explain your opinion on the hiring rule. [short answer]
\item Was there anything about this survey that was hard to understand or answer? [short answer]
\end{enumerate}
\paragraph{Equal Opportunity - FPR.}
Recruit-a-matic uses the following rule to determine whether Sales-a-lot’s hiring decisions were fair:
\emph{The fraction of unqualified male candidates who receive job offers should equal the fraction of unqualified female candidates who receive job offers.}
Example 1: Suppose that over the past year, Recruit-a-matic finds that Sales-a-lot received the following unqualified applicants (10 female and 12 male).
\includegraphics[height=1in]{illustrations/fpr/example_1_POOL.png}
If Sales-a-lot sent job offers to the following number of unqualified applicants (5 female and 6 male), then this would be fair according to the hiring rule (note that there are other possible outcomes that are fair according to the hiring rule).
\includegraphics[height=1in]{illustrations/fpr/example_1b_offer.png}
Example 2: Suppose that over the past year, Recruit-a-matic finds that Sales-a-lot reviewed a total of 100 unqualified applicants as follows (40 female and 60 male).
\includegraphics[height=2in]{illustrations/fpr/example_2_POOL.png}
If Sales-a-lot sent job offers to the following number of unqualified applicants (10 female and 15 male), then this would be fair according to the hiring rule (note that there are other possible outcomes that are fair according to the hiring rule).
\includegraphics[height=1in]{illustrations/fpr/example_2b_offer.png}
Note that in the above examples the remaining unqualified applicants did not receive job offers, but are not displayed here.
In the next section, we will ask you some questions about the information you have just read. Please note that this is not a test of your abilities. We want to measure the quality of the description you read, not your ability to take tests or answer questions.
\textbf{Please note that we ask you to apply and use ONLY the above hiring rule when answering the following questions. You will have an opportunity to state your opinions and feelings on the rule later in the survey.}
\begin{enumerate}
\setcounter{enumi}{2}
\item Suppose a different company considered applicants for a different job. There were 200 unqualified female applicants and 100 unqualified male applicants,
\includegraphics[height=1.3in]{illustrations/fpr/q3_POOL.png}
and they did send job offers to 10 unqualified male applicants.
\includegraphics[height=1.3in]{illustrations/fpr/q3_male_offer.png}
Assuming that Recruit-a-matic reviews their decisions using the hiring rule above, how many unqualified female applicants should have received job offers?
\begin{enumerate}
\item 10
\includegraphics[height=1.3in]{illustrations/fpr/q3_a_offer.png}
\item \correct{20}
\includegraphics[height=1.3in]{illustrations/fpr/q3_b_offer.png}
\item 40
\includegraphics[height=1.3in]{illustrations/fpr/q3_c_offer.png}
\item 50
\includegraphics[height=1.3in]{illustrations/fpr/q3_d_offer.png}
\end{enumerate}
\item Assuming Recruit-a-matic reviews decisions using the hiring rule above, in which of these cases could Sales-a-lot have accepted more unqualified female applicants than unqualified male applicants?
\begin{enumerate}
\item \correct{When there are more unqualified female applicants than unqualified male applicants (i.e., more women had low net sales at the end of the year).}
\item When there are more female applicants than male applicants.
\item When female applicants receive worse interview scores than male applicants.
\item This cannot happen under the hiring rule.
\end{enumerate}
\item Consider one male applicant and one female applicant, both of whom are similarly qualified for the job (they achieve about the same net sales at the end of their first year). Is the following statement \correct{TRUE} OR FALSE: The hiring rule above allows Sales-a-lot to make a job offer to one of these applicants and not the other.
\item Consider a situation where all female applicants were unqualified (they all achieve low net sales at the end of their first year), but some of them received job offers. Is the following statement \correct{TRUE} OR FALSE: The hiring rule above requires that some job offers made to male applicants must have been made to unqualified male applicants.
\item Suppose Sales-a-lot received 100 male and 100 female applicants, and eventually made 10 job offers. Is the following statement TRUE OR \correct{FALSE}: The hiring rule above requires that even if all male applicants were unqualified (they all achieve low net sales at the end of their first year), some of the unqualified males must have received job offers.
\item Is the following statement \correct{TRUE} OR FALSE: The hiring rule above always allows Sales-a-lot to send job offers only to the most qualified applicants (those who achieve high net sales at the end of their first year).
\end{enumerate}
Consider a different scenario than the two examples above, with 6 unqualified applicants -- 4 female and 2 male, as illustrated below. The next three questions each give a different potential outcome for all 6 applicants (i.e., which of the 6 applicants receive job offers). Please indicate which of the outcomes follow the hiring rule above.
\includegraphics[height=0.4in]{illustrations/fpr/q9-11_POOL.png}
\begin{enumerate}
\setcounter{enumi}{8}
\item Sales-a-lot makes the following hiring decisions.
\includegraphics[height=0.4in]{illustrations/fpr/q9_offer.png}
Do these decisions obey the hiring rule? \correct{Yes}
\item Sales-a-lot makes the following hiring decisions.
\includegraphics[height=0.4in]{illustrations/fpr/q10_offer.png}
Do these decisions obey the hiring rule? \correct{No}
\item Sales-a-lot makes the following hiring decisions.
\includegraphics[height=0.4in]{illustrations/fpr/q11_offer.png}
Do these decisions obey the hiring rule? \correct{No}
\item In your own words, explain the hiring rule. [short answer] [The rule is not shown above this question]
\item To what extent do you agree with the following statement: I am confident I know how to apply the hiring rule described above?
\begin{itemize}
\item Strongly agree
\item Agree
\item Neither agree nor disagree
\item Disagree
\item Strongly Disagree
\end{itemize}
\item Please select the choice that best describes your experience: When I answered the previous questions...
\begin{enumerate}
\item I applied the provided hiring rule only.
\item I used a combination of the provided hiring rule and my own ideas of what the correct hiring rule should be.
\item I used only my own ideas of what the correct hiring decision should be rather than the provided hiring rule.
\end{enumerate}
\item To what extent do you agree with the following statement: I like the hiring rule?
\begin{itemize}
\item Strongly agree
\item Agree
\item Neither agree nor disagree
\item Disagree
\item Strongly Disagree
\end{itemize}
\item To what extent do you agree with the following statement: I agree with the hiring rule?
\begin{itemize}
\item Strongly agree
\item Agree
\item Neither agree nor disagree
\item Disagree
\item Strongly Disagree
\end{itemize}
\item Please explain your opinion on the hiring rule. [short answer]
\item Was there anything about this survey that was hard to understand or answer? [short answer]
\end{enumerate}
\paragraph{Equalized Odds.}
Recruit-a-matic uses the following rule to determine whether Sales-a-lot’s hiring decisions were fair:
\emph{The fraction of qualified male candidates who do not receive job offers should equal the fraction of qualified female candidates who do not receive job offers. Similarly, the fraction of unqualified male candidates who receive job offers should equal the fraction of unqualified female candidates who receive job offers.}
Example 1: Suppose that over the past year, Recruit-a-matic finds that Sales-a-lot received the following qualified applicants (10 female and 12 male) and unqualified applicants (10 female and 12 male).
\includegraphics[height=1in]{illustrations/eo/example_1_POOL.png}
If Sales-a-lot did send offers to the following number of unqualified applicants (left, 5 female and 6 male), and did not send job offers to the following number of qualified applicants (right, 5 female and 6 male), then this would be fair according to the hiring rule (note that there are other possible outcomes that are fair according to the hiring rule).
\includegraphics[height=1in]{illustrations/eo/example_1b_offer.png}
\space\space\space
\includegraphics[height=1in]{illustrations/eo/example_1b_no_offer.png}
Example 2: Suppose that over the past year, Recruit-a-matic finds that Sales-a-lot reviewed a total of 100 qualified applicants (40 female and 60 male) and 100 unqualified applicants (40 female and 60 male).
\includegraphics[height=2in]{illustrations/eo/example_2_POOL.png}
If Sales-a-lot did send offers to the following number of unqualified applicants (left, 10 female and 15 male), and did not send job offers to the following number of qualified applicants (right, 10 female and 15 male), then this would be fair according to the hiring rule (note that there are other possible outcomes that are fair according to the hiring rule).
\includegraphics[height=1in]{illustrations/eo/example_2b_offer.png}
\space\space\space
\includegraphics[height=1in]{illustrations/eo/example_2b_no_offer.png}
Note that in the above examples the remaining unqualified applicants did not receive job offers, but are not displayed here. Similarly, the remaining qualified applicants received job offers, but are not displayed here.
In the next section, we will ask you some questions about the information you have just read. Please note that this is not a test of your abilities. We want to measure the quality of the description you read, not your ability to take tests or answer questions.
\textbf{Please note that we ask you to apply and use ONLY the above hiring rule when answering the following questions. You will have an opportunity to state your opinions and feelings on the rule later in the survey.}
\begin{enumerate}
\setcounter{enumi}{2}
\item Suppose a different company considered applicants for a different job. There were 200 qualified female applicants and 100 qualified male applicants,
\includegraphics[height=1.2in]{illustrations/eo/q3_POOL.png}
and they did not send job offers to 90 qualified male applicants.
\includegraphics[height=1.3in]{illustrations/eo/q3_male_no_offer.png}
Assuming that Recruit-a-matic reviews their decisions using the hiring rule above, how many qualified female applicants should not have received job offers?
\begin{enumerate}
\item 190
\includegraphics[height=1.3in]{illustrations/eo/q3_a_no_offer.png}
\item \correct{180}
\includegraphics[height=1.3in]{illustrations/eo/q3_b_no_offer.png}
\item 160
\includegraphics[height=1.3in]{illustrations/eo/q3_c_no_offer.png}
\item 150
\includegraphics[height=1.3in]{illustrations/eo/q3_d_no_offer.png}
\end{enumerate}
\item Assuming Recruit-a-matic reviews decisions using the hiring rule above, in which of these cases could Sales-a-lot have accepted more unqualified female applicants than unqualified male applicants?
\begin{enumerate}
\item \correct{When there are more unqualified female applicants than unqualified male applicants (i.e., more women had low net sales at the end of the year).}
\item When there are more female applicants than male applicants.
\item When female applicants receive worse interview scores than male applicants.
\item This cannot happen under the hiring rule.
\end{enumerate}
\item Consider one male applicant and one female applicant, both of whom are similarly qualified for the job (they achieve about the same net sales at the end of their first year). Is the following statement \correct{TRUE} OR FALSE: The hiring rule above allows Sales-a-lot to make a job offer to one of these applicants and not the other.
\item Consider a situation where all female applicants were unqualified (they all achieve low net sales at the end of their first year), but some of them received job offers. Is the following statement \correct{TRUE} OR FALSE: The hiring rule above requires that some job offers made to male applicants must have been made to unqualified male applicants.
\item Suppose Sales-a-lot received 100 male and 100 female applicants, and eventually made 10 job offers. Is the following statement TRUE OR \correct{FALSE}: The hiring rule above requires that even if all male applicants were unqualified (they all achieve low net sales at the end of their first year), some of the unqualified males must have received job offers.
\item Is the following statement \correct{TRUE} OR FALSE: The hiring rule above always allows Sales-a-lot to send job offers only to the most qualified applicants (those who achieve high net sales at the end of their first year).
\end{enumerate}
Consider a different scenario than the two examples above, with 6 qualified applicants -- 4 female and 2 male; and 6 unqualified applicants -- 4 female and 2 male. The next three questions each give a different potential outcome for the applicants (i.e., which of the applicants did or did not receive job offers). Please indicate which of the outcomes follow the hiring rule above.
\includegraphics[height=0.4in]{illustrations/eo/q9-11_POOL.png}
\begin{enumerate}
\setcounter{enumi}{8}
\item Sales-a-lot makes the following hiring decisions.
\includegraphics[height=0.4in]{illustrations/eo/q9_offer.png}
\space\space\space
\includegraphics[height=0.4in]{illustrations/eo/q9_no_offer.png}
Do these decisions obey the hiring rule? \correct{Yes}
\item Sales-a-lot makes the following hiring decisions.
\includegraphics[height=0.4in]{illustrations/eo/q10_offer.png}
\space\space\space
\includegraphics[height=0.4in]{illustrations/eo/q10_no_offer.png}
Do these decisions obey the hiring rule? \correct{No}
\item Sales-a-lot makes the following hiring decisions.
\includegraphics[height=0.4in]{illustrations/eo/q11_offer.png}
\space\space\space
\includegraphics[height=0.4in]{illustrations/eo/q11_no_offer.png}
Do these decisions obey the hiring rule? \correct{No}
\item In your own words, explain the hiring rule. [short answer] [The rule is not shown above this question]
\item To what extent do you agree with the following statement: I am confident I know how to apply the hiring rule described above?
\begin{itemize}
\item Strongly agree
\item Agree
\item Neither agree nor disagree
\item Disagree
\item Strongly Disagree
\end{itemize}
\item Please select the choice that best describes your experience: When I answered the previous questions...
\begin{enumerate}
\item I applied the provided hiring rule only.
\item I used a combination of the provided hiring rule and my own ideas of what the correct hiring rule should be.
\item I used only my own ideas of what the correct hiring decision should be rather than the provided hiring rule.
\end{enumerate}
\item To what extent do you agree with the following statement: I like the hiring rule?
\begin{itemize}
\item Strongly agree
\item Agree
\item Neither agree nor disagree
\item Disagree
\item Strongly Disagree
\end{itemize}
\item To what extent do you agree with the following statement: I agree with the hiring rule?
\begin{itemize}
\item Strongly agree
\item Agree
\item Neither agree nor disagree
\item Disagree
\item Strongly Disagree
\end{itemize}
\item Please explain your opinion on the hiring rule. [short answer]
\item Was there anything about this survey that was hard to understand or answer? [short answer]
\end{enumerate}
\section{Introduction}
Before reviewing the work I did with Ernest Henley, I would
like to recount some personal memories of how I came to know
him and how it came about that we worked together on baryon
properties for over a decade.
During the academic year 1980/1981, I and two fellow students from
the University of Mainz were exchange students at the University of Washington in Seattle. One of them told me that with the credits transferred from our German home university we could get a Bachelor's degree from the University of Washington within one year. Both of us managed to obtain the required additional credits. In August 1981 we received the desired Bachelor of Science diploma signed by the Dean of the Faculty of Arts and Sciences, Ernest Henley.
At that time, he was not lecturing and I did not come to know him personally.
After returning to the University of Mainz, I took my Master's exams,
and in the following semester, started to work on my Master's thesis on meson exchange currents with Hartmuth Arenh\"ovel.
The textbook "Subatomic Physics" by Frauenfelder and Henley~\cite{Fra74}
was an invaluable guide during my thesis work and beyond and became one of my favorite textbooks.
In 1984 Prof. Henley was awarded the prestigious Humboldt prize
that allowed him to travel and teach at various German universities.
It so happened that in September 1984 he gave a series of lectures on electroweak interactions at the Students' Workshop held in the small town of Bosen near Mainz. One afternoon, while walking together around the Bostal lake, I told him about my Master's thesis on meson exchange current operators that had to be constructed so as to satisfy the continuity equation consistently with the nucleon-nucleon interaction potential~\cite{Buc85}. Ernest was very encouraging and said that focusing on the continuity equation was very important to obtain reliable results.
One evening during the Bosen workshop, there was a performance of a fire artist spitting flames and juggling with fiery rings. I happened to be standing next to Ernest and made a snobbish remark saying something
like "...if he only would apply his skills to something more useful...".
Ernest looked at me and said: "Why, he is entertaining people. That is all right." This was only one of several occasions where I noticed that
he treated everybody with respect.
Another story characteristic of Ernest's modesty was related to me by Lothar Tiator, who was a co-organizer of the Bosen workshop. At the time, Humboldt prize winners were provided with a BMW car free of charge during their stay in Germany. Ernest's first reaction was: ``I would have preferred a bicycle''.
\begin{figure*}[th]
\centerline{\includegraphics[width=13.0cm]{Bosen1.pdf}}
\caption{Participants of the Students' Workshop in Bosen in 1984.
The author is holding a notebook. \\
Courtesy of Dr. Lothar Tiator. }
\end{figure*}
As a Humboldt fellow, Ernest Henley often visited the University of
T\"ubingen during the period 1984-2000. I was working on my PhD thesis there and as a postdoc with Amand F\"a\ss ler. That is when I learned that Ernest was born in Frankfurt, close to Mainz where I was born.
When he spoke German, you could hear traces of the regional accent typical for this area. During his visits in the late nineties, we had some discussions on relations between baryon charge radii and quadrupole moments that Eliecer
Hern\'andez, Amand F\"a\ss ler, and myself had obtained in a quark model including two-body exchange currents~\cite{Buc97}.
Early in 1999, G. Dillon and G. Morpurgo had rederived the relation between proton, neutron and $\Delta^+$ charge radii using a model-independent QCD parameterization technique~\cite{Dil99a}.
Ernest immediately recognized the virtue of the Morpurgo method~\cite{Mor89,Mor92} and its potential applicability to baryon quadrupole moments and other observables. So it came about that Ernest invited me to the University of Washington in the fall of 1999 for a month.
There we laid the groundwork for several papers~\cite{hen00a,hen00b,Hen02,Hen08,Hen11,Hen14}, which were published between 2000 and 2014. Ernest noticed that pion-baryon couplings had not yet been calculated with this method and suggested that we do this first~\cite{hen00a}. During my stay,
Ernest generously shared his office with me, invited me over for dinner, and on the last day he insisted that he drive me to the airport very early in the morning.
Thereafter we continued our collaboration by email and by visiting each other either in Seattle or T\"ubingen. During my visit to Seattle in 2005, we started our work on baryon magnetic
octupole moments~\cite{Hen08}, a topic which Ernest was particularly fond of.
In the spring of 2010, while staying in Munich he called me, and we managed to meet and discuss physics for several hours in the DB lounge at the Munich Central train station. That is when we started our last collaboration on the proton spin problem~\cite{Hen11,Hen14} at his suggestion.
Back in 1999, when we first began working together, he had already mentioned that the nucleon shape issue was closely related with the proton spin problem. During the meeting in Munich, we realized how to apply Morpurgo's method to calculate quark spin and quark orbital angular momentum.
Without Ernest's profound knowledge, creativity, and persistence,
the following results would not have been possible.
\section{Morpurgo's general parameterization method}
In 1989 Morpurgo~\cite{Mor89,Mor92} introduced a general parameterization (GP) method for the properties of baryons, which expresses masses, magnetic moments, transition amplitudes, and other properties of the baryon octet and decuplet in terms of a few parameters. The method uses only
general features of QCD and baryon descriptions in terms of quarks.
Later, Dillon and Morpurgo showed that the method is independent
of the choice of the quark mass renormalization point in the QCD
Lagrangian~\cite{Dil96}.
Dillon and Morpurgo also extended the method to nucleon charge radii~\cite{Dil99a} and electromagnetic form factors~\cite{Dil99b}.
The Morpurgo method is based on the following considerations.
For the observable at hand one formally writes the matrix element of a
QCD operator $\Omega$ between QCD eigenstates expressed explicitly
in terms of quarks and gluons. With the help of the unitary operator $V$,
this matrix element can be expressed
in the basis of auxiliary (model) three-quark states $\Phi_B$
\begin{equation}
\label{map}
\left \langle B \vert \Omega \vert B \right \rangle =
\left \langle \Phi_B \vert
V^{\dagger}\Omega V \vert \Phi_B \right \rangle =
\left \langle W_B \vert
{\cal O} \vert W_B \right \rangle \, .
\end{equation}
Both the unitary operator $V$ and the model states $\Phi_B$ are defined in
Ref.\cite{Mor89}.
The $\Phi_B$ are pure $L=0$ three-quark states excluding any quark-antiquark
or gluon components. $W_B$ stands for the standard three-quark $SU(6)$ spin-flavor wave functions~\cite{Clo}. The operator $V$ dresses the auxiliary states $\Phi_B$ with $q\bar q$ components and gluons and thereby generates
the exact QCD eigenstate $\vert B \rangle $ as in
\begin{eqnarray}
\label{QCDstates}
\vert B\rangle &=& \alpha \vert qqq\rangle +\beta_1 \vert qqq\,(q\overline{q})\rangle
+ \beta_2 \vert qqq\,(q\overline{q})^2\rangle + \nonumber \\
& & \ldots
+ \gamma_1 \vert qqq \, g\rangle + \gamma_2 \vert qqq \, gg\rangle + \ldots
\end{eqnarray}
On the right hand side of the last equality in Eq.(\ref{map})
the integration over spatial and color degrees of freedom
has been performed. As a result only a matrix element
of a spin-flavor operator $\mathcal{O}$ between spin-flavor states
$\vert W_B \rangle $ remains. The $q{\bar q}$ and gluon degrees of freedom
of the exact QCD eigenstates now appear as many-quark operators,
constrained by Lorentz and inner QCD symmetries.
Although non-covariant in appearance,
the operator basis of this method involves a complete set of spin-flavor invariants that are allowed by Lorentz invariance and flavor symmetry.
One then writes the most general expression for ${\cal O}$
compatible with the space-time and inner QCD symmetries.
Generally, this is a sum of one-, two-, and three-quark
operators in spin-flavor space multiplied by {\it a priori} unknown constants $A_1$, $A_2$, and $A_3$ which parametrize the orbital and color space matrix elements.
Empirically, a hierarchy in the importance
of one-, two-, and three-quark operators is found.
This fact can be understood
in the $1/N_c$ expansion~\cite{Das94} where
two- and three-quark operators describing second and third
order SU(6) symmetry breaking
are usually suppressed by powers of $1/N_c$ and $1/N_c^2$ respectively,
compared to one-quark operators associated with first order symmetry breaking~\cite{Leb00,Leb02}.
\section{Pion-baryon couplings}
For the strong pion-baryon couplings one-, two-, and three-quark
axial vector operators are defined as~\cite{hen00a,mos13}
\begin{eqnarray}
\label{operators}
{\mathcal{O}}_1 & = & A_1\, \sum_{i=1}^3 \tau_3^i \sigma_z^i, \nonumber \\
{\mathcal{O}}_2 & = & A_2\, \sum_{i\neq j=1}^3 \tau_3^i \sigma_z^j, \nonumber \\
{\mathcal{O}}_3 & = & A_3\, \sum_{i\neq j \neq k=1}^3 \, \tau_3^i\, \sigma_z^i\, {\xbf{\sigma}}^j \cdot {\xbf{\sigma}}^k,
\end{eqnarray}
and the total operator reads
\begin{equation}
\label{quarkop}
{\mathcal{O}}={\mathcal{O}}_1 + {\mathcal{O}}_2 + {\mathcal{O}}_3.
\end{equation}
Here, ${\xbf{\sigma}}^i$ and ${\xbf{\tau}}^i$ are the spin and isospin
operators of quark $i$.
These operators are evaluated using completely symmetric spin-isospin states $\vert W_B\rangle$~\cite{Clo}. We obtained the quark model matrix elements up to second order corrections (two-body terms) listed in Table~\ref{tab1},
where $r = m_u/m_s$ is a flavor symmetry breaking parameter included in the two-body term,
with $m_u=m_d$ and $m_s$ the masses of the non-strange and strange quarks.
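As a quick check of the tabulated entries, consider the $\Delta^+$ with all quark spins up, whose completely symmetric spin-flavor state is ${1 \over \sqrt{3}} \vert uud + udu + duu \rangle \vert \uparrow \uparrow \uparrow \rangle$. Since $\tau_3$ gives $+1$ on a $u$ quark and $-1$ on a $d$ quark, while every $\sigma_z^i$ gives $+1$ in this state,
\[
\langle {\mathcal O}_1 \rangle_{\Delta^+} = A_1\,(1+1-1) = A_1, \qquad
\langle {\mathcal O}_2 \rangle_{\Delta^+} = 2 A_2\,(1+1-1) = 2 A_2,
\]
where the factor of $2$ counts the two ordered pairs $j \neq i$ for each $i$, in agreement with the $\Delta^+$ row of Table~\ref{tab1}.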
\begin{figure}[th]
\centerline{\includegraphics[width=8cm]{deltacoup1.pdf}}
\caption{\label{fig:deltacoup}
Strong coupling of the pion to the nucleon ($N$) and $\Delta$-isobar
($\Delta$). The $\pi NN$, $\pi N \Delta$, and $\pi \Delta \Delta$
coupling constants are denoted as $f_{\pi NN}$, $f_{\pi N \Delta}$,
and $f_{\pi \Delta \Delta}$. The corresponding interaction vertices are
represented as black dots.}
\end{figure}
\begin{table}[pt]
\caption{\label{tab1} Quark model matrix elements
of the operator in Eq.(\ref{quarkop})
to first order (${\mathcal O}_1$) and second order
corrections (${\mathcal O}_2$).}
{\begin{tabular}{|l|c|c|} \hline
Baryon & First order & Second order\\ \hline
p & $\frac{5}{3}A_1$ & $-\frac{2}{3} A_2$ \\
$\Sigma^+ $&$ \frac{4}{3} A_1$ &$\frac{2 (2-r)}{3}A_2$\\
$\Sigma^0 \rightarrow \Lambda^0$ &$ -\frac{2\sqrt{3}}{3} A_1$&
$\frac{2\sqrt{3}}{3}A_2$\\
$\Xi^0$ &$ -\frac{1}{3} A_1$ &$ \frac{4 \, r}{3} A_2$ \\ \hline
$\Delta^+ \rightarrow p$ & $ \frac {4 \sqrt{2}}{3} A_1$ &
$-\frac {4 \sqrt{2}}{3}A_2$\\
$\Sigma^{* +} \rightarrow \Sigma^+$&$ \frac{2\sqrt{2}}{3} A_1$
&$\frac{2\sqrt{2}(1-2r)}{3}A_2$\\
$\Sigma^{* 0} \rightarrow \Lambda^0$ &$ \frac{2\sqrt{6}}{3} A_1$&
$-\frac{2\sqrt{6}}{3}A_2$\\
$\Xi^{* 0} \rightarrow \Xi^{0}$
&$ \frac{2\sqrt{2}}{3} A_1$ &$ -\frac{2\sqrt{2}\, r}{3} A_2$ \\ \hline
$\Delta^{+}$ & $ A_1$ & $ 2 A_2$\\
$\Sigma^{*+}$ & $ 2 A_1$ & $ 2(1+r) A_2$\\
$\Xi^{*0}$ & $ A_1$ & $ 2r A_2$\\ \hline
\end{tabular}}
\end{table}
To derive the conventional pion-baryon couplings~\cite{Bro75},
depicted in Fig.~\ref{fig:deltacoup} for the nucleon ($N$) and the $\Delta$,
the quark-level matrix elements in Table~\ref{tab1} must be divided by baryon-level spin and isospin Clebsch-Gordan coefficients~\cite{hen00a,mos13}.
Table~\ref{tab2} lists the various couplings
in terms of $\,f\,$, the $\pi^0 p$
coupling constant, to first order and to second order with and without the
inclusion of the SU(3) flavor symmetry breaking parameter $r=m_u/m_s$.
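As the entries confirm, for the couplings within the baryon octet the baryon-level Clebsch-Gordan factors coincide with those of the proton and cancel in the ratio to $f$, so these entries of Table~\ref{tab2} follow directly from Table~\ref{tab1}; to first order, for example,
\[
\frac{f_{\pi^0 \Sigma^+ \Sigma^+}}{f} = \frac{{4 \over 3} A_1}{{5 \over 3} A_1} = 0.80,
\qquad
\frac{f_{\pi^0 \Xi^0 \Xi^0}}{f} = \frac{-{1 \over 3} A_1}{{5 \over 3} A_1} = -0.20 .
\]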
\begin{table}[pt]
\caption{\label{tab2} Coupling constants of the pion to
various members of the baryon
octet and decuplet, and the decuplet-octet transitions
in terms of $f=f_{\pi^0 pp}$. The * indicates an input.
The ratio $r=m_u/m_s$ of non-strange and strange quark masses
indicates the degree of flavor symmetry breaking. }
{\begin {tabular}{|l|r|r|r|} \hline
Baryon & First order & Total & Total \\
& ($A_2=0$) & $r=1$ & $r=0.6$ \\ \hline
p & 1 & 1 & 1\\
$\Sigma^+ $& 0.80 &0.59 & 0.54\\
$\Sigma^0 \rightarrow \Lambda^0$ & -0.69& -0.82 & -0.82 \\
$\Xi^0$ &-0.20 &-0.42 & -0.32 \\ \hline
$\Delta^+ \rightarrow p $ & 1.70 & 2* & 2*\\
$\Sigma^{* +} \rightarrow \Sigma^+ $& 0.98 &1.16 & 0.92\\
$\Sigma^{* 0} \rightarrow \Lambda^0$ & -1.20& -1.42 & -1.42 \\
$\Xi^{* 0} \rightarrow \Xi^0$ &-1.20 &-1.42 & -1.28 \\ \hline
$\Delta^{+}$ & 0.80 & 0.23 & 0.23\\
$\Sigma^{*+}$ & 0.80 & 0.23 & 0.32\\
$\Xi^{*0}$ & 0.80 & 0.23 & 0.42\\ \hline
\end{tabular}}
\end{table}
Our results satisfy the following relation in the SU(3) symmetric case
\begin{eqnarray}
\label{rel1}
f_{\pi^0 pp}+ f_{\pi^0 \Xi^0\Xi^0} & = & f_{\pi^0 \Sigma^+\Sigma^+}\; .
\end{eqnarray}
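At the quark level this relation can be read off from Table~\ref{tab1}: for $r=1$,
\[
\Bigl( {5 \over 3} A_1 - {2 \over 3} A_2 \Bigr)
+ \Bigl( -{1 \over 3} A_1 + {4 \over 3} A_2 \Bigr)
= {4 \over 3} A_1 + {2 \over 3} A_2 ,
\]
which is precisely the $\Sigma^+$ entry; numerically, $1 - 0.20 = 0.80$ in the first-order column of Table~\ref{tab2}.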
Furthermore, the $\pi \Sigma \Lambda$ and the
$\pi \Sigma^* \Lambda$ couplings remain unaffected by SU(3)
symmetry breaking. Irrespective of the value of $r$ the
octet-decuplet transition couplings satisfy the sum rule
\begin{equation}
\label{rel2}
\sqrt{2} f_{\Delta^+ p}= f_{\Xi^{*0}\Xi^0} +
\sqrt{6} f_{\Sigma^{*+}\Sigma^+} - f_{\Sigma^{* 0}\Lambda^0}.
\end{equation}
This relation is not new. It has been
derived before \cite{Bec64} using SU(3) symmetry and its
breaking to first order.
In addition, by taking ratios of two transition couplings
for $\pi^+$ emission we got for the case $r=1$
\begin{equation}
\frac{f_{{\Delta^{++} p}}}{f_{\Sigma^{* +} \Sigma^0}} =
-\sqrt{6} \, (-3.06), \quad
\frac{f_{\Sigma^{* +} \Sigma^0}}{f_{\Sigma^{* +} \Lambda^0 }} =
-\frac{1}{\sqrt{3}} \, (-0.46).
\end{equation}
The numbers in parentheses include SU(3) symmetry
breaking in the two-quark term $(r=0.6)$.
These results are in agreement with those obtained in the
large $N_c$ approach \cite{Das94}, including the next-to-leading order
corrections, which is undoubtedly more than a numerical coincidence.
Finally, we found certain analytical relations between
octet and decuplet baryon couplings to pions
(neglecting three-quark terms)
\begin{eqnarray}
\label{rel3}
f_{\pi^0 p}- \frac{1}{4} f_{\pi^0 \Delta^+\Delta^+} & = &
\frac{\sqrt{2}}{3} f_{\pi^0 p \Delta^+} \nonumber \\
f_{\pi^0 \Sigma^+}- \frac{1}{2} f_{\pi^0 \Sigma^{* +}\Sigma^{* +}} & = &
\frac{1}{\sqrt{6}} f_{\pi^0 \Sigma^{* +} \Sigma^+} \nonumber \\
f_{\pi^0 \Xi^0}- \frac{1}{4} f_{\pi^0 \Xi^{* 0}\Xi^{* 0}} & = &
\frac{1}{3} f_{\pi^0 \Xi^{* 0} \Xi^0}.
\end{eqnarray}
They are a consequence of the underlying unitary symmetry,
and are valid for all values of the strange quark mass.
Eq.(\ref{rel3}) can be used
to predict the elusive decuplet couplings from the
experimentally better known octet and decuplet-octet
transition couplings. As far as we know, these relations are new.
Including the three-body term for the nucleon and $\Delta$,
Steve Moszkowski and I later found that the first relation in Eq.(\ref{rel3}) is modified.
It turned out that three-quark terms have a major effect on the
$\pi \Delta \Delta$ coupling: the reduction of this coupling
obtained in second order is more than compensated
by the inclusion of the third order symmetry breaking term~\cite{mos13}.
We also found an interesting connection between the $\pi NN$,
$\pi N\Delta$ and
$\pi \Delta \Delta$ couplings and the shape of the $N$ and $\Delta$.
\section{Intrinsic quadrupole moment of the nucleon}
To learn something about the shape of a spatially extended
particle one has to determine its {\it intrinsic} quadrupole moment
\cite{Boh75}
\begin{equation}
Q_0=\int d^3r \rho({\bf r}) (3 z^2 - r^2),
\end{equation}
which is defined with respect to the body-fixed frame and thus
defines the shape of the particle. If $Q_0>0$ the particle is
prolate (cigar-shaped), if $Q_0<0$ the particle is
oblate (pancake-shaped).
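For example, for a homogeneously charged spheroid with total charge $q$, polar half-axis $a$, and transverse half-axis $b$, the definition gives
\[
Q_0 = {2 \over 5}\, q \, ( a^2 - b^2 ),
\]
which is positive for a prolate ($a>b$) and negative for an oblate ($a<b$) charge distribution.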
The intrinsic quadrupole moment $Q_0$ must be distinguished
from the {\it spectroscopic} quadrupole moment $Q$ measured in
the laboratory frame. Due to angular momentum selection rules, a spin $J=1/2$ particle, such as the nucleon, does not have a spectroscopic
quadrupole moment, just as a deformed $J=1/2$ or $J=0$ nucleus does not.
For example, all orientations of a deformed $J=0$
nucleus are equally probable, which
results in a spherical charge distribution
in the ground state and a vanishing quadrupole moment $Q$ in the
laboratory.
The intrinsic quadrupole moment
$Q_0$ of a spin $J=1/2$ system can then only be obtained
by measuring electromagnetic quadrupole transitions between the ground and
excited states, or by measuring the quadrupole moment of an excited state
with $J > 1/2$ of that nucleus.
During my stay in Seattle in 1999,
Ernest and I discussed how to extract from
the measured $N\to \Delta$ transition quadrupole moment the proton's intrinsic quadrupole moment, which contains the relevant information
on the proton shape. We had noticed that most work that addressed the issue of nucleon deformation~\cite{Gia79,Ven81,Ma83,Cle84,Mig87} did not clearly distinguish between intrinsic (body-fixed frame) and the measured spectroscopic (laboratory) quadrupole moment as qualitatively shown in Fig.~\ref{fig:body_fixed}.
\begin{figure}[th]
\centerline{\includegraphics[width=9cm]{body_fixed.pdf}}
\caption{\label{fig:body_fixed} Precession of a semiclassical deformed charge distribution
with intrinsic symmetry axis $z$ and spin $J$ around the laboratory frame $z'$ axis. The transformation from the body-fixed to the
laboratory frame gives rise to a projection factor
$P_2(\cos(\Theta))=( 3 \, \cos^2(\Theta) -1)/2$
relating the spectroscopic quadrupole moment $Q$ (laboratory frame) and
the intrinsic quadrupole moment $Q_0$ (body-fixed frame)
as $Q=[(3 J_z'^2-J(J+1))/(2J(J+1))]Q_0$, where $J$ is the total angular momentum, and $J_z'$ its projection on the $z'$ axis.
For $J=0$ and $J=1/2$ systems, $Q=0$ even if $Q_0\ne 0$.
It is $Q_0$ and not $Q$ that pertains to the shape of the system.}
\end{figure}
\subsection{Quark model}
In standard notation the $SU(6)$ spin-flavor wave function of the proton
is composed of a spin-singlet and a spin-triplet term for the coupling of the first two quarks
\begin{eqnarray}
\label{protonwave}
\vert p \rangle & = & {1 \over \sqrt{2} }
\biggl \lbrace {1 \over \sqrt{6}}
\vert \left ( 2uud - udu -duu \right ) \rangle \nonumber \\
& \times & {1 \over \sqrt{6}}
\vert \left ( 2 \uparrow \uparrow \downarrow - \uparrow \downarrow \uparrow
- \downarrow \uparrow \uparrow \right ) \rangle \nonumber \\
& & + \, {1 \over \sqrt{2}} \vert
\left ( udu -duu \right ) \rangle \,
{1 \over \sqrt{2}}
\vert \left ( \uparrow \downarrow \uparrow - \downarrow \uparrow \uparrow \right )
\rangle
\biggr \rbrace .
The angular momentum coupling factors $2$, $-1$, $-1$ in front of the
three terms in the spin triplet part
express (i) the coupling of the first two quarks
to an $S=1$ diquark, and (ii) the coupling of the $S=1$ diquark with
the third quark to total $J=1/2$.
In leading order, the quadrupole moment is a two-quark operator in spin-flavor space
\begin{equation}
\label{decomp}
{\hat Q}_{[2]} = B
\sum_{i\ne j=1}^3 e_i \left ( 3 \sigma_{i \, z} \sigma_{j\, z} -
{\b{\sigma}}_i \cdot {\b{\sigma}}_j \right ),
\end{equation}
where $e_i=(1 + 3 \tau_{i \, z})/6$ is the charge of the i-th quark,
and the $z$-component of the Pauli spin (isospin) matrix $\b{\sigma}_i$ ($\b{\tau}_i$) is denoted by $\sigma_{i \, z}$ ($\tau_{i \, z}$).
The constant $B$ with dimension fm$^2$
contains the orbital and color matrix elements. There is no one-quark operator, because one cannot construct a spin tensor of rank 2
from a single Pauli matrix.
Sandwiching the quadrupole operator ${\hat Q}_{[2]}$
between the proton's spin-flavor wave function
yields a vanishing spectroscopic quadrupole moment.
The reason is clear. The spin tensor ${\hat Q}_{[2]}$ applied
to the spin-singlet wave function gives zero, and when acting on
the proton's spin-triplet wave function it gives
\begin{eqnarray}
\label{intquark1}
& & \left ( 3 \sigma_{1 \, z} \sigma_{2 \, z} -
\b{\sigma}_1 \cdot \b{\sigma}_2 \right )
{1 \over \sqrt{6}} \left \vert
\left ( 2 \uparrow \uparrow \downarrow - \uparrow \downarrow \uparrow
- \downarrow \uparrow \uparrow \right ) \right \rangle \nonumber \\
& = & {4 \over \sqrt{6} }
\vert \left (\uparrow \uparrow \downarrow + \uparrow \downarrow \uparrow
+ \downarrow \uparrow \uparrow \right ) \rangle,
\end{eqnarray}
where the right-hand side is a spin 3/2 wave function,
which has zero overlap with the spin 1/2 wave function of the proton
in the final state. Consequently,
the spectroscopic quadrupole moment
\begin{equation}
Q_p = \langle p \vert \hat Q_{[2]} \vert p \rangle =
B \left ( 2 - 1 -1 \right ) = 0
\end{equation}
vanishes due to the spin coupling coefficients in $\vert p \rangle $.
Although the spin $S=1$ diquarks ($uu$ and $ud$) in the proton have
nonvanishing quadrupole moments, the angular momentum
coupling of the diquark spin to the spin of the third quark
prevents this quadrupole moment from being observed.
Ernest came up with the idea of renormalizing the
spin-space Clebsch-Gordan coefficients that guarantee a vanishing spectroscopic proton quadrupole moment~\cite{hen00b}.
Setting ``by hand''
all Clebsch-Gordan coefficients in the spin part of
the proton wave function of Eq.(\ref{protonwave}) equal to 1,
while preserving the normalization, one obtains
a modified ``proton'' wave function $\vert {\tilde p} \rangle $
\begin{eqnarray}
\label{intquark2}
\vert {\tilde p} \rangle & = &
{1 \over \sqrt{2} } \biggl \lbrace
\biggl \lbrack {1 \over \sqrt{6}}
\vert \left ( 2uud - udu -duu \right ) \rangle \nonumber \\
& + &
{1 \over \sqrt{2}}
\vert \left ( udu -duu \right ) \rangle \biggr \rbrack \nonumber \\
& \times &
{1 \over \sqrt{3}} \vert
\left (\uparrow \uparrow \downarrow +\uparrow \downarrow \uparrow
+\downarrow \uparrow \uparrow \right ) \rangle \biggr \rbrace.
\end{eqnarray}
The renormalization of the Clebsch-Gordan coefficients undoes the
averaging over all spin directions that renders the intrinsic
quadrupole moment unobservable.
We did not modify the flavor part of the wave function
in order to ensure that we deal with a proton.
We considered the expectation value of the two-body quadrupole operator
${\hat Q}_{[2]}$ in the state of the spin-renormalized proton wave function
$\vert \tilde p \rangle $
as an estimate
of the {\it intrinsic} quadrupole moment of the proton $Q_0^p$
\begin{equation}
\label{int1}
Q_0^p = \langle {\tilde p} \vert {\hat Q}_{[2]} \vert {\tilde p} \rangle
=2 B \left ( \frac{2}{3} -\frac{8}{3} \right ) = - 4 B = - r_n^2.
\end{equation}
The two terms in Eq.(\ref{int1}) arise from the spin 1 diquark with projection $M=1$ and $M=0$. The latter dominates.
The last equality came from a comparison
with the quark model relation~\cite{Buc97} that was rederived with fewer assumptions~\cite{hen00b}
\begin{equation}
\label{spectroscopicquadrupolemoment}
\sqrt{2} Q_{p \to \Delta^+}=Q_{\Delta^+}=r_n^2=4B.
\end{equation}
This relation is in good agreement with experimental data~\cite{Bla01,Tia03}.
Thus, we found that the {\it intrinsic} quadrupole moment of the proton, $Q_0^p$ is equal
to the {\it negative} of the neutron charge radius $r_n^2$ and is therefore
{\it positive}.
\begin{figure}[th]
\centerline{\includegraphics[width=9cm]{shape_qm.pdf}}
\caption{\label{fig:shape_qm}
Left: In the neutron, both down quarks
are in a spin 1 state, and are repelled
more strongly than an up-down pair. This results in an elongated (prolate) charge distribution, a negative neutron charge radius $r_n^2$, and a
positive intrinsic quadrupole moment $Q_0^n=-r_n^2$.
Right: In the $\Delta^0$, all quark pairs have spin 1 resulting in an equal distance between down-down and up-down pairs. This in turn leads to a planar (oblate) charge distribution, a vanishing $r_{\Delta^0}^2=0$ charge radius, and a negative intrinsic quadrupole moment $Q_0^{\Delta^0}=r_n^2$.}
\end{figure}
Similarly, with
the $\Delta^+$ wave function with maximal spin projection $M_J=3/2$
\begin{equation}
\vert \Delta^+ \rangle = {1 \over \sqrt{3} } \vert
\left ( uud + udu + duu \right ) \rangle
\vert \uparrow \uparrow \uparrow \rangle ,
\end{equation}
we found for the intrinsic quadrupole moment of the $\Delta^+$
\begin{equation}
Q^{\Delta^+}_{0} = Q_{\Delta^+}= r_n^2.
\end{equation}
In the case of the $\Delta$, there are no Clebsch-Gordan coefficients
that could be "renormalized," and there is no difference between the
intrinsic $Q_0^{\Delta^+}$ and the spectroscopic quadrupole moment
$Q_{\Delta^+}$. The same results where obtained for the neutron and the
$\Delta^0$.
Summarizing, in the quark model,
the intrinsic quadrupole moments of the proton and the $\Delta^+$
are equal in magnitude but opposite in sign
\begin{equation}
\label{int2}
Q_0^p = - Q_0^{\Delta^+} .
\end{equation}
We concluded that in the quark model, the proton is a prolate and the
$\Delta^+$ an oblate spheroid. In Fig.~\ref{fig:shape_qm} an attempt
is made to interpret these results geometrically~\cite{Buc05}.
\subsection{Pion cloud model}
The same conclusion was also obtained in a pion cloud model.
In this model, the nucleon consists of a spherically symmetric
bare nucleon (quark core) surrounded by a pion moving
with orbital angular momentum $l=1$ (p-wave).
For example, the physical proton with spin up, denoted by
$\vert p \uparrow \rangle$, is
a coherent superposition of three different terms~\cite{Hen62}:
\begin{itemize}
\item
spherical quark core contribution with spin 1/2, called a bare proton $p'$,
\item bare $p'$ surrounded by a neutral pion cloud,
\item bare neutron $n'$ surrounded by a positively charged pion cloud.
\end{itemize}
In each term involving pions, the spin(isospin)
of the bare proton and of the pion cloud are coupled to total spin and
isospin of the physical proton.
Similarly, the physical $\Delta^+$ is
described as a superposition of a spherical quark core term with spin 3/2,
called a bare $\Delta^{+\, '}$, a bare
$p'$ surrounded by a $\pi^0$ cloud, and
a bare $n'$ surrounded by a $\pi^+$ cloud. Again, the spin (isospin)
of the quark core and pion cloud are coupled to the total spin and isospin
of the physical $\Delta^+$.
The pion cloud wave functions of the proton
and $\Delta^+$ for spin projections $J_z=1/2$ are:
\begin{eqnarray}
\label{pionwave}
\vert p \uparrow \rangle &= & \alpha
\vert p' \uparrow \rangle
+ \beta
\frac{1}{3} \Bigl (\vert p' \uparrow \pi^0 Y^1_0 \rangle
-\sqrt{2} \vert p' \downarrow \pi^0 Y^1_1 \rangle \nonumber \\
&- &\sqrt{2} \vert n' \uparrow \pi^+ Y^1_0 \rangle
+ 2 \vert n' \downarrow \pi^+ Y^1_1 \rangle \Bigr ),
\nonumber \\
\vert \Delta^+ \uparrow \rangle &= & \alpha'
\vert \Delta^{+'} \uparrow \rangle
+ \beta'
\frac{1}{3} \Bigl ( 2 \vert p' \uparrow \pi^0 Y^1_0 \rangle
+ \sqrt{2} \vert p' \downarrow \pi^0 Y^1_1 \rangle \nonumber \\
&+ & \sqrt{2} \vert n' \uparrow \pi^+ Y^1_0 \rangle
+ \vert n' \downarrow \pi^+ Y^1_1 \rangle \Bigr ),
\end{eqnarray}
where $\beta$ and $\beta'$ describe
the amount of pion admixture in the $N$ and $\Delta$ wave
functions. These amplitudes satisfy the normalization conditions
$\alpha^2 + \beta^2=\alpha^{'2} + \beta^{'2} =1$,
so that we have only two unknowns $\beta$ and
$\beta'$.
The corresponding wave functions for the neutron and $\Delta^0$ are obtained
by isospin rotation~\cite{Hen62}.
Here, $Y^1_0({\bf {\hat r}}_{\pi})$ and $Y^1_1({\bf {\hat r}}_{\pi})$ are spherical harmonics of rank 1
describing the orbital angular momentum wave functions of the pion.
Because the pion moves predominantly in a $p$-wave,
the charge distributions of the nucleon and $\Delta$
deviate from spherical symmetry, even if the bare nucleon and
bare $\Delta$ wave functions are spherical.
The quadrupole operator to be used in connection with these states is
\begin{equation}
\label{pionquad}
{\hat Q}={\hat Q_{\pi}} = e_{\pi} \sqrt{16 \pi \over 5}
r_{\pi}^2 Y^2_0({\bf {\hat r}}_{\pi}),
\end{equation}
where $e_{\pi}$ is the pion charge operator divided by the
charge unit $e$, and $r_{\pi}$ is the distance between the center
of the quark core and the pion. Our choice of ${\hat Q}={\hat Q_{\pi}}$ implies
that the quark core is spherical and that the entire quadrupole moment
comes from the pion p-wave orbital motion.
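Note that $Y^2_0 = \sqrt{5/(16 \pi)}\,(3\cos^2\theta - 1)$, so that Eq.(\ref{pionquad}) is simply ${\hat Q}_{\pi} = e_{\pi}\, r_{\pi}^2\, (3 \cos^2\theta_{\pi} - 1) = 2\, e_{\pi}\, r_{\pi}^2\, P_2(\cos \theta_{\pi})$, the pion analogue of the weight $3z^2 - r^2$ in the definition of $Q_0$; this is why matrix elements of $P_2$ between the pion orbital wave functions appear below.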
The $\pi^0$ terms
do not contribute when evaluating the operator ${\hat Q}_{\pi}$
between the wave functions of Eq.(\ref{pionwave}).
We then obtain, e.g., for the
spectroscopic $\Delta^+$ and $p \to \Delta^+$ quadrupole moments
\begin{equation}
\label{pcm1}
Q_{\Delta^+} = -{2 \over 15} \, {\beta'}^{2}\, r_{\pi}^2, \qquad
Q_{p \to \Delta^+} = {4 \over 15} \, {\beta'} \beta \, r_{\pi}^2.
\end{equation}
\begin{figure}[th]
\centerline{\includegraphics[height=0.2\textheight]{pcm2.pdf}}
\caption{\label{fig:pcm}
Intrinsic quadrupole deformation of the nucleon (left)
and $\Delta$ (right) in the pion cloud model. In the $N$
the $p$-wave pion cloud is concentrated along the polar (symmetry) axis,
with maximum probability of finding the pion at the poles.
This leads to a prolate deformation. In the $\Delta$, the pion cloud is
concentrated in the equatorial plane producing an oblate intrinsic
deformation. Depicted here are the
angular ($p$-wave) parts of the pion wave functions,
i.e. $Y^1_0$ in the case of $N$
and $Y^1_1$ in the case of $\Delta$ surrounding an
almost spherical quark core (from Ref.~\cite{hen00b}). }
\end{figure}
To fix the three parameters $\beta$, $\beta'$,
and $r_{\pi}$ we used the $N \to \Delta$ quadrupole transition
moment, $Q_{p \to \Delta^+}^{exp}\approx r_n^2 $~\cite{Bla01}.
In addition, we calculated the nucleon and $\Delta$ charge radii in the pion cloud model and found
\begin{equation}
\label{cond}
r_p^2 - r_{\Delta^+}^2 =
(r_{p'}^2 - r_{\pi}^2 )\left ( \frac{1}{3} {\beta'}^2
- \frac{2}{3} {\beta}^2 \right ) = r_n^2,
\end{equation}
where $r_{p'}^2$ is the charge radius of the bare proton.
We knew from the work of Dillon and Morpurgo~\cite{Dil99a}, and of Lebed
and myself~\cite{Leb00}, that the last equality holds to good approximation. In the pion cloud model, this could be achieved by choosing
$\beta' = -2\beta$.
Using this condition in Eq.(\ref{pcm1}), we found that
the $\Delta^+$ and the $p \to \Delta^+$ transition quadrupole moments
are equal; with the experimental input $Q_{p \to \Delta^+}^{exp}\approx r_n^2 $ we obtained
\begin{equation}
\label{pcmsqm}
Q_{\Delta^+}= Q_{p \to \Delta^+} \approx r_n^2.
\end{equation}
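Indeed, inserting $\beta' = -2\beta$ into Eq.(\ref{pcm1}) makes the equality explicit,
\[
Q_{\Delta^+} = -{2 \over 15}\, (-2\beta)^2\, r_{\pi}^2
= -{8 \over 15}\, \beta^2 r_{\pi}^2
= {4 \over 15}\, (-2\beta)\, \beta \, r_{\pi}^2 = Q_{p \to \Delta^+},
\]
so that the single experimental input fixes the combination $\beta^2 r_{\pi}^2$.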
This was in the same ballpark as the quark model prediction of
Eq.(\ref{spectroscopicquadrupolemoment}).
From the experimental nucleon charge radii
we could determine the remaining parameters $\beta$ and $r_{\pi}$
(see Ref.~\cite{hen00b}).
Furthermore, for the spectroscopic quadrupole moment of the proton we obtained the following expression
\begin{eqnarray}
\label{pcm3}
Q_p & = & {4 \over 3} \beta^2 r_{\pi}^2 \,
\left ( \frac{1}{3} \, \langle Y^1_0 \vert P_2 \vert Y^1_0 \rangle
+ \frac{2}{3} \,
\langle Y^1_1 \vert P_2 \vert Y^1_1 \rangle \right ) \nonumber \\
& = &
{4 \over 3} \beta^2 r_{\pi}^2 \, \left (
{1 \over 3} \ \left ( { 2 \over 5 } \right )
+ {2 \over 3} \ \left ( -{ 1 \over 5} \right ) \right )=0.
\end{eqnarray}
The factors $1/3$ and $2/3$ are the squares of the Clebsch-Gordan
coefficients that describe the angular momentum coupling of the
bare neutron spin 1/2 with the pion orbital angular momentum $l=1$ to total
spin $J=1/2$ of the proton. They ensure that the spectroscopic
quadrupole moment of the proton is zero. The factors $2/5$ and
$-1/5$ are the expectation values of the Legendre polynomial
$P_2(\cos \theta)$ evaluated between the pion wave function
$Y^1_0({\bf {\hat r} }_{\pi})$ (pion cloud aligned along z-axis) and
$Y^1_1({\bf {\hat r}}_{\pi})$ (pion cloud aligned along an axis
in the x-y plane).
To obtain an estimate for the intrinsic quadrupole moment
we set {\it by hand} each of the coupling coefficients in front of
$\langle Y^1_0 \vert P_2 \vert Y^1_0 \rangle$ and $\langle Y^1_1 \vert P_2 \vert Y^1_1 \rangle$ equal to $1/2$,
thereby preserving the sum of coupling coefficients.
The cancellation between the two orientations of the cloud then disappears.
After renormalization, the dominant first term in Eq.(\ref{pcm3}) is equal to the negative of the spectroscopic $\Delta^+$ quadrupole moment
in Eq.(\ref{pcm1}). This term was then identified with the intrinsic quadrupole moment of the proton and we obtained
\begin{equation}
\label{pcm4}
Q^p_0 =-Q_{\Delta^+}= {8 \over 15} \beta^2 r_{\pi}^2 = -r_{n}^2, \qquad
Q^{\Delta^+}_0 = r_n^2.
\end{equation}
The positive sign of the intrinsic proton
quadrupole moment has a simple geometrical interpretation
in this model. It arises because the pion is preferentially
emitted along the spin ($z$-axis) of the nucleon (see Fig.~\ref{fig:pcm}).
Thus, the proton assumes a prolate shape.
Previous investigations in a quark model with pion exchange~\cite{Ven81}
concluded that the nucleon assumes an oblate shape under the pressure of the
surrounding pion cloud, which is strongest along the polar axis.
However, in these studies the deformed shape of the pion cloud
itself was ignored. Inclusion of the latter
leads to a prolate deformation that exceeds the small
oblate quark bag deformation by a large factor.
\subsection{Collective model}
In the collective nuclear model \cite{Boh75}, the relation between the
observable spectroscopic quadrupole moment $Q$ and the intrinsic quadrupole
moment $Q_0$ is
\begin{equation}
\label{collective}
Q= {3 K^2 -J(J+1) \over (J+1) (2J+3) } Q_0,
\end{equation}
where $J$ is the total spin of the nucleus,
and $K$ is the projection of $J$ onto the $z$-axis in the body fixed frame
(symmetry axis of the nucleus) as shown in Fig.~\ref{fig:collective}.
The intrinsic quadrupole moment $Q_0$ characterizes the deformation of the
charge distribution in the ground state. The ratio of $Q$ to
$Q_0$ is the expectation value of the Legendre polynomial $P_2(\cos\Theta)$
in the substate with maximal projection $M=J$. This factor
represents the averaging of the nonspherical charge distribution due
to its rotational motion as seen in the laboratory frame.
Inserting the quark model relation for the spectroscopic quadrupole moment
$Q_{\Delta^+}= r_n^2$ on the left-hand side we found
for the intrinsic quadrupole moment of the proton
\begin{equation}
\label{int3}
Q_0^p= - 5\, r_n^2.
\end{equation}
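Explicitly, regarding the $\Delta$ as the $J=3/2$ rotational excitation of the nucleon with intrinsic spin projection $K=1/2$ (see Fig.~\ref{fig:collective}), the projection factor in Eq.(\ref{collective}) is
\[
\frac{3K^2 - J(J+1)}{(J+1)(2J+3)}
= \frac{{3 \over 4} - {15 \over 4}}{{5 \over 2} \cdot 6} = -{1 \over 5},
\]
so that $Q_{\Delta^+} = -Q_0^p/5$, from which Eq.(\ref{int3}) follows.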
\begin{figure}[th]
\centerline{\includegraphics[height=0.2\textheight]{collective.pdf}}
\caption{\label{fig:collective} Representation of the $\Delta$-isobar as a
collective rotation of a prolate nucleon with intrinsic spin $K=1/2$.
The collective orbital angular momentum is denoted by $R$.
As a result of the collective rotation with angular momentum $R=1$
of a cigar-shaped object ($N$)
with intrinsic spin $K=1/2$ one obtains a pancake-shaped object ($\Delta$)
with total angular momentum
$J=3/2$. The lengths of the major half-axis $a$ and the minor half-axis $b$
can be calculated in the model of a homogeneously
charged spheroid. For the nucleon we obtained $a/b=1.11$.}
\end{figure}
The large value for
$Q_0^p$ is certainly due
to the crudeness of the rigid rotor model for the nucleon
which underlies Eq.(\ref{collective}). A more realistic
description would treat nucleon rotation as being partly
irrotational, e.g., only the peripheral parts of the nucleon participate in
the collective rotation. This results in smaller intrinsic quadrupole
moments\cite{Boh75}. However, we speculated that the sign of
the intrinsic quadrupole moment given by Eq.(\ref{int3}) is correct and
concluded that the nucleon is a prolate spheroid.
We also applied the collective model to estimate $Q_0^{\Delta}$.
For this purpose one regards the $\Delta^+$ as the
$K=J=3/2$ ground state of a rotational band. We then obtain
from Eq.(\ref{collective}) a negative intrinsic quadrupole moment
for the $\Delta^+$
\begin{equation}
\label{int4}
Q_0^{\Delta^+}= 5\, r_n^2= -Q_0^p.
\end{equation}
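Here the projection factor in Eq.(\ref{collective}) for $K=J=3/2$ is $({27 \over 4} - {15 \over 4})/({5 \over 2} \cdot 6) = +{1 \over 5}$, so that $Q_0^{\Delta^+} = 5\, Q_{\Delta^+} = 5\, r_n^2$, which is negative because $r_n^2 < 0$.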
Obviously, the intrinsic quadrupole moments of the proton and the
$\Delta^+$ have the same magnitude but opposite signs,
a result that was also obtained in the quark model and the pion cloud
model. In the collective model, the sign change between $Q_0^p$ and $Q_0^{\Delta^+}$ can be explained by imagining a cigar-shaped
ellipsoid ($N$) collectively rotating around the $x$ axis. This leads to a pancake-shaped ellipsoid ($\Delta$).
Summarizing, the collective model
leads in combination with the experimental information
to a positive intrinsic quadrupole moment of the nucleon
and a negative intrinsic quadrupole moment for the $\Delta^+$.
Although the magnitude of the deformation is uncertain,
we are confident that our assignment of a prolate deformation
for the nucleon and an oblate deformation for the $\Delta$ is correct.
\section{Quadrupole moments of baryons}
When Ernest visited T\"ubingen in July 2000, we finsished
the intrinsic quadrupole moment paper~\cite{hen00b} and started to systematically calculate the directly measurable spectroscopic quadrupole moments of decuplet baryons as well as decuplet-octet transition quadrupole moments~\cite{Hen02}.
The charge quadrupole operator is composed of a two- and three-body term
in spin-flavor space
\begin{eqnarray}
\label{para1}
{ {\cal Q}} & = & B\sum_{i \ne j}^3 e_i
\left ( 3 \sigma_{i \, z} \sigma_{ j \, z}
-\b{\sigma}_i \cdot \b{\sigma}_j \right ) \nonumber \\
&+ & C \sum_{i \ne j \ne k }^3 e_k
\left ( 3 \sigma_{i \, z} \sigma_{ j \, z} -
\b{\sigma}_i \cdot \b{\sigma}_j \right ),
\end{eqnarray}
where
$e_i=(1 + 3 \tau_{i \, z})/6$ is the charge of the i-th quark.
More general operators containing second and third
powers of the quark charge are conceivable~\cite{Leb00} but are
not considered here. Their contribution is suppressed by factors
of $e^2/4\pi=1/137$. The $z$-component
of the Pauli spin (isospin) matrix $\b{\sigma}_i$ ($\b{\tau}_i$)
is denoted by $\sigma_{i \, z}$ ($\tau_{i \, z}$).
We recall that there is no one-quark operator, because one cannot
construct a spin tensor of rank 2 with a single Pauli matrix.
Decuplet quadrupole moments $Q_{B^*}$ and octet-decuplet transition
quadrupole moments $Q_{B \to B^*}$ are obtained by calculating the
matrix elements of the quadrupole operator
in Eq.(\ref{para1}) between the
three-quark spin-flavor wave functions $\vert W_B \rangle $
\begin{eqnarray}
\label{matrixelements}
Q_{B^*} & = &\left \langle W_{B^*} \vert { {\cal Q}}
\vert W_{B^*} \right \rangle , \nonumber \\
Q_{B \to B^*}
& = & \left \langle W_{B^*} \vert {{\cal Q}} \vert W_B \right \rangle,
\end{eqnarray}
where $B$ denotes a spin 1/2 octet baryon and $B^*$ a member of the
spin 3/2 baryon decuplet.
Although the two- and three-body operators in Eq.(\ref{para1})
formally act on valence quark states, they are mainly
a reflection of the $q \bar q$ and gluon
degrees of freedom that have been
eliminated from the Hilbert space, and which reappear as quadrupole
tensors in spin-flavor space~\cite{hen00b,Buc97}.
As spin tensors of rank 2, they can induce
spin $1/2 \to 3/2$ and $3/2 \to 3/2$ quadrupole transitions.
\subsection{SU(6) spin-flavor symmetry breaking}
If the spin-flavor symmetry were exact,
octet and decuplet masses would be equal,
the charge radii of neutral baryons would be zero, and the spectroscopic
quadrupole moments of decuplet baryons would vanish.
In particular, we would have $M_{\Delta^+}=M_p$, $r_{\Delta^0}^2=r_{n}^2=0$,
and $Q_{\Delta^+}=Q_{p\to \Delta^+}=0$.
But SU(6) symmetry is only approximately
realized in nature. It is broken by spin-dependent terms in the strong
interaction Hamiltonian. The spin-dependent interaction terms explain
why decuplet baryons
are heavier than their octet member counterparts with the same strangeness.
Spin-flavor symmetry is also broken by the spin-dependent operators in the
electromagnetic interaction, in particular by the charge quadrupole
operators in Eq.(\ref{para1}). These have different matrix elements for
spin 1/2 octet and spin 3/2 decuplet baryons,
and give rise to nonzero quadrupole moments for decuplet baryons.
In Tables~\ref{quadmo} and~\ref{transquad}
we show our results for the decuplet quadrupole moments
and the decuplet-octet transition quadrupole moments
in terms of the GP constants $B$ and $C$ describing the contribution
of two- and three-quark operators,
assuming that SU(3) flavor symmetry is exact $(r=1)$ and with
approximate treatment of SU(3) flavor symmetry breaking $(r\ne 1)$.
We observed that the spectroscopic decuplet quadrupole moments are proportional to the baryon
charge, and that the octet-decuplet transition moments between
the negatively charged baryons are zero.
The latter result follows from $U$-spin conservation,
which forbids such transitions if flavor symmetry is exact~\cite{Lip73}.
Furthermore, the sum of all decuplet quadrupole moments is zero in this limit.
\subsection{SU(3) flavor symmetry breaking}
To get an idea of the degree of SU(3) flavor symmetry breaking
induced by the electromagnetic transition operator,
we replaced the spin-spin terms in Eq.(\ref{para1}) by
expressions with a cubic quark mass dependence
\begin{eqnarray}
\label{cubicmass}
\sigma_{i} \sigma_{j} &\rightarrow &\sigma_{i} \sigma_{j}m_u^3/(m_i^2 m_j),
\end{eqnarray}
as obtained from the two-body gluon exchange charge density shown in Fig.~\ref{fig:SU3breaking}.
\begin{figure}[th]
\centerline{\includegraphics[height=0.20\textheight]{SU3_breaking.pdf}}
\caption{\label{fig:SU3breaking}
The two-quark gluon exchange current gives rise to a
two-quark quadrupole operator as in Eq.(\ref{para1}) and a cubic quark mass dependence of SU(3) flavor symmetry breaking as in Eq.(\ref{cubicmass}).
The additional factor $1/m_i$ in Eq.(\ref{cubicmass})
is due to the intermediate quark propagator between the photon-quark and gluon-quark vertices.}
\end{figure}
Flavor symmetry breaking is then characterized by the ratio
$r=m_u/m_s$ of $u$ and $s$ quark masses, which is a known number.
We use the same mass for $u$ and $d$ quarks
to preserve the SU(2) isospin symmetry of the strong interaction,
which is known to hold to very good accuracy.
We emphasize that this treatment of SU(3) symmetry breaking is not exact. The GP method of including
SU(3) symmetry breaking is to introduce additional operators and
parameters, which guarantees that flavor symmetry breaking is incorporated
to all orders~\cite{Mor99a}.
There are then so many undetermined constants that the theory can no
longer make predictions. We expect that our approximate treatment
includes the most important physical effect.
\begin{table}[pt]
\caption{\label{quadmo} Two-quark ($B$) and three-quark ($C$) contributions to
quadrupole moments of decuplet baryons
in the SU(3) symmetry limit ($r=1$)
and with broken flavor symmetry ($r\ne 1$).
SU(3) flavor symmetry breaking is characterized by the ratio of
u-quark and s-quark masses $r=m_u/m_s$.}
{\begin{tabular}{ | l | c | c |} \hline
& $Q(r=1)$ & $Q(r\ne 1)$ \\ \hline
$\Delta^{-}$ & $ -4B -4C$ & $-4B -4C$ \\
$\Delta^{0}$ & $ 0 $ & $0 $ \\
$\Delta^{+}$ & $4B+ 4C $ & $4B +4C$ \\
$\Delta^{++}$ & $8B + 8C$ & $8B +8C$ \\
\hline
$\Sigma^{\ast -}$ & $-4B-4C$ & $-(4B+4C) (1+r+r^2)/3 $ \\
$\Sigma^{\ast 0}$ & $0$ & $ [2B (1+r-2r^2) - 2C(2-r-r^2)]/3$ \\
$\Sigma^{\ast +}$ & $4B+4C$ & $ [4B(2 + 2r -r^2) -4C(1-2r-2r^2)]/3 $ \\
\hline
$\Xi^{\ast -}$ & $-4B-4C$ & $-(4B+4C)(r + r^2 +r^3)/3$ \\
$\Xi^{\ast 0}$ & $0$ & $[4B(2r-r^2-r^3) -4C(r+r^2-2r^3)]/3$ \\ \hline
$\Omega^-$ & $-4B-4C$ & $-(4B + 4C)r^3 $ \\ \hline
\end{tabular}}
\end{table}
\begin{table*}[pt]
\caption{\label{transquad} Two-quark ($B$) and three-quark ($C$)
contributions
to the octet-decuplet transition quadrupole moments
in the SU(3) symmetry limit ($r=1$)
and with broken flavor symmetry ($r\ne 1$).
SU(3) flavor symmetry breaking is characterized by the ratio of
u-quark and s-quark masses $r=m_u/m_s$. }
{\begin{tabular}{| l | c | c | } \hline
& $Q(r=1)$ & $Q(r\ne 1)$ \\ \hline
$p\to \Delta^+$ & $2\sqrt{2} (B-2C)$ & $2\sqrt{2} \,(B-2C)$ \\
$n\to \Delta^0$ & $2\sqrt{2} (B-2C)$ & $2\sqrt{2} \,(B-2C)$ \\
\hline
$\Sigma^- \to \Sigma^{\ast -}$ & $0$ & $-\sqrt{2}\,(2B+2C)\,(2-r-r^2)/3$ \\
$\Sigma^0 \to \Sigma^{\ast 0}$ & $\sqrt{2}(B-2C)$ &
$\sqrt{2} [2B (2-r+2r^2) - 2C (4 + r +r^2)]/6 $ \\
$\Lambda^0 \to \Sigma^{\ast 0}$ & $\sqrt{6} (B-2C)$ & $\sqrt{6}[2B r - 2C (r + r^2)]/2 $ \\
$\Sigma^+ \to \Sigma^{\ast +}$ & $2\sqrt{2} (B-2C)$ &
$2\sqrt{2}\, \lbrack B \,(4-2r+r^2) - 2C \,(1+r +r^2) \rbrack /3 $ \\
\hline
$\Xi^- \to \Xi^{\ast -}$ & $0$ & $-\sqrt{2} \,(2B+2C)\, (r+r^2-2r^3)/3$ \\
$\Xi^0 \to \Xi^{\ast 0}$ & $2\sqrt{2} (B-2C)$ &
$\sqrt{2}[2B (2r -r^2 + 2r^3) - 2C (r + r^2 + 4r^3)]/3$ \\ \hline
\end{tabular} }
\end{table*}
\subsection{Relations among quadrupole moments}
Even though the SU(6) and SU(3) symmetries are broken,
there exist, as a consequence of the underlying unitary symmetries,
certain relations among the quadrupole moments.
The fewer assumptions a relation requires for its derivation,
the stronger it is. We were therefore interested in those relations that hold even
when SU(3) symmetry breaking is included in the charge quadrupole operator.
These are the ones most likely to be satisfied in nature.
The 18 quadrupole moments (10 diagonal
decuplet and 8 decuplet-octet transition quadrupole moments) are expressed
in terms of only two constants $B$ and $C$. Therefore, there
must be 16 relations between them. Given the analytical expressions in
Tables~\ref{quadmo} and~\ref{transquad}, it is straightforward to verify
that the following relations hold
\setcounter{equation}{33}
\alpheqn
\begin{eqnarray}
\label{rel6a}
0 & = & Q_{\Delta^{-}} + Q_{\Delta^+}, \\
\label{rel6b}
0 & = & Q_{\Delta^{0}}, \\
\label{rel6c}
0 & = & 2\, Q_{\Delta^{-}} + Q_{\Delta^{++}}, \\
\label{rel6d}
0 & = & Q_{\Sigma^{* -}} - 2\, Q_{\Sigma^{* 0}} + Q_{\Sigma^{* +}} , \\
\label{rel6e}
0 & = & 3 ( Q_{\Xi^{* -}} - Q_{\Sigma^{* -}} ) -
( Q_{\Omega^-}- Q_{\Delta^-}), \\
\label{rel6f}
0 & = & Q_{p \to \Delta^{+}} - \, Q_{n \to \Delta^{0} }, \\
\label{rel6g}
0 & = & Q_{\Sigma^{-} \to \Sigma^{* -}} - 2 \, Q_{\Sigma^{0} \to \Sigma^{* 0}}
+ Q_{\Sigma^{ +} \to \Sigma^{* +}}, \\
\label{rel6h}
0 & = & Q_{\Delta^-} - Q_{\Sigma^{* -}}
-\sqrt{2} \, Q_{\Sigma^{-} \to \Sigma^{* -}}, \\
\label{rel6i}
0 & = & Q_{\Delta^+} \!- \!Q_{\Sigma^{* +}} \!+\! \sqrt{2}
Q_{p \to \Delta^{+}}
\!-\! \sqrt{2} Q_{\Sigma^{+} \to \Sigma^{* +}}, \\
\label{rel6j}
0 & = & Q_{\Sigma^{* 0}} + Q_{\Xi^{* 0}} \nonumber \\
&-& \! \!\!\frac{1}{\sqrt{2}} \left( Q_{\Sigma^{0} \to \Sigma^{* 0}}\!-\!Q_{\Xi^{0} \to \Xi^{* 0}}\! + \!
\frac{1}{\sqrt{6}} Q_{\Lambda^{0} \to \Sigma^{* 0}}\right)\!\!, \\
\label{rel6k}
0 & \! \!= \!\!& Q_{\Sigma^{* -}} \!\!- \! Q_{\Xi^{* -}} \! \!- \! \!
\frac{1}{\sqrt{2}} Q_{\Xi^{ -} \! \to \! \Xi^{* -}}\! \! - \!\!
\frac{1}{\sqrt{2}} Q_{\Sigma^{- } \! \to \!\Sigma^{* -}}\!.
\end{eqnarray}
These eleven combinations of quadrupole moments do not depend
on the flavor symmetry breaking parameter $r$.
In fact, Eqs.(\ref{rel6a}-\ref{rel6d}) are already
a consequence of the assumed SU(2) isospin symmetry of the strong interaction,
and hold irrespective of the order of SU(3) symmetry breaking.
Eq.(\ref{rel6e}) is the quadrupole moment counterpart of
the ``equal spacing rule'' for decuplet masses.
There are also five $r$-dependent
relations which can be chosen as
\setcounter{equation}{34}
\alpheqn
\begin{eqnarray}
\label{rel8a}
0 & = & \frac{1}{3}(1+r+r^2) Q_{\Delta^+} + Q_{\Sigma^{* -}}, \\
\label{rel8b}
0 & = & (r-r^2) \, Q_{\Delta^+} - \sqrt{2} (2+r^2) Q_{p \to \Delta^{+}} \nonumber \\
& + & 6 \sqrt{2} Q_{\Sigma^{0} \to \Sigma^{* 0}}, \\
\label{rel8c}
0 & = & r \, Q_{\Sigma^{* -}} - Q_{\Xi^{* -}}, \\
\label{rel8d}
0 & = & (r-r^2) Q_{\Delta^+} + \sqrt{2} (r + 2r^3) \, Q_{p \to \Delta^{+}} \nonumber \\
&-& 3 \sqrt{2} \, Q_{\Xi^{0} \to \Xi^{* 0}}, \\
\label{rel8e}
0 & = & r^3 \, Q_{\Delta^-} -Q_{\Omega^-}.
\end{eqnarray}
\reseteqn
Other combinations of the expressions in
Tables~\ref{quadmo} and~\ref{transquad} can be written down if desirable.
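Since all entries of Tables~\ref{quadmo} and~\ref{transquad} are polynomials in $B$, $C$, and $r$, the relations can also be verified mechanically. The following is a minimal SymPy sketch (the dictionary labels are shorthand introduced here; the expressions are copied from Table~\ref{quadmo}) that checks a representative subset and, with the inputs $B=r_n^2/4$, $C=0$, and $r=0.6$ used below, reproduces two entries of Table~\ref{quadmonum}:
\begin{verbatim}
import sympy as sp

B, C, r = sp.symbols('B C r')

# Decuplet quadrupole moments with broken flavor symmetry,
# copied from the r != 1 column of Table "quadmo".
Q = {
    'Delta-' : -4*B - 4*C,
    'Delta+' :  4*B + 4*C,
    'Sigma*-': -(4*B + 4*C)*(1 + r + r**2)/3,
    'Sigma*0': (2*B*(1 + r - 2*r**2) - 2*C*(2 - r - r**2))/3,
    'Sigma*+': (4*B*(2 + 2*r - r**2) - 4*C*(1 - 2*r - 2*r**2))/3,
    'Xi*-'   : -(4*B + 4*C)*(r + r**2 + r**3)/3,
    'Omega-' : -(4*B + 4*C)*r**3,
}

# A representative subset of the relations quoted in the text.
relations = {
    'rel6d': Q['Sigma*-'] - 2*Q['Sigma*0'] + Q['Sigma*+'],
    'rel6e': 3*(Q['Xi*-'] - Q['Sigma*-']) - (Q['Omega-'] - Q['Delta-']),
    'rel8a': (1 + r + r**2)/3 * Q['Delta+'] + Q['Sigma*-'],
    'rel8e': r**3 * Q['Delta-'] - Q['Omega-'],
}
for name, expr in relations.items():
    assert sp.simplify(expr) == 0  # holds for all B, C, r
    print(name, 'holds identically')

# Numerical values as in Table "quadmonum": B = r_n^2/4, C = 0, r = 0.6
vals = {B: -0.113/4, C: 0, r: 0.6}
print('Q(Sigma*-) =', float(Q['Sigma*-'].subs(vals)))  # ~ 0.074 fm^2
print('Q(Omega-)  =', float(Q['Omega-'].subs(vals)))   # ~ 0.024 fm^2
\end{verbatim}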
With the help of these relations the experimentally inaccessible
quadrupole moments can be obtained from those that can be measured.
Quadrupole moments of decuplet baryons are difficult to measure because of their short lifetimes, with the exception of the $\Omega^-$.
It is planned to measure the quadrupole moment of the relatively long-lived
$\Omega^-$ baryon at FAIR in Darmstadt~\cite{Poc17}.
\begin{table}[pt]
\caption{Numerical values for the quadrupole moments
of decuplet baryons in units of [fm$^2$]
according to the analytic expressions in
Table~\ref{quadmo} with $B=r_n^2/4$ and $C=0$.
The experimental neutron charge radius~\cite{Kop95},
$r_n^2=-0.113(3)$ fm$^2$, and the SU(3) symmetry breaking
parameter~\cite{hen00a}, $r=0.6$, are used as input values.
\label{quadmonum}}
{\begin{tabular}{| l | r | r | } \hline
& ${Q}(r=1)$ & $Q(r=0.6)$ \\[0.15cm] \hline
$\Delta^{-}$ & $ 0.113$ & $0.113$ \\
$\Delta^{0}$ & $0$ & $0$ \\
$\Delta^{+}$ & $-0.113$ & $-0.113$ \\
$\Delta^{++}$ & $-0.226$ & $-0.226$ \\
$\Sigma^{\ast -}$ & $ 0.113$ & $0.074$ \\
$\Sigma^{\ast 0}$ & $0$ & $-0.017$ \\
$\Sigma^{\ast +}$ & $-0.113$ & $-0.107$ \\
$\Xi^{\ast -}$ & $ 0.113$ & $0.044$ \\
$\Xi^{\ast 0}$ & $0$ & $-0.023$ \\
$\Omega^-$ & $ 0.113$ & $0.024$ \\ \hline
\end{tabular} }
\end{table}
\begin{table}[pt]
\caption{\label{transquadnum} Numerical values for the
octet-decuplet transition quadrupole moments in units of [fm$^2$] according
to the analytic expressions in Table~\ref{transquad}.}
{\begin{tabular}{| l | r | r | } \hline
& ${Q}(r=1)$ & $Q(r=0.6)$ \\[0.15cm] \hline
$p\rightarrow \Delta^+$ & $-0.080$ & $-0.080$ \\
$n\rightarrow \Delta^0$ & $-0.080$ & $-0.080$ \\
$\Sigma^- \rightarrow \Sigma^{\ast -}$ & $0$ & $0.028$ \\
$\Sigma^0 \rightarrow \Sigma^{\ast 0}$ & $-0.040$ & $-0.028$ \\
$\Lambda^0 \rightarrow \Sigma^{\ast 0}$ & $-0.069$ & $-0.042$ \\
$\Sigma^+ \rightarrow \Sigma^{\ast +}$ & $-0.080$ & $-0.084$ \\
$\Xi^- \rightarrow \Xi^{\ast -}$ & $0$ & $0.014$ \\
$\Xi^0 \rightarrow \Xi^{\ast 0}$ & $-0.080$ & $-0.034$ \\ \hline
\end{tabular}}
\end{table}
\subsection{Numerical results}
Numerical values
are listed in Tables~\ref{quadmonum} and \ref{transquadnum} for the cases
without ($r=1$) and with ($r=0.6$) flavor symmetry breaking.
The electric quadrupole moments of the charged baryons are of the same
order of magnitude as $r_n^2$, while those of the
neutral baryons are considerably smaller.
Updated numerical results including the three-quark terms have been given in Ref.~\cite{Buc07}.
Electric quadrupole moments and their generalization to quadrupole form factors have been the focus of numerous works~\cite{But94,Leb95,Oh95,Dah13,Kri91,Buc04,Pas07a,Ram16} and several reviews~\cite{Pas07,Ber07,Tia07,Tia11,Azn11}.
\section{Magnetic octupole moments of baryons}
While there is a large body of literature on baryon magnetic dipole moments,
there are only a few works that deal with the next higher multipole moments,
that is, the magnetic octupole moments $\Omega$
of decuplet baryons~\cite{Gia90,Hen08,Ram09,Ali09}.
Presently, relatively little is known concerning the sign and the size of
these moments. This information is needed to reveal
further details of the current distribution in baryons beyond
those available from the magnetic dipole moment~\cite{kot02}.
The magnetic octupole moment $\Omega_0$, usually given
in units of $[{\rm fm}^2 \, \mu_N]$ and
normalized as in Ref.~\cite{Don84}, can be written as
\begin{eqnarray}
\label{M1andM3}
\Omega_0 & = & \frac{3}{8}
\int \! d^3r \, (3 z^2-r^2) \, ({\bf r} \times {\bf J}({\bf r}))_z,
\end{eqnarray}
where ${\bf J}({\bf r})$ is the spatial current density and
$\mu_N$ the nuclear magneton.
This definition is analogous to the one for the
charge quadrupole moment~\cite{hen00b} if the magnetic moment density
$({\bf r} \times {\bf J}({\bf r}))_z$ is replaced
by the charge density $\rho({\bf r})$. Again, one has to distinguish
between the spectroscopic (laboratory frame) and the intrinsic (body-fixed frame) octupole moment.
Thus, the magnetic octupole moment measures
the deviation of the spatial magnetic moment distribution from
spherical symmetry. More specifically, for a prolate (cigar-shaped)
magnetic moment distribution $\Omega_0 >0$,
while for an oblate (pancake-shaped) magnetic moment distribution
$\Omega_0 <0$. We also see from Eq.(\ref{M1andM3}) that the typical size of a magnetic octupole moment is
\begin{equation}
\Omega_0 \simeq r^2 \, \mu
\end{equation}
where $\mu$ is the magnetic moment and $r^2$ a size parameter
related to the quadrupole moment of the system.
Although the nucleon cannot have a spectroscopic octupole moment,
due to angular momentum selection rules, it may have an intrinsic octupole moment, if its magnetic moment distribution deviates from spherical symmetry~\cite{Buc18}.
\begin{figure}[th]
\centerline{\includegraphics[height=0.20\textheight]{octupole.pdf}}
\caption{\label{fig:octupole}
Magnetic octupole moment of the current distribution. Left: Prolate
current distribution with an intrinsic octupole moment $\Omega_0>0$.
Right: Oblate
current distribution with an intrinsic octupole moment $\Omega_0<0$.
For further details see Ref.~\cite{Buc18}.}
\end{figure}
To calculate the spectroscopic octupole moments of decuplet baryons
we had to construct an octupole moment operator ${\tilde \Omega}$
in spin-flavor space. We knew that we needed a tensor of
rank 3 in spin space, which must involve the Pauli spin matrices
of three {\it different} quarks~\cite{comment0}. This could be done by
considering a three-body quadrupole moment operator multiplied by the spin
of the third quark,
\begin{eqnarray}
\label{para2}
{\tilde \Omega}_{[3]} & = & C \sum_{i \ne j \ne k }^3 e_k
\left ( 3 \sigma_{i \, z} \sigma_{ j \, z} -
\xbf{ \sigma}_i \cdot \xbf{ \sigma}_j \right )\xbf{ \sigma}_{k},
\end{eqnarray}
where $C$ is a constant and
$e_k=(1 + 3 \tau_{k \, z})/6$ is the charge of the k-th quark.
The $z$-component
of the Pauli spin (isospin) matrix $\xbf{ \sigma}_i$ ($\xbf{\tau}_i$)
is denoted by $\sigma_{i \, z}$ ($\tau_{i \, z}$). Alternatively, it
could be built by replacing $e_k$ in Eq.(ref{para2}) by $e_i$, i.e.
from a two-quark quadrupole operator. We soon realized that both operator
structures lead to the same results. In addition, we found that from the point of view of broken SU(6) spin-flavor symmetry~\cite{Gur64},
there is a unique octupole moment operator~\cite{comment22}.
The spectroscopic magnetic octupole moments $\Omega_{B^*}$
were then obtained by sandwiching the operator in Eq.(\ref{para2}) between the three-quark spin-flavor wave functions $\vert W_{B^{*}} \rangle $.
For example, for $\Delta(1232)$ baryons we obtained
\begin{eqnarray}
\label{three}
\Omega_{\Delta} & = &\langle W_{\Delta}
\vert {\tilde {\Omega}}_{[3]}
\vert W_{\Delta} \rangle = 4 \, C \, q_{\Delta},
\end{eqnarray}
where $q_{\Delta}$ is the $\Delta$ charge.
Similarly, the magnetic octupole moments for the other decuplet baryons
were calculated.
In this way Morpurgo's method yields an efficient parameterization
of baryon octupole moments in terms of just one unknown parameter $C$.
In the second column of Table~\ref{octumom}
we show our results for the decuplet octupole moments
expressed in terms of the GP constant $C$
assuming that SU(3) flavor symmetry is only broken by
the electric charge operator as in Eq.(\ref{para2}).
We observe that in this limit the spectroscopic magnetic octupole
moments are proportional to the baryon charge.
\begin{table}[pt]
\caption{\label{octumom} Magnetic octupole moments of decuplet baryons.
Second column: SU(3) flavor symmetry limit ($r=1$).
Third column: with flavor symmetry breaking ($r\ne 1$).}
{\begin{tabular}{ | l | c | c | } \hline &
$\Omega_{B^*}(r=1)$ & $\Omega_{B^*}(r\ne 1)$ \\
\hline
$\Delta^{-}$ & $ -4C $ & $-4C $ \\
$\Delta^{0}$ & 0 & 0 \\
$\Delta^{+}$ & $4C $ & $4C $ \\
$\Delta^{++}$ & $8C $ & $8C$ \\
\hline
$\Sigma^{\ast -}$ & $-4C$ & $-4C\,(1+r+r^2)/3 $ \\
$\Sigma^{\ast 0}$ & $0$ & $ -2C \,(2-r-r^2)/3$ \\
$\Sigma^{\ast +}$ & $4C$ & $ -4C\,(1 -2r -2r^2)/3$ \\
\hline
$\Xi^{\ast -}$ & $-4C$ & $-4C\,(r + r^2 +r^3)/3$ \\
$\Xi^{\ast 0}$ & $0$ & $-4C\,(r+r^2-2r^3)/3 $ \\
\hline
$\Omega^-$ & $-4C$ & $-4C\,r^3 $ \\ \hline
\end{tabular} }
\end{table}
To estimate the degree of SU(3) flavor symmetry breaking
beyond first order, we replaced the spin-spin terms in Eq.(\ref{para2}) by
expressions with a cubic quark mass dependence as in Eq.(\ref{cubicmass}).
This leads to analytic expressions for the magnetic octupole
moments $\Omega_{B^*}$ containing terms up to third order in $r$
as shown in the third column of Table~\ref{octumom}.
Because the 10 diagonal octupole moments can be expressed
in terms of only one constant $C$, there
must be 9 relations between them. Given the analytical expressions in
Table~\ref{octumom} it is straightforward to verify
that the following relations hold
\setcounter{equation}{39}
\alpheqn
\begin{eqnarray}
\label{rel9a}
0 & = & \Omega_{\Delta^{-}} + \Omega_{\Delta^+}, \\
\label{rel9b}
0 & = & \Omega_{\Delta^{0}}, \\
\label{rel9c}
0 & = & 2\, \Omega_{\Delta^{-}} + \Omega_{\Delta^{++}}, \\
\label{rel9d}
0 & = & \Omega_{\Sigma^{* -}}-
2\,\Omega_{\Sigma^{* 0}}+\Omega_{\Sigma^{* +}} , \\
\label{rel9e}
0 & = & 3 ( \Omega_{\Xi^{* -}} - \Omega_{\Sigma^{* -}} ) -
(\Omega_{\Omega^{-}} - \Omega_{\Delta^{-}} ), \\
\label{rel9f}
0 & = & (\Omega_{\Xi^{* 0}} + 2\, \Omega_{\Xi^{* -}}) +
(\Omega_{\Sigma^{* +}} - \Omega_{\Sigma^{* -}}), \\
\label{rel9g}
0 & = & \frac{1}{3}(1+r+r^2) \Omega_{\Delta^+} + \Omega_{\Sigma^{* -}}, \\
\label{rel9h}
0 & = & r \, \Omega_{\Sigma^{* -}} - \Omega_{\Xi^{* -}}, \\
\label{rel9i}
0 & = & r^3 \, \Omega_{\Delta^-} -\Omega_{\Omega^-}.
\end{eqnarray}
\reseteqn
The first six relations do not depend
on the flavor symmetry breaking parameter $r$.
In fact, Eqs.(\ref{rel9a}-\ref{rel9d}) are already
a consequence of the assumed SU(2) isospin symmetry of strong interactions.
Eq.(\ref{rel9e}) is the octupole moment counterpart of
the ``equal spacing rule'' for decuplet masses.
Other combinations of the expressions in
Table~\ref{octumom} can be written down if desirable.
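For instance, Eq.(\ref{rel9g}) follows directly from Table~\ref{octumom},
\[
{1 \over 3}(1+r+r^2)\,(4C) - {4C \over 3}\,(1+r+r^2) = 0 ,
\]
and Eq.(\ref{rel9i}) likewise from $r^3\,(-4C) - (-4C\, r^3) = 0$.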
To obtain an estimate for $\Omega_{\Delta^+}$
we used the pion cloud model~\cite{hen00b},
where the $\Delta^+$ wave function without
a bare $\Delta$ component and for maximal spin projection $J_z=3/2$ is written as
\begin{equation}
\label{pionwf}
\vert \Delta^+ J_z=\frac{3}{2} \rangle =
\beta'\, \Bigl (
\sqrt{\frac{1}{3}} \,
\vert n' \, \pi^+ \rangle
+
\sqrt{\frac{2}{3}} \,
\vert p' \, \pi^0 \rangle \Bigr )
\vert \uparrow \, Y^1_1 \rangle.
\end{equation}
In this model the magnetic octupole moment operator is a product
of a quadrupole operator in pion variables and a magnetic
moment operator in nucleon variables
\begin{equation}
\label{Omegaop}
\Omega_{\pi N} = \sqrt{\frac{16 \pi}{5}}
\, r_{\pi}^2 \, Y^2_0({\bf r}_{\pi}) \, \, \mu_N \,\tau_z^N \,
\sigma_z^N.
\end{equation}
Here, the spin-isospin structure of $\Omega_{\pi N}$
is inferred from the $\gamma \pi N$ and $\gamma \pi$ currents
of the static pion-nucleon model~\cite{Hen62}.
With Eq.(\ref{pionwf}) and Eq.(\ref{Omegaop}) the $\Delta^+$ magnetic octupole moment was readily calculated~\cite{Hen08}
\begin{equation}
\Omega_{\Delta^+} = -{2 \over 15} \, {\beta'}^{2}\, r_{\pi}^2 \
\mu_N = Q_{\Delta^+}\, \mu_{N} = r_n^2 \, \mu_N,
\end{equation}
where $Q_{\Delta^+}$ is the $\Delta^+$ quadrupole moment
and $r_n^2$ the neutron charge radius. With the experimental value
of the latter and $\mu_N$ expressed in $[{\rm fm}]$ we obtained
$\Omega_{\Delta^+} =-0.012\,\,{\rm fm^3}$.
The negative value of $\Omega$ implies that the magnetic moment
distribution in the $\Delta^+$ is oblate and hence
has the same geometric shape as the charge distribution.
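As a numerical cross-check (ours; the inputs are assumed literature
values, $r_n^2 \simeq -0.116~{\rm fm^2}$ for the neutron charge radius
squared and $\mu_N = \hbar/(2 m_p c) \simeq 0.105$ fm when expressed
in units of $e\,$fm), the product $r_n^2\, \mu_N$ reproduces the
quoted value:
\begin{verbatim}
# assumed inputs (not derived here)
r_n_sq = -0.116   # fm^2, neutron charge radius squared
mu_N   =  0.105   # fm, nuclear magneton in units of e*fm
omega_delta_plus = r_n_sq * mu_N
print(f"Omega_Delta+ = {omega_delta_plus:.3f} fm^3")  # ~ -0.012
\end{verbatim}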
Numerical values for other baryon octupole moments can now be obtained
using Eq.(\ref{three}) and the expressions in Table~\ref{octumom}.
These are listed in Table~\ref{octumomnum}.
\begin{table}[pt]
\caption{Numerical values for magnetic octupole moments
of decuplet baryons in units [fm$^3$]
using Table~\ref{octumom} with $C=-0.003$. Second column: SU(3)
flavor symmetry limit ($r=1$). Third column: with SU(3)
flavor symmetry breaking ($r=0.6$).}
{\begin{tabular}{ | l | r | r | } \hline
& $\Omega_{B^*}(r=1)$ & $\Omega_{B^*}(r=0.6)$ \\ \hline
$\Delta^{-}$ & 0.012 & 0.012 \\
$\Delta^{0}$ & 0 & 0 \\
$\Delta^{+}$ & -0.012 & -0.012 \\
$\Delta^{++}$ & -0.024 & -0.024 \\
$\Sigma^{\ast -}$ & 0.012 & 0.008 \\
$\Sigma^{\ast 0}$ & 0 & 0.002 \\
$\Sigma^{\ast +}$ & -0.012 & -0.004 \\
$\Xi^{\ast -}$ & 0.012 & 0.005 \\
$\Xi^{\ast 0}$ & 0 & 0.002 \\
$\Omega^-$ & 0.012 & 0.003 \\ \hline
\end{tabular}
\label{octumomnum} }
\end{table}
To draw a first conclusion concerning the spatial
shape of the magnetic moment distribution in baryons we
estimated the spectroscopic magnetic octupole moment of the $\Delta^+$
in the pion cloud model. We found that the latter can be expressed as
the product of the $\Delta^+$ quadrupole moment and the
nuclear magneton.
This means that the magnetic moment distribution
in the $\Delta^+$ is oblate and
hence has the same geometric shape as the charge distribution.
Recently, an attempt has been made to extract the intrinsic
octupole moment of the proton from these results~\cite{Buc18}.
\section{Spin and orbital angular momentum of ground state baryons}
\label{intro}
The question how the proton spin is made up from
the quark spin $\Sigma$, quark orbital angular momentum $L_q$,
gluon spin $S_g$, and gluon orbital angular momentum $L_g$
\begin{equation}
\label{angmom}
J = \frac{1}{2} \Sigma + L_q + S_g + L_g
\end{equation}
is one of the central issues in nucleon structure physics~\cite{seh74,ji97}.
In the constituent quark model with only one-quark operators,
also called additive quark model,
one obtains $J=\Sigma/2 =1/2$, i.e., the proton spin is the sum
of the constituent quark spins and nothing else.
However, experimentally it is known that only about 1/3 of the proton
spin comes from quarks~\cite{aid12}. The disagreement between the
additive quark model
result and experiment came as a surprise because the same model
accurately described the related proton and neutron magnetic moments.
We showed that the failure of the additive quark model to describe
the quark contribution to proton spin correctly is due to its neglect
of three-quark terms in the axial current~\cite{Hen11}.
The first step is to realize~\cite{Gur64} that a general SU(6) spin-flavor
operator ${\tilde \Omega}^{R}$ acting on the ${\bf 56}$
dimensional baryon ground state
supermultiplet must transform according to one of the irreducible
representations $R$ contained in the direct product
\begin{displaymath}
\label{directproduct}
\bar{{\bf 56}} \times {\bf 56}
= {\bf 1} + {\bf 35} + {\bf 405} + {\bf 2695}.
\end{displaymath}
The ${\bf 1}$ dimensional representation (rep)
corresponds to an SU(6) symmetric operator,
while the ${\bf 35}$, ${\bf 405}$,
and ${\bf 2695}$ dimensional reps characterize respectively, first, second,
and third order SU(6) symmetry breaking. Therefore, a general SU(6) symmetry
breaking operator for ground state baryons has the form
\begin{equation}
\label{genop}
{\tilde \Omega}
= {\tilde \Omega}^{\bf 35} +
{\tilde \Omega}^{\bf 405} + {\tilde \Omega}^{\bf 2695}.
\end{equation}
The second step is to decompose each SU(6) tensor ${\tilde \Omega}^R$
in Eq.(\ref{genop})
into SU(3)$_F\times$SU(2)$_J$ subtensors ${\tilde \Omega}^R_{(F,2J+1)}$,
where $F$ and $2J+1$ are the dimensionalities of the flavor and spin reps.
One finds ~\cite{Hen11,Beg64a} that a flavor singlet $(F=1)$ axial
vector $(J=1)$ operator
${\tilde \Omega}^{R}_{({\bf 1}, {\bf 3})}$
needed to describe baryon spin,
is contained {\it only} in the $R={\bf 35}$ and $R={\bf 2695}$
dimensional reps of SU(6).
The third step is to construct quark operators transforming
as the SU(6) tensor
${\tilde \Omega}^R_{({\bf 1},{\bf 3})}$.
In terms of quarks, the SU(6) tensors on the right-hand side of
Eq.(\ref{genop}) are represented respectively by one-, two-,
and three-quark operators~\cite{Leb95}. We found the following
uniquely determined one-quark ${\bf A}_{[1]}$ and
three-quark ${\bf A}_{[3]}$ flavor singlet axial currents~\cite{Hen11}
\begin{eqnarray}
{\tilde \Omega}^{\bf 35}_{({\bf 1},{\bf 3})} & = &
{\bf A}_{[1]} = A \, \sum_{i=1}^3 \ {\b{\sigma}}_{i}, \nonumber \\
{\tilde \Omega}^{\bf 2695}_{({\bf 1},{\bf 3})} &=&
{\bf A}_{[3]} = C \, \sum_{i \ne j \ne k}^3
\ {\b{\sigma}}_i \cdot {\b{\sigma}}_j \ {\b{\sigma}}_{k},
\end{eqnarray}
where $\b{\sigma}_i$ is the Pauli spin matrix of quark $i$. The constants
$A$ and $C$ are to be determined from experiment.
The most general flavor singlet axial current compatible with broken SU(6)
symmetry is then
\begin{equation}
\label{total}
{\bf A} = {\bf {A}}_{[1]} + {\bf {A}}_{[3]}
= A \, \sum_{i=1}^3 \ {\b{\sigma}}_{i} +
C \, \sum_{i \ne j \ne k}^3 \ {\b{\sigma}}_i \cdot {\b{\sigma}}_j
\ {\b{\sigma}}_{k}.
\end{equation}
The additive quark model corresponds to $C=0$ and $A=1$.
The three-quark operators are an effective description of quark-antiquark
and gluon degrees of freedom. Prior to our investigation,
the role of two-body gluon exchange currents was studied
in the nucleon spin problem~\cite{Bar06,tho09} in more
elaborate models but with similar results
for the nucleon. The relation between these approaches has not yet
been clarified.
\subsection{Quark spin contribution to baryon spin}
\label{sec:3}
By sandwiching the flavor singlet axial current ${\bf A}$ of Eq.(\ref{total})
between standard SU(6) baryon wave functions~\cite{Clo} we obtained
for the quark spin contribution to the spin of octet and
decuplet baryons~\cite{Hen11}
\begin{eqnarray}
\label{matrixelementsspin}
\Sigma_{1}: & = &
\langle B_8 \uparrow \vert {\bf A}_z \vert B_8 \uparrow \rangle = A - 10\, C,
\nonumber \\
\Sigma_{3}: & = &
\langle B_{10} \uparrow \vert {\bf A}_z \vert B_{10} \uparrow \rangle =
3\,A + 6\,C,
\end{eqnarray}
where $B_8$ ($B_{10}$) stands for any member of the
baryon flavor octet (decuplet). Here, $\Sigma_1$ ($\Sigma_3$)
is twice the quark spin
contribution to octet (decuplet) baryon spin. Our theory predicts
the same quark contribution
to baryon spin for all members of a given flavor multiplet, because
the operator in
Eq.(\ref{total}) is by construction a flavor singlet that
does not break SU(3) flavor symmetry.
On the other hand, SU(6) spin-flavor symmetry is broken as reflected by the
different expressions
for flavor octet and decuplet baryons.
We then constructed from the operators in Eq.(\ref{total})
one-body ${\bf A}^q_{[1] \, z}$ and three-body ${\bf A}^q_{[3] \, z}$
operators of flavor $q$ acting only on $u$ quarks and $d$ quarks~\cite{Hen11}
\begin{eqnarray}
\label{u-quark1and3b}
{\bf A}_z^u
& = & A \sum_{i=1}^{3} \b{\sigma}^u_{i\, z}
+2 C \sum_{i \ne j \ne k}^{3}
{\b{\sigma}}^u_i \cdot {\b{\sigma}}^d_j\ {\b{\sigma}}^u_{k\, z}, \nonumber \\
{\bf A}_z^d
& = & A \sum_{i=1}^{3} \b{\sigma}^d_{i\, z} +
C \sum_{i \ne j \ne k}^{3}
{\b{\sigma}}^u_i \cdot {\b{\sigma}}^u_j\ {\b{\sigma}}^d_{k\, z}.
\end{eqnarray}
For the $u$ and $d$ quark contributions to the spin of the proton we
obtained
\begin{eqnarray}
\label{flavordecomp}
\Delta u & = &
\langle p \uparrow \vert \,
{\bf A}_{[1]\, z}^u + {\bf A}_{[3]\, z}^u \vert p \uparrow \rangle
= \phantom{-}\frac{4}{3}\, A - \frac{28}{3} \, C, \nonumber \\
\Delta d & = &
\langle p \uparrow \vert \,
{\bf A}_{[1]\, z}^d + {\bf A}_{[3]\, z}^d \vert p \uparrow
\rangle
= -\frac{1}{3}\, A - \frac{2}{3} \, C.
\end{eqnarray}
These theoretical results were compared with the combined deep inelastic
scattering and hyperon $\beta$-decay experimental data, from
which the following quark spin contributions to the
proton spin were extracted~\cite{aid12}
\begin{displaymath}
\Delta u = \phantom{-}0.84 \pm 0.03,\qquad
\Delta d = -0.43 \pm 0.03, \qquad
\Delta s = -0.08 \pm 0.03.
\end{displaymath}
The sum of these spin fractions
$\Sigma_{1_{exp}}=\Delta u + \Delta d + \Delta s = 0.33(08)$ is considerably
smaller than expected from the additive quark model, which gives $\Sigma_1=1$.
Solving Eq.(\ref{flavordecomp}) for $A$ and $C$ fixes
the constants $A$ and $C$ as
\begin{equation}
A = \phantom{-}\frac{1}{6}\, \, \Delta u - \frac{7}{3}\, \Delta d,
\qquad
C = -\frac{1}{12}\, \Delta u - \frac{1}{3}\, \Delta d.
\end{equation}
Inserting the experimental results for $\Delta u$ and $\Delta d$
we obtain $A=1.143(70)$ and $C=0.073(10)$ and from Eq.(\ref{matrixelementsspin})
\begin{eqnarray}
\Sigma_{1} & = & A - 10\, C = 1.14 - 0.73 = 0.41(12), \nonumber \\
\Sigma_{3} & = &3\,A + 6\,C =3.42 + 0.45 = 3.87(22)
\end{eqnarray}
compared to the experimental result $\Sigma_{1_{exp}}= 0.33(08)$.
For octet baryons, the three-quark term is of the same importance
as the one-quark term because of the factor 10 multiplying $C$.
It is interesting that for decuplet baryons, quark spins
add up to 1.3 times the additive quark model value $\Sigma_3 = 3$.
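The numerical chain above is short enough to reproduce in a few lines;
a sketch (ours) using the central values only, without the error
propagation:
\begin{verbatim}
du, dd = 0.84, -0.43     # experimental Delta u, Delta d
A = du/6 - 7*dd/3        # = 1.143
C = -du/12 - dd/3        # = 0.073
sigma1 = A - 10*C        # octet:    ~0.41
sigma3 = 3*A + 6*C       # decuplet: ~3.87
print(A, C, sigma1, sigma3)
\end{verbatim}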
\subsection{Quark orbital angular momentum contribution to baryon spin}
\label{sec:4}
We then applied~\cite{Hen14} the spin-flavor operator analysis
of Sect.~\ref{sec:3} to quark orbital angular momentum $L_z$
using the general operator of Eq.(\ref{total}) for $L_z$ with new
constants $A'$ and $C'$
\begin{eqnarray}
\label{matrixelements2}
L_z(8) & = &
\langle B_8 \uparrow \vert { L}_z \vert B_8 \uparrow \rangle
= \frac{1}{2} \left ( A' - 10\, C' \right ), \nonumber \\
L_z(10) & = &
\langle B_{10} \uparrow \vert {L}_z \vert B_{10} \uparrow \rangle
= \frac{1}{2} \left ( 3\,A' + 6\,C' \right ).
\end{eqnarray}
Assuming that the gluon total angular momentum
$S_g+L_g \approx 0$ is small~\cite{aid12}
we obtained from Eq.(\ref{angmom})
\begin{eqnarray}
\label{matrixelements3}
L_z(8) & = & \frac{1}{2} - \frac{1}{2} \Sigma_1 = 0.30, \nonumber \\
L_z(10)& = & \frac{3}{2} - \frac{1}{2} \Sigma_3 = -0.44.
\end{eqnarray}
Eq.(\ref{matrixelements2}) and Eq.(\ref{matrixelements3}) yielded for the
parameters $A'=1-A=-0.143$
and $C'=-C=-0.073$.
Next, we calculated the orbital angular momentum carried by $u$ and
$d$ quarks in the proton in analogy to Eq.(\ref{flavordecomp})
\begin{eqnarray}
\label{flavordecomp_orb_proton}
L_z^u(p)& = & \frac{1}{2} \left (
\frac{4}{3}\, A' - \frac{28}{3} \, C' \right )=0.25,
\nonumber \\
L_z^d(p) & = & \frac{1}{2} \left ( -\frac{1}{3}\, A' - \frac{2}{3} \, C'
\right )=0.05.
\end{eqnarray}
For the total angular momentum carried by quarks we got
$J^u(p)=\frac{1}{2}\Delta u + L_z^u(p)=0.42+0.25=0.67$ and
$J^d(p)=\frac{1}{2}\Delta d + L_z^d(p)=-0.22+0.05=-0.17$.
Our results for $J^u(p)$ and $J^d(p)$
are consistent with those of Thomas~\cite{tho09}
who finds $J^u(p)=0.67$ and $J^d(p)=-0.17$ at the low energy (model) scale.
Applying the $u$ and $d$ quark operators in Eq.(\ref{u-quark1and3b}) to the
$\Delta^+$ state we obtain
\begin{eqnarray}
\label{flavordecomp_orb_Delta}
L_z^u(\Delta^+)& = & \frac{1}{2} \left ( 2 \, A' + 4 \, C' \right )=-0.29,
\nonumber \\
L_z^d(\Delta^+)& = & \frac{1}{2} \left ( A' + 2 \, C' \right )=-0.15.
\end{eqnarray}
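A compact numerical sketch (ours, central values only) of
Eqs.(\ref{matrixelements2})--(\ref{flavordecomp_orb_Delta}):
\begin{verbatim}
Ap, Cp = 1 - 1.143, -0.073     # A' = 1 - A, C' = -C
Lz8   = 0.5*(Ap - 10*Cp)       # octet:    ~ 0.30
Lz10  = 0.5*(3*Ap + 6*Cp)      # decuplet: ~ -0.44
Lzu_p = 0.5*(4*Ap/3 - 28*Cp/3) # proton u: ~ 0.25
Lzd_p = 0.5*(-Ap/3 - 2*Cp/3)   # proton d: ~ 0.05
Lzu_D = 0.5*(2*Ap + 4*Cp)      # Delta+ u: ~ -0.29
Lzd_D = 0.5*(Ap + 2*Cp)        # Delta+ d: ~ -0.15
\end{verbatim}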
We suggested an interpretation of Eq.(\ref{flavordecomp_orb_proton})
and Eq.(\ref{flavordecomp_orb_Delta}) in terms of the geometric
shapes of these baryons
as depicted in Fig.~\ref{figure:shapes}.
Previously, by studying the electromagnetic $p\to \Delta^+$
transition in various baryon structure models, we have found
that the proton has a positive intrinsic quadrupole moment $Q_0(p)$
corresponding to a prolate intrinsic charge distribution whereas the
$\Delta^+$ has a
negative intrinsic quadrupole moment of similar magnitude $Q_0(\Delta^+)
\approx -Q_0(p)$
corresponding to an oblate charge distribution~\cite{hen00b}.
This appears to be
consistent with our findings for the quark orbital angular
momenta $L^{u}_z$ and $L^{d}_z$ in both systems as qualitatively
shown in Fig.~\ref{figure:shapes}.
\begin{figure}[th]
\centerline{\includegraphics[width=8cm]{orbital_ang_mom.pdf}}
\caption{\label{figure:shapes}
Qualitative picture of the $u$ and $d$ quark distributions in the
proton (left) and $\Delta^+$ (right).
In the proton, most of the
quark orbital angular momentum is carried by the $u$ quarks
and relatively little by the $d$ quarks.
This is consistent with a linear (prolate or cigar-shaped)
quark distribution with
the $u$ quarks at the periphery and the $d$ quark near the origin.
In contrast, in the $\Delta^+$, the $u$ quark orbital angular momentum
is just twice that of the $d$ quark. This is
consistent with a planar (oblate or pancake-shaped) quark distribution,
in which each quark has the same distance from the origin.}
\end{figure}
In summary, using a broken spin-flavor symmetry
based parametrization of QCD, we calculated the quark spin and orbital angular momentum contributions to total baryon spin for the octet and
for the first time also for the decuplet.
For flavor octet baryons, we demonstrated that three-quark
operators reduce the standard quark model prediction based on
one-quark operators from $\Sigma_1 =1$ to
$\Sigma_1 = 0.41(12)$ in agreement with the experimental result.
On the other hand, in the case of flavor decuplet baryons, three-quark
operators enhance the contribution of one quark operators from
$\Sigma_3=3$ to $\Sigma_3=3.87(22)$.
Assuming that the gluon contribution to baryon spin is small,
we suggested a qualitative interpretation of the positive
and large $u$ quark and small
$d$ quark orbital angular momenta in the proton in terms of a prolate
quark distribution corresponding to a positive intrinsic quadrupole moment.
In the case of the $\Delta^+$, the $u$ and $d$ quarks have negative orbital
angular momenta of the same magnitude per quark, corresponding to an oblate quark distribution that gives rise to a negative intrinsic quadrupole moment.
\begin{figure}[h]
\vspace{0.2 cm}
\centerline{\includegraphics[width=4.5cm]{Ernest.pdf}}
\caption{Ernest Mark Henley (1924-2017).}
\label{figure:Ernest}
\end{figure}
\section{Epilogue}
The last time I saw Ernest was in Seattle in the summer of 2013.
We discussed the connection between quark orbital angular momentum
and the nonsphericity of the nucleon within the context of a harmonic oscillator quark model. Ernest was in good health and he told me that he was still commuting to his office by bike but that his wife did not approve.
Looking back, I am very proud to have had the honour of working with Ernest Henley. He was a very good scientist with a unique gift for cutting through formalism and getting to the heart of the matter. His textbooks ``Subatomic physics''~\cite{Fra74} and the more advanced ``Nuclear and particle physics''~\cite{Fra75} are masterpieces of clarity and pedagogy. Ernest Henley will always be a role model, not only as an ingenious physicist but also as a human being. He will be missed very much by everybody who had the good fortune to know him.
\section{Introduction}
\label{intro}
The \textit{Kepler} observatory began science operation on May 12,
2009, with the main scientific goal of discovering exoplanet
candidates transiting their host stars. The \textit{Kepler} Input
Catalog (KIC, \citealt{Brow11}) includes $\sim$150\,000 targets. The
\textit{Kepler} telescope is a defocused 0.95-/1.4-m Schmidt camera
with a field of view of about 100 square degrees.
For a detailed description of the \textit{Kepler} mission and design,
see the \textit{Kepler} technical documents web
page\footnote{\href{https://archive.stsci.edu/kepler/documents.html}{https://archive.stsci.edu/kepler/documents.html}}
and \citet[and references therein]{Koch10}. Here we provide a brief
review of the relevant characteristics. On the telescope focal plane
there is an array of 21 modules, each of which is associated with two
2k$\times$1k CCDs. Each CCD is read out in two 1k$\times$1k channels,
for a total of 84 independent channels over the whole focal plane. On
January 12, 2010, one module (MOD-3) failed, and a second one (MOD-7)
failed in February 2014, so currently only 76 channels are operative.
The pixel scale is $\sim$4 arcsec pixel$^{-1}$, and the deliberate
defocusing causes the average point-spread function (PSF) to extend
across several pixels (except in the area close to the center of the
field). Even so, many features of the PSF are undersampled, even in
the outskirts of the field. Knowing the exact shape of the PSF is the
key for high precision photometry in crowded fields. The
determination of an appropriate PSF for the \textit{Kepler} images
represents the main effort of this work.
Due to limitation in telemetry, the scientific data were downloaded
once a month using a maximum data-transfer rate of approximately 550
kB/s. For this reason the \textit{Kepler} spacecraft conducts its own
pre-reduction of the data, and sends only a small portion of the
exposed pixels. Around each target, a small area (a ``stamp'') of
various dimensions (typically a few pixels square) is read out with an
integration time of 6.02 s. Every 270 exposures, the stamps are
co-added on board to create a long-cadence stamp of about 29 minutes
of total integration time; short cadences involve adding 9 exposures
for a one-minute integration time. A time series of such long- or
short-cadence stamps is called a target-pixel file (TPF). When several
targets of interest are in a relatively small patch of sky, multiple
contiguous stamps are collected together to form a so-called
super-stamp. In the original \textit{Kepler} field, two such
super-stamps were taken around the stellar clusters NGC~6791 and
NGC~6819.
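For orientation, the quoted cadence lengths follow from the per-frame
timing once a readout overhead is included; assuming a per-frame
readout time of roughly 0.52 s (our assumption, not stated above),
\begin{displaymath}
270 \times (6.02 + 0.52)\;{\rm s} \simeq 1766\;{\rm s} \approx 29.4\;{\rm min},
\qquad
9 \times (6.02 + 0.52)\;{\rm s} \simeq 59\;{\rm s} \approx 1\;{\rm min}.
\end{displaymath}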
In 2012 and 2013, the failures of two reaction wheels that were used
to maintain accurate, stable spacecraft pointing, forced NASA to
re-design the mission. After a period of study and evaluation, NASA
approved an extension to the mission (named \textit{Kepler}-2.0,
hereafter, \textit{K2}, \citealt{How14}), essentially for as long as
the two remaining reaction wheels continue to operate or until the
fuel is exhausted.
The mission was cleverly designed to use the radiation pressure from
the Sun to balance the spacecraft drift, allowing it to observe four
fields per year close to the Ecliptic. Each of these fields
corresponds to a so called \textit{K2}
Campaign\footnote{\href{http://keplerscience.\-arc.\-nasa.gov\-/K2\-/Fields\-.shtml}{http://keplerscience.\-arc.\-nasa.gov\-/K2\-/Fields\-.shtml}},
and can be continuously observed for $\sim$75 days.
While the two functional reaction wheels control the pitch and yaw,
the thruster needs to be fired every $\sim$6 hours to control the roll
angle of the field. This operation mode causes significantly larger
jitter than in the original \textit{Kepler} main mission. Therefore,
although the \textit{K2} data collection procedures are similar to
those adopted for the original \textit{Kepler} mission, the reduced
pointing capabilities impose the adoption of 2-4 times larger target
stamps. Because the \textit{K2} stamps include 4-16 times more
pixels, the number of observed targets is proportionally reduced from
the $\sim$150\,000 targets of the original \textit{Kepler} field to
$\sim$10\,000-20\,000 objects for the various \textit{K2} fields. A
new list of target objects is defined for each campaign: the Ecliptic
Plane Input
Catalog\footnote{\href{https://archive.stsci.edu/k2/epic/search.php}{https://archive.stsci.edu/k2/epic/search.php}}
(hereafter, EPIC).
Despite the changes introduced in the \textit{K2} mission, several notable
results have been achieved, from exoplanet discovery and
characterization (e.g., \citealt{Cros15,FM15,Van15}) to
asteroseismology (\citealt{Her15,Ste15}) and stellar astrophysics
(\citealt{Arm15,Bal15,Kra15,LaC15}).
\subsection{The purpose of this study}
In this study we apply our expertise on PSF photometry and astrometry
on dithered and undersampled images in crowded fields
\citep{AK00,AK06} to extract high-precision photometry of stars in the
two Galactic open clusters (OCs) M\,35 and NGC~2158, which happened to
lie within the \textit{K2} Campaign 0 (C0) field.
The temporal sampling and coverage of \textit{Kepler} and \textit{K2}
missions and their high photometric precision could be an invaluable
resource for different stellar cluster fields, for example
gyrochronology (\citealt{Bar07,Mam08,Mei11,Mei13,Mei15}), stellar
structure and evolution (e.g., age and Helium content from detached
eclipsing binaries as done by \citealt{Bro11}) or asteroseismology
membership (\citealt{Ste11}). Another interesting topic concerns
exoplanets. We could improve our knowledge about the exoplanets in
star clusters, in particular on how the environment (chemical
composition, stellar density, dynamical interaction) can affect their
formation, evolution and frequency
(\citealt{Gil00,Moc04,Moc06,Adam06,Wel08,Nasc12,Qui12,Qui14,Mei13,Bru14}).
Until now, most of the published studies based on \textit{Kepler} and
\textit{K2} data have focused on isolated, bright objects. Focusing on
\textit{K2} data, photometry on such bright objects is well described
in the literature. The main difference between the methods concern the
mask determination, the stellar centroid measurement and the
subsequent detrending algorithms to improve the photometric precision
(for a review of the \textit{K2} methods adopted by the different
teams, see \citealt{Huang15}). However, the potential scientific
information on faint objects and on stars in the super-stamp crowded
regions has not been completely exploited.
In this paper we intend to obtain the most accurate possible models
for the \textit{Kepler} PSFs and to use them to explore the light
curves (LCs) of the sources in the densest regions that have been and
will be imaged by \textit{Kepler} and \textit{K2}. The \textit{Kepler}
main mission includes 4 OCs (NGC~6791, NGC~6811, NGC~6819, NGC~6866);
many more clusters have been and will be observed during the
\textit{K2} campaigns, which will also include the Galactic center and
the Bulge (Campaign 7 and 9, respectively).
Thanks to our PSF models, for the first time on \textit{Kepler}
images, it will be possible to obtain precise photometry for stars in
crowded fields, and down to $K_{\rm P}$$\sim$24. Having access to
accurate \textit{Kepler} PSFs and comprehensive catalogues from
high-angular-resolution ground-based imaging allows us to subtract
neighbours before measuring the flux of target stars, thus giving us
better corrections for dilution effects that might result in
under-estimates of the true radius of transiting/eclipsing objects.
Indeed, the combination of PSFs and catalogues allows {\it all}
reasonably bright sources within the stamps to be measured accurately.
Finally, even without ground-based imaging, accurate PSFs, combined
with aperture photometry, will allow better identification of blends.
We will illustrate a few examples of mismatched variables in the EPIC
catalogue, showing that the real variable is a different, much fainter
source (see Sect.~\ref{rmslit}).
\section{Image reconstruction}
\label{datared}
This pilot paper makes exclusively use of pixels within channel 81 and
collected during \textit{K2} Campaign 0, focusing on the OCs M\,35 and
NGC~2158. We downloaded all the \textit{K2} TPFs, which contain the
complete time series data, for both these clusters from the ``Mikulski
Archive for Space Telescopes'' (MAST).
In our approach we find it more convenient to work with reconstructed
images of the entire channel, putting all saved pixels into a common
reference frame, rather than working with separate stamps. This gives
us a better sense of the astronomical scene and enables us to work
with all pixels collected at the same epoch in a more intuitive way. We
assigned a flag value to all CCD pixels not saved in any stamp.
We wrote a specialized \texttt{\small PYTHON} routine to construct an
image for each cadence number of the TPFs and the corresponding
\textit{Kepler} Barycentric Julian Day (KBJD) was defined as the
average KBJD of all the TPFs with the same cadence number. For C0
channel 81 we thus reconstructed a total of 2422 usable images. Each
channel-81 image is a 1132$\times$1070 pixel$^2$ array. The value
assigned to each pixel in each image is given by the \texttt{FLUX}
column of the corresponding TPF. To cross-check that the reconstructed
images were correct, we compared them with full-frame images of the
field (which were also available from the MAST).
Figure~\ref{fig1} gives an overview of our data set. We show the
coverage and the resolution of the \textit{K2} stacked image obtained
from the 2422 images, together with the Schmidt stacked image from \citet[see
Sect.~\ref{AIC} for the description of the catalogue and its
usage]{Nar15}.
\section{Point-spread function modeling}
\label{PSF}
Even taking into account the large defocusing of the \textit{Kepler}
camera, its PSFs are still undersampled. Indeed, \textit{Kepler} PSFs
are not simple 2-dimensional Gaussians, and several of the PSF's
fine-structure features are severely undersampled. If not correctly
modeled, these substructures can introduce sources of systematic
errors into the measurement of the positions and fluxes of the studied
sources. Furthermore, if a PSF model is not sufficiently accurate,
any attempted neighbour-subtraction results in spurious residuals and
consequent systematic errors.
\cite{AK00}, hereafter AK00, introduce a convenient formalism to model
the PSF. Rather than model the \textit{instrumental} PSF as it impacts
the pixels, they model the \textit{effective} pixel-convolved PSF
(ePSF) in a purely empirical way using a simple look-up table
formalism. Their PSF is similar to the \textit{pixel response
function} (PRF) described in \cite{Bry10}. One of the issues that
AK00 found in modeling the undersampled \textit{Hubble Space
Telescope} (\textit{HST}) WFPC2 PSF is that such PSFs suffer from a
degeneracy between the positions and the fluxes of the point
sources. Solving this issue requires an appropriate calibration data
set in which stars are observed in different pixels of the detector
and thereby sample different sub-regions of those pixels.
The AK00 approach involves taking a dithered set of exposures and
empirically constructing a self-consistent set of star positions and a
well-sampled PSF model that describes the distribution of light in the
pixels. Such a data set was taken for \textit{Kepler} during
\textit{Kepler}'s early commissioning phase (see \citealt{Bry10} for a
description), but unfortunately it has not yet been made available to
the public (though it may be within a few months according to Thomas
Barclay, private communication).
Given the urgent need for PSFs to make optimal use of the \textit{K2}
data, we decided to do the next best thing and construct an accurate
set of star positions, so that properly sampled PSFs can be extracted
from \textit{Kepler} images. The main-mission \textit{Kepler} data are
not suitable for this, since they have very little dither and each
star provides only one sampling of the PSF. The \textit{K2} data are
actually better for PSF reconstruction purposes, since the loss of the
reaction wheels means that the spacecraft is no longer able to keep a
stable pointing. Every $\sim$6.5 h, a thruster jet is fired to
re-center the boresight position to its nominal position. As a
consequence, the stellar positions continuously change during
\textit{K2} observations, with each star sampling a variety of
locations within its central few pixels. Moreover, channel 81 contains
the large super-stamp covering M\,35 and NGC~2158 (our main targets)
with a large number of high signal-to-noise (SNR) point sources. We
will see in the next sections that this mapping is not optimal, but it
is the best available so far.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{fig2.ps}
\caption{(\textit{Top-left}): dither-pattern outline of the 2422
images used in our analysis. The gray grid represents the pixel
matrix in the master-frame reference system. The thick-line gray
rectangle encloses the points plotted in the \textit{Top-right}
and \textit{Bottom-left} panels. (\textit{Top-right}): $y$-offset
variation during C0. We excluded the 10 points around (524,508)
with the largest offset with respect to the average value to
better show the time variation of the $y$-offset. The azure points
are those images used for the ePSF modeling (see
text). (\textit{Bottom-left}): as on \textit{Top-right} but for
the $x$ offsets. (\textit{Bottom-right}): dither-pattern pixel
phase. The center of the pixel (dark gray square) is located at
(0,0). The pixel was divided into a 5$\times$5 grid of elements
(thin gray lines) and, in each such element, we selected six images
(when possible) to map that sub-pixel region as homogeneously as
possible.}
\label{fig2}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{fig3_rid.ps}
\caption{Effective-PSF samplings at the first (\textit{Left}
column), second (\textit{Middle} column) and last (\textit{Right}
column) iteration. On the \textit{Top}-row panels we show the
location of the estimated value of the PSF with respect to the
center of the stars, placed at (0,0). At the beginning the star
position was computed using the photocenter. From the second
iteration, the sampling becomes more uniform. On the
\textit{Middle}- and \textit{Bottom}-row panels we show the ePSF
profile along the $x$ and $y$ axes for a thin slice with
$\mid\Delta y\mid$$<$0.01 and $\mid\Delta x\mid$$<$0.01,
respectively. In all panels we plotted only 10\% of the available
points, for clarity.}
\label{fig3}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{fig4_rid.ps}
\caption{Pixel-phase errors progression. On the \textit{Left} column
we show the first, in the \textit{Middle} column the second and in
the \textit{Right} column the last iteration of the ePSF
modeling. In the \textit{Top} row we show the pixel-phase errors
along the $x$ axes, in the \textit{Bottom} row the errors along
the $y$ axes. As in Fig.~\ref{fig3}, we plotted only 10\% of the
points.}
\label{fig4}
\end{figure*}
\subsection{Initial assessment of the dithered pointings}
The complicated drift-and-repositioning process inherent in the
\textit{K2} data collection results in a very uneven sampling of each
star in pixel phase\footnote{The pixel phase of a star is its location
with respect to the pixel boundaries and can be expressed as the
fractional part of its position: ${\rm PIXEL\, PHASE}=x_i-{\rm
int}(x_i+0.5)$.}. There are many observations at the initial
phase, and few at later phases. We note that even with the
repeated drift, a star typically samples its pixels along a line,
rather than evenly across the face of a pixel, so even the achieved
dither is less than ideal. In order to make the dither sampling as
even as possible, we selected a subset of images (out of the 2422
exposures described in the previous section) in order to evenly map
the astronomical scene across pixel phases. To do this, we used the
\textit{empirical} approach of \cite{Ande06} to construct an initial
PSF model that was spatially constant across each detector. Such a PSF
is not ideal for our ultimate purposes, but it provides better
positions than a crude centroid approach and will allow us to identify
a subset of images that can be used to extract the best possible PSF.
For each exposure, we made one empirical-PSF model for the entire
channel because most of the stars are located in the M\,35/NGC~2158
super-stamp and the spatial variability is not significant for this
initial purpose. With such PSFs, we were able to measure positions and
fluxes for all sources. We then built a common reference system
(hereafter master frame) by cross-identifying the stars in each
image. We used six-parameter linear transformations to bring the
stellar positions as measured in each exposure into the reference
system of the master frame. At the first iteration, the master frame
was chosen using as reference one of the exposures of our sample. We
then adopted a clipped-average of the single-frame positions
transformed into the master frame in order to improve the definition
of the master frame itself, and re-derived improved
transformations. This process was iterated until the master-frame
positions did not significantly change from one iteration to the next.
The transformations between each frame and the master frame allowed us
to analyze the dither-pattern.
In Fig.~\ref{fig2} we show this pattern along with its time
evolution. The dither pattern was made by transforming the same pixel
in each exposure into the master-frame reference system. Since the
behaviour of the geometric distortion varies across the detector,
the dither-pattern outline would change if a different pixel were
chosen. For our purpose, however, this representation is sufficient
to illustrate the pattern.
It is clear that the dithering places the same star at a range of
locations in its central pixels. While the dither coverage along the
$x$ axis is reasonably uniform, along the $y$ axis the bulk of the
2422 exposures is located in a narrow area. We constructed a
homogeneous mapping of the pixel-phase space (bottom-right panel of
Fig.~\ref{fig2}) as follows. We divided the pixel-phase space into a
5$\times$5 grid of elements and, in each element, we selected by hand six
exposures in order to include that sub-pixel region in our PSF
construction. This was possible in almost all the cells. We ended up
with 154 images out of 2422. This is a compromise between the need to
have an adequate number of samplings for the ePSF, and the necessity
to map the entire pixel space homogeneously, avoiding over-weighting
of any sub-pixel region (by using the whole data set which is very
heterogeneous).
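A minimal \texttt{\small PYTHON} sketch (ours; the actual selection
described above was done by hand) of the pixel-phase binning and the
per-cell selection of up to six images:
\begin{verbatim}
import numpy as np

def pixel_phase(x):
    """Fractional offset from the nearest pixel center, [-0.5, 0.5)."""
    return x - np.floor(x + 0.5)

def select_even_coverage(x, y, n_cells=5, n_per_cell=6):
    # x, y: master-frame positions of one reference pixel per exposure
    px, py = pixel_phase(x), pixel_phase(y)
    ix = np.minimum(((px + 0.5) * n_cells).astype(int), n_cells - 1)
    iy = np.minimum(((py + 0.5) * n_cells).astype(int), n_cells - 1)
    keep = []
    for cx in range(n_cells):
        for cy in range(n_cells):
            idx = np.flatnonzero((ix == cx) & (iy == cy))
            keep.extend(idx[:n_per_cell])   # up to six per cell
    return np.sort(np.array(keep, dtype=int))
\end{verbatim}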
\subsection{Building the \textit{effective}-PSF}
With this subset of images, we were finally able to construct a
reliable PSF model. For all exposures, we assumed the PSF to be
spatially constant within our super-stamps, and to have no temporal
variations. For a given star of total flux $z_{\ast}$ located at the
position ($x_{\ast}$,$y_{\ast}$), the value of a given pixel $P_{i,j}$
close to such star is defined as: \looseness=-2
\begin{displaymath}
P_{i,j}=z_{\ast}\cdot \psi(i-x_{\ast},j-y_{\ast})+s_{\ast}
\phantom{1} ,
\end{displaymath}
where $s_{\ast}$ is the local sky background and $\psi(\Delta x,\Delta
y)$ is the PSF, defined as the fraction of light that would be
recorded by a pixel offset by $(\Delta x,\Delta
y)=(i-x_{\ast},j-y_{\ast})$ from the center of the star. By fitting
the PSF to an array of pixels for each star in each exposure, we can
estimate its $x_{\ast}$, $y_{\ast}$ and $z_{\ast}$ for each
observation of each star. The equation to be used to solve for the PSF
can be obtained by inverting the above equation: \looseness=-2
\begin{displaymath}
\psi(\Delta x,\Delta y)=\frac{P_{i,j}-s_{\ast}}{z_{\ast}} \phantom{1} .
\end{displaymath}
With knowledge of the flux and position of each star in each exposure,
each pixel in its vicinity constitutes a sampling of the PSF at one
point in its two-dimensional domain. The many images of stars, each
centered at a different location within its central pixel, give us a
great number of point-samplings and enable us to construct a reliable
ePSF model. This is described in detail in AK00. Here, we just
provide a brief overview of the key points: \\
$\bullet$ We made a common master frame by cross-identifying bright,
isolated\footnote{The adjective ``isolated'' should be considered in a
relative way. Within a 1$\times$1-pixel square on a \textit{K2}
exposure there could be more than one star, as we will see in the
Fig.~\ref{fig20}. Therefore by ``isolated'' we mean that, in the
\textit{K2} image, there are no other obvious stars close to the
target.} stars in each image and computing their clipped-averaged
positions and flux. On average, we have 650 good stars per exposure to
use. \\
$\bullet$ We transformed these average master-frame positions back
into the frame of each image in order to place the samplings more
accurately within the PSF domain (since each measure is an average of
154 images). \\
$\bullet$ We converted each pixel value in the vicinity of a given
star in a given image into an ePSF sampling, and modeled the variation
of the PSF across its 2-dimensional domain. We have on average 650
reliable stars per exposure, for a total of 10$^5$ samplings. \\
$\bullet$ The available ePSF was used to measure an improved position
and flux for the stars in each image. \\
The whole procedure was iterated fifteen times, after which we noticed
that the overall improvements were negligible (i.e., the pixel-phase
errors did not change from one iteration to the next). The final ePSF
was a 21$\times$21 array of points that maps the 5$\times$5-pixel
region around a star (as in AK00, our ePSF model was supersampled by a
factor 4).
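A schematic \texttt{\small PYTHON} sketch (ours) of one
sampling-accumulation pass; \texttt{images} and \texttt{stars} are
hypothetical containers for the exposure arrays and the per-exposure
$(x_\ast, y_\ast, z_\ast, s_\ast)$ lists, and stars are assumed to lie
away from the image edges:
\begin{verbatim}
import numpy as np

SUPER = 4                      # supersampling factor
SIZE  = 5 * SUPER + 1          # 21x21 grid over 5x5 pixels

def accumulate_epsf(images, stars):
    vals = [[[] for _ in range(SIZE)] for _ in range(SIZE)]
    for img, star_list in zip(images, stars):
        for xs, ys, zs, ss in star_list:
            for j in range(int(ys) - 2, int(ys) + 3):
                for i in range(int(xs) - 2, int(xs) + 3):
                    dx, dy = i - xs, j - ys  # offset from center
                    u = int(round((dx + 2.5) * SUPER))
                    v = int(round((dy + 2.5) * SUPER))
                    if 0 <= u < SIZE and 0 <= v < SIZE:
                        # one sampling: psi = (P_ij - sky) / flux
                        vals[v][u].append((img[j, i] - ss) / zs)
    # robust average of the samplings at each grid point
    return np.array([[np.median(c) if c else 0.0 for c in row]
                     for row in vals])
\end{verbatim}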
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{psf_comp.ps}
\caption{FWHM analysis. In the \textit{Left} and \textit{Middle}
panels, we show a contour representation of channel-81 central PRF
and our average ePSF, respectively. The red dashed lines show the
direction along which we measured the FWHM ($-$45, 0, 45, and 90
degrees). The PRF and the ePSF are plot with their original
supersampling factor (50 and 4, respectively) but their size in
the images are the same (in real \textit{Kepler} pixels). It is
clear that our ePSF is limited to the center part, due to the data
set used that was not designed to model it. In the \textit{Right}
panel, we show in different colours the median of these FWHM
values for the five PRFs (black, magenta, orange, green and red)
and for our \textit{K2}/C0 ePSF (azure). The rectangle around each
point shows the minimum-to-maximum range of the FWHM values along
the considered directions.}
\label{figpsfcomp}
\end{figure*}
In Fig.~\ref{fig3} and \ref{fig4}, we show the result of our
procedure. At the beginning, the ePSF sampling is not homogeneous and
the shape of the ePSF is not well constrained. Even in the second
iteration, the sampling began to improve and the ePSF became
smoother. The same behavior can be seen in the pixel-phase
errors. Note that the pixel-phase errors appear to be better behaved
along the $y$ axis than along the $x$ axis. However, it is not clear
whether the available pixel-phase sampling simply allows us to see the
errors in $x$ better than those in $y$. Again, when the
\textit{Kepler} PSF-characterization data set becomes public, it will
allow for a much better characterization and verification of the PSF.
\subsubsection{Comparison with \textit{Kepler}-main-mission PRFs}
Following a suggestion by the referee, we investigated whether our
\textit{K2}/C0 ePSF is broader than the \textit{Kepler}-main-mission
PRF (\citealt{Bry10}). The broadening is expected because the
\textit{K2} pointing jitter is larger than in the main mission.
We measured the full-width half-maximum (FWHM) of the five channel-81
PRFs (one for each corner and one for the central region of the CCD)
and that of our average ePSF along different directions. In
Fig.~\ref{figpsfcomp}, we show a contour representation of our ePSF
and the central PRF of \cite{Bry10}. The ePSF size in this
representation is the same as for the PRF (in \textit{Kepler}
pixels). Our ePSF is limited to a small region around the center and
does not model the ePSF wings, since the crowding in the studied
super-stamp does not allow them to be modeled. In the
right panel of Fig.~\ref{figpsfcomp} we show the median FWHM values
for the PRFs/ePSF. We found that our \textit{K2}/C0 ePSF is broader
than the \textit{Kepler}-main-mission PRFs.
\subsection{Perturbed ePSF}
The basic assumption of the original AK00 method is that the ePSF is
constant in time and identical for all the images. This is of course
an ideal condition and ---at some level--- it is never the case;
surely not for the \textit{HST} nor for any other instrument we have
used. There are also other subtle effects. The selection of a
uniformly-distributed subsample of images in pixel-phase space could
have introduced some biases. For example, some of the dither
pointings in the less-populated regions of the pixel-phase space could
have been taken while the telescope was still in motion, resulting in
a more ``trailed'' ePSF than the average ePSF. In any case, as a
working hypothesis, and having detected no obvious trailed images in
the subsample that we selected for the ePSF determination, we
proceeded under the approximation that these effects are not larger
than the general, ePSF variations as a function of time. Indeed, the
shape of the \textit{Kepler} ePSF clearly changes over time, as one
can infer in Fig.~\ref{fig3} from the vertical broadening of the ePSF
around the center (rightmost middle and bottom panels).
Figure~\ref{fig5} illustrates the temporal variation of the PSF. We
colour-coded the samplings of the final ePSF (top panels) according to
the epoch of observation (see bottom panel). There are clear trends
that are not simply monotonic.
In order to suppress as much as possible the impact of the temporal
dependencies of the ePSF we introduced a perturbation of the average
ePSF. This perturbation of the ePSF was first described in
\cite{AK06}, and can be summarized as follows. In each image, we fit
and subtracted the current ePSF model to high SNR stars. We then
modeled (with a look-up table) the normalized residuals of these ePSF
fits and added these tabulated residuals to the initial ePSF to obtain
an improved ePSF model (for a more recent application and a detailed
description of the method see \citealt{Bel14}).
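A minimal sketch (ours) of the perturbation step;
\texttt{residual\_samplings} is a hypothetical container for the
normalized fit residuals of the high-SNR stars of one image, already
mapped onto the ePSF grid:
\begin{verbatim}
import numpy as np

def perturb_epsf(epsf, residual_samplings):
    # residual_samplings[v][u]: list of (P_ij - model_ij)/zs values
    # falling on ePSF grid point (u, v) for one image
    delta = np.zeros_like(epsf)
    for v in range(epsf.shape[0]):
        for u in range(epsf.shape[1]):
            if residual_samplings[v][u]:
                delta[v, u] = np.median(residual_samplings[v][u])
    return epsf + delta   # perturbed, image-specific ePSF
\end{verbatim}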
In Fig.~\ref{fig6} we show an example of the time-variation
adjustments of the average ePSF for two images. The improvements in
position and flux of the perturbed ePSFs over a constant ePSF for all
the 2422 images are quantified in Fig.~\ref{fig7}. We measured
position and flux of all sources in each exposure with and without
perturbing the ePSF. The PSF-fit process (a least-square fit) is
achieved with a program similar to the \texttt{\small img2xym\_WFI}
described in \cite{Ande06} that measures all sources, from the
brightest to the faintest objects (down to a minimum-detection
threshold set by the user), in seven iterations. From the second
iteration onward, the fitting procedure is also performed on neighbour-subtracted
images, in order to converge on reliable positions and fluxes for
close-by objects. We then made two master frames, one for each of the
two different approaches, by cross-identifying the stars in each of
our 2422 images. We computed the 3$\sigma$-clipped median value of the
following quantities: \textsf{QFIT}\footnote{The \textsf{QFIT}
represents the absolute fractional error in the PSF-model fit to the
star \citep{Ande08}. The lower the \textsf{QFIT}, the better the PSF
fit.}, the 1D positional rms, and the photometric rms. These values
were calculated in 1-magnitude bins and, for an appropriate
comparison, we used the same set of stars for the two samples. In most
cases, we found that the difference between the use of perturbed or
unperturbed ePSF is not negligible. Using the perturbed rather than
the average ePSF to measure position and flux of a star in an image
improves the PSF fit because the former is a representation of a star
in that particular image, while the latter is the representation of a
star averaged over the entire C0.
In the following, we assume that the spatial variation of the ePSF
across the super-stamp region of the channel-81 detector where NGC~2158
and M\,35 are imaged is negligible, therefore we will use only one
ePSF model per image. A substantial improvement to the ePSF will be
achieved when the PSF characterization data are made available, so
that we can properly account for the spatial variability.
\section{Photometry in \textit{K2} reconstructed images}
\label{LC}
The main purpose of this effort is to extract precise photometry from
main-mission and \textit{K2}-mission pixel data for sources in crowded
fields and at the faintest end. These two issues are very closely
related, since each 4$^{\prime\prime}$$\times$4$^{\prime\prime}$ pixel
includes many faint sources that we need to take into account. Because
of this crowding, classical aperture photometry has major obvious
limitations.
Different authors showed that photometry on neighbour-subtracted
images leads, on average, to a better photometric precision than on
the original images (see, e.g., \citealt{Mon07} and reference
therein). By subtracting a star's neighbours before measuring its
flux, it is possible to avoid including (or at least to reduce the
impact of) light that could contaminate the true flux of the
target. The knowledge of positions and flux of all sources in the
field is therefore fundamental to our approach.
In order to obtain the best photometric precision with \textit{K2}
data, we used the same method as described in \cite{Nar15} to which we
refer for a more detailed description. This method makes use of
accurate PSF models for each exposure and of a (ground-based) input
list to disentangle from the flux of a given star the contribution of
its close-by sources. The more complete the input list with all
detectable objects in the field, the better will be the final result
of the method. A corollary is also that the transformations of the
input-list positions and fluxes into the individual-exposure reference
system need to be known with high accuracy, as well as the PSF, in
order to subtract the neighbours as well as possible. In the following
we provide the adopted key ingredients.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{fig5.ps}
\caption{Time variation of the ePSF shape in the 154 images we used
for the modeling. In the \textit{Bottom} panel, the central-peak
value of the ePSF (interpolated from the samplings) as a function
of the time interval relative to the first-image epoch. In the
\textit{Top-left} and \textit{Top-right} panels we show the ePSF
samplings as in the \textit{Right} column of Fig.~\ref{fig3},
using the same colour-codes for the time of image acquisition.}
\label{fig5}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{fig6.ps}
\caption{Difference between the average ePSF and the perturbed ePSF
for an image at the beginning (\textit{Left}) and at the end
(\textit{Right}) of Campaign 0. In these two examples, the
variation ranges between $\sim$$-0.5$ per cent (blue colour) and
$\sim$1.4 per cent (red colour) of the total flux.}
\label{fig6}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{fig7.ps}
\caption{\textsf{QFIT} (\textit{Bottom}), 1D positional rms
(\textit{Middle}) and photometric rms (\textit{Top}) as a function
of the ``instrumental'' magnitude of the master frame with (azure
points) and without (black points) perturbing the average ePSF to
account for the time-dependent variations. Each point represents
the median value in 1-magnitude bins. The error bars represent
the 68.27$^{\rm th}$ percentile of the distribution around the
median value divided by $\sqrt{N}$, with $N$ the number of points. The
instrumental magnitude in each single catalogue is defined as
$-2.5\times \log (\sum \rm{counts})$, where $\sum\rm{counts}$ is
the sum of the total counts under the fitted PSF. Since we used
the column \texttt{FLUX} while building the full-frame channel
images, these counts are actually fluxes (for this reason we used
the double quotes in the label). Hereafter, we will omit the
double quotes in the text.}
\label{fig7}
\end{figure}
\subsection{The Asiago Input Catalogue (AIC)}
\label{AIC}
Our input list is described in great detail in \cite{Nar15}. It comes
from a set of white-light (i.e., filterless) observations collected at
the Asiago Schmidt telescope. It includes 75\,935 objects. At variance
with \citeauthor{Nar15}, we used all stars measured in white light,
and not only those found in both white-light and $R$-filter lists (see
Sect.~3.5 of their paper). Hereafter, we will refer to this catalogue
as the \textit{Asiago Input Catalogue}, or AIC, which is available at
the ``Exoplanet and Stellar Population Group'' (ESPG)
website\footnote{\href{http://groups.dfa.unipd.it/ESPG/M35NGC2158.html}{http://groups.dfa.unipd.it/ESPG/M35NGC2158.html}}.
The AIC was constructed by measuring the position and flux of each
source found in the Schmidt stacked image via PSF fitting. This input
list was then transformed into the photometric system of a reference
image (among those of the Schmidt data set). The catalogue was purged
for various artifacts, such as PSF bumps and fake detections along the
bleeding columns. The purging is not perfect and it is a compromise
between excluding true, faint stars around bright objects, and
including artifacts in the catalogue. Of the 75\,935 sources included
in the input list, $\sim$77$\%$ could be measured reasonably well with
the \textit{K2} data set.
The stacked Schmidt image has higher angular resolution than the
\textit{Kepler}/\textit{K2} images, allowing us to locate faint
sources whose flux could pollute the pixels of a nearby star. The
relative astrometric accuracy of the AIC is also sufficiently accurate
to allow us to pinpoint a star in any given \textit{K2} image with an
accuracy down to about 20 mas (0.005 \textit{Kepler} pixel). Details
about the absolute astrometry, the stacked images, and other
information of the AIC can be found in \cite{Nar15}.
\subsection{Photometry with and without neighbours}
The procedure for LC extraction is the same as in \cite{Nar15} and can
be summarized as follows. For each star in the AIC, hereafter ``target
star'', we computed six-parameter, local linear transformations to
transform the AIC position of the target into that of each individual
\textit{K2} image. In order to compute the coefficients of the
transformations, we used only bright, unsaturated, well-measured
target's neighbours within a radius of 100 pixels (target star
excluded). If there were not at least 10 neighbour stars within such
radius, we increased the searching area progressively, out to the
whole field. Local transformations were used to minimize the effect of
the geometric distortion, since it does not vary significantly on
small spatial scales.
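A sketch (ours) of the least-squares solution of one such
six-parameter transformation from matched reference-star positions:
\begin{verbatim}
import numpy as np

def fit_affine(x_ref, y_ref, x_img, y_img):
    """x_img = a*x_ref + b*y_ref + c; y_img = d*x_ref + e*y_ref + f."""
    A = np.column_stack([x_ref, y_ref, np.ones_like(x_ref)])
    (a, b, c), *_ = np.linalg.lstsq(A, x_img, rcond=None)
    (d, e, f), *_ = np.linalg.lstsq(A, y_img, rcond=None)
    return a, b, c, d, e, f

def apply_affine(p, x, y):
    a, b, c, d, e, f = p
    return a*x + b*y + c, d*x + e*y + f
\end{verbatim}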
These reference stars were also used to transform the AIC white-light
magnitude of the target into the corresponding instrumental
\textit{K2} magnitude in each given exposure. The AIC magnitudes in
white light and our $K_{\rm P}$ instrumental magnitudes are obtained
from instruments with a rather similar total-transmission curve and,
as a first approximation, we can safely use a simple zero-point as
photometric transformation between AIC and \textit{K2} photometric
systems.
We extracted the photometry from the original and from the
neighbour-subtracted \textit{K2} images. The neighbour-subtracted
images were created by subtracting from the original images the
flux-scaled perturbed ePSF of all AIC sources within a radius of 35
\textit{Kepler} pixels ($\sim$2.3 arcmin) from the target star. We
postpone the discussion about the quantitative improvements in the
photometry by using neighbour-subtracted images instead of the
original images to Sect.~\ref{photprecwwo}.
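A schematic sketch (ours) of the subtraction step; \texttt{eval\_epsf}
is a hypothetical interpolator of the supersampled, perturbed ePSF at
a real-pixel offset from a star's center, and \texttt{sources} a
hypothetical dictionary of AIC positions and fluxes in the image frame:
\begin{verbatim}
import numpy as np

def subtract_neighbours(img, sources, target_id, eval_epsf,
                        radius=35.0):
    clean = img.copy()
    xt, yt = sources[target_id]['x'], sources[target_id]['y']
    for sid, s in sources.items():
        if sid == target_id:
            continue
        if np.hypot(s['x'] - xt, s['y'] - yt) > radius:
            continue
        # remove the flux-scaled ePSF over a box around the neighbour
        for j in range(int(s['y']) - 2, int(s['y']) + 3):
            for i in range(int(s['x']) - 2, int(s['x']) + 3):
                clean[j, i] -= s['flux'] * eval_epsf(i - s['x'],
                                                     j - s['y'])
    return clean
\end{verbatim}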
On the neighbour-subtracted images, the target flux of each AIC object
was measured using aperture photometry with four aperture sizes (1-,
3-, 5- and 10-pixel radius) and using ePSF-fit photometry. On the
original images, we used only 3-pixel-aperture and ePSF-fit
photometry. These fluxes were stored in a LC file, which also contains
other information such as the KBJD.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{fig8.ps}
\caption{(\textit{Left}): 3-pixel-aperture LC of star \# 1881
(EPIC~209183478) after each step of our correction. From panel (a)
to (d): raw LC normalized to its median flux; LC model (black
points) put on top of the raw LC (light-gray points);
model-subtracted LC; final detrended LC. In panel (e) we show the
comparison between the phased (with a period of $\sim$10.6 days)
LC before (left) and after (right) the correction. It is clear
that the drift-induced scatter in the LC is reduced and more
details arise, e.g., the LC amplitude of this object changed
during the C0. (\textit{Right}): Outline of our correction. In
panel (f) we show the cell and grid-point locations around the
star \# 1881 loci on the channel 81 over the entire Campaign
0. The thick, dark-gray lines mark the pixel boundaries. The
magenta $*$ marks the star location on the channel at a given
time. The squares represent the median values of the
model-subtracted flux in each of the 0.05$\times$0.05-pixel-square
bins in which each pixel is divided, colour-coded accordingly to
the colour bar on top. In panel (g) we zoom-in around the bulk of
the points to highlight the sub-pixel elements of the grid (thin,
light-gray lines). For the star position at a given time, we used
the four surrounding grid points to perform the bi-linear
interpolation (sketched with the arrows) and compute the
correction.}
\label{fig8}
\end{figure*}
\section{Photometric calibration into the $K_{\rm P}$ System}
\label{KPphot}
In order to calibrate the instrumental magnitudes we extracted
from the individual \textit{K2} exposures to the \textit{Kepler}
Magnitude System ($K_{\rm P}$), we determined and applied a simple
local zero-point. For each photometric approach (1-, 3-, 5-, 10-pixel
aperture and PSF), we made a catalogue with absolute positions from
the AIC and with magnitudes obtained from 3$\sigma$-clipped median
values of the \textit{K2} instrumental magnitudes as measured in each
LC (when available). We then cross-matched these catalogues with the
EPIC obtained from the MAST archive. We computed the zero points as
the 3$\sigma$-clipped median value of the difference between our
magnitudes and the EPIC $K_{\rm P}$ magnitudes. We used only those
bright, unsaturated stars that in our catalogue are within three
magnitudes from the saturation and for which the $K_{\rm P}$ magnitude
in the EPIC was obtained from `gri' photometry. We chose this specific
photometric method among the different methods adopted to compute the
$K_{\rm P}$ magnitude in the EPIC due to the larger number of sources
in common between this EPIC subsample and our well-measured-star
sample. As in \cite{Aig15} and \cite{Lund15}, the zero-points we found
for our photometric methods are between 24.7 (1-pixel aperture) and
25.3 (the other photometric methods).
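A minimal sketch (ours) of the clipped zero-point estimate from the
matched star lists:
\begin{verbatim}
import numpy as np

def clipped_zero_point(kp_epic, m_instr, nsigma=3.0, niter=5):
    """3-sigma-clipped median of (K_P - instrumental magnitude)."""
    diff = np.asarray(kp_epic) - np.asarray(m_instr)
    keep = np.ones(diff.size, dtype=bool)
    for _ in range(niter):
        med, sig = np.median(diff[keep]), np.std(diff[keep])
        keep = np.abs(diff - med) < nsigma * sig
    return np.median(diff[keep])
\end{verbatim}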
\section{Detrending of \textit{K2} light curves}
\label{detrend}
The unstable spacecraft pointing results in a well-defined motion of
the stars across the pixel grid. A combination of intra- and
inter-pixel sensitivity variation leads to a correlation between this
motion and the ``raw'' flux of each star that must be corrected in
order to increase the \textit{K2} photometric accuracy and
precision. Different methods have been developed to correct such
systematic effect, e.g., the self-flat-fielding approach of
\cite{V&J14}, the Gaussian process of \cite{Aig15} or the simultaneous
fit for all systematics of \cite{FM15}.
In order to detrend the LCs from the drift-induced effects, we took
into account all usable channel-81 exposures collected during the
entirety of C0, including those taken during the first part of the
campaign, when the fine guiding was still in progress (which caused
the stars to be shifted by up to 20 pixels from their average
position, see Fig.~\ref{fig2}). This is important, since the number of
points to be used for the detrending increases, and it could also be
useful to detect variable stars with periods of $\sim$35 days (the
duration of the C0 after the second safe mode). Briefly, our
correction was performed by making a look-up table of corrections and
applying it with simple bi-linear interpolation.
An overview of our detrending approach is shown in
Fig.~\ref{fig8}. For each target star, we made a model of the raw LC
trend, normalized by its median flux (panel a), by applying a
running-average filter with a window of $\sim$10 hours. The model
(panel b) was then subtracted from the raw LC with the aim of removing
the intrinsic stellar variability. In this way, the model-subtracted
LC (panel c) reflects the systematic effect originated from the motion
of the spacecraft. The window size of the running-average filter was
chosen as a compromise between avoiding the removal of the positional
trend and still being able to model short-period variables.
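A minimal sketch of this step, assuming evenly sampled long-cadence
fluxes ($\sim$29.4 min, so that a $\sim$10-hour window corresponds to
about 20 points; names and the stand-in data are illustrative):
\begin{verbatim}
import numpy as np

def running_average(flux, window_pts=20):
    # boxcar (running-average) filter, ~10 h at long cadence
    kernel = np.ones(window_pts) / window_pts
    return np.convolve(flux, kernel, mode='same')

raw = 1000.0 + 5.0 * np.random.randn(3000)  # stand-in raw LC fluxes
norm = raw / np.median(raw)                 # panel (a)
model = running_average(norm)               # panel (b)
resid = norm - model                        # panel (c): motion systematics
\end{verbatim}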
Each pixel into which a given star fell during the motion was divided
into an array of 20$\times$20 square elements and, in each such
element, we computed the 3.5$\sigma$-clipped median of the
model-subtracted LC flux. The grid is sketched in panels (f) and (g)
of Fig.~\ref{fig8}. For any location on the CCD, the correction was
performed by applying a bi-linear interpolation between the four
surrounding grid points. The correction was not always available:
some grid elements did not contain enough points to compute it, and
in those cases no correction was applied.
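The look-up table and its bi-linear evaluation can be sketched as
follows (a simplified Python illustration under our assumptions on
the grid indexing and on the minimum number of points per element; it
is not the pipeline code):
\begin{verbatim}
import numpy as np

NSUB = 20  # sub-elements per pixel side

def build_lut(x, y, resid, nsigma=3.5, min_pts=5):
    # x, y: intra-pixel positions in [0, 1); resid: model-subtracted flux
    lut = np.full((NSUB, NSUB), np.nan)
    ix = (np.asarray(x) * NSUB).astype(int)
    iy = (np.asarray(y) * NSUB).astype(int)
    resid = np.asarray(resid)
    for i in range(NSUB):
        for j in range(NSUB):
            sel = resid[(ix == i) & (iy == j)]
            if sel.size >= min_pts:
                med, sig = np.median(sel), np.std(sel)
                lut[i, j] = np.median(sel[np.abs(sel - med) < nsigma * sig])
    return lut

def bilinear(lut, xs, ys):
    # interpolate among the four grid points surrounding (xs, ys)
    gx, gy = xs * NSUB - 0.5, ys * NSUB - 0.5
    i0, j0 = int(np.floor(gx)), int(np.floor(gy))
    tx, ty = gx - i0, gy - j0
    ii = np.clip([i0, i0 + 1], 0, NSUB - 1)
    jj = np.clip([j0, j0 + 1], 0, NSUB - 1)
    c = lut[ii][:, jj]
    if np.isnan(c).any():
        return 0.0  # correction not available for this location
    return ((1 - tx) * (1 - ty) * c[0, 0] + tx * (1 - ty) * c[1, 0]
            + (1 - tx) * ty * c[0, 1] + tx * ty * c[1, 1])
\end{verbatim}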
The whole procedure was iterated three times, each time making an
improved LC model by using the LC corrected with the available
detrending solution. In Fig.~\ref{fig8} we show the results of the
final iteration of our procedure.
Panel (e) of Fig.~\ref{fig8} shows a direct comparison of the folded
LC before (left) and after (right) the correction. The rms is improved
and allows us to see more details in the LC.
One advantage of our LC-extraction method is a more robust position
measurement. Indeed, as described in Sect.~\ref{LC}, we transformed
the position of a target star as given in the AIC into the
corresponding image reference system by using a subset of close-by
stars, target excluded, to compute the coefficients of the
six-parameter linear transformations. The local-transformation
approach reduces (on average by a factor $N^{-1}$, where $N$ is the
number of stars used) most of the systematic effects that could harm
the stellar positions (e.g., the uncorrected geometric distortion).
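A sketch of the six-parameter fit from local reference stars (a
standard least-squares affine solution; names are illustrative):
\begin{verbatim}
import numpy as np

def fit_linear6(x_ref, y_ref, x_img, y_img):
    # least-squares affine transformation: AIC frame -> image frame
    A = np.column_stack([x_ref, y_ref, np.ones_like(x_ref)])
    cx, *_ = np.linalg.lstsq(A, x_img, rcond=None)
    cy, *_ = np.linalg.lstsq(A, y_img, rcond=None)
    return cx, cy  # x' = a x + b y + c ;  y' = d x + e y + f

def apply_linear6(cx, cy, x, y):
    return (cx[0]*x + cx[1]*y + cx[2],
            cy[0]*x + cy[1]*y + cy[2])
\end{verbatim}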
The LC detrending is critical both to the removal of the
spacecraft-related systematics that degrade the \textit{K2}
photometric precision and to pushing \textit{K2} performance as close
as possible to that of the original \textit{Kepler} main mission. In
Fig.~\ref{fig9} we show the rms (defined as the 3.5$\sigma$-clipped
68.27$^{\rm th}$-percentile of the distribution about the median value
of the points in the LC) improvement after the detrending process for
the 3-pixel-aperture and PSF photometry on neighbour-subtracted
images. The rms was calculated on the normalized LC, i.e. after the LC
flux was divided by the 3.5$\sigma$-clipped median value of all the
flux measurements. As we will see in sub-section \ref{photprec3psf},
these two photometric methods are the best-performing methods at the
bright- and faint-end regime, respectively. The improvement is greater
for bright stars, i.e. stars with high SNR and better-constrained
positions; while for faint stars the effect is lost in the random
noise.
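The rms statistic used here and in the rest of the paper can be
written compactly; a minimal sketch, assuming a single clipping pass
(the iteration details are a simplification):
\begin{verbatim}
import numpy as np

def robust_rms(flux, nsigma=3.5):
    # 3.5-sigma-clipped 68.27th percentile of the distribution
    # about the median of the normalized LC
    f = np.asarray(flux, dtype=float)
    f = f / np.median(f)              # normalize the LC
    dev = np.abs(f - np.median(f))
    rms = np.percentile(dev, 68.27)
    return np.percentile(dev[dev < nsigma * rms], 68.27)
\end{verbatim}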
\subsection{A previously unknown 2.04-day periodic artifact}
\label{204d}
Several LCs turned out to exhibit a periodic drop in their fluxes that
we have not seen described anywhere. These drops are rather sharp and
resemble boxy transits; we illustrate them in Fig.~\ref{fig10}. The
period of this effect is $\sim$2.04 days, and the drops last for
exactly six images in each of the uninterrupted sub-series within
C0. Not all the stars in the affected images show the drop feature.
We extensively investigated the affected and unaffected stars on
individual \textit{K2}/C0/channel-81 images and found that the effect
could be column-related. The amount of drop in the flux is not always
the same and it correlates with magnitude. We have not found any
description of such an effect either in the \textit{Kepler} manual or
in the literature.
We suspect that it might be due to an over-correction applied to
misbehaving columns. This over-correction might be the result of
electronic activity related to the periodic momentum dumps of the two
remaining reaction wheels through thruster firings, which happen
every two days (\citealt{How14,Lund15}). For example, it could be a
change in the reading rate.
Another possibility, discussed with the referee, is that this effect
could originate from a non-identical \textit{Kepler}-pipeline
processing of contiguous TPFs, which can create some unphysical
discontinuities. Such a 2.04-d periodic effect could only be detected
when dealing with more than one TPF at a time, as we did in our
reconstructed-image approach.
Lacking more engineering data and detailed knowledge of the on-board
pre-reduction pipeline, we have limited ourselves to simply describing
and correcting these effects empirically. We first made a model of the
LC trend (by applying a running-average filter with a window of
$\sim$6 hours) and computed the median value of the model-subtracted
LC flux for affected and unaffected data points. If the difference
between these median values was greater than the model-subtracted LC
rms (computed using only the unaffected points), we marked the LC as
flux-drop-affected and corrected it. The correction to add was
computed as the difference between the model of the LC built with
only the unaffected points and the model obtained using only the
flux-drop-affected points (see Fig.~\ref{fig10}).
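A compact sketch of this empirical correction (illustrative Python
under our assumptions; the mask \texttt{affected} would come from the
known 2.04-d ephemeris of the drops, each sub-series is assumed to be
longer than the smoothing window, and the standard deviation stands
in for the clipped rms):
\begin{verbatim}
import numpy as np

def boxcar(flux, window):
    return np.convolve(flux, np.ones(window) / window, mode='same')

def correct_flux_drop(time, flux, affected, window=12):
    # window ~ 6 h at long cadence; `affected` flags suspected cadences
    resid = flux - boxcar(flux, window)
    drop = np.median(resid[~affected]) - np.median(resid[affected])
    if drop <= np.std(resid[~affected]):
        return flux  # LC not significantly affected: leave it alone
    # models from the unaffected / affected points only, interpolated
    m_un = np.interp(time, time[~affected], boxcar(flux[~affected], window))
    m_af = np.interp(time, time[affected], boxcar(flux[affected], window))
    out = flux.copy()
    out[affected] += (m_un - m_af)[affected]
    return out
\end{verbatim}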
The drawback of our correction is obviously that any true variable
star with a significant flux drop every 2.04 days was considered as
flux-drop affected and corrected accordingly. Of course, since this
flux-drop is not periodic across safe-mode sub-series, it is possible
to distinguish these events from true eclipses.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{fig9.ps}
\caption{Photometric rms as a function of the $K_{\rm P}$ magnitude
before (red points) and after (azure points) applying our
detrending procedure. We plot only the neighbour-subtracted
photometry from the two measurements that show the best rms in the
bright (3-pixel aperture, \textit{Top} panel) and faint (PSF,
\textit{Bottom} panel) regime. For $K_{\rm P}$$>$15.5 (vertical,
gray dashed line) we show only 15\% of the points for clarity. The
vertical, gray solid line is set at the saturation threshold
$K_{\rm P}$$\sim$11.8.}
\label{fig9}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{fig10.ps}
\caption{Example of a LC (AIC \# 8600, 5-pixel-aperture photometry)
affected by the flux-drop effect described in
Sect.~\ref{204d}. From \textit{Top}, in the first-row panels we
show the first sub-series (before the first safe mode) of points
of the C0/channel-81 LC (\textit{Left}) and the corresponding
phased LC with a period of 2.04 days (\textit{Right}). In the
second and third rows we show the same as above but for the second
(between the first and the second safe modes) and third (after the
second safe mode) sub-series of the LC and the phased LC. The
black dots represent the unaffected points, the red triangles are
the photometric points affected by the flux drop. The azure line
is the LC model. Note that the phase at which the flux drop occurs
is not the same in the different parts of the LC, meaning that
this effect could be related to the instrumentation, and that it
is reset after each break in the C0. Finally, in the
\textit{Bottom} panel we plot the corrected LC.}
\label{fig10}
\end{figure*}
\subsection{The role of \lowercase{e}PSF\lowercase{s} in \textit{K\lowercase{epler}}/\textit{K2} images}
\label{PSFrole}
The ePSF is not only more suitable for performing photometry in crowded
regions and for faint stars, but it can also be used as an additional
diagnostic tool to discern among exposures that are most affected by
some systematic effects, such as the drift motion of the
spacecraft. In the following we show that, by taking advantage of all
the information that we have from the ePSFs adjusted for each
exposure, we can also select the best exposures (that correspond to
points in the LCs) to search for variability (Sect.~\ref{search}).
In Fig.~\ref{fig11}, we show an outline of our best-exposure selection
for a variable-candidate LC (AIC star \# 9244). It is well known that
\textit{K2}/C0 data contain a periodic, systematic effect every
$\sim$6 hours, related to the thruster-jet firings used to keep the
roll-angle in position. Indeed, high-SNR stars show a well-defined
periodic effect every $\sim$0.2452 days (and at its harmonics). In the
top-left panel of Fig.~\ref{fig11} we show the hand-selected points in
the phased LC with a period of $\sim$0.2452 days, while in the
top-right panel we show the corresponding location of such points
during C0. By phasing different LCs with this period, we noticed that
there is one group of points (red, solid triangles) that is more
scattered than the others. These points are associated with the
thruster-jet events. The remaining points, marked with different
colours and shapes, represent different portions of the LC. The less
populated groups (black, solid squares) were taken during the first
part of the campaign; while the more populated clumps (green dots)
were taken during the second part of the C0. The remaining outliers
(azure, open circles) are those points obtained during the coarse
pointing of the spacecraft, flagged in the TPFs accordingly. In
practice, we discarded all points corresponding to the first part of
C0, and the points collected during the thruster-jet events and coarse
pointing. \\
We found very good agreement between such point selection and the ePSF
(middle-left panel) peak and FWHM (middle-right panel). For example,
the exposures taken during a thruster-jet event have a less-peaked
ePSF because the exposure is blurred during the long-cadence
integration time. Hereafter, we use the adjective ``\textit{clean}''
to denote a LC based only on the stable part of the second half of the
Campaign (green points in Fig.~\ref{fig11}). We ran our variable-star
finding algorithms only on the clean LCs.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{fig11.ps}
\caption{Clean LC definition. In the \textit{Top-left} panel, we
show the phased LC with a period of 0.2452 days for the object \#
9244 in the AIC. We mark with black filled squares the points
obtained before the second safe mode during the C0 observations
and with azure, open circles the points corresponding to the
``coarse pointing'' flag in the original TPFs. The red triangles
highlight the exposures that are associated with a thruster-jet
event. Finally, with green dots we show the best 1629 out of 2422
points (exposures) for the LC analysis. In the \textit{Top-right}
we show the normalized flux as a function of the time for the same
LC, with the points colour-coded as before. In the
\textit{Middle-left} panel we plot the peak value of the perturbed
ePSF of each exposure as a function of time. In the ePSF peak-FWHM
plane (\textit{Middle-right} panel) it is also clear that there is
a correlation between every effect that could harm the
observations and the ePSF. This is a hint of the usefulness of
the ePSF parameters as a diagnostic tool. Finally, in the
\textit{Bottom} panels we show the LC for the bright object
illustrated above (\textit{Left}) and for a fainter star
(\textit{Right}, AIC \# 27269) in which we plot with green and
gray dots the good and the bad points, respectively.}
\label{fig11}
\end{figure*}
\section{Photometric precision}
\label{photprec}
We used three different parameters to help us to estimate the
photometric precision:
\begin{itemize}
\item \textit{rms} (defined as in Sect.~\ref{detrend});
\item \textit{p2p rms}. The point-to-point rms (p2p rms) is defined as
  the 3.5$\sigma$-clipped 68.27$^{\rm th}$-percentile of the
  distribution around the median value of the scatter (the difference
  between two consecutive points);
\item \textit{6.5-hour rms}. The \textit{6.5-hour rms} is defined as
  follows. We processed each available LC with a 6.5-hour
  running-average filter, and then divided it into 13-point bins. For
  each bin, we computed the 3.5$\sigma$-clipped rms and divided it by
  $\sqrt{12}$ ($\sqrt{N-1}$, with $N$ the number of points in each
  bin). The 6.5-hour rms is the median value of these rms
  measurements (see the sketch after this list).
\end{itemize}
All three parameters were calculated on the normalized LC.
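As an illustration of the last two statistics, the following minimal
Python sketch (our reading of the definitions above; the function
names, the omission of the per-bin clipping, and the detrending via
boxcar subtraction are simplifying assumptions, not the actual
pipeline code) computes the p2p rms and the 6.5-hour rms of a
normalized LC:
\begin{verbatim}
import numpy as np

def p2p_rms(flux, nsigma=3.5):
    # 68.27th percentile of |diff - median(diff)|, one clipping pass
    d = np.diff(flux)
    dev = np.abs(d - np.median(d))
    s = np.percentile(dev, 68.27)
    return np.percentile(dev[dev < nsigma * s], 68.27)

def rms_6p5h(flux, window=13):
    # remove the ~6.5-h trend, split into 13-point bins,
    # take the median of the per-bin rms / sqrt(N-1)
    trend = np.convolve(flux, np.ones(window) / window, mode='same')
    filt = flux - trend
    nbin = filt.size // window
    per_bin = [np.std(filt[k*window:(k+1)*window]) / np.sqrt(window - 1)
               for k in range(nbin)]
    return np.median(per_bin)
\end{verbatim}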
Our PSF-based, neighbour-subtracted technique has been specifically
developed to deal with crowded regions and faint stars. Therefore, we
expect substantial improvements with respect to what is in the
literature in these two specific regimes. In the following, we will
demonstrate the effectiveness of our new approach.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{fig12.ps}
\caption{Photometric rms of LCs extracted from the original (red
points in \textit{Left} panels), and the neighbour-subtracted
(azure points in \textit{Right} panels) images as a function of
the $K_{\rm P}$ magnitude. We show the rms trend for the
``isolated'' (the first and second rows from \textit{Top}) and the
``crowded'' (the third and fourth rows) samples, defined as
described in the text, for LCs obtained using a 3-pixel aperture
and the PSF. The saturation threshold is set at $K_{\rm
P}$$\sim$11.8 (vertical, gray solid line).}
\label{fig12}
\end{figure*}
\subsection{Photometry on images with and without neighbours}
\label{photprecwwo}
By subtracting stellar neighbours before measuring the flux of a given
star, we can obtain performance comparable with that achieved by
others in the literature, but in more crowded regions. In general,
within a single 4$\times$4-arcsec$^2$ pixel, there can be more than
one source contributing to the total flux. Therefore, we expect the
neighbour-subtraction method to be useful not only in stellar clusters
or fields in the Galactic bulge, but also in regions of relatively
lower stellar density.
To demonstrate this assertion, we selected two samples of stars: a
``crowded'' sample, centered on NGC~2158, and an ``isolated'' sample
drawn from five different regions where the stellar density is
lower. The two samples contain the same number of sources. We computed
the LC rms for a 3-pixel aperture and for PSF photometry with and
without subtracting stellar neighbours (Fig.~\ref{fig12}).
Without removing stellar neighbours, the light of close-by stars that
falls within the aperture increases the target flux and alters the
faint-end tail of the rms trend (naturally, the problem becomes
larger for larger apertures). The effect is more evident in crowded
regions, in particular when using aperture photometry. For the 3-pixel
aperture photometry, in the crowded region the limiting magnitude on
the original images is $K_{\rm P}\sim$18, to be compared with the
$K_{\rm P}\sim$22 limiting magnitude in neighbour-subtracted images of
the same field (second-to-the-last panels in Fig.~\ref{fig12}). In
summary, in crowded regions, by using the neighbour-subtracted images,
we obtained a more reliable stellar flux and gained about 4 $K_{\rm
P}$ magnitudes for the 3-pixel-aperture photometry. Furthermore, for
both aperture and PSF photometry, we have a lower rms for the LCs, and
the bulk of the LC-rms distribution as a function of magnitude looks
sharper. In conclusion, hereafter we will consider only
neighbour-subtracted LCs.
\subsection{Photometry on bright and faint stars}
\label{photprec3psf}
As expected, and as shown by \cite{Nar15} for ground-based data,
aperture photometry performs, on average, better on isolated, bright
stars, while the PSF photometry gives better results on faint
stars. In Table~\ref{tab:rms1} and \ref{tab:rms2} we list the rms
values in part-per-million (ppm) for each of the five photometric
methods we adopted.
As can be inferred from these Tables, for bright stars,
3-pixel-aperture photometry shows, on average, lower rms than 5- and
10-pixel-aperture photometry. The 1-pixel-aperture- and the PSF-based
photometry have almost the same trend in the faint-star regime
($K_{\rm P}\ge 15.5$), with the 1-pixel aperture showing a slightly
smaller rms for $K_{\rm P}$$>$19, while the PSF photometry performs
2-3 times better on brighter stars. In the following discussion, for
faint stars, we prefer to use PSF rather than 1-pixel-aperture
neighbour-subtracted photometry, although the two are, on average,
equivalent in terms of photometric scatter.
In Fig.~\ref{fig13} we show a comparison between the PSF and the
3-pixel aperture photometric methods. It is clear that for $K_{\rm
P}$$>$15.5 the PSF photometry performs better than the 3-pixel
aperture.
For bright stars, the 3-pixel-aperture 6.5-hour rms is below 100 ppm
between 10$<$$K_{\rm P}$$<$14, with a best value of about 30 ppm for
stars with 11$<$$K_{\rm P}$$<$12.5. For stars brighter than $K_{\rm
P}$$\sim$10, the rms increases, mainly because we are working on
heavily-saturated stars. In the faint-end regime, using PSF photometry
we obtained 6.5-hour rms 2-3 times better than using a
3-pixel-aperture photometry. At 18$<$$K_{\rm P}$$<$19, the 6.5-hour
rms is about 2600 ppm. Such precision allows us to detect a flux drop
of a few hundredths of a magnitude, such as the LC dimming due to the
exoplanet candidate TR1 discovered by \cite{Moc06}. We will further
discuss this object in Sect.~\ref{TR1sect}.
In conclusion, we find that even at the bright end our
neighbour-subtracted technique allows us to obtain performance
comparable to that in the literature, even though we are dealing with
a more crowded region than most previous studies.
\subsection{Comparison with existing works on \textit{K2} data}
\label{rmslit}
At present, there are a number of studies in the literature that are
focused on the Campaign 0 data of \textit{K2}. Here, we provide a
comparison, both using the LC rms and through a visual inspection of
the LCs, for the objects in common with previously published studies.
Before making such a comparison, we warn the reader about some aspects
that should be taken into account in the following. A fair comparison
should be made by comparing the single light curves point by
point. Unfortunately, thus far, no one has attempted to measure all
stars in the entire super-stamps; rather, it is common practice to
restrict the analysis to the outer parts of dense stamps, where the
crowding is less severe.
That said, we do have several LCs from stars in common with various
studies; in the best case \citep{V&J14}, we have 40
stars. Furthermore, by simply comparing the rms given in the different
papers, we could introduce some biases. For example, the rms in a
given magnitude bin can be inflated by the number of variable
stars within that bin; the methods adopted to compute the rms can be
different; there might be different calibration methods to transform
``raw'' magnitudes into the \textit{Kepler} photometric system. In the
latter case, stars can fall in different magnitude bins in different
papers. Finally, our neighbour-subtracted LCs are less affected by
neighbour-light contamination. As noticed above, the light pollution
would result in a brighter LC, which would move the star into a
brighter magnitude bin, and decrease its rms (because of the higher
number of counted photons).
\begin{table*}
\caption{Photometric precision of the 3-pixel-aperture- and
PSF-based photometry evaluated as described in
Sect.~\ref{photprec}. The values are given in part-per-million. We
used the clean LCs to compute these quantities. When no stars were
found in a given magnitude interval, we inserted a ``/'' in the
corresponding cell.} \centering
\label{tab:rms1}
\begin{tabular}{ccccccc}
\hline
\hline
$K_{\rm P}$ Magnitude & \multicolumn{3}{c}{3-pixel aperture} & \multicolumn{3}{c}{PSF} \\
interval & rms & p2p & 6.5-h rms & rms & p2p & 6.5-h rms \\
\hline
8 - 9 & 4365 & 1994 & 840 & / & / & / \\
9 - 10 & 4550 & 1915 & 771 & / & / & / \\
10 - 11 & 3083 & 433 & 101 & 18231 & 3918 & 2054 \\
11 - 12 & 446 & 124 & 44 & 2636 & 949 & 424 \\
12 - 13 & 744 & 168 & 62 & 3300 & 1068 & 459 \\
13 - 14 & 926 & 276 & 97 & 3200 & 983 & 439 \\
14 - 15 & 1977 & 585 & 196 & 3871 & 1085 & 489 \\
15 - 16 & 5720 & 1882 & 633 & 3988 & 1274 & 528 \\
16 - 17 & 10631 & 4731 & 1606 & 4822 & 1875 & 693 \\
17 - 18 & 21219 & 9533 & 3176 & 7214 & 3518 & 1189 \\
18 - 19 & 40885 & 19749 & 6564 & 14070 & 7936 & 2601 \\
19 - 20 & 76625 & 41163 & 13534 & 32813 & 18376 & 6150 \\
20 - 21 & 159131 & 89546 & 29876 & 71199 & 40570 & 13456 \\
21 - 22 & 327174 & 179254 & 71173 & 153875 & 89963 & 30335 \\
22 - 23 & 487721 & 282520 & 109300 & 309403 & 179357 & 73297 \\
23 - 24 & 606784 & 412491 & 140349 & 468002 & 292661 & 114074 \\
\hline
\end{tabular}
\end{table*}
\begin{table*}
\caption{As in Table~\ref{tab:rms1}, but for 1-, 5- and
10-pixel-aperture-based photometry.}
\centering
\label{tab:rms2}
\begin{tabular}{c|ccc|ccc|ccc}
\hline
\hline
$K_{\rm P}$ Magnitude & \multicolumn{3}{c}{1-pixel aperture} & \multicolumn{3}{c}{5-pixel aperture} & \multicolumn{3}{c}{10-pixel aperture} \\
interval & rms & p2p & 6.5-h rms & rms & p2p & 6.5-h rms & rms & p2p & 6.5-h rms \\
\hline
7 - 8 & / & / & / & / & / & / & 5222 & 2032 & 917 \\
8 - 9 & / & / & / & 3983 & 1758 & 802 & 3640 & 1230 & 582 \\
9 - 10 & / & / & / & 8641 & 2982 & 1558 & 6083 & 2311 & 1152 \\
10 - 11 & 9126 & 5411 & 1755 & 10449 & 4554 & 1745 & 18628 & 7103 & 3199 \\
11 - 12 & 5625 & 3552 & 1122 & 451 & 116 & 43 & 1397 & 261 & 87 \\
12 - 13 & 5594 & 3461 & 1098 & 1023 & 246 & 87 & 2972 & 1081 & 437 \\
13 - 14 & 5589 & 3482 & 1113 & 3959 & 989 & 387 & 10915 & 3876 & 1350 \\
14 - 15 & 5918 & 3484 & 1115 & 7777 & 2897 & 1043 & 15341 & 6028 & 2083 \\
15 - 16 & 6108 & 3566 & 1141 & 13369 & 5306 & 1866 & 24145 & 10945 & 3738 \\
16 - 17 & 6473 & 3743 & 1201 & 21162 & 8917 & 3056 & 44430 & 21238 & 7173 \\
17 - 18 & 8003 & 4477 & 1439 & 39276 & 17885 & 6022 & 84555 & 43223 & 14664 \\
18 - 19 & 13766 & 7833 & 2521 & 69978 & 36431 & 12105 & 170644 & 91746 & 34174 \\
19 - 20 & 30515 & 17248 & 5557 & 141595 & 77484 & 26331 & 285471 & 134808 & 69279 \\
20 - 21 & 66356 & 37814 & 12241 & 299378 & 158724 & 63791 & 404477 & 185638 & 87072 \\
21 - 22 & 141837 & 83477 & 27306 & 432749 & 225126 & 97786 & 528062 & 289481 & 116628 \\
22 - 23 & 301746 & 173745 & 66720 & 591510 & 359285 & 128364 & 587527 & 373223 & 138088 \\
23 - 24 & 447170 & 287843 & 106035 & 687907 & 511448 & 147173 & / & / & / \\
\hline
\end{tabular}
\end{table*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{fig13.ps}
\caption{Photometric rms (\textit{Top}), p2p rms (\textit{Middle})
and 6.5-hour rms (\textit{Bottom}) as a function of the $K_{\rm
P}$ magnitude derived from the 3-pixel-aperture- (red points)
and PSF-based (azure points) neighbour-subtracted LCs. The
vertical, gray dashed line is set at $K_{\rm P}$$=$15.5. For stars
fainter than $K_{\rm P}$$\sim$15.5 the PSF photometry has a lower
rms. The vertical, gray solid line is set at the saturation
threshold ($K_{\rm P}$$\sim$11.8). As a reference, we plot two
horizontal, gray solid lines at 100 and 40 parts-per-million
(ppm), respectively. As in Fig.~\ref{fig9}, for $K_{\rm
P}$$>$15.5 we show only 15\% of the points, for clarity.}
\label{fig13}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\columnwidth]{fig14a.ps}
\includegraphics[width=\columnwidth]{fig14b.ps}
\vskip 5 pt
\includegraphics[width=\columnwidth]{fig14c.ps}
\includegraphics[width=\columnwidth]{fig14d.ps}
\caption{Point-by-point comparison between our light curves (azure
points) and those from \citet{V&J14} (red) and \citet{Arm15}
(black). We show only the points imaged at the same BJD$_{\rm
TDB}$ in common among the three works.}
\label{fig14}
\end{figure*}
\subsubsection{Comparison with \citet{V&J14}}
The first published study on \textit{K2} photometry was that of
\cite{V&J14}, and was based on the Engineering data set. The Campaign
0 data set was subsequently analyzed by \cite{Van14}, who worked to
improve the photometric performance for fainter stars.
We downloaded from the MAST archive all his LCs for the targets within
channel 81, and found 40 objects in common with our catalogue. We
computed the three rms as described above. The LCs from \cite{Van14}
show, on average, a larger p2p and 6.5-hour rms than our LCs for the
variable objects, but their p2p and 6.5-hour rms for non-variable
stars was lower. While the former behavior was predictable (as
reported by \citealt{V&J14}, their self-calibrated-flat-field approach
works best on dwarfs rather than on highly-variable stars), the latter
trend was unexpected. Therefore, we also visually inspected the
location of all stars in common on the images and the regions covered
by the adopted aperture
masks\footnote{\href{https://www.cfa.harvard.edu/~avanderb/k2.html}{https://www.cfa.harvard.edu/$\sim$avanderb/k2.html}}.
For bright stars, \citet{Van14} used a circular aperture that included
several neighbour sources. For faint stars, the ad-hoc aperture that
was designed to avoid light contamination works only partially, since
the flux on the wings of the PSF can still fall inside
it. Furthermore, for blended stars the adopted aperture mask included
all of them and, finally, the EPIC catalogue does not seem complete
enough. In all these cases, the total flux of the source is increased
by the contribution of the neighbours, resulting in a higher SNR for
the source (with a consequently better Poisson rms), but this does
not correspond to the true SNR of the individual target source. Of
course, we cannot exclude that the \cite{Van14} detrending algorithm
works better, further improving the final rms. A larger sample of
stars is required for a more conclusive comparison.
We also found that at least two variable objects identified by
\cite{Van14} were mismatched/blended; they were difficult to identify
because of the limitations of the EPIC (bright, isolated objects) and
the low resolution of the \textit{K2} images (see next Section).
It is worth mentioning that \cite{Van14} releases, for each light
curve, a flag that marks the cadence numbers associated with
thruster-jet events. We compared their flag with our flag described
in Sect.~\ref{PSFrole}. Among the cadences in common between the two
works, we found that we flagged (and did not use) a smaller number of
images. This is probably due to our perturbed-PSF approach, which is
able to fit reasonably well objects imaged while the thruster-jet
effect was not severe. Nevertheless, we found good agreement between
the two thruster-jet-identification methods.
\subsubsection{Comparison with \citet{Arm15}}
\cite{Arm15} released C0 LCs obtained with a similar method and with
performance comparable to that of \cite{V&J14}. We downloaded the LCs
from their
archive\footnote{\href{http://deneb.astro.warwick.ac.uk/phrlbj/k2varcat/}{http://deneb.astro.warwick.ac.uk/phrlbj/k2varcat/}}
and computed the rms for the 12 objects we have in common. Again, we
found that our photometry provides LCs with a lower rms for almost all
these variable stars.
As the rms cannot give a direct measurement of the goodness of the
photometry, we made a visual comparison of the 12 LCs in common
between our data set and that of \cite{Arm15}. In Fig.~\ref{fig14} we
show the LC comparison for 10 of the 12 objects in common. We also
included the LCs of \cite{V&J14}, since these objects are present in
their sample too. We plotted the best photometry for all the LCs. On
average, our LCs look sharper (e.g., LC 4710).
Fig.~\ref{fig15} shows the first of the remaining two objects we have
in common. EPIC~202073445 is an eclipsing binary. The depth of our LC
is smaller than in the other two papers. However, \cite{Nar15} found
that the real eclipsing binary is another star (namely,
EPIC~209186077), very close to EPIC~202073445. To further shed light
on this ambiguity, we compared the LCs of both stars using different
kinds of photometry. The mismatched eclipsing binary does not show any
flux variation with the PSF and 1-pixel-aperture photometry. On the
other hand, the larger the aperture, the deeper the eclipse. Instead,
the ``real'' eclipsing binary shows that the eclipses become dimmer
and dimmer with increasing aperture radius, as expected since its flux
is diluted by the remaining flux of the (un-)subtracted neighbours. We
confirm that the true eclipsing binary is the one identified by
\cite{Nar15} (EPIC~209186077). This mis-identification is also present
in the eclipsing-binary catalogue of \cite{LaC15}.
There is a similar ambiguity between EPIC~202059586 and EPIC~209190225
(Fig.~\ref{fig16}). Again, comparing the PSF- and aperture-based LCs,
we found that the first object is rather an aperiodic star that shows
a flux modulation due to the latter object, while the second one is
the real variable. \\
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{fig15.ps}
\caption{LC comparison between EPIC~202073445 (LC \# 2420 in AIC)
and EPIC~209186077 (LC \# 2302). On the \textit{Top} panels we
compared our LC of the probable-blend object (EPIC~202073445) with
those found in the literature as in Fig.~\ref{fig14}. In the
\textit{Middle} panels we show the LC of the object that we found
to be the true variable (EPIC~209186077). We show only our LC
since these objects are neither in \citet{V&J14} nor in
\citet{Arm15} data set. In the \textit{Bottom} panels we finally
show our LCs colour-coded with different shades of green according
to the photometric method (PSF, 1-, 3- and 5-pixel aperture).}
\label{fig15}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{fig16.ps}
\caption{As in Fig.~\ref{fig15}, but for the case of
EPIC~202059586 (LC \# 2949) and EPIC~209190225 (LC \# 3044).}
\label{fig16}
\end{figure}
\subsubsection{Comparison with \citet{Aig15}}
\cite{Aig15} developed a different method, which is described in
detail in their paper. Their approach has some similarities with the
effort described here, e.g., the image reconstruction and the adoption
of an input list. Unfortunately, a proper comparison is not possible
since they analyzed only the Engineering data. Campaign 0 showed
different problems and lasted longer. Nevertheless, for the sake of
completeness, we compared their results with our LCs.
Because of its less-ambiguous definition, we compared the p2p rms. In
the brightest magnitude intervals (9$<$$K_{\rm P}$$<$11), \cite{Aig15}
report a better p2p rms than we derived here. In the interval
11$<$$K_{\rm P}$$<$15, our 3-pixel-aperture p2p rms ranges from 124 to
585 ppm, while their p2p rms varies between 238 and 867 ppm. For
fainter magnitudes, down to $K_{\rm P}$$\sim$19, our p2p rms is
slightly better. Moreover, our PSF-based photometry performs much
better in the magnitude interval 15$<$$K_{\rm P}$$<$19, with a minimum
p2p-rms value of 1274 ppm and a maximum of 7936 ppm, to be compared
with that of \cite{Aig15}, which increases from 1841 to 23673 ppm in
the same magnitude range. It is worth mentioning that we could
measure objects up to 5 magnitudes fainter than those measured by
\cite{Aig15}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{fig17_rid.ps}
\caption{Example of candidate-variable selection using the AoV
algorithm. Panel (a): the distribution of the periods; in black
we show the distribution after removing the spikes, in grey the
removed spikes. Panel (b): LC periods as a function of the AoV
metric $\Theta$ for all the stars; the stars that have passed the
first selection are plotted in black, the stars having periods
corresponding to the removed spikes in gray. Panel (c): periods
of light curves as a function of $\Theta$ after spikes
suppression. Red dots represent variable candidates.}
\label{fig17}
\end{figure}
\section{Variable Candidates}
\subsection{Search for Variables}
\label{search}
In order to detect candidate variable stars, we followed the procedure
adopted by \cite{Nar15}; we describe the basics of the procedure
below. We obtained the periodograms of all clean LCs using three
different tools included in the \texttt{\small VARTOOLS} v1.32
package, written by \cite{Hart08} and publicly
available\footnote{\href{http://www.astro.princeton.edu/$\sim$jhartman/vartools.html}{http://www.astro.princeton.edu/$\sim$jhartman/vartools.html}}:
\begin{itemize}
\item The first tool is the Generalized Lomb-Scargle (GLS) periodogram
(\citealt{Press92}; \citealt{Zech09}), useful in detecting
sinusoidal periodic signals. It provides the formal false alarm
probability (FAP) that we used to select variable-star candidates.
\item The second tool is the Analysis of Variance (AoV) periodogram
  (\citealt{SC89}), suitable for all kinds of variables. We used the
  associated AoV FAP metric ($\Theta$), a good diagnostic for
  selecting stars with a high probability of being variable.
\item The third tool is the Box-fitting Least-Squares (BLS)
periodogram (\citealt{KZM02}), particularly effective when searching
for box-like dips in an otherwise-flat or nearly-flat light curve,
such as those typical of detached eclipsing binaries and planetary
transits. We used the diagnostic ``signal-to-pink noise''
(\citealt{Pont06}), as defined by \cite{Hart09}, to select
eclipsing-binary and planetary-transit candidates.
\end{itemize}
Figure~\ref{fig17} illustrates the procedure used to identify
candidate variables resulting from the application of the AoV finding
algorithm. The same procedure was adopted for the GLS and BLS
finding algorithms. First, we built the histogram of the periods ($P$)
of all clean LCs, as shown in panel (a) of Fig.~\ref{fig17}. For each
period $P_0$ we computed the median of the histogram values in the
bins within an interval centered at $P_0$ and extended by $50 \times
\delta P$, where $\delta P$ is the bin width chosen to build the
histogram. We flagged as ``spike'' the $P_0$ corresponding to a
histogram value $5\sigma$ above that median, where $\sigma$ is the
68.27$^{\rm th}$ percentile of the sorted residuals from the median
value itself. These spikes are associated with spurious signals due
to systematic effects such as, e.g., the jet-firing every $\sim$5.88
hours, the long cadence at $\sim$29 minutes ($\sim$0.02 d) and the
periodicity of $\sim$2.04 days (described in Sect.~\ref{204d}) and
their harmonics. Finally, we removed from the catalogue the stars
having periods inside $P_0 \pm \delta P/2 $, keeping only those with
high $\Theta$. We performed our search over periods between 0.025 d
(slightly longer than the long-cadence sampling) and 36.5 d (the
clean-LC total time interval). The GLS, AoV and BLS input parameters
were chosen to optimize the variable finding, but they are not
perfect: we selected a sample of known variables and tuned the input
parameters to maximize the signal-to-noise ratio output by each of
the three algorithms. As a result, some variables could have been
missed because the three tasks found the highest SNR at a wrong
period, e.g., the 0.04-d alias.
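The spike-suppression step can be sketched as follows (illustrative
Python; the bin edges and the exact definition of the local scatter
follow our description above and are assumptions, not the pipeline
code):
\begin{verbatim}
import numpy as np

def flag_spikes(periods, dP, nsigma=5.0, half=25):
    # histogram of the detected periods between 0.025 and 36.5 d
    bins = np.arange(0.025, 36.5 + dP, dP)
    hist, _ = np.histogram(periods, bins=bins)
    spikes = np.zeros(hist.size, dtype=bool)
    for i in range(hist.size):
        lo, hi = max(0, i - half), min(hist.size, i + half + 1)
        local = hist[lo:hi]          # ~50*dP interval around P0
        med = np.median(local)
        sigma = np.percentile(np.abs(local - med), 68.27)
        spikes[i] = hist[i] > med + nsigma * sigma
    return bins[:-1], spikes
\end{verbatim}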
In panel (b) of Fig.~\ref{fig17}, for all clean LCs, we plot the
$\Theta$ parameter as a function of the detected period, highlighting
the objects removed because their period coincides with a spike. In
panel (c) we selected by hand the stars that have high $\Theta$.
We ran the \texttt{\small VARTOOLS} algorithms GLS, AoV, and BLS on a
list of 52\,596 light curves. We initially excluded all objects in
our input list that were outside the \textit{K2} TPFs. Then, for each
star, we selected the photometry (1-, 3-, 5-, 10-pixel aperture or
PSF) that gives the best precision at a given magnitude, according to
what was described in Section~\ref{photprec}, Fig.~\ref{fig13}, and
Tables~\ref{tab:rms1} and \ref{tab:rms2}.
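In pseudo-code, the per-star choice reduces to a magnitude threshold
(a deliberately simplified Python sketch; the actual choice follows
the tabulated rms values, method by method):
\begin{verbatim}
def best_method(kp_mag):
    # threshold follows the rms comparison of Fig. 13 (illustrative)
    if kp_mag <= 15.5:
        return '3-pixel aperture'  # bright regime
    return 'PSF'                   # faint regime
\end{verbatim}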
We combined the lists of candidate variables obtained by applying the
three variable-detection algorithms and visually inspected each of
them. We excluded all obvious blends by looking at the LC-shape and
position of each star and of its neighbours within a radius of about
11 \textit{K2} pixels ($\sim$43 arcsec).
We found a total of 2759 variables, of which 1887 passed our visual
inspection as candidates and 202 were flagged as blends. The remaining
670 LCs were difficult to visually judge. We included them into our
final catalogue, but added a warning flag indicating that their
variable nature is in doubt.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{fig18.ps}
\caption{Periods of the light curves as a function of $\Theta$ for
all the candidate variables. With red crosses we show all the
variables we found in our analysis of the \textit{K2} data; with
azure, open circles the variable stars from published catalogues,
present in our AIC, that we failed to identify among our \textit{K2}
LCs. All these variables are included in our final catalogue of
variable candidates, with different flags. Black crosses represent
all variable sources that passed our first selection described in
Fig.~\ref{fig17} but not our visual inspection (see text for
details).}
\label{fig18}
\end{figure}
\subsection{Comparison with the literature and sample improvement}
\label{literature}
In order to evaluate the completeness of our sample, we matched the
AIC with six catalogues found in the literature: \cite{Hu05},
\cite{Jeon10}, \cite{Kim04}, \cite{Meib09}, \cite{Moc04,Moc06} and
\cite{Nar15}. These studies, which all cover the M\,35/NGC~2158
super-stamp region, analyzed different aspects of stellar variability
and made use of different observational instruments and detection
techniques (e.g., \citealt{Meib09}).
Cross-correlation of these catalogues results in 658 common
entries. Of these, 555 sources are also present in our AIC. The
remaining 103 objects are missing for various reasons. A small
fraction ($\sim$10\%) of the stars either had a very bright neighbour
source that was badly subtracted (because the PSF is still far from
perfect) or were simply too close to the edge of our field of view.
For the remaining missing known variables, we found that the periods
given by the three \texttt{VARTOOLS} algorithms were close to that of
a spurious signal (spikes in the AoV vs. period plot of
Fig.~\ref{fig17}) or lay below our selection threshold (panel c in
Fig.~\ref{fig17}). Therefore they were excluded before the visual
check, even if their LCs showed a clear variable signature. As an
example, we show in Fig.~\ref{fig18}
the AoV parameter $\Theta$ as a function of LC period for all the
candidate variables\footnote{Irregular and long-period objects, such
as the cataclysmic variable V57 found by \cite{Moc04} (period
$\sim$48 days) for which we can see only one peak in the clean LC,
are plotted with an arbitrary period $\sim$36.5 days.} in which we
marked with a different colour the location of such missed
objects. Since there is no reason to exclude them, we added such
previously-discovered variables to our catalogue.
The variable stars found in the literature were also useful to refine
our sample and remove some blends and fake detections left after the
visual inspection of the LCs. Indeed, previously published studies have
made use of images with a higher resolution than that of \textit{K2},
and therefore we chose to rely on the former in ambiguous cases.
After this second refinement of our catalogue, we have 2133 candidate
variables, 444 sources with possible blends or dominated by systematic
effects, and 272 objects for which the LC is difficult to
interpret. In the final catalogue that we release with this paper, we
will properly flag all these different sources (see
Sect.~\ref{electronic}). In any case, we will release all LCs
extracted from the \textit{K2}/C0/channel-81 data, which will be
available to anyone for further investigation. In
Appendix~\ref{appendixA} we show 10 LCs as examples.
\subsection{Variable location on the M35 and NGC~2158 colour-magnitude diagrams}
We also used the $B$, $V$, $R$, $J_{\rm 2MASS}$, $H_{\rm 2MASS}$,
$K_{\rm 2MASS}$ and white-light-magnitude catalogue of \cite{Nar15}
to find the location of our variables in the colour-magnitude diagrams
(CMDs). In Fig.~\ref{fig19}, we show the $B$ vs. ($B-R$) CMDs of the
star sample used to search for variable candidates
(Sect.~\ref{search}) that have a $B$- and $R$-magnitude entry in the
catalogue. We plot candidate variables, difficult-interpretation
objects and blends in different boxes in order to better illustrate
the three samples (panels a, b and c).
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{fig19.ps}
\caption{Distribution of our 2849 candidate variables in the $B$
vs. ($B-R$) CMD. In the three panels we separately show the
likely-variable stars (green crosses in panel a), the objects of
which the LC was of difficult interpretation (orange crosses in
panel b) and the blends (red crosses in panel c).}
\label{fig19}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[clip=true, trim = 5mm 0mm 5mm 0mm, width=\textwidth, keepaspectratio]{fig20_rid.ps}
\caption{(\textit{Left}): 10$\times$10 arcmin$^2$ region around TR1
in the \textit{K2} stacked image. A green circle of radius 3
\textit{Kepler} pixels and a red square with 1 \textit{Kepler}
pixel side are centered on TR1. (\textit{Middle-left}): zoom-in of
the \textit{K2} stacked image around TR1. The covered area is
about 6.5$\times$6.8 pixel$^2$ (about 25$\times$28
arcsec$^2$). The yellow grid represents the \textit{Kepler} CCD
pixel grid. (\textit{Middle-right}): as in the
\textit{Middle-left} panel, but for the Schmidt filter-less
stacked image of \citet{Nar15}. (\textit{Right}):
ACS/WFC@\textit{HST} F606W-filter stacked image (from
\citealt{Bed10}). In all these panels, North is up and East to the
left. It is clear that the higher the image resolution, the higher
the number of detectable polluting sources within the aperture.}
\label{fig20}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{fig21.ps}
\caption{PSF-subtraction of the stellar neighbours (red circles)
around TR1 (green circle). In the \textit{Left} panel, we show all
identified sources in the AIC that must be subtracted before
measuring the TR1 magnitude. The image is the Schmidt stacked image from
which the AIC was extracted. In the \textit{Middle} and
\textit{Right} panels we show a single \textit{K2} image before
and after the neighbour subtraction, respectively. Even if the
PSF-based subtraction is not perfect due to the unavailable PSF
calibration data (see Sect.~\ref{PSF}), the light pollution is
less severe.}
\label{fig21}
\end{figure*}
\section{TR1 as a procedure benchmark}
\label{TR1sect}
\cite{Moc04,Moc06} conducted an extensive ground-based campaign to
search for transiting exoplanets in NGC~2158 and, among the
discovered variable sources, they found an exoplanet candidate with a
transit depth of $\sim$0.037 magnitudes (TR1, following their
nomenclature). \cite{Moc06} suggested that TR1 could be a hot Jupiter
with a period of 2.3629 days. The host star is an NGC~2158 member,
$V_{\rm max}\simeq$19.218, $R_{\rm max}\simeq$18.544,
$(\alpha,\delta)_{\rm J2000.0}\sim(\rm
06^h07^m35^s\!\!.4,+24^{\circ}05^{\prime}40^{\prime\prime}\!\!.8)$.
This object represents an ideal test-bed for our independent pipeline
reduction of \textit{Kepler}/\textit{K2} data in crowded
regions. Figure~\ref{fig20} shows the region of sky around TR1 in
images collected with three different instruments and completely
different resolutions. The light pollution of target neighbours is
evident.
In the left panel, we show an $\sim$10$\times$10 arcmin$^2$ region
(North up and East left) around TR1 from the \textit{K2} stacked image
of channel 81. A green circle of radius 3 \textit{Kepler} pixels and a
red square of 1 \textit{Kepler} pixel per side centered on TR1 are
barely visible in this panel.
In the middle-left panel, we show a zoomed-in image of about
25$\times$28 arcsec$^2$ centered on TR1. The yellow grid represents
the \textit{Kepler} CCD pixels. The red square shows the location of
TR1. It is clear that, without knowing the position of the target,
identifying TR1 would be hard, if not impossible.
The middle-right panel shows the same region, but from the Schmidt
filter-less stacked image of \cite{Nar15}. In this image the pixel
scale is $\sim$0.862 arcsec pixel$^{-1}$. The higher spatial
resolution of this instrument allows us to better identify TR1. We can
identify at least 11 TR1 neighbours within the 3-\textit{Kepler}-pixel
aperture. These stars badly pollute the target LC, dimming the
transit, and therefore leading to an underestimated radius if we do
not properly account for their contribution. Finally, in the
right-most panel, we show the same region as seen in an
ACS/WFC@\textit{HST} F606W-filter stacked image described in
\cite{Bed10} (from GO-10500, PI: Bedin). In this case, the pixel scale
is about 25 mas pixel$^{-1}$ and the image shows that, within a single
\textit{Kepler} pixel, there can be more than one
star. Figure~\ref{fig21} shows an individual \textit{K2} exposure
before and after subtraction of neighbour sources present in the input
list.
The \textit{Kepler} magnitude of TR1 is $K_{\rm P,
max}$$\simeq$18.35. Therefore, this object is at the faint end of
most of the previous studies. With the photometric technique developed
in this paper, TR1 becomes a well-measurable object, as we can push
our photometry almost 5 magnitudes fainter and measure stars in
crowded environments. In this magnitude interval and at this level of
crowding the 1-pixel-aperture and the PSF-based photometry are the
only two photometric approaches that allow us to measure the light
dimming in the TR1 LC.
Despite the \textit{Kepler} pixel size, and consequent neighbour
contamination, the detrended light curve from \textit{K2} data is
significantly more complete and precise than that of \cite{Moc06}.
Figure~\ref{fig22} shows the phased and detrended clean LCs. In the
top panel, we plot the original, detrended LC. In the other panels we
show the same LC after we applied a running-average filter with a
24-h window to remove any systematic trend and/or long-period LC
variability, in order to better highlight the TR1 transits. The
period we found using the BLS algorithm is 2.36489338 days. In
Fig.~\ref{fig22} we phased the light curve with double the
period.
In the phased LC, both eclipses have a more ``V''-like than
``U''-like shape\footnote{As suggested by the referee, using the
\texttt{\small EXOFAST} tool (\citealt{EGA13}) we found an impact
parameter of 0.764114 and a ``planet'' (star, in the TR1 case) to
star radius ratio of 0.184497, confirming our qualitative
classification.}. If we also consider the location of TR1 in the CMD
(\textit{Right} panel of Fig.~\ref{fig22}), it appears more likely to
be a binary with a grazing eclipse rather than a transiting exoplanet
candidate. In a future paper of this series we will make use of the
input list from \textit{HST} data to better characterize TR1. We will
also better determine the LCs for the subsample of stars that fall
within the ACS/WFC footprint of the \textit{HST} program GO-10500
field.
An important by-product of our method is that we can measure the
depth of the eclipses much better, since our transits are less diluted
by the light coming from neighbour sources than in other
approaches. Figure~\ref{fig23} shows a comparison between the
detrended clean TR1 LCs with (black points) and without (azure points)
TR1 neighbours. The uppermost LCs were those obtained with
3-pixel-aperture photometry, while those in the middle and in the
bottom were extracted using the PSF photometry. We phased these light
curves with a period of $\sim$4.73 d, as in Fig.~\ref{fig22}. In the
right panel we also binned the LCs with bins of 0.01 in phase and
computed, within each such bin, the median and the 68.27$^{\rm th}$
percentile of the distribution around the median values.
The 3-pixel-aperture-based LCs do not show any significant flux
variation. The median magnitude is three magnitudes brighter and the
rms is smaller than in the PSF-based LCs, all evidence of
neighbour-light contamination, as discussed in
Sect.~\ref{photprec}. This LC represents the result of a classic
approach found in the literature.
On the other hand, the remaining PSF-based LCs with (hereafter wLC)
and without (w/oLC) neighbours clearly present at least one flux
drop. First of all, TR1 in the wLC is about 0.25 magnitudes brighter
than in the w/oLC. The w/oLC not only shows a smaller rms and fewer
outliers, but also two distinct eclipses, while the wLC exhibits only
the eclipse around phase $\sim$0.2.
Using \texttt{\small VARTOOLS} BLS and the \texttt{\small EXOFAST}
suite (\citealt{EGA13}, assuming TR1 is a transiting exoplanet), we
estimated a TR1 eclipse depth on the w/oLC (assuming the original BLS
period of $\sim$2.36 days) of about 3.2\% and 3.4\%,
respectively. Taking into account (1) that our LC and that of
\cite{Moc06} are obtained from different pass-bands, (2) the
measurement errors, (3) the \textit{K2} integration time lasts half an
hour, and (4) the incompleteness of the AIC, we conclude that the two
values are in rather good agreement. Therefore, we confirm that our
PSF-based approach is effective in disentangling blended sources in
crowded regions.
We conclude this section by remarking that it would have been simply
impossible to extract the light curve of TR1 from
\textit{Kepler}/\textit{K2} data without using an input list from a
higher-resolution data set and subtracting the stellar neighbours.
Our method allows us to reach a 6.5-h photometric precision of
$\sim$2700 ppm in a heavily crowded environment for sources as faint
as TR1.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{fig22.ps}
\caption{TR1 clean LCs and CMD location. (\textit{Left}): from
  \textit{Top} to \textit{Bottom}: detrended LC, flattened LC (see
  text for details), phased LC with a period of 4.72978676 days and
  zoom-in on the phased LC to highlight the two minima. In the
  second-to-last panel, the red, vertical dashed lines are set at
  phase 0 and 1. (\textit{Right}): $K_{\rm P}$ vs. ($B-R$)
(\textit{Top} panel) and $B$ vs. ($B-R$) (\textit{Bottom} panel)
CMDs of NGC~2158. We considered as cluster members those stars
that are within a radius of $\sim$3 arcmin from NGC~2158
center. The red filled point marks the location of TR1. The
$K_{\rm P}$ magnitude was computed as the 3$\sigma$-clipped median
value of the PSF-based LC magnitude, calibrated into the
\textit{Kepler} photometric system (Sect.~\ref{KPphot}).}
\label{fig22}
\end{figure}
\section{Electronic material}
\label{electronic}
For each source in the AIC that falls in a \textit{K2}/C0/channel-81
TPF, we release raw and detrended light curves from the 1-, 3-, 5-,
10-pixel-aperture and PSF photometry on the neighbour-subtracted
images. We also make publicly available the AIC (with each star
flagged as in/out of any TPF) and the \textit{K2} stacked image.
For the 2849 candidate variables we
release\footnote{\href{http://groups.dfa.unipd.it/ESPG/Kepler-K2.html}{http://groups.dfa.unipd.it/ESPG/Kepler-K2.html}}
a catalogue structured as follows. Column (1) contains the ID of the
star in
the AIC. Columns (2) and (3) give J2000.0 equatorial coordinates in
decimal degrees; Columns (4) and (5) contain the pixel coordinates $x$
and $y$ from the AIC, respectively. In columns (6) and (7) we release
the instrumental Schmidt filter-less and the $K_{\rm P}$ magnitudes of
the stars. The $K_{\rm P}$ magnitude is computed as the
3$\sigma$-clipped median value of the magnitude in the LC, calibrated
using a photometric zero-point as described in Sect.~\ref{KPphot}. In
column (8) we write the photometric method with which we extracted the
LC of the object and searched for variability.
Column (9) contains a flag corresponding to our by-eye classification
of the LC (Sect.~\ref{search} and \ref{literature}):
\\ $\bullet$ 0: high probability that it is a blend;
\\ $\bullet$ 1: candidate variable;
\\ $\bullet$ 2: difficult to classify;
\\ $\bullet$ 30: star marked as ``difficult to classify'' and that, by
comparison with the literature, could be a possible blend;
\\ $\bullet$ 31: star marked as ``difficult to classify'', but for
which we found a correspondence in the literature;
\\ $\bullet$ 32: star we classified as candidate variable but it is
close to a variable star from the literature and that seems a possible
blend.
Column (10) gives the period, when available. From column (11) to (16)
we give the ID used in other published catalogues, namely
\cite{Nar15}, \cite{Hu05}, \cite{Jeon10}, \cite{Kim04}, \cite{Meib09},
\cite{Moc04,Moc06}, respectively. Finally, we provide (columns 17 to
22) the $B$, $V$, $R$, $J_{\rm 2MASS}$, $H_{\rm 2MASS}$, $K_{\rm
2MASS}$ calibrated magnitudes from the \cite{Nar15} catalogue, when
available.
\section{Conclusions and Future planned works}
In this paper we have presented our first analysis of \textit{K2}
data, focusing our effort on crowded images and faint stars. The
test-beds for our method were super-stamps covering the OCs M\,35 and
NGC~2158.
Though the lack of \textit{Kepler} calibration data -- not yet made
available to the community -- prevented us from optimizing our
algorithm, which is based on a technique we have developed over the
last 20 years on undersampled \textit{HST} images, we nevertheless
succeeded in implementing a photometric procedure based on the ePSF
concept \citep{AK00}. We have shown that, by using a crude PSF that is
spatially constant across the channel and allowing for simple temporal
variations, we were able to properly fit stellar objects. Future
efforts will be devoted to further improving the PSF model, possibly
with a better data set at our disposal.
In the second part of the paper, we focused our attention on the
light-curve-extraction method and on the consequent detrending
algorithms. The LC extraction is based on the methods we started to
develop in \citet{Nar15} and makes use of both PSF fitting and a
high-angular-resolution input list to subtract stellar neighbours
before measuring the flux of any given target. By subtracting
the light of the close-by stars, we are able to decrease the dilution
effects that significantly impact the photometry in crowded regions.
We compared aperture- and PSF-based photometric methods and found that
aperture photometry performs better on isolated, bright stars
(6.5-hour-rms best value of $\sim$30 ppm), while PSF photometry shows
a considerable improvement with respect to the classical aperture
photometry on faint stars and in crowded regions. The extension of
the capability to exploit \textit{Kepler} and \textit{K2} data sets to
fainter stars (up to 5 magnitudes fainter than what has been done in
the literature so far) and to crowded environments is the main and
original contribution of our effort.
We release our raw and detrended LCs with the purpose of stimulating
the improvement of variable- and transit-search algorithms, as well as
of the detrending methods.
We are currently working on other clusters imaged during other
\textit{K2} Campaigns, and plan to work on the densest Galactic-bulge
regions. We also plan to go back to open clusters within the
\textit{Kepler} field, i.e. NGC~6791 and NGC~6819.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{fig23.ps}
\caption{TR1 clean LCs with and without TR1 neighbours. From
\textit{Top} to \textit{Bottom}: 3-pixel-aperture LC with
neighbours (black crosses), PSF-based LC with (black dots) and
without (azure dots) neighbours. The LCs are phased with a period
of 4.72978676 d. In the \textit{Right} panels we also plot the
binned LCs. The red points are the median value in 0.01-phase bin,
while the error bars are the 68.27$^{\rm th}$ percentile of the
distribution around the median value divided by $\sqrt{N}$, where
$N$ is the number of points in each bin.}
\label{fig23}
\end{figure*}
\section*{Acknowledgments}
We thank the referee Dr. A. Vanderburg for the careful reading and
suggestions that improved the quality of our paper. ML, LRB, DN and GP
acknowledge PRIN-INAF 2012 partial funding under the project entitled
``The M4 Core Project with Hubble Space Telescope''. DN and GP also
acknowledge partial support by the Universit\`a degli Studi di Padova
Progetto di Ateneo CPDA141214 ``Towards understanding complex star
formation in Galactic globular clusters''. The authors warmly thank
Dr. Jay Anderson for the discussions and improvements to the
text. Finally, we thank Dr. B. J. Mochejska for the discussion about
TR1.
\section{Kinetic theory for inertial particle production}
The formulation of a kinetic theory approach to the problem of nonequilibrium particle production in strong fields has been advanced recently in studies of the dynamical Schwinger effect for $e^{+}e^{-}$ plasma creation in high-intensity lasers \cite{Blaschke:2008wf,Blaschke:2014fca}.
Strong and time-dependent fields govern also the particle production in ultrarelativistic heavy-ion collisions.
Here the kinetic theory approach has been developed, e.g., for studying the role of time-dependent masses (the so-called "inertial mechanism" \cite{Filatov:2008}) in the course of the chiral symmetry breaking transition for pion production \cite{Filatov:2008zz} and photon production \cite{Michler:2012mg}.
In the present work we develop the kinetic approach to nonequilibrium pion production in a time-dependent chiral symmetry breaking homogeneous field, and investigate to what extent the low-momentum pion enhancement observed in heavy-ion collisions at the CERN LHC, which is being discussed as Bose-Einstein condensation of pions \cite{Begun:2015ifa}, can be described within this formalism.
To this end we set up a detailed study of the three main processes that are intertwined in this case:
(a) the nonequilibrium $\sigma-$meson production in the time-dependent external field,
(b) the $\sigma\to \pi\pi$ decay and (c) the $\pi\pi$ rescattering and formation of the Bose condensate
\cite{Semikoz:1994zp,Voskresensky:1996ur}.
The results shall be compared with the effect observed at the LHC. Here we report on steps (a) and (b).
In order to address the question of the $\sigma$ and $\pi$ meson production in a heavy-ion collision we consider the simplified situation of one species of pions only, i.e. we neglect the isospin degree of freedom. We describe the evolution of the single-particle distribution functions $f_\sigma(t,\vec{x},\vec{p})$ and $f_\pi(t,\vec{x},\vec{p})$
as solutions of the coupled Boltzmann equations for these relativistic bosons with the dispersion relations,
\begin{equation}
\label{dispersion}
\omega_\sigma(t,\vec{p}) = \sqrt{m_\sigma(T(t))^2 + \vec{p}^2}\;,
\quad \omega_\pi(\vec{p}) = \sqrt{m_\pi^2 + \vec{p}^2}\;, \quad m_\pi = 140 \mbox{ MeV}~.
\end{equation}
For the evolution of the mass of the $\sigma$, we apply the following expression
\begin{equation}
\label{eq:sigmamass}
m_\sigma(T(t)) = [m_\sigma(0)-m_\pi] \sqrt{1-\frac{T(t)}{T_c}}+m_\pi\;, \quad T(t) = \frac{T_0~t_0}{t}\;, \quad t\ge t_0\;,
\end{equation}
where $T_c = 170$ MeV is the critical temperature for the chiral transition and $m_\sigma(0)$ is the vacuum
$\sigma$ mass.
The time at the beginning of the
3-dimensional spherical expansion is $t_0=9$ fm/c, corresponding to a Hubble flow velocity of
$v_R=0.72$ c for gold nuclei with radius $R_0=6.5$ fm.
The initial temperature $T_0=T(t_0)$ is taken to be $T_0=T_c$.
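For orientation, these background quantities are simple to tabulate numerically. The following minimal Python sketch (our illustration, not the authors' code) uses the parameter values quoted above:
\begin{verbatim}
# Background quantities: cooling law T(t), sigma mass m_sigma(T(t))
# and dispersion omega_sigma(t,p), with the parameters of the text.
import numpy as np

T_c, T_0, t_0 = 170.0, 170.0, 9.0   # MeV, MeV, fm/c
m_pi = 140.0                        # MeV
m_sigma_vac = 550.0                 # vacuum sigma mass in MeV (or 800.0)

def T(t):
    """T(t) = T_0 t_0 / t for t >= t_0."""
    return T_0 * t_0 / t

def m_sigma(t):
    """Time-dependent sigma mass."""
    return (m_sigma_vac - m_pi) * np.sqrt(1.0 - T(t) / T_c) + m_pi

def omega_sigma(t, p):
    """Sigma dispersion relation; p in MeV."""
    return np.sqrt(m_sigma(t)**2 + p**2)

t = np.linspace(t_0, 250.0, 2500)             # fm/c
# the sigma -> pi pi channel opens once m_sigma(t) > 2 m_pi
t_open = t[np.argmax(m_sigma(t) > 2.0 * m_pi)]
\end{verbatim}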
Here we assume spatial homogeneity, i.e. the distribution functions do not depend on position, and despite their momentum dependence we neglect gradient terms, so that $df/dt = \partial f/\partial t$. In our simplified model the evolution of $\sigma$ and $\pi$ is dominated by
$\sigma$ production in the evolving chiral condensate (inertial mechanism) and the subsequent decay $\sigma \to \pi\pi$.
The Boltzmann transport equation for the $\sigma$, in which the rescattering of pions is not yet considered at this step, reads
\begin{eqnarray}
&&\frac{\partial f_\sigma}{\partial t}(t,\vec{p}_\sigma)
\nonumber
=
\frac{\Delta_\sigma(t,\vec{p} _\sigma)}{2}\int_{t_0}^t dt' \Delta_\sigma(t',\vec{p} _\sigma) \left(1+f_\sigma(t',\vec{p} _\sigma)\right)
\cos\left(2\theta_\sigma(t,t',\vec{p} _\sigma)\right)
\nonumber
\\
&&+
\left(1+f_\sigma(t,\vec{p} _\sigma)\right)
\left(
\int \frac{d^3p_1}{(2\pi)^3 2\omega_1}\frac{d^3p_2}{(2\pi)^3 2\omega_2} \Gamma_{\pi\pi\rightarrow\sigma}(\vec{p} _\sigma,\vec{p}_1,\vec{p}_2)
f_\pi(t,\vec{p}_1) f_\pi(t,\vec{p}_2)
\right)
\nonumber
\\
&&-
f_\sigma(t,\vec{p}_\sigma)
\left(
\int \frac{d^3p_1}{(2\pi)^3 2\omega_1}\frac{d^3p_2}{(2\pi)^3 2\omega_2} \Gamma_{\sigma\rightarrow\pi\pi}(\vec{p} _\sigma,\vec{p}_1,\vec{p}_2)
\left(1+f_\pi(t,\vec{p}_1)\right)\left(1+f_\pi(t,\vec{p}_2)\right)
\right)
\;.
\label{eq:sigma_transport}
\end{eqnarray}
Note that the source term for $\sigma$ production arises solely from the time dependence of the $\sigma$ dispersion law (\ref{dispersion}) and operates even in the absence of pions.
It is the first term on the right-hand side of Eq.~(\ref{eq:sigma_transport}), with the following definitions
\begin{equation}
\label{eq:Delta}
\Delta_\sigma(t,\vec{p} _\sigma)=\frac{m_\sigma}{\omega_\sigma^2}\frac{\partial m_\sigma}{\partial t}\;, \quad \theta_\sigma(t,t',\vec{p} _\sigma) = \int_{t'}^t dt'' \omega_\sigma(t'',\vec{p} _\sigma)\;.
\end{equation}
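The quantities in Eq.~(\ref{eq:Delta}) can be evaluated numerically from the helpers of the previous sketch, e.g. by a central finite difference and a trapezoidal quadrature (again our illustration, not the authors' code):
\begin{verbatim}
# Inertial source ingredients: Delta_sigma and the dynamical phase
# theta_sigma; assumes m_sigma(t), omega_sigma(t, p) defined above.
import numpy as np

HBARC = 197.327  # MeV fm; converts omega*dt (MeV * fm/c) to a phase

def Delta_sigma(t, p, dt=1e-3):
    """(m_sigma/omega_sigma^2) * dm_sigma/dt, for t > t_0 + dt where
    the square root in m_sigma(t) is differentiable."""
    dm_dt = (m_sigma(t + dt) - m_sigma(t - dt)) / (2.0 * dt)
    return m_sigma(t) * dm_dt / omega_sigma(t, p)**2

def theta_sigma(t, t_prime, p, n=400):
    """Integral of omega_sigma over [t', t], made dimensionless."""
    grid = np.linspace(t_prime, t, n)
    return np.trapz(omega_sigma(grid, p), grid) / HBARC
\end{verbatim}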
The last two terms in Eq.~(\ref{eq:sigma_transport}) are due to the $\sigma\to \pi\pi$ decay and the regeneration process $\pi\pi\to\sigma$.
For the pions, the dispersion law is time-independent for $t>t_0$, so that there is no inertial production mechanism:
\begin{eqnarray}
&&\frac{\partial f_\pi}{\partial t}(t,\vec{p}_1)
=\nonumber
\\
&&=
\left(1+f_\pi(t,\vec{p_1})\right)
\left(
\int \frac{d^3p _\sigma}{(2\pi)^3 2\omega_\sigma}\frac{d^3p_2}{(2\pi)^3 2\omega_2} \Gamma_{\sigma\rightarrow\pi\pi}(\vec{p} _\sigma,\vec{p_1},\vec{p_2})
\left(1+f_\pi(t,\vec{p_2})\right)
f_\sigma(t,\vec{p}_\sigma)
\right)
\nonumber
\\
&&-
f_\pi(t,\vec{p_1})
\left(
\int \frac{d^3p_\sigma}{(2\pi)^3 2\omega_\sigma}\frac{d^3p_2}{(2\pi)^3 2\omega_2}
\Gamma_{\pi\pi\rightarrow\sigma}(\vec{p} _\sigma,\vec{p_1},\vec{p_2})
f_\pi(t,\vec{p_2})
\left(1+f_\sigma(t,\vec{p} _\sigma)\right)
\right)
\;.
\label{eq:pi_transport}
\end{eqnarray}
\newline
For the $\sigma$ decay and regeneration we assume a constant matrix element $|M|^2={\rm const}$, so that the momentum dependence of
$\Gamma_{\sigma\to\pi\pi}$ is simply given by the energy- and momentum-conserving delta functions
\begin{eqnarray}
\Gamma_{\sigma\rightarrow\pi\pi}(\vec{p}_\sigma,\vec{p}_1,\vec{p}_2)
=(2\pi)^4 \delta^4(p_\sigma-p_1-p_2) \vert M \vert^2
=
(2\pi)^4 \delta(\omega_\sigma-\omega_1-\omega_2)\,\delta^3(\vec{p}_\sigma-\vec{p}_1-\vec{p}_2)\,\vert M \vert^2
\;.
\label{eq:Gamma_delta}
\end{eqnarray}
\begin{figure}[!htb]
\includegraphics[width=\textwidth]{evolution_055-080_new.pdf}
\caption{\label{fig:evolution}Coupled $\sigma-\pi$ kinetics in an evolving scalar background field (upper panels, red dashed lines) with a vacuum $\sigma$ mass of $550$ MeV (left panels) and $800$ MeV (right panels).
Evolution of the $\sigma$ (middle panels) and $\pi$ (bottom panels) distribution functions at four selected momenta $p_1 = 5 $ MeV, $p_2 = 50 $ MeV, $p_3 = 100 $ MeV, $p_4 = 200 $ MeV.
Pion production is active only above the $\sigma\to\pi\pi$ threshold, i.e. when $m_\sigma>2m_\pi$.}
\end{figure}
\section{Results}
In order to apply the inertial mechanism of meson production described by the coupled kinetic equations (\ref{eq:sigma_transport}) and (\ref{eq:pi_transport}) to the case of heavy-ion collisions at the LHC, we consider a scenario
where the initial state is described by thermal equilibrium distributions of $\pi$ and $\sigma$ mesons with degenerate mass $m_\pi(T_0)=m_\sigma(T_0)=140$ MeV (chiral symmetry) at $T_0=T_c=170$ MeV,
\begin{eqnarray}
f_{i}(t_0,\vec{p})=g_{i} \left[\exp(\sqrt{p^2+m_i^2(T_0)}/T_0) -1\right]^{-1}~,~~i=\pi,\sigma~.
\end{eqnarray}
In the subsequent evolution the $\sigma$ mass departs from the pion mass and rises towards its vacuum value $m_\sigma(0)$ (chiral symmetry breaking), while
the pion mass keeps the value at the onset of the chiral symmetry breaking due to chiral protection.
According to our model the chiral transition takes place because of very fast 3-dimensional expansion (and thus dilution and cooling) of the fireball.
Therefore it is reasonable to assume that the process $\pi + \pi \rightarrow \sigma$ is strongly suppressed.
In such a case one can disregard the terms containing $\Gamma_{\pi\pi\to\sigma}$ in Eqs.~(\ref{eq:sigma_transport}) and (\ref{eq:pi_transport}).
The resulting system of kinetic equations to be solved is given by
\begin{eqnarray}
\frac{\partial f_\sigma}{\partial t}(t,p_\sigma)
&=&
\frac{\Delta_\sigma(t,p _\sigma)}{2}\int_{t_0}^t dt' \Delta_\sigma(t',p_\sigma) \left(1+f_\sigma(t',p _\sigma)\right)
\cos\left(2\theta_\sigma(t,t',p _\sigma)\right)
\nonumber
\\
&-&
\frac{1}{8\pi}
f_\sigma(t,p_\sigma)
\frac{1}{p_\sigma \omega_\sigma}
\int_{p_1^-}^{p_1^+} p_1 dp_1
\frac{\vert M \vert^2}{\omega_1}
\left(1+f_\pi(t,p_1)\right)\left(1+f_\pi(t,p_2(z_0,p_1;p_\sigma))\right)\,,
\label{eq:fullsigma}
\end{eqnarray}
\begin{eqnarray}
\frac{\partial f_\pi}{\partial t}(t,p_1)
&=&
\frac{1}{8\pi}
\left(1+f_\pi(t,p_1)\right)
\frac{1}{p_1 \omega_1}
\int_{p_\sigma^-}^{p_\sigma^+} p_\sigma dp_\sigma
\frac{\vert M \vert^2}{\omega_\sigma}
f_\sigma(t,p_\sigma)\left(1+f_\pi(t,p_2(z_0,p_\sigma;p_1))\right)\,,
\label{eq:pi_transport_red}
\end{eqnarray}
where
\begin{eqnarray}
&&
p_1^\pm = \frac{1}{2}\left\vert p_\sigma\pm \omega_\sigma\sqrt{1-\frac{4 m_\pi^2}{m_\sigma^2}}\right\vert ,\quad\quad p_\sigma^\pm = \frac{m_\sigma^2}{m_\pi^2}\frac{1}{2}
\left\vert p_1 \pm \omega_1 \sqrt{1-\frac{4 m_\pi^2}{m_\sigma^2}} \right\vert
\;.
\end{eqnarray}
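These limits transcribe directly into code; a small sketch (ours), valid above the decay threshold $m_\sigma > 2m_\pi$:
\begin{verbatim}
# Kinematic integration limits p_1^+- and p_sigma^+- as displayed
# above; all momenta and masses in MeV.
import numpy as np

def p1_limits(p_sigma, m_sigma, m_pi=140.0):
    w_sigma = np.sqrt(m_sigma**2 + p_sigma**2)
    v = np.sqrt(1.0 - 4.0 * m_pi**2 / m_sigma**2)
    return (0.5 * abs(p_sigma - w_sigma * v),
            0.5 * abs(p_sigma + w_sigma * v))

def p_sigma_limits(p1, m_sigma, m_pi=140.0):
    w1 = np.sqrt(m_pi**2 + p1**2)
    v = np.sqrt(1.0 - 4.0 * m_pi**2 / m_sigma**2)
    pref = 0.5 * m_sigma**2 / m_pi**2
    return pref * abs(p1 - w1 * v), pref * abs(p1 + w1 * v)
\end{verbatim}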
The results for the evolution of the distribution functions due to the coupled $\sigma-\pi$ kinetics in the evolving scalar background field are presented in Fig.~\ref{fig:evolution}.
One can notice that the $\sigma$ distributions in the middle panels of Fig.~\ref{fig:evolution} show oscillatory behaviour which is common for the kinetic approach to particle production and has been discussed also in the context of the dynamical Schwinger effect in lasers
\cite{Blaschke:2008wf,Blaschke:2014fca}.
\begin{figure}[!th]
\begin{minipage}[l]{0.6\textwidth}
\includegraphics[width=\textwidth]{fpi-p_new.pdf}
\end{minipage}\hfill
\begin{minipage}[r]{0.4\textwidth}
\caption{\label{fig:thermal}
Nonequilibrium distribution functions for pions for two different cases of vacuum $\sigma$ mass: $550$ MeV (black solid lines) and $800$ MeV (blue dash-dotted lines) together with their approximations by thermal Bose distributions with ($T=170$ MeV, $\mu_\pi=116$ MeV) and ($T=280$ MeV, $\mu_\pi=45$ MeV), resp. The initial thermal pion distribution with temperature $T=170$ MeV is shown by the orange solid line.}
\end{minipage}
\end{figure}
Results presented in Fig.~\ref{fig:thermal} show $f_\pi(t_f,\vec{p})$ at sufficiently late times $t_f>250$ fm/c plotted together with the Bose distribution of the initial state.
At this stage $f_\sigma(t_f,\vec{p})$ is already negligible because all $\sigma$ mesons have decayed.
In the pion channel we obtain a strong enhancement at low momenta which stems from the decay of the
$\sigma$ mesons produced by the
inertial mechanism.
In order to quantify this effect we introduce a pion enhancement ratio
\begin{equation}
r_\pi = \frac{\int_0^\infty dp\, p^2 f_\pi(t_f,\vec{p})}{\int_0^\infty dp\, p^2 f_\pi(t_0,\vec{p})}\,,
\end{equation}
and obtain for it the value $r_\pi=2.98$ ($r_\pi=3.18$) in the scenario with $m_{\sigma}(0)=0.55$ GeV
($m_{\sigma}(0)=0.80$ GeV).
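Given $f_\pi$ tabulated on a momentum grid at $t_0$ and $t_f$, $r_\pi$ reduces to a simple quadrature; a sketch of ours, where a finite grid cutoff stands in for the upper limit of the integrals:
\begin{verbatim}
# Pion enhancement ratio from tabulated distribution functions
# on a momentum grid p (MeV).
import numpy as np

def enhancement_ratio(p, f_final, f_initial):
    return np.trapz(p**2 * f_final, p) / np.trapz(p**2 * f_initial, p)

# e.g. the initial thermal Bose distribution at T_0 = 170 MeV:
p = np.linspace(1.0, 2000.0, 4000)
f0 = 1.0 / np.expm1(np.sqrt(p**2 + 140.0**2) / 170.0)
\end{verbatim}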
This pattern of low-momentum pion enhancement and its approximate description by a pion chemical potential has already been noticed for early CERN SPS experiments \cite{Kataja:1990tp}.
It was subsequently disregarded in favour of a description by resonance decays which, however,
appear to be insufficient for explaining the magnitude of the effect as observed recently by the ALICE experiment at LHC.
The effect of pion enhancement from the inertial mechanism as discussed above has the right order of magnitude to explain the observation.
It thus qualifies as a possible microscopic explanation for the nonequilibrium origin of the effect.
Very roughly it can be captured by assigning a nonequilibrium chemical potential to the pions.
Thermalization should be accomplished by including elastic $\pi-\pi$ rescattering processes.
\section{Conclusion and outlook}
In the present work we have developed further the kinetic approach to nonequilibrium pion production
by the inertial mechanism in a time-dependent chiral symmetry breaking homogeneous field.
We have addressed the question to what extent the low-momentum pion enhancement observed in
heavy-ion collisions at the CERN LHC, which is being discussed as Bose-Einstein condensation of pions \cite{Begun:2015ifa}, can be described within this formalism. For simplicity we have neglected the isospin of pions here.
Along the lines of this project, we have performed the first two steps (a) and (b) of a detailed study of the three main processes that are intertwined in this case:
(a) the nonequilibrium $\sigma-$meson production in the time-dependent external field,
(b) the $\sigma\to \pi\pi$ decay.
The step (c) consisting in the inclusion of $\pi\pi$ rescattering and formation of the Bose condensate
\cite{Semikoz:1994zp,Voskresensky:1996ur} as well as the comparison of the obtained results with the effect observed at the LHC is subject to current research.
At the present stage we can conclude that the distribution function $f_\pi(t)$ depends strongly on magnitude, shape and duration of the chiral symmetry breaking (inertial) source term $\Delta_\sigma(t)$.
The distribution function $f_\pi(t)$ before $\pi-\pi$ rescattering can roughly be approximated
(for the more realistic case $m_{\sigma}(0)=0.55$ GeV) by a Bose distribution with a nonequilibrium pion chemical potential $\mu_\pi=116$ MeV and a freeze-out temperature $T=170$ MeV.
\ack
We are grateful to V. Begun, W. Florkowski, P.M. Lo, G. R\"opke, L. Turko and D.N. Voskresensky for their enlightening discussions. We thank also E.-M. Ilgenfritz and A. Tawfik for their continued interest in the progress of this project.
This research is supported in part by the Polish Narodowe Centrum Nauki (NCN) under grant number UMO-2014/15/B/ST2/03752 (L.J. and D.B.) and UMO-2013/11/D/ST2/02645 (T.F.).
\section*{References}
\section{Introduction}
Recent developments in photoelectron spectroscopy have challenged the
apparent simple truth that the Fermi surface of cuprate
superconductors is simply the one corresponding to LDA
band structures with the only effect of the closeness to the Mott-Hubbard
insulator being a moderate correlation narrowing of the band width.
The discovery of the `shadow bands'\cite{Aebi,LaRosa}, the
temperature dependent pseudogap in the
underdoped state\cite{Loeser} and the substantial doping dependence of the
quasiparticle band structure\cite{Marshall} leave little doubt that
a simple single-particle description is quite fundamentally
inadequate for these materials. Moreover, photoemission experiments on
one-dimensional (1D) copper oxides\cite{Kim} have shown very clear
signatures of spin charge separation. The equally clear nonobservation
of these signatures in the cuprate
superconductors at any doping level advises against another
apparent simple truth, namely that the Fermi surface seen in the cuprates
is simply that of the `spinons' in a 2D version of the
Tomonaga-Luttinger liquid (TLL) realized in 1D.
Motivated by these developments, we have performed a detailed
exact diagonalization study of the electron removal spectrum in the
1D and 2D $t$$-$$J$ model. This model reads
\[
H = -t \sum_{\langle i,j \rangle,\sigma}
(\hat{c}_{i,\sigma}^\dagger \hat{c}_{j,\sigma} + {\rm H.c.}) + J
\sum_{\langle i,j \rangle} \big( \vec{S}_i \cdot \vec{S}_j
-{1\over 4} n_i n_j \big).
\]
Thereby the `constrained' Fermion operators are written as
$\hat{c}_{i,\sigma} = c_{i,\sigma} (1-n_{i,\bar{\sigma}})$ and
$\vec{S}_i$ denotes the spin operator on site $i$.
The summation $\langle i, j \rangle$ extends over all pairs
of nearest neighbors on a 1D chain or a 2D square lattice.\\
The electron removal spectrum is defined as
\[
A(\vec{k},\omega) = \frac{1}{\pi} \Im
\langle \Psi_0 | \hat{c}_{\vec{k},\sigma}^\dagger
\frac{1}{ \omega - (E_0 - H) - i0^+} \hat{c}_{\vec{k},\sigma}
|\Psi_0\rangle,
\]
where $E_0$ and $|\Psi_0\rangle$ denote the ground state
energy and wave function. For small finite clusters,
this function can be evaluated numerically by means of the
Lanczos algorithm\cite{Dagoreview}. \\
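For readers unfamiliar with the method, the resolvent matrix element is obtained from the Lanczos coefficients via a continued fraction. The following generic sketch is ours, not the authors' code: a random Hermitian matrix stands in for the $t$$-$$J$ Hamiltonian, whose construction in the constrained Hilbert space is omitted.
\begin{verbatim}
# Lanczos continued-fraction evaluation of
#   A(omega) = (1/pi) Im <phi| [omega - (E0 - H) - i*eta]^{-1} |phi>.
import numpy as np

def lanczos(H, phi, m=80):
    """m Lanczos steps from |phi>; returns the diagonal a and
    off-diagonal b of the tridiagonal matrix T, and <phi|phi>."""
    norm2 = np.vdot(phi, phi).real
    q, q_prev, beta = phi / np.sqrt(norm2), np.zeros_like(phi), 0.0
    a, b = [], []
    for _ in range(m):
        r = H @ q - beta * q_prev
        alpha = np.vdot(q, r).real
        r = r - alpha * q
        beta = np.linalg.norm(r)
        a.append(alpha); b.append(beta)
        if beta < 1e-12:
            break
        q_prev, q = q, r / beta
    return np.array(a), np.array(b), norm2

def A_of_omega(omega, E0, a, b, norm2, eta=0.1):
    """Continued fraction for norm2 * [(z + T)^{-1}]_{00},
    z = omega - E0 - i*eta, evaluated bottom-up."""
    z = omega - E0 - 1j * eta
    g = 0.0 + 0.0j
    for n in range(len(a) - 1, -1, -1):
        g = 1.0 / (z + a[n] - b[n]**2 * g)
    return norm2 * g.imag / np.pi

# toy usage with a random Hermitian "Hamiltonian"
rng = np.random.default_rng(0)
H = rng.standard_normal((400, 400)); H = 0.5 * (H + H.T)
E0 = np.linalg.eigvalsh(H)[0]
phi = rng.standard_normal(400)
a, b, n2 = lanczos(H, phi)
w = np.linspace(2 * E0, 0.0, 600)  # removal peaks sit at E0 - lambda_n
A = np.array([A_of_omega(x, E0, a, b, n2) for x in w])
\end{verbatim}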
In 1D the $t$$-$$J$ model is solvable by Bethe ansatz in the case
$J$$=$$2t$\cite{BaresBlatter}, but even for this limit the
complexity of the Bethe ansatz equations precludes an evaluation of
dynamical correlation functions.
For the closely related Hubbard model in the
limit $J/t$$\rightarrow$$0$ the Bethe-ansatz
equations simplify\cite{OgataShiba}, and an actual calculation of the
spectral function becomes possible\cite{SorellaParola,Penc}.
In all other cases Lanczos diagonalization is the only
way to obtain accurate results for $A(\vec{k},\omega)$\cite{Favand}.\\
In order to analyze our numerical results, we first want to develop an
intuitive picture of the scaling properties of the elementary excitations
in 1D, which will turn out to be useful also in 2D.
It has been shown by Ogata and Shiba\cite{OgataShiba} that for
$J/t$$\rightarrow$$0$ the wave functions can be constructed
as products of a spinless Fermion wave function, which depends only
on the positions of the holes, and a spin wave function,
which depends only on the sequence of spins.
A naive explanation for this remarkable property
is the `decay' of a hole created in a N\'eel ordered
spin background into an uncharged spin-like domain wall,
and a charged spinless domain wall.
Then, since it is the kinetic energy $\sim$$t$ which propagates
the charge-like domain walls, whereas the exchange energy $\sim$$J$
moves the spin-like domain walls, one may expect that the two
types of domain walls have different energy scales.
Namely the excitations of the charge part of the wave function
(i.e., the `holons') have $t$ as their energy scale, whereas those of the
spin part (i.e., the `spinons') have $J$ as their energy scale.
Scanning the low energy excitation spectrum of 1D $t$$-$$J$ rings
then shows that indeed most of the excited states
have excitation energies of the form $a\cdot t + b\cdot J$\cite{TK},
which indicates the presence of two different
elementary excitations with different energy scales.\\
Surprisingly enough the low energy spectrum of the 2D model
shows the same scaling behavior of the excitation energies
as in 1D\cite{TK},
which seems to indicate the existence of two types of
spin and charge excitations of very different nature
also in this case. Other cluster results
indicate, however, that these two types of excitations
do not exist as `free particles':
the dynamical density correlation function,
which corresponds to the `particle-hole excitations' of holons
and shows sharp low energy peaks in 1D\cite{BaresBlatter},
is essentially incoherent in 2D and has practically no sharp low energy
excitations\cite{EderOhtaMaekawa}. The optical
conductivity in 2D shows an incoherent high energy part
with energy scale $J$\cite{EderWrobelOhta} - which is completely unexpected
for the correlation function of the current operator which acts only
on the charge degrees of freedom. There is moreover rather clear
numerical evidence\cite{DagottoSchrieffer,EderOhta,RieraDagotto}
that the hole-like low
energy excitations can be described to very good approximation as
spin $1/2$ `spin bags'\cite{Schrieffer}
- i.e., holes dressed heavily by a local cloud of spin excitations.\\
To obtain further information about similarities and
differences between 1D and 2D, also in comparison to
the spectroscopic results, we have performed a systematic
comparison of the electron removal spectra in both cases.
As will become apparent, there are some similarities, but also
clear differences. We suggest that the main difference between
1D and 2D is a strong attractive interaction between `spinon' and
`holon' in 2D, which leads to a band of bound states being pulled out
of the continuum of free spinon and holon states. This band
of bound states - which are nothing but simple spin $1/2$ Fermions
corresponding to the doped holes - then sets the stage for the
low energy physics of the system, i.e., true spin-charge separation
as in 1D never occurs.
\section{One dimension, half filling}
We begin with a discussion of the 1D model at half-filling.
Figure \ref{fig1} shows the
electron removal spectra for the $12$-site ring.
Let us first consider
the left panel, where energies are
measured in units of $J$. Then, one can distinguish
different types of states according to their scaling behavior with
$t$: there is one `band' of peaks (connected by
the thin full line) whose energies relative
to the single-hole ground state at $k$$=$$\pi/2$ remain practically
unchanged under a variation of $t$, i.e., these states have $J$
as their energy scale. As a remarkable fact,
this `band' abruptly disappears half-way in the Brillouin zone,
i.e., there are no peaks whose energy scales with $J$
beyond $k$$=$$\pi/2$. This looks like a half-filled free-electron band
with a Fermi level crossing at $\pi/2$, which however is quite remarkable
because inverse photoemission is not possible at half-filling.
Next, in addition to this `$J$-band', there are several
groups of peaks whose excitation energy shows a very systematic progression
with $t$. Indeed, when plotting the same spectra but
measuring energies in units of $t$ (right panel of Fig.~\ref{fig1})
these peaks
coalesce, i.e., to excellent approximation the energy scale
of these states is $t$. This coexistence of states with different energy
scales can be nicely seen in the `double peak' for $t/J$$=$$2$ and
momentum $2\pi/6$: the peak with lower binding energy
falls into the $J$-band, the one with the higher binding
energy belongs to the $t$-band. The dispersion of the $t$-band
resembles a slightly asymmetric parabola with minimum
near $\pi/2$ for the low excitation energies that we are considering.
The states that fall onto this parabola
correspond to the creation of a spinon with momentum
$k_F$$=$$-\pi/2$, and a holon of momentum $k+\pi/2$. Since
the spinon momentum is fixed, this group of states
then simply traces out the holon dispersion. On the other hand,
the `$J$-branch' corresponds to the holon momentum being
fixed at the minimum of the holon dispersion,
and thus traces out the spinon dispersion.\\
This building principle for the spectra can be pushed further.
Namely, one might expect that not only $k_F$ but any spinon momentum
may serve as the starting point for
a complete branch of peaks which trace out
the full holon dispersion. That this is indeed the case
is shown in Fig.~\ref{fig2}. There, the entire width of the
spectra is shown and we have chosen the
zero of energy at the excitation energy of
either the topmost `$J$-peak' at $k_F=\pi/2$ (left panel)
or the topmost `$J$-peak' at $\pi/3$ (right panel).
Due to this choice of the zero of energy,
the energy $\propto J$ of the spinon with the respective
momentum drops out. Then,
when measuring energies in units of $t$ different holon bands
`become sharp', i.e., their energy
{\em relative to the respective spinon energy} scales accurately with $t$.
Moreover, these different
groups of peaks to good approximation
all trace out the same simple backfolded nearest neighbor hopping
dispersion, i.e., the dispersion of the holon is simply
$2t \cos(k_x)$. As discussed above, the
first holon band is shifted by the spinons' Fermi momentum,
$k_F=\pi/2$, so that its dispersion near the band minimum
at $k=\pi$ could be seen in Fig.~\ref{fig1}.
We have also verified that by aligning the spinon peaks at
$\pi/6$ yet another complete holon band can be identified.\\
We can thus infer the following building principle
for the spectral function: the basis for the whole construction is the
`half-filled' spinon band, with dispersion $-0.65 J \cos(k)$; this
is indicated by the thick dashed line in Fig.~\ref{fig3}a. Then,
each $(k,\omega)$-point of this band provides the `basis' for
a complete holon band $2 t \cos(k)$,
which is `hooked on' to the spinon band at its band maximum;
these holon bands are indicated by the thin full lines in Fig.~\ref{fig3}(a).
Comparison with the numerical results (in this
case for the $20$-site ring) in Fig.~\ref{fig3}(b) shows that indeed to
excellent approximation the poles of the single particle spectral
function fall onto these bands. There are some
deviations at high binding energies, which however are most probably
a deficiency of the Lanczos spectra, which are highly accurate only at
low excitation energy. Moreover, the holon bands in Fig.~\ref{fig3}(b)
have been slightly shifted, i.e., they are `hooked on' to the
spinon band not precisely at their maximum - we have verified that
this shift has oscillating sign for different chain lengths,
so that it probably is a finite size effect.
As an interesting feature, the pole strength
seems to be constant along each of these holon bands,
i.e., the weight is a function only of the spinon momentum
(this seems not to be correct for $k$$=$$0$ and $k$$=$$\pi$; here it should
be noted that for these momenta the holon band intersects itself,
which leads to a doubling of the peak weight).
In the thermodynamic limit, the density of bands increases,
while simultaneously their spectral weight decreases, resulting
in incoherent continua. Comparing with the exact results of
Sorella and Parola\cite{SorellaParola} for the case $J/t$$\rightarrow$$0$ it
is obvious that the outermost holon band in our calculation,
originating from the spinon Fermi momentum,
develops into a cusp-like singularity of the spectral weight.
The spinon band itself, whose energy scale is $J$,
turns into a second dispersionless
cusp in this limit, which skims at zero excitation energy
between $k$$=$$0$ and $k$$=$$\pi/2$. Sorella and Parola
found the excitation energy of the dispersive cusp to be
$-2t \pm 2t\sin(k)$, which corresponds to
the backfolded and shifted nearest neighbor hopping band,
$-2t + 2t \cos(k-\pi/2)$. \\
Summarizing the data for 1D we see that the entire
electron removal spectrum obeys a very simple
building principle, which moreover holds for all momenta
and frequencies.
Analyzing the scaling of the different features
with $J$ and $t$ one can identify `branches' of states
which trace out the dispersion of the true elementary excitations
of the TLL, namely the collective spin and charge excitations.
The dispersions of the spinons and holons are both
consistent with simple nearest neighbor hopping bands,
the spinons moreover have a half-filled Fermi surface.
While these results may not be really new or surprising, we note that
they demonstrate that exploiting the scaling properties of excitation
energies provides a very useful method to identify
the different `subbands'. In the following, we will make
extensive use of this principle to address the far less understood
problems of 2D and finite doping.
\section{Two dimensions, half filling}
We proceed to the 2D model, and also consider first the case of
half filling.
The spectra shown below refer to the standard $20$-site cluster,
which is the largest cluster for which the
calculation of the electron removal spectrum is feasible
also in the doped case. The $\vec{k}$-net for this cluster,
which is shown in Fig.~\ref{fig3a}, consists of the group of
momenta which roughly follows the $(1,1)$ direction,
and a second group along $(\pi,0)$$\rightarrow$$(0,\pi)$.
We would like to stress that results for other clusters are
completely consistent with those for the $20$-site cluster.
Then, the left panel of Fig.~\ref{fig4} shows the photoemission spectrum
for this cluster at half-filling; thereby we again focus on
energies within a few $J$ from the top of the
band and measure energies in units of $J$.
When the spectra are aligned at the top of the band, the positions
of the other dominant low energy peaks do not show a strong variation with
$t$. Some peaks do show a slight but systematic drift with $t$,
which however is much weaker than in 1D. A peculiar feature is
the peak at $(0,0)$, whose relative excitation energy
decreases rather than increases with $t$. Inspection shows, however,
that the (very weak) dispersion along the line
$(\pi,0)$$\rightarrow$$(0,\pi)$ (i.e., the lowest three momenta
in Fig.~\ref{fig4}) scales with $t$ to good approximation.
A possible explanation is the fact
that a hole in a 2D system has two distinct
mechanisms for propagation, firstly by `string truncation',
which gives effective hopping integrals $\sim$$J$, and secondly
by hopping along spiral paths\cite{Trugman}, which gives (smaller)
effective hopping integrals $\sim$$t$\cite{EderBecker}.
It can be shown\cite{EderBecker} that the
dispersion relation for a single hole to good approximation can be
written as
\[
E(\vec{k}) = J \cdot c_1 (\cos(k_x) + \cos(k_y))^2
- t \cdot c_2 \cos(k_x)\cos(k_y)
\]
where $c_1\gg c_2 >0$ are numerical constants. The first term,
which originates from the string truncation mechanism,
gives a dispersion which is degenerate
along $(\pi,0)$$\rightarrow$$(0,\pi)$ and this
degeneracy is lifted by the second term which is the contribution from the
spiral paths; this naturally explains the scaling of
the dispersion along this line with $t$.
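To make the scaling argument concrete, the following sketch (ours; the values of $c_1$, $c_2$ and $J/t$ are illustrative placeholders, not fitted constants) evaluates this dispersion along $(\pi,0)$$\rightarrow$$(0,\pi)$:
\begin{verbatim}
# Approximate single-hole dispersion quoted above.
import numpy as np

def E_hole(kx, ky, J=0.4, t=1.0, c1=0.5, c2=0.1):
    return (J * c1 * (np.cos(kx) + np.cos(ky))**2
            - t * c2 * np.cos(kx) * np.cos(ky))

s = np.linspace(0.0, 1.0, 101)
kx, ky = np.pi * (1.0 - s), np.pi * s      # (pi,0) -> (0,pi)
# cos(kx) + cos(ky) = 0 on this line, so the string-truncation term
# drops out; the residual dispersion t*c2*cos(kx)**2 scales with t
band = E_hole(kx, ky)
\end{verbatim}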
Comparing with 1D we note that with the exception of $(\pi,\pi)$
the `$J$-band' is present in the entire Brillouin zone,
i.e., the spinon Fermi surface seen at half-filling
in 1D does not exist. \\
We turn to the right panel of Fig.~\ref{fig4}, which shows the entire
width of the spectra, with energies measured in units of $t$.
It is first of all quite obvious that the spectra generally
are more `diffuse' than in 1D, with sharp features existing
only in the immediate
neighborhood of the top of the band (except for one relatively sharp
high energy peak at $(\pi,\pi)$). Next, among the diffuse features
at high energy there are some whose energy accurately
scales with $t$. Although these `peaks' are rather broad,
so that the assignment of a dispersion is not really meaningful,
their centers of gravity can be roughly fitted by the expression
$-2t \pm 2t\sin(|k_x| + |k_y|)$, which is reminiscent of the
dispersion of the `holon-cusp' found by Sorella and
Parola\cite{SorellaParola}
in 1D. An important difference as compared to 1D
is the fact that this $t$-band does not seem to
reach the top of the photoemission spectrum - rather it
stays at an energy of $\sim$$t$ below the $J$-band, which forms the
first ionization states.
We believe that `in 1D language' the most plausible interpretation
of the data is the formation of bound states of
spinon and holon: assuming a strong attraction between
these two excitations, which may originate, e.g., from the
well-known string mechanism for hole motion in an
antiferromagnet\cite{Bulaevskii}, one may expect that a band of bound states
is pulled out of the continuum of free spinons and holons.
This band of bound states
corresponds to the $J$-band (which however has a small
contribution $\propto t$ in its dispersion due to the
spiral path mechanism). Such a bound state
of spinon and holon should be a spin-bag--like spin $1/2$ Fermion,
i.e., a hole heavily dressed by spin excitations.
There is strong numerical
evidence\cite{DagottoSchrieffer,EderOhta,RieraDagotto},
that this is indeed the
character of the low energy states in 2D at low doping.
One may expect, however, that such a bound state may not be stable
for all momenta, and we believe that this
is the reason for the absence of a $J$-peak at $(\pi,\pi)$.
In this picture, the 2D analogue of the holon
is not a coherently propagating excitation, because it is bound to
the much slower spinon by the linearly ascending string potential. This
picture fits nicely with the diffuse character of the
dynamical density correlation
function in 2D\cite{EderOhtaMaekawa}: this function,
which in a TLL should measure basically the response of the free
holons, in 2D has almost exclusively diffuse high energy `peaks',
with virtually no sharp low energy peaks.
Moreover, the unexpected (in the framework
of spin charge separation) appearance of $J$ as energy scale
in the optical conductivity is also readily understood
in terms of the dipole-excitations of a bound spinon-holon
pair\cite{EderWrobelOhta}.\\
Summarizing the data for 2D, we see a band of quasiparticle peaks,
which predominantly has $J$ as its energy scale,
and some diffuse high energy `band' with energy scale $t$.
Both, the absence of the `spinon Fermi surface', as well as the lack
of sharp `holon bands' are in clear contrast to the situation in 1D.
The formation of bound states of spinon and holon, resulting in
a split-off band of spin-bag--like spin $1/2$ Fermions
explains this in a natural way.
\section{One dimension, doped case}
We return to 1D and consider the doped case.
Figure \ref{fig5} shows the spectral function for the
$12$-site ring with $2$ holes. Measuring excitation energies
in units of $J$ (left panel) we can again identify the spinon band.
For $2$ holes in $12$ sites the nominal Fermi momentum is
$k_F$$=$$5\pi/12$ (i.e., half way between $\pi/3$ and $\pi/2$)
and the spinon band extends up to this momentum.
As was the case at half-filling,
some other peaks show a systematic progression of their
excitation energy, and switching the unit of energy to $t$
(right panel) again makes a nearly complete `holon band' visible to
which these peaks belong. The holon band again takes the form
of a backfolded tight-binding band, but this time the
top of the parabola around $k$$=$$\pi/2$ is missing.
The holon band now
seems to touch the Fermi energy at $k_F$ and at $3k_F$$=$$9\pi/12$
(the latter momentum is half way between $4\pi/6$ and $5\pi/6$).
This picture of the spectral function nicely fits with the recent
exact calculation in the limit $J/t$$\rightarrow$$0$
by Penc {\em et al.}\cite{Penc}:
on the photoemission side, this calculation showed
a high intensity `band' which is very similar to the
backfolded tight-binding dispersion of the holon
band. In addition there was a dispersionless low intensity
band at zero excitation energy, which
corresponds to the spinon band in the limit $J/t$$\rightarrow$$0$.
For both, the exact result in the limit $J/t$$\rightarrow$$0^+$,
and our numerical data for finite $J$, there are thus
two branches of states which reach excitation energy zero:
the `main band' which touches $E_F$ at $k_F$,
and the `shadow band'\cite{Favand}, which reaches $E_F$ at $3k_F$.
The `Fermi level crossings' of these two bands may be thought of
as producing the well known (marginal) singularities
in the electron momentum distribution $n(k)$ at $k_F$ and $3k_F$,
found by Ogata and Shiba\cite{OgataShiba}. \\
The numerical spectra demonstrate
a peculiar feature of the TLL, namely a kind of Pauli
exclusion principle which holds for both holons and spinons:
the dispersions of both types of excitations become
incomplete upon doping, i.e., the spinon Fermi surface shrinks
as if the spinons were spin $1/2$ particles,
while simultaneously the top of the holon band is `sawed off'
as if the holons were spinless Fermions.
It should be noted that this is quite natural,
in that the rapidities for the different
`particles' in the Bethe ansatz
solution both obey a Pauli-like exclusion principle\cite{BaresBlatter}.
This has negative implications for, e.g., slave boson
mean-field calculations, which necessarily have to treat one type
of excitation as a Boson. While spin-charge separation is often
quoted as justification for the mean-field decoupling,
it is obvious that this approximation must fail to reproduce the
excitation spectrum even qualitatively
in 1D, the only situation where spin-charge
separation is really established.\\
For a more quantitative discussion of the Fermi points,
we note that the Fermi momentum
for hole concentration $\delta$ is $k_F= \frac{\pi}{2}(1-\delta)$.
For this momentum the first branch of low energy excitations
reaches $E_F$. For small $\delta$ the second branch of low energy excitations
comes up to $E_F$ at $-3k_F + 2\pi= \frac{\pi}{2}(1+ 3\delta)$.
The two marginal singularities thus enclose a hole pocket
of length $2\pi \delta$ as one would expect for
holes corresponding to spinless Fermions. It is easy to see that this
hole pocket is nothing but the manifestation of the
holon `Fermi surface' around $k$$=$$\pi$: the lowest charge excitations,
which may be thought of as corresponding to a particle hole excitation
between the two edges of the holon pocket have wave vector
$4k_F = 2\pi - 2\pi \delta$, i.e., the holon pocket
has a diameter of $2\pi \delta$, precisely the distance between
the two marginal singularities. The spectral function for the
doped case thus follows the same building principle as for the
case of half filling, with the sole difference being that
occupied spinon or holon momenta are no longer available
for the construction of final states.
The singularities in $n(k)$ may be thought of as
enclosing a hole pocket corresponding to spinless Fermions,
and thus reflect the Fermi surface of the holons.
The two holon pockets are placed such that their inner edges
at $\pm k_F$ enclose the volume corresponding to the
Fermi sea of spinons of density $(1/2)(1-\delta)$.
\section{Two dimensions, doped case}
We proceed to the doped case in 2D.
Let us note from the very beginning that for very simple
technical reasons the situation is much more unfavorable in this
case. To begin with, due to the higher symmetry of 2D clusters
the available $\vec{k}$ meshes are much coarser:
for example, amongst the $18$ allowed momenta in the $18$ site
cluster only $6$ $\vec{k}$-points are actually symmetry-inequivalent,
so that the amount of nonredundant information is much smaller than
in 1D. Next, unlike 1D where a unique relationship exists
between hole density and Fermi momentum,
most electron numbers in small 2D clusters
correspond to open-shell configurations
with highly degenerate ground states for noninteracting particles.
In an open-shell situation multiplet effects are guaranteed
to occur, so that it is in general unpredictable
which momenta are occupied and which ones are not
(this holds for a Fermi liquid, but is most probably true also for
other `effective particles'). Unexpected problems may arise from this.
Bearing this in mind, one therefore may not expect to see a similarly
detailed and clear picture as in 1D.\\
Then, Figure \ref{fig6} shows the photoemission spectra
for the $18$-site cluster,
with two holes. We first consider the left hand panel,
where energies are measured in units of $J$. Comparing
with Fig.~\ref{fig5}, some similarities are quite obvious:
the excitation energies of the topmost peaks at $(0,0)$, $(2\pi/3,0)$
are independent of $t$ (although the spectra for $t/J$$=$$2$ show
a slight deviation) so that we can identify a `band' of states
with energy scale $J$. The situation actually is not
entirely clear, in that the peak at $(\pi/3,\pi/3)$ is so
close in energy to the one at $(2\pi/3,0)$ that it is not possible
to decide if their energy difference scales with
$J$ or $t$. Next, the topmost peaks at $(2\pi/3,2\pi/3)$ and
$(\pi/3,\pi)$ show a systematic progression with $t$, which is
very reminiscent of, e.g., Fig.~\ref{fig1}.
Plotting the same spectra with energy scale $t$ indeed to good
approximation aligns these peaks
(although the peak at $(2\pi/3,2\pi/3)$ still has a slight drift),
i.e., their excitation energy relative to the topmost peak
at $(2\pi/3,0)$ scales with $t$. Moreover one can identify
a number of diffuse `features' at energies
between $-0.5t$ and $-t$, which also are roughly aligned;
these are indicated by the dashed line.
In analogy with 1D, we can thus distinguish different branches of states,
with different energy scales in their excitation energies.
While the coarseness of the $\vec{k}$-meshes introduces some
uncertainty, the data are consistent
with a `$J$-band' dispersing upwards in the interior of the
antiferromagnetic Brillouin zone, and a `$t$-band' dispersing
downwards in the outer part, i.e., the same situation as seen in 1D.
A major difference is the fact that the
`features' at higher binding energies are all very diffuse,
at least for $J/t$$\ge$$3$. More significantly,
despite the fact that its energy scale seems to be $t$,
the dispersion of the `shadow band' is much weaker than in 1D.
In other words, the effective mass of that band is $\sim$$t^{-1}$,
but with a very large prefactor.\\
We proceed to the $20$-site cluster,
also doped with two holes (see Fig.~\ref{fig7}).
Choosing $J$ as the unit of energy, we see the
already familiar situation: the topmost peaks
for the states at $(2\pi/5,\pi/5)$ and
$(\pi/5,3\pi/5)$ are aligned (although $t/J$$=$$2$ again
deviates slightly) and several other peaks
show a systematic progression with $t$
(an unexpected exception is $(0,0)$ where a well defined peak
actually is not observed).
Changing to energy scale $t$ aligns
a number of these peaks, which suggests that these peaks
form a `$t$-band' which originates from
the topmost peak at $(2\pi/5,\pi/5)$.
This is a second unexpected feature of the $20$-site cluster, in that for
the spectra in 1D (and for those of the $18$-site cluster in 2D)
the most intense $t$-band always seemed to originate
from the topmost peak of the photoemission spectrum.
We can only speculate that these unusual features are the consequence of,
e.g., the multiplet effects mentioned above.
We also note in this context that the spectra at $(0,0)$ look actually quite
different for $18$ and $20$ site cluster, which shows the impact
of finite-size effects.\\
Ascribing the special behavior at $(0,0)$ to finite-size effects, we have
a quite similar picture as in the $18$-site cluster, i.e.,
the topmost peaks for spectra inside the antiferromagnetic zone
have $J$ as their energy scale whereas the topmost peaks
in the outer part of the zone have energy scale $t$
(this also holds for $(\pi,0)$ which is on the boundary
of the antiferromagnetic zone).
As was the case in the $18$ site cluster the `shadow band',
while having $t$ as its energy scale,
has a much weaker dispersion than in 1D.
Indeed, fitting the $t$-bands in both $18$ and $20$ site cluster
by an expression of the form $2t_{eff}(\cos(k_x)+\cos(k_y))$
requires choosing $t_{eff} \approx 0.1t$ - it is tempting to
speculate that this may actually be $\delta t$, as one would expect
e.g. in the Gutzwiller picture. Another notable feature is that the
$t$-band is restricted to the outer part of the
Brillouin zone. Only the diffuse high-energy `band' indicated by the
dashed line in Figure \ref{fig7} seems to scale with $t$.\\
For completeness we would like to mention that a similar analysis was
not possible
for the $16$-site cluster with $2$ holes. The reason is essentially that
for some momenta there are no more sharp `peaks', but rather
a multitude of densely spaced small peaks. Due to this,
we were not able to assign any defined `bands', or groups
of peaks which showed a systematic scaling of their excitation energy.
We have also performed this kind of analysis for the $16$ site cluster
with $4$ holes and found no more indication of the energy scale
$J$: at this somewhat higher concentration the entire spectra scale
with $t$.\\
Summarizing the data for 2D, hole doping seems to lead to
behavior which is more reminiscent of 1D than for half filling,
in that the $J$-band dispersing upwards in the inner part of the
Brillouin zone and the $t$-band dispersing downwards in the
outer part seem to exist also in this case. Much unlike 1D,
however, the shadow band, while in principle having $t$ as its energy scale,
still has a very weak dispersion, so that the band structure
in the doped case is practically identical to that
in the undoped system\cite{EderOhtaShimozato}.
We note however, that the fact that the shadow band
has $t$ as energy scale has profound implications for its explanation:
there have been attempts to interpret the shadow band in Bi2212 as
a `dynamical replica' of the main band, created by
scattering of quasiparticles in the standard tight-binding band from
antiferromagnetic spin fluctuations\cite{Chubukov}.
Experimentally, however, the fact that the shadow bands are observed also
in the overdoped compounds\cite{LaRosa}, where antiferromagnetism is
very weak, as well as the fact that they do not seem to become
more pronounced in the underdoped compounds\cite{Marshall},
where antiferromagnetism is strong, both suggest otherwise.
On the theoretical side, we believe that our data very clearly
rule out this interpretation: both the `main band'
and the spin correlation function\cite{EderOhtaMaekawa}
have $J$ as their relevant energy scale,
and it would be very hard to understand how the energy scale of $t$
for the shadow band should emerge from a combination of these two
types of excitations.
In fact, the relatively accurate scaling with physically
very different parameters suggests completely
different propagation mechanisms for the two types
of excitations. We therefore believe that the shadow band is a
separate branch of excitations, probably best comparable to the
states which produce the $3k_F$ singularity in the 1D systems.
\section{Discussion}
In the previous sections we have investigated the photoemission spectrum
for the one and two dimensional $t$$-$$J$ model. By studying the parameter
dependence of the spectra, we could in 1D identify
`branches' of states which trace out the dispersions of the
elementary excitations of the TLL, the spinons and holons.
Both elementary excitations have a simple nearest neighbor hopping
dispersion, but with different band width: that of the spinons
is $\sim$$J$, that of the holons $\sim$$t$.\\
In the doped case there are two groups of
states which touch the Fermi energy (see Figure \ref{figx}).
`Inside' the noninteracting Fermi surface, there is a
whole continuum of bands dispersing upwards to $E_F$.
The uppermost of these bands
traces out the spinon dispersion and has $J$ as its energy scale,
the lowermost band traces out the holon dispersion and has $t$ as its
energy scale. In the thermodynamic limit these bands degenerate
into `cusps' and merge at $E_F$.
In the outer half of the Brillouin zone, there are only
states which have $t$ as their energy scale.
These reach the Fermi energy at $3k_F$,
giving rise to a second Fermi point. While the resolution
in $k$ and $\omega$ available in our finite clusters is not sufficient
to make statements about extreme low energy excitations,
the positions of the singularities in the
electron momentum distribution as determined from exact solutions
clearly shows that both branches of states indeed do touch $E_F$.
The two singularities may be thought of as enclosing a hole pocket
of extent $2\pi\delta$, which is essentially the image of the
holon Fermi surface.\\
In 2D at half-filling, the situation is quite different:
while it is still possible to distinguish bands
with different scaling behavior with $J$ and $t$,
the spinon Fermi surface present in 1D does not exist
and the `holons' seem to correspond to overdamped resonances
rather than sharp excitations as in 1D. We propose that
strong attraction between spin and charge excitations,
most probably due to the well-known string mechanism,
pulls a band of bound states out of the continuum of
`free' holon and spinon states. The relevant physics thus is that of
spin-bag--like spin 1/2 quasiparticles, as suggested by
a considerable amount of numerical evidence.\\
For the doped case in 2D, the situation is less clear
and actually somewhat ambiguous. The numerical photoemission spectra
show some analogy with 1D, in that there seems to be
a high intensity `main band' with energy scale $J$
dispersing upwards in the inner part of the Brillouin zone, and a
low intensity `shadow band' with energy scale
$t$ dispersing downwards in the outer part of the Brillouin zone
(see Figure \ref{figx}).
In contrast to 1D the dispersion of the
shadow band is much weaker, i.e., while the energy scale of the dispersion
is $t$, it has an additional very small prefactor (of the order of the
hole concentration).
Moreover the $t$-band seems limited to the outer part of the
Brillouin zone, i.e. there are no indications for a holon band with energy
scale $t$ dispersing upwards in the inner part of the Brillouin zone.
Only in the $18$-site cluster a diffuse `band' with energy scale
$t$ can be roughly identified at higher binding energies.
The different energy scales
of main band and shadow band suggest that these are excitations
of quite different nature, and in particular rule out
the explanation that the shadow band is
created by scattering from antiferromagnetic spin fluctuations.\\
Turning to experiment, the results for 2D immediately
suggests a comparison
with the data of Aebi { \em et al.}\cite{Aebi}. These authors found that
in addition to the `bright' part of the band structure, which seems
to be consistent with the noninteracting one,
there is also a low intensity `replica', shifted approximately
by $(\pi,\pi)$,
which had been consistently overlooked in all previous studies.
If one wants to make a correspondence to the situation for the
$t$$-$$J$ model, one thus should identify this low intensity
part with the $t$-band dispersing downwards in the $t$$-$$J$ model.
Our data imply that the shadow band should have a slightly different
dispersion than the main band. The limitations of the cluster
method probably preclude any meaningful quantitative statements,
but it might be interesting to see if this difference in dispersion
can be resolved experimentally.\\
We conclude by outlining a somewhat speculative scenario, based on the
assumption that the two bands represent indeed different excitations,
which persist at all temperatures and independent of antiferromagnetic
correlations. In this case, the topology in 2D opens an
interesting possibility:
whereas in 1D the two classes of low energy excitations
forming the $k_F$ and $3k_F$ singularities in $n(\vec{k})$ are
well separated in $\vec{k}$ and $\omega$ space for simple topological
reasons, the
experimental data of Aebi {\em et al.} indicate that the
main and shadow band intersect at certain points in the
Brillouin zone (see Fig.~\ref{fig9}, left panel).
Neglecting the small difference in dispersion between main and
shadow band, we might therefore model the low energy excitation spectrum by
the effective Hamiltonian
\begin{equation}
H_{QP} = \sum_{\vec{k},\sigma}
(\;\epsilon(\vec{k})\;
a_{\vec{k},\sigma}^\dagger a_{\vec{k},\sigma} +
\epsilon(\vec{k}+\vec{Q})\;
b_{\vec{k},\sigma}^\dagger b_{\vec{k},\sigma}\;)
\label{eff}
\end{equation}
where $\epsilon(\vec{k})$ is the dispersion of the main band, $\vec{Q}
=(\pi,\pi)$, and the $a$ and $b$ operators refer to the main and shadow band,
respectively.
Choosing a dispersion of the form
$\epsilon(\vec{k})= -2t(\cos(k_x) + \cos(k_y)) +
4t'\cos(k_x)\cos(k_y)$ this Hamiltonian reproduces the Fermi surface
topology found by Aebi {\em et al.} quite well (see Fig.~\ref{fig9}).
However, as mentioned above
the two branches of excitations intersect at some points of the Brillouin zone,
so that already a small mixing between the two bands, which in turn
may originate from the spinon-holon interaction, has a dramatic
effect on the topology of the
low energy excitation spectrum. Namely
adding a term of the form
\[
H_{mix}= \Delta \sum_{\vec{k},\sigma}
(\; a_{\vec{k},\sigma}^\dagger b_{\vec{k},\sigma} + {\rm H.c.}\;),
\]
i.e., a hybridization between the two types of bands, one finds that
even relatively small values of $\Delta$ open up a gap around $(\pi,0)$
and transform the Fermi surface into hole
pockets (see right panel of Fig.~\ref{fig9}).
Thereby we fix the chemical potential by requiring that the number of
$a$ and $b$ particles remains unchanged; it is easy to see that the
area covered by the pockets then equals the hole concentration
$\delta$, precisely as it was the case in 1D.
Thereby the Fermi surface has predominant main band character
at its inner edge, and shadow band character at the outer edge,
implying a very different `visibility' in photoemission.
Finally, it is tempting to speculate that the
`pseudo gap order parameter' $\Delta$ decreases
with increasing temperature/hole concentration.
Its vanishing at a certain temperature $T^*$ then could
produce a crossover from the hole pockets to the `large' Fermi surface
at $T^*$, a picture which would nicely reproduce the
pseudogap phenomenology observed\cite{Loeser} in cuprate superconductors.\\
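The Fermi surfaces of Fig.~\ref{fig9} follow from diagonalizing the two-band Hamiltonian $H_{QP}+H_{mix}$ at each $\vec{k}$; a minimal sketch of ours, which for simplicity works at a fixed chemical potential $\mu$ rather than the fixed particle number used above:
\begin{verbatim}
# Two-band model: main band eps(k), shadow band eps(k+Q), Q=(pi,pi),
# hybridized by Delta; eigenvalues of [[eps(k), Delta],
#                                      [Delta, eps(k+Q)]].
import numpy as np

def eps(kx, ky, t=1.0, tp=0.3):
    return (-2.0 * t * (np.cos(kx) + np.cos(ky))
            + 4.0 * tp * np.cos(kx) * np.cos(ky))

def bands(kx, ky, Delta=0.2, t=1.0, tp=0.3):
    e1 = eps(kx, ky, t, tp)
    e2 = eps(kx + np.pi, ky + np.pi, t, tp)
    avg, dif = 0.5 * (e1 + e2), 0.5 * (e1 - e2)
    root = np.sqrt(dif**2 + Delta**2)
    return avg - root, avg + root        # lower / upper band

k = np.linspace(-np.pi, np.pi, 401)
KX, KY = np.meshgrid(k, k)
lower, upper = bands(KX, KY, Delta=0.2)
# the Fermi surface is the zero contour of (band - mu); for Delta > 0
# a gap opens near (pi,0) and the surface breaks up into pockets
\end{verbatim}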
Financial support of R. E. by the European Community and of Y. O. by the
Saneyoshi Foundation and a Grant-in-Aid for Scientific Research from the
Ministry of Education, Science and Culture of Japan is
most gratefully acknowledged.
\begin{figure}
\narrowtext
\caption[]{
Photoemission spectra at half-filling for different values
of $J/t$. The thin lines mark the spinon and holon band.}
\label{fig1}
\end{figure}
\begin{figure}
\narrowtext
\caption[]{Photoemission spectra at half-filling
in the $12$-site ring. For all $J/t$ the energy of the state marked by
the black dot is taken as the zero of energy.}
\label{fig2}
\end{figure}
\begin{figure}
\narrowtext
\caption[]{(a) Schematic building principle of the spectral function
at half-filling: Spinon band (thick dashed lines), and holon bands
(thin full lines).
(b) Theoretical spectrum overlaid with the numerical result
for the $20$-site ring with $J/t$$=$$10$.
The centers of the circles give the
excitation energies and their diameter the pole strength of the peaks
in the electron removal spectrum.}
\label{fig3}
\end{figure}
\begin{figure}
\narrowtext
\caption[]{Allowed momenta for the $20$-site cluster.}
\label{fig3a}
\end{figure}
\begin{figure}
\narrowtext
\caption[]{Photoemission spectra at half-filling for the
2D $20$-site cluster. For all $J/t$ the zero of energy has been set to
the energy of the peak marked by the black dot.}
\label{fig4}
\end{figure}
\begin{figure}
\caption[]{Photoemission spectra for the $12$-site ring with $2$ holes.}
\label{fig5}
\end{figure}
\begin{figure}
\caption[]{Photoemission spectra for the
2D $18$-site cluster with $2$ holes.
The spectra for momenta
marked by an asterisk have been multiplied by a factor of $2$ for
clarity.}
\label{fig6}
\end{figure}
\begin{figure}
\narrowtext
\caption[]{
Photoemission spectra for the 2D $20$-site cluster with $2$ holes.
The spectra for momenta marked by an asterisk have been multiplied by a
factor of $2$ for clarity.}
\label{fig7}
\end{figure}
\begin{figure}
\narrowtext
\caption[]{Schematic representation of the `band structure'
in 1D and 2D systems as deduced from the diagonalization spectra.}
\label{figx}
\end{figure}
\begin{figure}
\narrowtext
\caption[]{
Left hand panel: Fermi surface for the Hamiltonian (\ref{eff})
with $t'/t=0.3$, the full line corresponds to
the main band, the dashed line to the shadow band.
Right hand panel: Fermi surface for $t'/t$$=$$0.3$, $\Delta/t$$=$$0.2$.}
\label{fig9}
\end{figure}
\section{Introduction}
\label{sec:intro}
Cosmic inflation has emerged as a successful paradigm to resolve various issues in the standard model of cosmology, including the horizon and flatness problems. Inflation can explain the origin of the inhomogeneities observed in the cosmic microwave background and the structure formation of the universe \cite{guth1981}. A large number of inflationary models have been proposed in the literature, such as the conformal attractor \cite{conformal}, the $\alpha-$attractor \cite{alpha,alpha1,alpha2,alpha3,alpha4}, and Starobinsky and chaotic inflation \cite{staro1980,staro1,staro2,staro3,staro4,GL}. The cosmological predictions of these models are very similar but not identical, as the main difference lies in the shape of the potentials. These models are in good agreement with the present observational data. In the case of single-field inflation, the Starobinsky and $\alpha-$attractor potentials are fully consistent with the Planck 2018 data, whereas the quadratic potential is ruled out \cite{Planck2018}. In this paper, we shall revisit the dynamics of the pre-inflationary universe with the class of $\alpha-$attractor potentials in the framework of loop quantum cosmology (LQC), and explore whether slow-roll inflation is achieved following the initial quantum bounce. Recently, similar results for the $\alpha-$attractor $T$ and $E$ models have been reported in \cite{alamPRD2018}.
All inflationary models that are based on general relativity (GR) suffer from an inevitable initial singularity \cite{borde1994,borde2003}. Therefore, it is difficult to know how and when to impose the initial conditions. In addition, the inflationary universe should have at least 60 $e$-folds to be consistent with observations. However, more than 70 $e$-folds can be found in a large class of inflationary models, in which the size of the present universe is smaller than the Planck length at the beginning of inflation \cite{martin2014}. As a result, the semi-classical treatments are questionable in these models. This is known as the trans-Planckian problem \cite{martin2001,berger2013}.
The above issues can be addressed in the framework of LQC, which provides a feasible explanation of inflation and pre-inflationary dynamics simultaneously. It is remarkable that in such a framework the big bang singularity is replaced by a non-singular quantum bounce \cite{agullo2013a,agullo2013b,agullo2015,ashtekar2011,ashtekar2015,barrau2016}. Furthermore, a universe that starts at the quantum bounce generically enters slow-roll inflation \cite{ashtekar2010,psingh2006,zhang2007,chen2015,bolliet2015,schander2016,bolliet2016,Bonga2016,Mielczareka}. For the pre-inflationary universe, in the framework of LQC, two main approaches are discussed in the literature: the dressed metric \cite{agullo2013b,metrica,metricb,metricc} and the deformed algebra \cite{algebraa,algebrab,algebrac,algebrad,algebrae,algebraf}. For the background evolution, both approaches provide the same set of evolution equations, but their perturbations are distinct \cite{bolliet2016}. The corresponding non-Gaussianities were investigated in \cite{agullo15,ABS17,ZWKCS18}.
In this work, we consider a family of $\alpha-$attractor potentials, and are mainly interested in the background evolution of the universe; the results obtained in this paper are therefore valid for both approaches. Specifically, we shall show that, for the kinetic energy dominated (KED) initial conditions, the evolution of the universe before reheating can be divided into three different phases: {\em bouncing, transition and slow-roll inflation}, while this is not possible in the potential energy dominated (PED) case \cite{alamPRD2018,alam2017,Tao2017a,Tao2017b}. The analytical evolution of the background and linear perturbations during these phases has been discussed in \cite{Tao2017a,Tao2017b}. Moreover, many authors have studied various inflationary models in LQC, GR, string-inspired models and the Bianchi I universe \cite{yang2009,DL17,adlp,lsw2018a,lsw2018b,agullo18,thiemann,HISY,BG15,sahni18,SW08,killian,nozari},
\cite{BaoFei2019a,BaoFei2019b,wu2018,ma2019,anshu2019,Bea2018,sharma2018,ye2018},
and important results were discussed.
The rest of the paper is organized as follows. In Sec. \ref{sec:alphamod}, the family of $\alpha-$attractor potentials is briefly discussed with four new models. In Sec. \ref{sec:EOM}, we study the background equations of the Friedmann-Lemaitre-Robertson-Walker (FLRW) universe in the framework of LQC. Subsections \ref{subsec:mod1}, \ref{subsec:mod2}, \ref{subsec:mod3} and \ref{subsec:mod4} are devoted to the detailed analysis of the background evolution with $\dot{\phi}_B>0$, for both kinetic energy (KE) and potential energy (PE) dominated initial conditions at the quantum bounce. The phase portraits are displayed in Sec. \ref{sec:port}. Our main results are summarized in Sec. \ref{sec:conc}.
\section{A family of $\alpha-$models}
\label{sec:alphamod}
Following \cite{kalloshPRL15,linder15,alam2018}, the Lagrangian density of the
$\alpha-$attractor models with non-canonical kinetic term and a potential is given as
\begin{equation}
\mathcal{L}=\sqrt{-g}\left[ \frac{1}{2} M_{Pl}^2R-\frac{\alpha }{\left( 1-\frac{\varphi ^{2}}{6}%
\right) ^{2}}\frac{\left( \partial \varphi \right) ^{2}}{2}-\alpha
f^{2}\left( \frac{\varphi }{\sqrt{6}}\right) \right]
\label{eq:lag}
\end{equation}
where $M_{Pl}=m_{Pl}/\sqrt{8 \pi}$ denotes the reduced Planck mass, $\alpha
f^{2}$ represents the potential function and $\alpha$ is a parameter. The non-canonical kinetic term in Eq. (\ref{eq:lag}) can be made canonical through the field redefinition $\phi =\sqrt{%
6\alpha }\tanh ^{-1}\left( \frac{\varphi }{\sqrt{6}}\right)$. Therefore, the potential is given by
\begin{equation}
V\left( \phi \right) =\alpha f^{2}\left( \tanh \left(
\frac{\phi }{\sqrt{6\alpha }}\right) \right).
\label{eq:vf}
\end{equation}
Two functional forms of $f$ have been extensively used in the literature,
\begin{eqnarray}
\label{eq:modT}
f(x)&=&c x \\
f(x)&=&c \frac{x}{1+x}
\label{eq:modE}
\end{eqnarray}
where $x =\tanh \left( \frac{\phi }{\sqrt{6 \alpha}}\right)$, and $c$ is a constant that scales the amplitude of the potential. Eq. (\ref{eq:modT}) is known as the $T$ model \cite{alpha,alpha2,alpha3}, and reduces to the Goncharov and Linde model for $\alpha=1/9$ \cite{GL}. Eq. (\ref{eq:modE}) is the so-called $E$ model, and reduces to Starobinsky's model for $\alpha=1$ \cite{alpha1,staro1980}. The pre-inflationary universe and phase space analysis for the $T$ and $E$ models in the context of LQC have been examined in \cite{alamPRD2018}.
In this work, we shall choose the following functional forms of $f$, and investigate the pre-inflationary dynamics of the inflaton field in the framework of LQC. We shall examine whether these forms can lead to the desired slow-roll inflation following the quantum bounce. These functional forms are
\begin{eqnarray}
\label{eq:funcform1}
f(x)&=&c \frac{1}{x} \\
\label{eq:funcform2}
f(x)&=&c \frac{1}{1+x} \\
\label{eq:funcform3}
f(x)&=&c \frac{1}{\sqrt{1-x^2}} \\
\label{eq:funcform4}
f(x)&=&c \frac{x^2}{\sqrt{1-x^2}}
\end{eqnarray}
The right hand side of equations (\ref{eq:funcform1}), (\ref{eq:funcform2}), (\ref{eq:funcform3}) and (\ref{eq:funcform4}) blows up at $x=0, -1, 1$ and 1, respectively. Furthermore, equation (\ref{eq:funcform4}) vanishes at $x=0$.
The potentials corresponding to equations (\ref{eq:funcform1}), (\ref{eq:funcform2}), (\ref{eq:funcform3}) and (\ref{eq:funcform4}) are
\begin{eqnarray}
\label{eq:pot1}
V(\phi) &=& \alpha c^2~ \left[ \coth \left( \frac{\phi }{\sqrt{6\alpha }}\right)\right]^2\\
\label{eq:pot2}
V(\phi) &=& \frac{\alpha c^2}{4}~ \left[ 1+ \text{exp}\left(-\sqrt{\frac{2}{3\alpha}}\phi \right) \right]^2\\
\label{eq:pot3}
V(\phi) &=& \alpha c^2~ \left[ \cosh \left( \frac{\phi }{\sqrt{6\alpha }}\right)\right]^2\\
\label{eq:pot4}
V(\phi) &=& \alpha c^2~ \left[ \tanh \left( \frac{\phi }{\sqrt{6\alpha }}\right)\right]^4 \left[ \cosh \left( \frac{\phi }{\sqrt{6\alpha }}\right)\right]^2
\end{eqnarray}
Hereafter, we shall refer to equations (\ref{eq:pot1}), (\ref{eq:pot2}), (\ref{eq:pot3}) and (\ref{eq:pot4}) as models 1, 2, 3 and 4, respectively. These potentials are shown in Fig. \ref{fig:pot}. Models 1 and 2 blow up at $\phi=0$ and $\phi=- \infty$, respectively. Both models monotonically decline to a constant value as $\phi \rightarrow \infty$. Models 3 and 4 show oscillating behaviors as the field approaches the origin ($\phi=0$), and are symmetric with respect to the point $\phi=0$. In the context of dark energy, these models have been studied in \cite{varun2018}.
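For reference, the four potentials (\ref{eq:pot1})-(\ref{eq:pot4}) can be coded directly. The following Python sketch is ours (not part of the original analysis); it works in units $m_{Pl}=1$, with $\alpha$ and $c$ passed as parameters.
\begin{verbatim}
# A minimal sketch (ours) of the four potentials, Eqs. (pot1)-(pot4), m_Pl = 1
import numpy as np

def V1(phi, alpha, c):
    # Model 1: V = alpha c^2 coth^2(phi / sqrt(6 alpha))
    return alpha * c**2 / np.tanh(phi / np.sqrt(6.0 * alpha))**2

def V2(phi, alpha, c):
    # Model 2: V = (alpha c^2 / 4) [1 + exp(-sqrt(2/(3 alpha)) phi)]^2
    return 0.25 * alpha * c**2 * (1.0 + np.exp(-np.sqrt(2.0 / (3.0 * alpha)) * phi))**2

def V3(phi, alpha, c):
    # Model 3: V = alpha c^2 cosh^2(phi / sqrt(6 alpha))
    return alpha * c**2 * np.cosh(phi / np.sqrt(6.0 * alpha))**2

def V4(phi, alpha, c):
    # Model 4: V = alpha c^2 tanh^4(x) cosh^2(x), with x = phi / sqrt(6 alpha)
    x = phi / np.sqrt(6.0 * alpha)
    return alpha * c**2 * np.tanh(x)**4 * np.cosh(x)**2
\end{verbatim}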
\begin{figure*}[tbp]
\begin{center}
\begin{tabular}{cc}
{\includegraphics[width=2.2in,height=1.7in,angle=0]{pot1.pdf}} &
{\includegraphics[width=2.2in,height=1.7in,angle=0]{pot2.pdf}}
\\
{\includegraphics[width=2.2in,height=1.7in,angle=0]{pot3.pdf}} &
{\includegraphics[width=2.2in,height=1.7in,angle=0]{pot4.pdf}}
\end{tabular}
\end{center}
\caption{Potentials of the models under consideration. Upper left and right panels exhibit the evolution of potentials (\ref{eq:pot1}) and (\ref{eq:pot2}), which blow up at $\phi=0$ and $\phi=- \infty$, respectively, while monotonically declining to a constant value as $\phi \rightarrow \infty$. Lower left and right panels correspond to the evolution of potentials (\ref{eq:pot3}) and (\ref{eq:pot4}). Both potentials are symmetric with respect to $\phi=0$, and show oscillating behavior around the origin. For $\phi \rightarrow 0$, potentials (\ref{eq:pot3}) and (\ref{eq:pot4}) are bounded below by unity ($V(\phi) \geq 1$) and zero ($V(\phi) \geq 0$), respectively, whereas for $\phi \rightarrow \pm \infty$ they are unbounded. In LQC, the maximum energy density is $\rho_c$, which constrains the value of the field at the bounce. More details are given in Subsections \ref{subsec:mod1}, \ref{subsec:mod2}, \ref{subsec:mod3} and \ref{subsec:mod4}. }
\label{fig:pot}
\end{figure*}
\section{Background equations and numerical evolution}
\label{sec:EOM}
In LQC, the modified Friedmann equation in a spatially flat FLRW universe, and the Klein-Gordon equation with a single scalar field are given, respectively, by \cite{ashtekar2006}
\begin{eqnarray}
H^2=\frac{8 \pi}{3 m_{Pl}^2}~\rho \Big{(}1-\frac{\rho}{\rho_c}\Big{)},
\label{eq:Hub}
\end{eqnarray}
\begin{eqnarray}
\ddot{\phi}+3H \dot{\phi}+ \frac{dV(\phi)}{d\phi}=0,
\label{eq:ddphi}
\end{eqnarray}
where $H=\dot{a}/a$ denotes the Hubble parameter, $\rho=\dot{\phi}^2/2+V(\phi)$ is the energy density of the scalar field, and $\rho_c \simeq 0.41 m_{Pl}^4$ \cite{Meissne,Domagala} represents the critical energy density. From equation (\ref{eq:Hub}) one can see that $H=0$ at $\rho=\rho_c$, which implies that the quantum bounce occurs at $\rho=\rho_c$.
The background evolution with a bouncing phase is of great interest, and one of the main tasks is to show the existence of a desired slow-roll inflation with certain initial conditions at the quantum bounce \cite{psingh2006,Mielczarek,zhang2007,chen2015,alam2017,Tao2017a,Tao2017b,ashtekar2011}. To this effect, we shall study ``bounce and slow-roll inflation'' with a family of $\alpha-$attractor models.
We solve Eqs.(\ref{eq:Hub}) and (\ref{eq:ddphi}) numerically with the initial conditions of $a(t)$, $\phi(t)$ and $\dot{\phi}(t)$ at the quantum bounce, at which we have
\begin{eqnarray}
&& \rho = \rho_c = \frac{1}{2}\dot{\phi}^2(t_B)+V(\phi(t_B)), \nonumber\\
&& \dot{a}(t_B)= 0,
\label{eq:bounce}
\end{eqnarray}
where $t_B$ denotes the moment at which the bounce occurs. From (\ref{eq:bounce}), we find
\begin{eqnarray}
\dot{\phi}(t_B) &=& \pm \sqrt{2 \Big{(} \rho_c - V(\phi(t_B)) \Big{)}}.
\label{eq:bounce2}
\end{eqnarray}
Without loss of the generality, one can take
\begin{eqnarray}
a(t_B) &=& 1.
\label{eq:bounce3}
\end{eqnarray}
From Eq.(\ref{eq:bounce2}), one can see that for a given potential, the initial conditions are described by $\phi_B$ only. There are two cases: (a) positive inflaton velocity (PIV), $\dot{\phi}_B > 0$; and (b) negative inflaton velocity (NIV), $\dot{\phi}_B < 0$. In this paper, we shall focus only on the PIV case; a similar analysis can easily be carried out for the NIV case. Hereafter, we shall denote $\phi(t_B)$ and $\dot{\phi}(t_B)$ by $\phi_B$ and $\dot{\phi}_B$, respectively.
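To make the numerical procedure concrete, the following Python sketch (our illustration, not the original code of the paper) integrates Eqs. (\ref{eq:Hub}) and (\ref{eq:ddphi}) forward from the bounce with the PIV initial conditions (\ref{eq:bounce2}) and (\ref{eq:bounce3}); the potential $V$ and its derivative $dV$ are assumed to be supplied as callables, and units with $m_{Pl}=1$ are used, so $\rho_c \simeq 0.41$.
\begin{verbatim}
# A sketch of the background evolution, Eqs. (Hub) and (ddphi), from the bounce;
# V and dV are user-supplied callables (assumption), units m_Pl = 1.
import numpy as np
from scipy.integrate import solve_ivp

rho_c = 0.41   # critical density in units m_Pl = 1

def rhs(t, y, V, dV):
    a, phi, dphi = y
    rho = 0.5 * dphi**2 + V(phi)
    # Expanding branch after the bounce: positive root of Eq. (Hub);
    # max(..., 0) guards against rounding when rho is very close to rho_c.
    H = np.sqrt(max(8.0 * np.pi / 3.0 * rho * (1.0 - rho / rho_c), 0.0))
    return [a * H, dphi, -3.0 * H * dphi - dV(phi)]

def evolve(phi_B, V, dV, t_end=1.0e8):
    # PIV initial conditions at the bounce, Eqs. (bounce2) and (bounce3)
    dphi_B = np.sqrt(2.0 * (rho_c - V(phi_B)))
    return solve_ivp(rhs, (0.0, t_end), [1.0, phi_B, dphi_B], args=(V, dV),
                     rtol=1e-10, atol=1e-12, dense_output=True)
\end{verbatim}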
Finally, we define the following quantities that will be used in this paper \cite{alam2017,Tao2017a,Tao2017b}.
(1) The equation of state (EoS) $w(\phi)$ is defined as
\begin{eqnarray}
w(\phi) = \frac{\dot{\phi}^2/2-V(\phi)}{\dot{\phi}^2/2+V(\phi)}.
\label{eq:w}
\end{eqnarray}
In the slow-roll regime, we have $w(\phi)\simeq-1$.
To differentiate the KE and PE dominated initial conditions at the bounce, we define the quantity $w^B$ as
\begin{equation}
w^B \equiv w(\phi) \Big{\vert}_{\phi=\phi_B}
= \begin{cases} > 0, \qquad \text{KE} > \text{PE}, \\
= 0, \qquad \text{KE}=\text{PE}, \\
< 0, \qquad \text{KE} < \text{PE}. \end{cases}
\label{eq:wb}
\end{equation}
(2) The slow-roll parameter $\epsilon_H$ is defined as
\begin{eqnarray}
\epsilon_H = - \frac{\dot{H}}{H^2}.
\label{eq:epsilon}
\end{eqnarray}
In the slow-roll region, we have $\epsilon_H \ll 1$.
(3) The number of $e$-folds $N_{inf}$ during the slow-roll inflation is expressed as
\begin{eqnarray}
N_{inf} = \ln \Big{(} \frac{a_{end}}{a_i} \Big{)} = \int_{t_i}^{t_{end}} H(t) dt \nonumber \\
= \int_{\phi_i}^{\phi_{end}} \frac{H}{\dot{\phi}} d\phi \simeq \int_{\phi_{end}}^{\phi_i} \frac{V}{V_{\phi}} d\phi,
\label{eq:Ninf}
\end{eqnarray}
where $a_i$ ($a_{end}$) denotes the scale factor at the onset (end) of inflation, that is, $\ddot{a}(t_i) \gtrsim 0$ and $w(\phi_{end})=-1/3$.
(4) The analytical expression of the scale factor $a(t)$ during the bouncing regime can be expressed as \cite{alam2017,Tao2017a,Tao2017b}
\begin{eqnarray}
a(t) &=& a_B \left( 1+ \delta \frac{t^2}{t_{Pl}^2} \right)^{1/6},
\label{eq:a}
\end{eqnarray}
where $a_B=a(t_B)$, $\delta = {24 \pi \rho_c}/{m_{Pl}^{4}}$ is a dimensionless parameter, and $t_{Pl}$ represents the Planck time.
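Given a numerical solution from the sketch above, the quantities (1)-(4) can be evaluated directly. The Python sketch below is ours; the expression used for $\dot H$ follows from differentiating Eq. (\ref{eq:Hub}) and using Eq. (\ref{eq:ddphi}), which gives $\dot H = -4\pi\dot\phi^2(1-2\rho/\rho_c)$ in units $m_{Pl}=1$.
\begin{verbatim}
# A sketch (ours) evaluating w(phi), eps_H, N_inf and the analytic bouncing-phase
# a(t); `sol` is the solve_ivp result and V the potential from the earlier sketch.
import numpy as np

def diagnostics(sol, V):
    t = sol.t
    a, phi, dphi = sol.y
    rho = 0.5 * dphi**2 + V(phi)
    w = (0.5 * dphi**2 - V(phi)) / rho                   # EoS, Eq. (w)
    H2 = 8.0 * np.pi / 3.0 * rho * (1.0 - rho / rho_c)   # Eq. (Hub)
    # eps_H = -Hdot/H^2 with Hdot = -4 pi dphi^2 (1 - 2 rho/rho_c);
    # note eps_H diverges at the bounce itself, where H = 0.
    eps_H = 4.0 * np.pi * dphi**2 * (1.0 - 2.0 * rho / rho_c) / H2
    # Accelerated expansion: w < -1/3 (inflation begins and ends at w = -1/3)
    infl = w < -1.0 / 3.0
    N_inf = np.log(a[infl][-1] / a[infl][0]) if infl.any() else 0.0
    return w, eps_H, N_inf

def a_analytic(t):
    # Bouncing-phase solution, Eq. (a), with a_B = 1, t_Pl = 1, m_Pl = 1
    delta = 24.0 * np.pi * rho_c
    return (1.0 + delta * t**2)**(1.0 / 6.0)
\end{verbatim}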
In the following subsections, we shall study the class of $\alpha-$attractor models for $\dot\phi_B > 0$ (PIV), and see whether following the bounce a desired slow-roll inflation generically exists or not.
\subsection{Model 1}
\label{subsec:mod1}
\begin{figure}[tbp]
\begin{center}
\begin{tabular}{ccc}
{\includegraphics[width=1.9in,height=1.65in,angle=0]{mod1KEaSR.pdf}} &
{\includegraphics[width=1.9in,height=1.6in,angle=0]{mod1KEwSR.pdf}} &
{\includegraphics[width=1.9in,height=1.6in,angle=0]{mod1KEepSR.pdf}}
\end{tabular}
\end{center}
\caption{Numerical results for model 1 [Eq.(\ref{eq:pot1})] with $\dot{\phi}_B>0$. The evolution of $a(t)$, $w(\phi)$ and $\epsilon_H$ is shown for the same set of KED initial conditions of $\phi_B$ with $\alpha = 1 m_{Pl}^2$, $c = 8.343 \times 10^{-7} m_{Pl}$ and $m_{Pl}=1$. PED initial conditions cannot be imposed anywhere in the range of $\phi_B$. The analytical solution of the scale factor $a(t)$ [Eq.(\ref{eq:a})] is also shown for comparison with the numerical results. }
\label{fig:mod1}
\end{figure}
Let us first study some features of model 1 [Eq.(\ref{eq:pot1})]. The evolution of the potential (\ref{eq:pot1}) vs the scalar field is shown in the upper left panel of Fig. \ref{fig:pot}. This potential becomes asymptotically flat in the large field limit ($\phi \rightarrow \infty$), and blows up at the origin ($\phi =0$). In LQC, the maximum energy density is $\rho_c$, which constrains the value of $\phi_B$ to the range $(\phi_{min}, \infty)$,
where
\begin{eqnarray}
\phi_{min} &\simeq & \sqrt{6 \alpha}~ \text{arccoth} \left( \sqrt{\frac{\rho_c}{\alpha c^2}} \right).
\label{eq:mod1phimin}
\end{eqnarray}
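For instance, Eq. (\ref{eq:mod1phimin}) can be evaluated numerically as follows (our sketch; NumPy has no arccoth, so we use $\mathrm{arccoth}(y)=\mathrm{artanh}(1/y)$ for $|y|>1$):
\begin{verbatim}
# A sketch (ours) of Eq. (mod1phimin) in units m_Pl = 1, rho_c as above
import numpy as np

def phi_min_model1(alpha, c):
    y = np.sqrt(rho_c / (alpha * c**2))
    return np.sqrt(6.0 * alpha) * np.arctanh(1.0 / y)  # arccoth(y) = artanh(1/y)
\end{verbatim}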
To find the values of $\alpha$ and $c$ that are consistent with the Planck 2018 data for an inflationary universe \cite{Planck2018}, we follow the prescription provided in Appendix A. In particular, choosing $H_* = 2.0\times 10^{-5} M_{Pl}$, we can find $\phi_*$ from Eq.(\ref{eq:HubSR}) for the given potential in this model. Then, setting $\epsilon_V = 1$ in Eq.(\ref{eq:ev}) we find $\phi_{end}$. With $\phi_*$ and $\phi_{end}$ so obtained, we can find $(\alpha, c)$ from Eq.(\ref{eq:Ninf2}) by setting $N_{inf} = 60$. In doing so, we find various sets of $(\alpha, c)$, all of which are consistent with the Planck 2018 data. All of these cases give similar conclusions, so in the following we shall consider only one representative case, given by
\begin{eqnarray}
\alpha &=& 1 m_{Pl}^2, \qquad\qquad c = 8.343 \times 10^{-7} m_{Pl}.
\label{eq:mod1alphac}
\end{eqnarray}
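Since Appendix A is not reproduced here, the following sketch (ours) illustrates this type of calibration using the standard slow-roll relations $H_*^2 \simeq V(\phi_*)/3M_{Pl}^2$, $\epsilon_V = (M_{Pl}^2/2)(V_{,\phi}/V)^2$ and $N_{inf} \simeq M_{Pl}^{-2}\int_{\phi_{end}}^{\phi_*} (V/V_{,\phi})\, d\phi$, in reduced Planck units with $M_{Pl}=1$; the bracketing intervals for the root finder are hypothetical placeholders that must be adapted to each potential.
\begin{verbatim}
# A hedged sketch of the (alpha, c) calibration; V(phi, alpha, c) and its
# derivative dV are assumed callables, and c enters V only through an overall
# c^2 factor, so epsilon_V and N_inf are independent of c.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def calibrate(V, dV, alpha, H_star=2.0e-5, N_target=60.0,
              bracket_end=(1e-3, 50.0), bracket_star=(0.1, 50.0)):
    eps_V = lambda p: 0.5 * (dV(p, alpha, 1.0) / V(p, alpha, 1.0))**2
    # End of inflation: eps_V(phi_end) = 1
    phi_end = brentq(lambda p: eps_V(p) - 1.0, *bracket_end)
    # phi_* fixed by demanding N_target e-folds between phi_* and phi_end
    N = lambda p: abs(quad(lambda q: V(q, alpha, 1.0) / dV(q, alpha, 1.0),
                           phi_end, p)[0])
    phi_star = brentq(lambda p: N(p) - N_target, *bracket_star)
    # Hubble normalization 3 H_*^2 = V(phi_*) then fixes c (V scales as c^2)
    c = np.sqrt(3.0 * H_star**2 / V(phi_star, alpha, 1.0))
    return phi_star, phi_end, c
\end{verbatim}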
Then, we numerically solve Eqs. (\ref{eq:Hub}) and (\ref{eq:ddphi}) with PIV ($\dot{\phi}_B>0$) for model 1. The results for a set of KED initial conditions with $\alpha = 1 m_{Pl}^2$ and $c = 8.343 \times 10^{-7} m_{Pl}$ are shown in Fig. \ref{fig:mod1}, where the scale factor $a(t)$, EoS $w(\phi)$, and slow-roll parameter $\epsilon_H$ are exhibited for the same set of $\phi_B$. Over the entire range of $\phi_B$, the initial conditions of the inflaton field at the bounce are KED; PED initial conditions are not possible anywhere in this range. Similar results were discussed for the $T$ model in Ref. \cite{alamPRD2018}.
From the middle panel of Fig. \ref{fig:mod1}, one can clearly see that the evolution of the universe before reheating can be divided into three distinct phases: bouncing, transition and slow-roll inflation. In the bouncing phase, KE dominates, and $w(\phi) \simeq +1$. During the transition region, $w(\phi)$ decreases rapidly from $+1$ $(t/t_{Pl} \simeq 10^4)$ to $-1$ $(t/t_{Pl} \simeq 10^5)$. This transition phase is very short in comparison with the other two phases. In the slow-roll phase, $w(\phi)$ approaches $-1$, and remains constant until the end of the slow-roll inflation.
It is very interesting to note that the evolution of $a(t)$ (the left panel of Fig. \ref{fig:mod1}) during the bouncing phase is universal, and shows consistent behavior with the analytical solution (\ref{eq:a}).
The range of the initial conditions is $\phi_B \in(\phi_{min}, \infty)$, in which the KED condition at the bounce is assured, as $\dot{\phi}_B^2/2\gg V(\phi_B)$ always holds in this range, and it always leads to a slow-roll inflationary phase. Next, we consider the total number of $e$-folds during the slow-roll inflation for various values of $\phi_B$. To be consistent with the Planck 2018 results \cite{Planck2018}, at least 60 $e$-folds are required for a successful inflationary model. However, in the case $\alpha = 1 m_{Pl}^2$ and $c = 8.343 \times 10^{-7} m_{Pl}$, the number of $e$-folds is less than 60, as shown in Table \ref{tab:mod1} for different values of $\phi_B$.
We also analyzed the case with $\alpha = 0.5 m_{Pl}^2$ and $c = 1.611 \times 10^{-6} m_{Pl}$, and noticed that the conclusion is the same. In fact, as we mentioned previously, we found that this is true for all the sets of $(\alpha, c)$ that satisfy the Planck 2018 data. So, in order not to repeat the calculations, we do not present the detailed analyses for this case, as well as the other ones.
\begin{table}[tbp]
\caption{This table represents model 1 [Eq.(\ref{eq:pot1})] with $\dot{\phi}_B > 0$. We demonstrate various parameters of inflation for different values of $\phi_B$ in the case of $\alpha = 1 m_{Pl}^2$ and $c = 8.343 \times 10^{-7} m_{Pl}$. For each value of $\phi_B$, we get less than 60 $e$-folds. Therefore, these initial values of $\phi_B$ are not consistent with observations.}
\begin{center}
\resizebox{\textwidth}{!}{
\begin{scriptsize}
\begin{tabular}{cccccccc}
\hline
$\phi_B/m_{Pl}$~~~ & Inflation~~~ & $t/t_{Pl}$~~~ & $\epsilon$~~ & $w$ ~~& $N_{inf}$ &~~~${w}^B$\\
\hline
0.01 ~~~& begin~~~& $1.17480 \times 10^5$~~~& 1.0~~ & $-1/3$ ~~& ~~~& ~~~&\\
& slow-roll~~~& $2.84048 \times 10^5$ ~~~& 0.073~~ & $-0.950$ ~~& 31.38 ~~~& $>0$\\
& end~~~& $1.0407 \times 10^7$ ~~~& 0.174~~ & $-1/3$ ~~& ~~~& ~~~& \\\\
1 ~~~& begin~~~& $1.39037 \times 10^5$~~~& 0.999~~ & $-1/3$ ~~& ~~~& ~~~&\\
& slow-roll~~~& $3.18522 \times 10^5$ ~~~& 0.074~~ & $-0.950$ ~~& 27.70 ~~~& $>0$\\
& end~~~& $1.0371 \times 10^7$ ~~~& 0.149~~ & $-1/3$ ~~& ~~~& ~~~& \\\\
10 ~~~& begin~~~& $1.58197 \times 10^5$~~~& 0.999~~ & $-1/3$ ~~& ~~~& ~~~&\\
& slow-roll~~~& $3.50170 \times 10^5$ ~~~& 0.074~~ & $-0.950$ ~~& 25.37 ~~~& $>0$\\
& end~~~& $1.0687 \times 10^7$ ~~~& 0.218~~ & $-1/3$ ~~& ~~~& ~~~& \\
\hline
\end{tabular}
\end{scriptsize}}
\label{tab:mod1}
\end{center}
\end{table}
\subsection{Model 2}
\label{subsec:mod2}
In this subsection, we study some characteristics of model 2 [Eq.(\ref{eq:pot2})], for which the potential is displayed in the upper right panel of Fig. \ref{fig:pot}. In the large field limit ($\phi \rightarrow \infty$), the potential monotonically declines to a finite value $V(\phi) \rightarrow \alpha c^2/4$, whereas for $\phi \rightarrow -\infty$ it diverges. In LQC, $\rho_c$ constrains the value of $\phi_B$ to $(\phi_{min}, \infty)$, where $\phi_{min}$ is given by
\begin{eqnarray}
\phi_{min} &\simeq & -\sqrt{\frac{3\alpha}{2}}~ \ln \left( \sqrt{\frac{4\rho_c}{\alpha c^2}}-1 \right).
\label{eq:mod2phimin}
\end{eqnarray}
\begin{figure}[tbp]
\begin{center}
\begin{tabular}{ccc}
{\includegraphics[width=1.9in,height=1.65in,angle=0]{mod2KEaSR.pdf}} &
{\includegraphics[width=1.9in,height=1.6in,angle=0]{mod2KEwSR.pdf}} &
{\includegraphics[width=1.9in,height=1.6in,angle=0]{mod2KEepSR.pdf}}
\\
{\includegraphics[width=1.9in,height=1.65in,angle=0]{mod2KEaNSR.pdf}} &
{\includegraphics[width=1.9in,height=1.6in,angle=0]{mod2KEwNSR.pdf}} &
{\includegraphics[width=1.9in,height=1.6in,angle=0]{mod2KEepNSR.pdf}}
\\
{\includegraphics[width=1.9in,height=1.6in,angle=0]{mod2PEaSR.pdf}} &
{\includegraphics[width=1.9in,height=1.6in,angle=0]{mod2PEwSR.pdf}} &
{\includegraphics[width=1.9in,height=1.6in,angle=0]{mod2PEepSR.pdf}}
\end{tabular}
\end{center}
\caption{Numerical evolution of $a(t)$, $w(\phi)$ and $\epsilon_H$ for model 2 [Eq.(\ref{eq:pot2})] with $\dot{\phi}_B>0$. Top (KED) and bottom (PED) panels lead to the slow-roll inflationary phase, whereas a subset of the KED initial conditions (middle panels) does not. In plotting this figure, we set $\alpha = 1 m_{Pl}^2$, $c = 4.074 \times 10^{-8} m_{Pl}$ and $m_{Pl}=1$. }
\label{fig:mod2}
\end{figure}
\begin{table}[tbp]
\caption{This table corresponds to model 2 [Eq.(\ref{eq:pot2})] with $\dot{\phi}_B > 0$. We show the number of $e$-folds $N_{inf}$ and other parameters of inflation for different choices of $\phi_B$ with the set of $\alpha = 1 m_{Pl}^2$ and $c = 4.074 \times 10^{-8} m_{Pl}$.}
\begin{center}
\resizebox{\textwidth}{!}{
\begin{scriptsize}
\begin{tabular}{cccccccccc}
\hline
$\phi_B/m_{Pl}$~~~ & Inflation~~~ & $t/t_{Pl}$~~~ & $\epsilon$~~ & $w$ ~~& $N_{inf}$ &~~~${w}^B$\\
\hline
$-20.9$ ~~~& begin~~~& 0.01 ~~~& 3.17~~ & $-1/3$ ~~& ~~~& ~~~&\\
& slow-roll~~~& 2.06 ~~~& 0.043~~ & $-0.978$ ~~& 254.98 ~~~&$<0$\\
& end~~~& $1.735 \times 10^7$ ~~~& 0.329~~ & $-1/3$ ~~& ~~~& ~~~& \\\\
$-10$ ~~~& begin~~~& $8.764 \times 10^3$ ~~~& 0.999~~ & $-1/3$ ~~& ~~~& ~~~&\\
& slow-roll~~~& 4.4992$\times 10^4$ ~~~& 0.057~~ & $-0.961$ ~~& 68.25 ~~~&$>0$\\
& end~~~& $2.432 \times 10^7$ ~~~& 0.332~~ & $-1/3$ ~~& ~~~& ~~~& \\\\
$-9.7$ ~~~& begin~~~& $1.1622 \times 10^4$ ~~~& 0.999~~ & $-1/3$ ~~& ~~~& ~~~&\\
& slow-roll~~~& 7.6452$\times 10^4$ ~~~& 0.054~~ & $-0.964$ ~~& 60.45 ~~~& $>0$\\
& end~~~& $1.541 \times 10^7$ ~~~& 0.326~~ & $-1/3$ ~~& ~~~& ~~~& \\\\
$-9$ ~~~& begin~~~& $2.2426 \times 10^4$ ~~~& 0.999~~ & $-1/3$ ~~& ~~~& ~~~&\\
& slow-roll~~~& 1.10808$\times 10^5$ ~~~& 0.058~~ & $-0.961$ ~~&50.68 ~~~&$>0$\\
& end~~~& $2.066 \times 10^7$ ~~~& 0.331~~ & $-1/3$ ~~& ~~~& ~~~& \\
\hline
\end{tabular}
\end{scriptsize}}
\label{tab:mod2}
\end{center}
\end{table}
To find the values of $\alpha$ and $c$ that are consistent with the Planck 2018 data \cite{Planck2018}, following what is prescribed in Appendix A, we find various sets of $\alpha$ and $c$, similar to Model 1. In the current model, it is sufficient to consider only the following two representative cases,
\begin{eqnarray}
\alpha &=& 1 m_{Pl}^2, \qquad~~~~ c = 4.074 \times 10^{-8} m_{Pl}\nonumber \\
\alpha &=& 5 m_{Pl}^2, \qquad~~~~ c = 2.449 \times 10^{-7} m_{Pl}.
\label{eq:mod2alphac}
\end{eqnarray}
The value of $\phi_{min}$ can be obtained for any choice of $\alpha$ and $c$ from Eq. (\ref{eq:mod2phimin}). For example, for $\alpha = 1 m_{Pl}^2$ and $c = 4.074 \times 10^{-8} m_{Pl}$, we find $\phi_{min}=-21.14 m_{Pl}$. We investigate the entire range of the inflaton field in order to identify the initial values that lead to slow-roll inflation:
\begin{eqnarray}
\frac{\phi_B}{m_{Pl}} =\begin{cases}
\in (\phi_{min}, -20.73), & \text{PED (slow-roll)},\cr
= -20.72 , & \text{KE=PE (slow-roll)}, \cr
\in (-20.71,-3.5), & \text{KED (slow-roll)}, \cr
\in (-3.4, \infty), & \text{KED (no slow-roll)},\cr
\end{cases}
\label{eq:mod2phiB}
\end{eqnarray}
where
$\phi_{min}$ is given by Eq. (\ref{eq:mod2phimin}). The results of the background evolution for KED and PED initial conditions are exhibited in Fig. \ref{fig:mod2} for various choices of $\phi_B$. In the KED case, the evolution of $a(t)$ exhibits a universal feature during the bouncing phase: it depends neither on the potential nor on the initial value of $\phi_B$, and is well described by the analytical solution (\ref{eq:a}). This is because during the whole bouncing phase the potential remains almost constant, and does not essentially affect the evolution of the background. From the evolution of $w(\phi)$, one can see that in the KED case the background evolution is split into three different phases: bouncing, transition and slow-roll. The period of the transition phase is very short in comparison with the other two phases. During the bouncing regime, $w(\phi) \simeq +1$; in the transition regime, it decreases drastically from $+1$ $(t/t_{Pl} \simeq 10^4)$ to $-1$ $(t/t_{Pl} \simeq 10^6)$; and in the slow-roll regime $w(\phi) \simeq -1$ until the end of the slow-roll inflation. Within the KED initial conditions, we also find a subset for which the slow-roll inflation is not possible, as clearly displayed in the middle panels of Fig. \ref{fig:mod2}. In the PED case, the universality of $a(t)$ disappears, and the bouncing and transition phases no longer exist; however, the slow-roll inflation can still be obtained, as shown in the lower panels of Fig. \ref{fig:mod2}.
Table \ref{tab:mod2} shows various parameters of inflation. In particular, $N_{inf}$ decreases as $\phi_B$ grows. From this table, one can find the range of $\phi_B$ that provides 60 or more $e$-folds to be compatible with observations, which is
\begin{eqnarray}
\frac{\phi_B}{m_{Pl}} &\in& (\phi_{min}, -9.7), \; \;\; N_{inf} \gtrsim 60,
\label{eq:mod2phiB60}
\end{eqnarray}
where $\phi_{min}$ is given by Eq. (\ref{eq:mod2phimin}).
We also examined the other set in Eq. (\ref{eq:mod2alphac}), namely $\alpha = 5 m_{Pl}^2$ and $c = 2.449 \times 10^{-7} m_{Pl}$, and observed that the subset of KED initial conditions without an inflationary phase, found in the case of $\alpha=1 m_{Pl}^2$, disappears. In fact, we found that this is true for all cases with a large enough value of $\alpha$. Therefore, we conclude that the entire range of KE and PE at the bounce provides an inflationary phase, although a portion of this range yields fewer than 60 $e$-folds. Similarly to Eq. (\ref{eq:mod2phiB60}), in this case we again obtain a restricted range of the inflaton field that is consistent with current observations. Moreover, the results depend strongly on the values of $\alpha$ and $c$.
\begin{figure}[tbp]
\begin{center}
\begin{tabular}{ccc}
{\includegraphics[width=1.9in,height=1.65in,angle=0]{mod3KEaSR.pdf}} &
{\includegraphics[width=1.9in,height=1.6in,angle=0]{mod3KEwSR.pdf}} &
{\includegraphics[width=1.9in,height=1.6in,angle=0]{mod3KEepSR.pdf}}
\\
{\includegraphics[width=1.9in,height=1.6in,angle=0]{mod3PEaSR.pdf}} &
{\includegraphics[width=1.9in,height=1.6in,angle=0]{mod3PEwSR.pdf}} &
{\includegraphics[width=1.9in,height=1.6in,angle=0]{mod3PEepSR.pdf}}
\end{tabular}
\end{center}
\caption{Results for model 3 [Eq.(\ref{eq:pot3})] with $\dot{\phi}_B>0$. The potential (\ref{eq:pot3}) is symmetric with respect to $\phi=0$; therefore, one can get similar results for $\dot{\phi}_B<0$. In the entire range of the initial conditions of $\phi_B$ (top: KED and bottom: PED), the slow-roll inflation is always obtained. In plotting this figure, we set $\alpha = 0.5 m_{Pl}^2$, $c = 3.915 \times 10^{-7} m_{Pl}$ and $m_{Pl}=1$. }
\label{fig:mod3}
\end{figure}
\begin{table}[tbp]
\caption{Results for model 3 [Eq.(\ref{eq:pot3})] with $\dot{\phi}_B > 0$, $\alpha = 0.5 m_{Pl}^2$ and $c = 3.915 \times 10^{-7} m_{Pl}$.}
\begin{center}
\resizebox{\textwidth}{!}{
\begin{scriptsize}
\begin{tabular}{cccccccccc}
\hline
$\phi_B/m_{Pl}$~~~ & Inflation~~~ & $t/t_{Pl}$~~~ & $\epsilon$~~ & $w$ ~~& $N_{inf}$ &~~~${w}^B$\\
\hline
$26.3$ ~~~& begin~~~& 0.01 ~~~& 2.55~~ & $-1/3$ ~~& ~~~& ~~~&\\
& slow-roll~~~& 4.5 ~~~& 0.007~~ & $-0.960$ ~~& 485.59 ~~~&$<0$\\
& end~~~& $2.38 \times 10^7$ ~~~& 0.333~~ & $-1/3$ ~~& ~~~& ~~~& \\\\
6 ~~~& begin~~~& 9.85948 $\times 10^3$~~~& 1.0~~ & $-1/3$ ~~& ~~~& ~~~&\\
& slow-roll~~~& 2.73168$ \times10^4$ ~~~& 5.02677$ \times10^{-6}$~~ & $-1$ ~~& 99.38 ~~~&$>0$\\
& end~~~& $1.371 \times 10^7$ ~~~& 0.329~~ & $-1/3$ ~~& ~~~& ~~~& \\\\
3.75 ~~~& begin~~~& 3.23086 $\times 10^4$~~~& 0.999~~ & $-1/3$ ~~& ~~~& ~~~&\\
& slow-roll~~~& 8.95696$ \times10^4$ ~~~& 2.42$ \times10^{-5}$~~ & $-1$ ~~& 60.21 ~~~&$>0$\\
& end~~~& $1.2465 \times 10^7$ ~~~& 0.322~~ & $-1/3$ ~~& ~~~& ~~~& \\\\
3 ~~~& begin~~~& 4.79092 $\times 10^4$~~~& 0.999~~ & $-1/3$ ~~& ~~~& ~~~&\\
& slow-roll~~~& 1.32921$ \times10^5$ ~~~& 6.85$ \times10^{-5}$~~ & $-1$ ~~& 49.48 ~~~&$>0$\\
& end~~~& $1.337 \times 10^7$ ~~~& 0.325~~ & $-1/3$ ~~& ~~~& ~~~& \\\\
$-8.22$ ~~~& begin~~~& 2.77706 $\times 10^4$~~~& 1.0~~ & $-1/3$ ~~& ~~~& ~~~&\\
& slow-roll~~~& 1.40387$ \times10^5$ ~~~& 3.0$ \times10^{-2}$~~ & $-0.98$ ~~& 60.15 ~~~&$>0$\\
& end~~~& $1.0837 \times 10^7$ ~~~& 0.266~~ & $-1/3$ ~~& ~~~& ~~~& \\\\
$-9$ ~~~& begin~~~& 1.69003 $\times 10^4$~~~& 0.99~~ & $-1/3$ ~~& ~~~& ~~~&\\
& slow-roll~~~& 6.14094$ \times10^4$ ~~~& 4.5$ \times10^{-2}$~~ & $-0.97$ ~~& 76.56 ~~~&$>0$\\
& end~~~& $1.1434 \times 10^7$ ~~~& 0.308~~ & $-1/3$ ~~& ~~~& ~~~& \\
\hline
\end{tabular}
\end{scriptsize}}
\label{tab:mod3}
\end{center}
\end{table}
\subsection{Model 3}
\label{subsec:mod3}
In this subsection, let us consider potential (\ref{eq:pot3}) (model 3). The evolution of this potential is exhibited in the lower left panel of Fig. \ref{fig:pot}. The potential is symmetric with respect to $\phi=0$, bounded below by unity ($V(\phi) \geq 1$), and shows oscillations as the field approaches the origin ($\phi \rightarrow 0$). In the large field limit ($\phi \rightarrow \pm \infty$), the potential is unbounded, and the maximum energy density $\rho_c$ restricts the range of $\phi_B$ to $(\phi_{min}, \phi_{max})$, where
\begin{eqnarray}
\phi_{max, \; min} &\simeq & \pm \sqrt{6 \alpha}~ \text{arccosh} \left( \sqrt{\frac{\rho_c}{\alpha c^2}} \right)
\label{eq:mod3phimin}
\end{eqnarray}
where $\phi_{max}$ and $\phi_{min}$ correspond to the positive ($+$) and negative ($-$) signs, respectively. The set of $\alpha$ and $c$ that is in good agreement with the Planck 2018 results \cite{Planck2018} is,
\begin{eqnarray}
\alpha &=& 0.5 m_{Pl}^2, \qquad~~ c = 3.915 \times 10^{-7} m_{Pl}
\label{eq:mod3alphac}
\end{eqnarray}
Other sets of ($\alpha, c$) that also satisfy the Planck 2018 data are found to yield similar results. Then, we numerically evolve Eqs. (\ref{eq:Hub}) and (\ref{eq:ddphi}) with (\ref{eq:pot3}) for PIV. Due to the symmetric behavior of the potential, the initial conditions at the bounce have the symmetry $(\phi_B,\dot{\phi}_B) \rightarrow (-\phi_B,-\dot{\phi}_B)$, and the results for NIV can be easily found by applying this symmetry. Furthermore, the initial conditions at the bounce are divided into two sub-cases; KED and PED, and are given by
\begin{equation}
\frac{\phi_B}{m_{Pl}} =
\begin{cases}
\in (\phi_{min}, -25.98), & \text{PED (slow-roll)},\cr
= \pm 25.97, & \text{KE=PE (slow-roll)}, \cr
\in (-25.96, 25.96), & \text{KED (slow-roll)},\cr
\in (25.98, \phi_{max}), & \text{PED (slow-roll)}, \cr
\end{cases}
\label{eq:mod3phiB}
\end{equation}
where $\phi_{max,\; min}$ are given by Eq. (\ref{eq:mod3phimin}). The numerical results for model 3 are presented in Fig. \ref{fig:mod3} with a set of KED and PED initial values at the bounce.
One of the important results of model 3 in the case $\alpha = 0.5 m_{Pl}^2$ and $c = 3.915 \times 10^{-7} m_{Pl}$ is that there is no non-slow-roll phase anywhere in the range of the inflaton field; see Fig. \ref{fig:mod3} and Eq. (\ref{eq:mod3phiB}). However, some of the initial conditions of $\phi_B$ provide fewer than 60 $e$-folds, as shown in Table \ref{tab:mod3}, where different inflationary parameters are presented.
From Table \ref{tab:mod3}, one also concludes that $N_{inf}$ grows as the value of $|{\phi}_B|$ increases. Thus, to get enough $e$-folds during the desired slow-roll inflation, the range of $\phi_B$ is restricted to (see Table \ref{tab:mod3})
\begin{eqnarray}
\frac{\phi_B}{m_{Pl}} =\begin{cases}
\in (\phi_{min}, -8.22), & N_{inf} \gtrsim 60, \cr
\in (-8.22, 3.75), & N_{inf} < 60, \cr
\in (3.75, \phi_{max}), & N_{inf} \gtrsim 60, \cr
\end{cases}
\label{eq:mod3phiB60}
\end{eqnarray}
where $\phi_{max, \; min}$ are given by Eq. (\ref{eq:mod3phimin}).
As mentioned previously, we also numerically studied other sets of ($\alpha, c$) that satisfy the Planck 2018 data, and found that they give the same results. Therefore, we shall not repeat the calculations again for these cases.
\begin{figure}[tbp]
\begin{center}
\begin{tabular}{ccc}
{\includegraphics[width=1.9in,height=1.65in,angle=0]{mod4KEaSR.pdf}} &
{\includegraphics[width=1.9in,height=1.6in,angle=0]{mod4KEwSR.pdf}} &
{\includegraphics[width=1.9in,height=1.6in,angle=0]{mod4KEepSR.pdf}}
\\
{\includegraphics[width=1.9in,height=1.65in,angle=0]{mod4KEaNSR.pdf}} &
{\includegraphics[width=1.9in,height=1.6in,angle=0]{mod4KEwNSR.pdf}} &
{\includegraphics[width=1.9in,height=1.6in,angle=0]{mod4KEepNSR.pdf}}
\\
{\includegraphics[width=1.9in,height=1.6in,angle=0]{mod4PEaSR.pdf}} &
{\includegraphics[width=1.9in,height=1.6in,angle=0]{mod4PEwSR.pdf}} &
{\includegraphics[width=1.9in,height=1.6in,angle=0]{mod4PEepSR.pdf}}
\end{tabular}
\end{center}
\caption{Results for model 4 [Eq.(\ref{eq:pot4})] with $\dot{\phi}_B>0$. Due to the symmetric nature of the potential (\ref{eq:pot4}), similar results can be obtained for $\dot{\phi}_B<0$.
In plotting this figure, we set $\alpha = 0.5 m_{Pl}^2$, $c = 2.818 \times 10^{-7} m_{Pl}$ and $m_{Pl}=1$. }
\label{fig:mod4}
\end{figure}
\begin{table}[tbp]
\caption{Results for model 4 [Eq.(\ref{eq:pot4})] with $\dot{\phi}_B > 0$, $\alpha = 0.5 m_{Pl}^2$ and $c = 2.818 \times 10^{-7} m_{Pl}$.}
\begin{center}
\resizebox{\textwidth}{!}{
\begin{scriptsize}
\begin{tabular}{cccccccccc}
\hline
$\phi_B/m_{Pl}$~~~ & Inflation~~~ & $t/t_{Pl}$~~~ & $\epsilon$~~ & $w$ ~~& $N_{inf}$ &~~~${w}^B$\\
\hline
$26.7$ ~~~& begin~~~& 0.11 ~~~& 4.5~~ & $-1/3$ ~~& ~~~& ~~~&\\
& slow-roll~~~& 1.22 ~~~& 0.080~~ & $-0.970$ ~~& 479.76 ~~~& $<0$\\
& end~~~& $1.25508 \times 10^7$ ~~~& 0.326~~ & $-1/3$ ~~& ~~~& ~~~& \\\\
5 ~~~& begin~~~& 2.25906 $\times 10^4$~~~& 0.999~~ & $-1/3$ ~~& ~~~& ~~~&\\
& slow-roll~~~& 6.25623$\times 10^4$ ~~~& 3.04$\times 10^{-5}$~~ & $-1$ ~~& 69.27 ~~~& $>0$\\
& end~~~& $1.21626 \times 10^7$ ~~~& 0.318~~ & $-1/3$ ~~& ~~~& ~~~& \\\\
4 ~~~& begin~~~& 3.83533 $\times 10^4$~~~& 0.999~~ & $-1/3$ ~~& ~~~& ~~~&\\
& slow-roll~~~& 9.9128$\times 10^4$ ~~~& 1.49$\times 10^{-3}$~~ & $-0.999$ ~~& 60.65 ~~~& $>0$\\
& end~~~& $3.18112 \times 10^7$ ~~~& 0.333~~ & $-1/3$ ~~& ~~~& ~~~& \\\\
3.5 ~~~& begin~~~& 5.00098 $\times 10^4$~~~& 0.999~~ & $-1/3$ ~~& ~~~& ~~~&\\
& slow-roll~~~& 1.14321$\times 10^5$ ~~~& 1.50$\times 10^{-2}$~~ & $-0.990$ ~~& 46.73 ~~~& $>0$\\
& end~~~& $1.30343 \times 10^7$ ~~~& 0.322~~ & $-1/3$ ~~& ~~~& ~~~& \\\\
$-8$ ~~~& begin~~~& 4.61463 $\times 10^4$~~~& 1.0~~ & $-1/3$ ~~& ~~~& ~~~&\\
& slow-roll~~~& 2.40478$\times 10^5$ ~~~& 2.99$\times 10^{-2}$~~ & $-0.980$ ~~& 45.14 ~~~&$>0$\\
& end~~~& $1.12755 \times 10^7$ ~~~& 0.285~~ & $-1/3$ ~~& ~~~& ~~~& \\\\
$-8.73$ ~~~& begin~~~& 2.89127 $\times 10^4$~~~& 1.0~~ & $-1/3$ ~~& ~~~& ~~~&\\
& slow-roll~~~& 1.48336$\times 10^5$ ~~~& 2.99$\times 10^{-2}$~~ & $-0.979$ ~~& 60.61 ~~~&$>0$\\
& end~~~& $1.32331 \times 10^7$ ~~~& 0.325~~ & $-1/3$ ~~& ~~~& ~~~& \\\\
$-9$ ~~~& begin~~~& 2.43324 $\times 10^4$~~~& 1.0~~ & $-1/3$ ~~& ~~~& ~~~&\\
& slow-roll~~~& 1.24506$\times 10^5$ ~~~& 2.99$\times 10^{-2}$~~ & $-0.980$ ~~& 64.18 ~~~& $>0$\\
& end~~~& $1.16183 \times 10^7$ ~~~& 0.308~~ & $-1/3$ ~~& ~~~& ~~~& \\
\hline
\end{tabular}
\end{scriptsize}}
\label{tab:mod4}
\end{center}
\end{table}
\subsection{Model 4}
\label{subsec:mod4}
Finally, we consider the case with the potential (\ref{eq:pot4}) (model 4). The evolution of this potential is shown in the lower right panel of Fig. \ref{fig:pot}. The potential is bounded below by zero ($V(\phi) \geq 0$), unbounded from above, and oscillates around the origin ($\phi = 0$). The behavior of this potential is symmetric with respect to $\phi=0$. In the large field limit ($\phi \rightarrow \pm \infty$), the critical energy density $\rho_c$ constrains the initial conditions of the inflaton field at the bounce to a range that depends on the values of $\alpha$ and $c$. The following combination of $\alpha$ and $c$ is compatible with the Planck 2018 data \cite{Planck2018} (see Appendix A)
\begin{eqnarray}
\alpha &=& 0.5 m_{Pl}^2, \qquad~~ c = 2.818 \times 10^{-7} m_{Pl}.
\label{eq:mod4alphac}
\end{eqnarray}
In this subsection, we shall investigate the dynamics of the pre-inflationary universe with these values of $\alpha$ and $c$ only for $\dot\phi_B >0$; the other possibilities ($\dot\phi_B <0$, as well as other sets of $\alpha$ and $c$) yield similar results. The corresponding values of $\phi_{max,\; min}$ at the bounce are $\pm27.2 m_{Pl}$. As in model 3, the potential of model 4 is symmetric; therefore, we shall not consider the NIV case, due to the symmetry $(\phi_B,\dot{\phi}_B) \rightarrow (-\phi_B,-\dot{\phi}_B)$. We numerically solve Eqs. (\ref{eq:Hub}) and (\ref{eq:ddphi}) with potential (\ref{eq:pot4}) for $\alpha = 0.5 m_{Pl}^2$ and $c = 2.818 \times 10^{-7} m_{Pl}$. The results are illustrated in Fig. \ref{fig:mod4}. We obtain a subset of initial conditions that does not provide the slow-roll inflation, as shown in the middle panels of Fig. \ref{fig:mod4}. The remaining cases (KED \& PED) are quite similar to those studied in model 3, so we shall not repeat the analysis here, but simply summarize the final results for various ranges of the initial conditions of $\phi_B$:
\begin{eqnarray}
\frac{\phi_B}{m_{Pl}} = \begin{cases}
\in (-27.2, -26.57), & \text{PED (slow-roll)},\cr
= \pm 26.56, & \text{KE=PE (slow-roll)}, \cr
\in (-26.55, -5.1), & \text{KED (slow-roll)},\cr
\in [-5, -0.1), & \text{KED (no slow-roll)},\cr
\in (-0.1, 26.55), & \text{KED (slow-roll)},\cr
\in (26.57, 27.2), & \text{PED (slow-roll)}.\cr
\end{cases}
\label{eq:mod4phiB}
\end{eqnarray}
The results of model 4 are shown in Fig. \ref{fig:mod4} and Table \ref{tab:mod4}. Again, we shall not explain the details of Fig. \ref{fig:mod4}, as the evolution is quite similar to model 3; however, we obtain a subset of initial conditions that does not provide the slow-roll phase. From Table \ref{tab:mod4}, the physically viable initial conditions of $\phi_B$ that generate enough $e$-folds for the desired slow-roll inflation are
\begin{eqnarray}
\frac{\phi_B}{m_{Pl}} = \begin{cases}
\in (4, 27.2), & N_{inf} \gtrsim 60, \cr
\in (-27.2, -8.73), &N_{inf} \gtrsim 60.\cr
\end{cases}
\label{eq:mod4phiB60}
\end{eqnarray}
Within these ranges, $N_{inf}$ always increases as $|{\phi}_B|$ grows.
\section{Phase portraits and desired slow-roll inflation}
\label{sec:port}
Let us investigate the phase spaces for the models under our considerations. First, we consider model 1 for $\alpha = 1 m_{Pl}^2$ and $c = 8.343 \times 10^{-7} m_{Pl}$.
In this case, as shown previously, no initial conditions in the entire range yield a desired slow-roll inflation with enough $e$-folds, which is inconsistent with the observational data, as shown explicitly in Table \ref{tab:mod1}. Hence, we shall not draw the phase portrait for model 1.
Second, we examine the phase portrait for model 2 with $\alpha = 1 m_{Pl}^2$ and $c = 4.074 \times 10^{-8} m_{Pl}$. In this case, we find inflationary and non-inflationary phases for different sets of $\phi_B$, as displayed in Figs. \ref{fig:mod2} and \ref{fig:port}. The left panel of Fig. \ref{fig:port} exhibits the evolution of the phase space trajectories in the $(\phi/m_{Pl}, \dot{\phi}/m_{Pl}^2)$ plane for both the PIV and NIV cases, and for both the KED and PED initial conditions. The initial data surface is semi-finite: $| \dot{\phi}_B |/m_{Pl}^2 < 0.91 $ and $\phi_B/m_{Pl} \in (-21.14, \infty)$, due to the shape of the potential (\ref{eq:pot2}). The solid (blue) trajectories correspond to the inflationary region that does not provide the desired slow-roll inflation, as the number of $e$-folds is not sufficient. The dashed (blue) trajectories exhibit the non-inflationary region. Only the red trajectories demonstrate the desired slow-roll inflation that is consistent with observations, that is, a slow-roll inflationary phase with enough $e$-folds. Likewise, the solid and dashed (blue) parts of the boundary surface are governed by the inflationary (not consistent with observations, as it does not generate sufficient $e$-folds) and non-inflationary phases, while the red surface is in good agreement with observations, as it produces at least 60 $e$-folds. From Eqs. (\ref{eq:mod2phiB}), (\ref{eq:mod2phiB60}) and the left panel of Fig. \ref{fig:port}, one can see that the region of the desired slow-roll inflation is smaller than the region of the non-inflationary phase, and also smaller than the part that does not give the desired slow-roll inflation. Hence, in this case only a small portion of the initial conditions produces the desired slow-roll inflation with sufficient $e$-folds. In the left panel of Fig. \ref{fig:port}, we show this small portion of the initial conditions, while the whole range is given by Eq. (\ref{eq:mod2phiB}).
Next, we carry out the phase space analysis for model 3 with $\alpha = 0.5 m_{Pl}^2$ and $c = 3.915 \times 10^{-7} m_{Pl}$. The phase portrait for this model is depicted in the middle panel of Fig. \ref{fig:port}. The initial data surface is totally compact: $| \dot{\phi}_B |/m_{Pl}^2 < 0.91 $ and $\phi_B/m_{Pl} \rightarrow \pm 26.58$, as the critical energy density $\rho_c$ puts a bound on the initial values of $\phi_B$. The red trajectories and surface generate the desired slow-roll inflation, which is compatible with observations, whereas the blue ones do not. The middle panel of Fig. \ref{fig:port} exhibits the evolution for PIV and NIV, and for the KED and PED initial values at the bounce. More precisely, it covers the whole phase space. Regions close to the boundary correspond to large energy densities, where the quantum effects dominate, while the low energy limit lies near the origin of the $(\phi/m_{Pl}, \dot{\phi}/m_{Pl}^2)$ plane. All curves start from the surface of the bounce ($\rho=\rho_c$) and move towards the origin, which is a single stable point. In the entire phase space, the blue region is much smaller than the red one. Therefore, in this model a substantial fraction of initial values of the inflaton field produces the desired slow-roll inflation, and the occurrence of a slow-roll inflation is practically inevitable.
Finally, for model 4 with $\alpha = 0.5 m_{Pl}^2$ and $c = 2.818 \times 10^{-7} m_{Pl}$, the phase portrait is presented in the right panel of Fig. \ref{fig:port}. In model 4, the boundary surface is also finite: $| \dot{\phi}_B |/m_{Pl}^2 < 0.91 $ and $\phi_B/m_{Pl} \rightarrow \pm 27.2$. In this case, we get non-inflationary phases. The rest of the analysis is quite similar to model 3, so we shall not repeat it.
\begin{figure}[tbp]
\begin{center}
\begin{tabular}{ccc}
{\includegraphics[width=1.9in,height=2in,angle=0]{mod2p.pdf}} &
{\includegraphics[width=1.9in,height=2in,angle=0]{mod3p.pdf}} &
{\includegraphics[width=1.9in,height=2in,angle=0]{mod4p.pdf}}
\end{tabular}
\end{center}
\caption{This figure shows the phase portraits of models 2 (left), 3 (middle) and 4 (right) in the $(\phi/m_{Pl}, \dot{\phi}/m_{Pl}^2)$ plane. All trajectories (with arrowheads) start at the bounce, at which $\rho=\rho_c$ (boundary surface without arrowheads). The red trajectories generate the desired slow-roll inflation, while the blue (solid) ones do not. The dashed (blue) trajectories demonstrate the case without inflation. In model 2 (left; $\alpha = 1 m_{Pl}^2$ and $c = 4.074 \times 10^{-8} m_{Pl}$), the initial data are in the range $\phi/m_{Pl} \in (\phi_{min}, \infty)$ (see Eq.(\ref{eq:mod2phiB})), but here we show only a part of it. Since the left panel extends from $\phi_{min}$ to $\infty$, the length of the blue curves (solid and dashed) is very long in comparison with the red ones; therefore, only a small portion of the initial conditions yields the desired slow-roll inflation. For models 3 and 4, the initial surface extends to $\phi/m_{Pl} \rightarrow \pm 26.58$ (middle panel; $\alpha = 0.5 m_{Pl}^2$ and $c = 3.915 \times 10^{-7} m_{Pl}$) and $\phi/m_{Pl} \rightarrow \pm 27.2$ (right panel; $\alpha = 0.5 m_{Pl}^2$ and $c = 2.818 \times 10^{-7} m_{Pl}$), respectively. In the middle and right panels, the lengths of the blue trajectories are very short in comparison with the red ones. As a result, the slow-roll inflation is almost inevitable. }
\label{fig:port}
\end{figure}
\section{Conclusions}
\label{sec:conc}
In this paper, we studied the dynamics of the pre-inflationary universe with a family of $\alpha-$attractor potentials for $\dot{\phi}_B > 0$ in the framework of LQC. First, we investigated numerically the background evolution for model 1 with $\alpha = 1 m_{Pl}^2$ and $c = 8.343 \times 10^{-7} m_{Pl}$.
In this case, the initial conditions at the bounce are dominated only by KE, as PED initial conditions do not exist anywhere in the bouncing phase. Similar results were obtained for the $T$ model in Ref. \cite{alamPRD2018}.
The numerical results for model 1 are presented in Fig. \ref{fig:mod1}, where $a(t)$, $w(\phi)$ and $\epsilon_H$ are displayed for several values of ${\phi}_B$. From the numerical evolution of $w(\phi)$, one can see that the evolution of the universe prior to reheating is split into three different phases: {\it bouncing, transition and slow-roll inflation}. During the bouncing phase, the evolution of $a(t)$ is universal for a wide range of initial conditions, and is well described by the analytical solution (\ref{eq:a}), as shown in the left panel of Fig. \ref{fig:mod1}. In this phase, $w(\phi) \simeq +1$; it then decreases quickly from $w(\phi) \simeq +1$ to $w(\phi) \simeq -1$ during the transition phase, and stays at $w(\phi) \simeq -1$ in the slow-roll phase. The period of the transition phase is very short in comparison with the other two phases. We also found the number of $e$-folds during the slow-roll inflation, shown in Table \ref{tab:mod1}. For model 1, we always get fewer than 60 $e$-folds during the slow-roll inflationary phase for any given value of ${\phi}_B$ in the range. Hence, this model is not observationally favorable.
Second, we studied numerically the evolution of the background for model 2 with $\alpha = 1 m_{Pl}^2$ and $c = 4.074 \times 10^{-8} m_{Pl}$. For these values of $\alpha$ and $c$, the range of ${\phi}_B$ is divided into KED and PED initial conditions, and the numerical results are presented in Fig. \ref{fig:mod2}. For the KED case (except for a subset), the evolution of the scale factor $a(t)$ during the bouncing phase shows a universal feature, that is, it does not depend on the initial conditions and is well described by the analytical solution (\ref{eq:a}). During the bouncing phase, the EoS $w(\phi) \simeq +1$; it drastically decreases from $+1$ to $-1$ in the transition phase. Soon after, the universe enters the slow-roll phase, where $\epsilon_H$ is still large initially, but quickly declines to zero, and the slow-roll inflation takes place, as shown in the upper panels of Fig. \ref{fig:mod2}. A subset of the KED initial conditions does not lead to inflation, as shown in the middle panels of Fig. \ref{fig:mod2}. In the case of the PED initial conditions, the universality of $a(t)$ is lost, and the bouncing and transition phases no longer exist; nevertheless, the slow-roll inflation can still be achieved for a long period. We also show other parameters in Table \ref{tab:mod2}, where the physically viable initial conditions of ${\phi}_B$, which produce enough $e$-folds, are identified. From Table \ref{tab:mod2}, we can see that $N_{inf}$ decreases as ${\phi}_B$ grows.
On the other hand, for models 3 and 4, we examined numerically the background evolutions with $\alpha = 0.5 m_{Pl}^2$ and $c = 3.915 \times 10^{-7} m_{Pl}$ (model 3) and $\alpha = 0.5 m_{Pl}^2$ and $c = 2.818 \times 10^{-7} m_{Pl}$ (model 4), respectively. The results are shown in Figs. \ref{fig:mod3} and \ref{fig:mod4}. The whole range of the initial values of ${\phi}_B$ provides the slow-roll inflationary phase for model 3, whereas in model 4, a subset of the initial conditions exists without inflation.
The number of $e$-folds $N_{inf}$ and other inflationary parameters are displayed in Tables \ref{tab:mod3} and \ref{tab:mod4}, where $N_{inf}$ increases as the absolute value of ${\phi}_B$ grows.
Finally, we presented the phase portraits for models 2, 3 and 4 in Fig. \ref{fig:port}. We did not display the phase portrait for model 1, as all initial conditions of the inflaton field provide fewer than 60 $e$-folds, which is inconsistent with observations. For model 2 with $\alpha = 1 m_{Pl}^2$ and $c = 4.074 \times 10^{-8} m_{Pl}$, the quantum bounce surface is semi-finite: $| \dot{\phi}_B |/m_{Pl}^2 < 0.91 $ and $\phi_B/m_{Pl} \in (-21.14, \infty)$, whereas for models 3 and 4, the bounce surface is compact. In particular, in model 3 with
$\alpha = 0.5 m_{Pl}^2$ and $c = 3.915 \times 10^{-7} m_{Pl}$, we found $| \dot{\phi}_B |/m_{Pl}^2 < 0.91 $ and $\phi_B/m_{Pl} \rightarrow \pm 26.58$, while for model 4 with $\alpha = 0.5 m_{Pl}^2$ and $c = 2.818 \times 10^{-7} m_{Pl}$, we obtained $| \dot{\phi}_B |/m_{Pl}^2 < 0.91 $ and $\phi_B/m_{Pl} \rightarrow \pm 27.2$. In Fig. \ref{fig:port}, the dashed blue trajectories correspond to the case without inflation, and the solid trajectories (red and blue) can lead to the slow-roll inflation. However, only the red curves generate sufficient $e$-folds that are compatible with the Planck 2018 data, and not the blue ones \cite{Planck2018}.
\acknowledgments
A.W. would like to thank ITPC - ZJUT for their hospitality during the summer of 2019, in which part of the work was done. His research is supported in part by the National Natural Science Foundation of China (NNSFC) with the Grants Nos. 11975203 and 11675145. M. Al Ajmi is supported by Sultan Qaboos University under the Internal Grant (IG/SCI/PHYS/19/02). Part of the work is also supported by the Ministry of Education and Science, the Republic of Kazakhstan, with Grant No. 0118RK00693. |
1712.01663 | \section{Introduction \label{introduction}}
As remarked in \cite{Lookman2016}, a key element of developing advanced materials is to learn from materials knowledge and available materials data to guide the next experiments or calculations in order to focus on materials with targeted properties. Traditionally, materials knowledge has been discovered by experimental studies. In the last few decades, the knowledge has also been discovered by a conventional approach, called computational materials science, whose scope is to model or predict the behavior of materials based on their composition, micro-structure, process history, and interactions.
Recently, the development of materials informatics \cite{Agrawal2016,Rodgers2006}, known as a combination of materials science and data science, has opened up a new opportunity for accelerating the discovery of new materials knowledge. In the literature, data science \cite{Dhar2013} is a field of study that employs a wide range of data-driven techniques from a large number of research fields, such as applied mathematics, statistics, computational science, information science, and computer science, in order to understand and analyze data. In materials informatics, data-driven techniques are applied to existing materials data for the purpose of automatically discovering new materials knowledge, such as hidden features, hidden chemical and new physical rules, and new patterns \cite{ Ghiringhelli2015,Isayev2015,Yousef2012}. Remarkably, materials informatics is expected not only to provide foundations for a new paradigm of materials discovery \cite{Rajan2015}, but also to become the next-generation approach to exploring new materials \cite{Takahashi2016}.
Over the years, a large volume of materials data has been generated \cite{Lookman2015}; these data are commonly described by a set of atoms with their coordinates and periodic unit cell vectors, and are categorized as unstructured data \cite{Lam2017}. In practice, data-driven techniques can hardly be applied directly to materials data. Before applying data-driven techniques, materials data have to be transformed into new representations (or descriptors). The representations need to reflect the nature of materials and the actuating mechanisms of chemical/physical phenomena. In addition, operations such as comparison and calculation must be possible on the representations.
So far, various methods for representing materials have been developed. Behler and co-workers \cite{Behler2011, Eshet2010, Eshet2012} utilized atom-distribution-based symmetry functions to represent the local chemical environment of atoms and employed a multilayer perceptron to map this representation to atomic energy. The arrangement of structural fragments has also been used to represent materials in order to predict the physical properties of molecular and crystalline systems \cite{Pilania2013}. Isayev used the band structure and density of states fingerprint vectors as a representation of materials to visualize material space \cite{Isayev2015}. Rupp and co-workers developed a representation known as the Coulomb matrix for the prediction of atomization energies and formation energies \cite{Faber2015, Matthias2015,Matthias2012}. In \cite{Lam2017}, the authors pointed out that the distribution of valence orbitals (electrons) of atoms in materials is important information that should be included in the representation of materials; they also proposed a representation method, called the orbital-field matrix, which exploits this distribution.
It is well known that the properties of almost all materials are determined by chemical bonds, which may result from the electrostatic force of attraction between atoms with opposite charges, or from the sharing of electrons. In addition, chemical bonds hold an enormous amount of energy, and the building and breaking of chemical bonds is part of the energy cycle. Therefore, in this research, we aim at developing a new representation method that is mainly based on chemical bonds. In short, the main contributions of this research are (1) a new method to exploit the chemical bonds of atoms in materials and (2) a new method to utilize the local structures of a material by adopting a statistical point of view.
\section{The proposed representation method}
Generally, a material is composed of chemical bonds that connect atoms together. Let us consider a material, denoted by $X$, which consists of $N$ chemical bonds denoted by $B_1, B_2,..., B_N$. Assume that a chemical bond $B_{k}$ with $1 \leq k \leq N$ is generated by a connection between two atoms $P$ and $Q$, and that this bond is surrounded by several other atoms, each of which can connect to atom $P$ or atom $Q$, as illustrated in Figure \ref{fig:bondkth}. The surrounding atoms generate a chemical environment that holds chemical bond $B_k$ in a stable state.
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{bond.png}
\caption{\label{fig:bondkth} Chemical bond $B_k$ and its chemical environment.}
\end{figure}
Chemical bond $B_k$ and its chemical environment can be considered as a unified unit corresponding to a local structure of material $X$. In other words, material $X$ can be separated into $N$ local structures corresponding to $N$ chemical bonds and their chemical environments.
Atoms are represented by 32-dimensional vectors, called one-hot vectors \cite{Lam2017}, which are generated by using the set of valence subshell orbitals $D = \{s^1, s^2, p^1, p^2, ..., p^6, d^1, d^2, ..., d^{10}, f^1, f^2, ..., f^{14}\}$ (e.g., $p^2$ indicates that the valence $p$ orbital holds 2 electrons in the electron configuration). In addition, we adopt the orbital-field matrix method \cite{Lam2017} for representing the two atoms $P$ and $Q$. Let $M_p$ and $M_q$ denote the two orbital-field matrices corresponding to atoms $P$ and $Q$, respectively. The two matrices $M_p$ and $M_q$ are defined by
\begin{equation}
\begin{split}
M_p= \vec{P}^T\times \vec{E_p};\\
M_q= \vec{Q}^T \times \vec{E_q},\\
\end{split}
\end{equation}
where $\vec{P}$ and $\vec{Q}$ are two one-hot vectors corresponding to atoms $P$ and $Q$, and $\vec{E_p}$ and $\vec{E_q}$ are two vectors representing chemical environments of these two atoms \cite{Lam2017}. Two vectors $\vec{E}_p$ and $\vec{E}_q$ are defined by:
\begin{equation}
\begin{split}
\vec{E}_{p} = \sum_{i=1}^{N_p} v_{i} \times \vec{O}_i;\\
\vec{E}_{q} = \sum_{i=1}^{N_q} v_{i} \times \vec{O}_i,
\end{split}
\end{equation}
where $N_p$ and $N_q$ are the total numbers of atoms connecting to atoms $P$ and $Q$, respectively, and $v_{i}$ is a coefficient representing the importance of neighboring atom $O_i$. The weight coefficient $v_{i}$ is defined by
\begin{equation}
v_i=\frac{\theta_i}{\theta_{max}} \times \frac{1}{r_i^2},
\end{equation}
where $\theta_i$ is the solid angle determined by the face of the Voronoi polyhedron \cite{Aurenhammer1991} that separates atom $O_i$ and its connected atom (atom $P$ or atom $Q$), $\theta_{max}$ is the maximum solid angle among the solid angles corresponding to the atoms that connect to the connected atom, and $r_i$ is the distance between atom $O_i$ and its connected atom.
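A minimal sketch (ours) of Eqs. (2) and (3) follows: given the one-hot vectors, Voronoi solid angles and distances of the atoms connected to a central atom (here assumed to be precomputed, e.g. with pymatgen's Voronoi analysis), the environment vector is a weighted sum of the neighbor one-hot vectors.
\begin{verbatim}
# A sketch of Eqs. (2)-(3); inputs are assumed precomputed per neighbor.
import numpy as np

def environment_vector(onehots, solid_angles, distances):
    # onehots: (N, 32) one-hot vectors O_i of the neighboring atoms
    # solid_angles: (N,) Voronoi solid angles theta_i
    # distances: (N,) distances r_i to the central atom
    theta = np.asarray(solid_angles, dtype=float)
    r = np.asarray(distances, dtype=float)
    v = (theta / theta.max()) / r**2             # weights v_i, Eq. (3)
    return v @ np.asarray(onehots, dtype=float)  # E = sum_i v_i O_i, Eq. (2)
\end{verbatim}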
Chemical bond $B_k$ and its chemical environment are then represented by a matrix $U_k$ as follows:
\begin{equation}
U_{k} = w_p \times M_p+ w_q \times M_q,
\end{equation}
where $w_p$ and $w_q$ are coefficients representing the importance of atoms $P$ and $Q$ in chemical bond $B_k$, respectively. Coefficients $w_p$ and $w_q$ should be selected according to the specific application; here, we propose to compute them by the following equation:
\begin{equation}
w_p= w_q=\frac{ log_{10}(Z_p \times Z_q) }{r_{p,q}^n},
\end{equation}
where $Z_p$ and $Z_q$ are the atomic numbers of two atoms $P$ and $Q$ respectively, and $r_{p,q}$ is the distance between these two atoms.
Because material $X$ contains $N$ chemical bonds, this material is separated into $N$ local structures corresponding to matrices $U_1, U_2,..., U_N$. From a statistical point of view, the set containing the number of local structures and the mean and standard deviation of the local structures can be used to describe material $X$. Here, the mean and standard deviation of the local structures, denoted by $\bar{U}$ and $S$, are defined as follows:
\begin{equation}
\begin{split}
\bar{U} = \{\bar{u}_{i,j}\} \text{\ with \ }
\bar{u}_{i,j} = \frac{1}{N} \times \sum_{k=1}^{N} u^{(k)}_{i,j};\\
%
S = \{s_{i,j}\} \text{\ with \ } s_{i,j}=\sqrt{\frac{1}{N} \times \sum_{k=1}^{N} \left( u^{(k)}_{i,j}-\bar{u}_{i,j}\right)^2};\ \\
\text{\ where \ }
U_k = \{ u^{(k)}_{i,j} \} \text{ for } k = \overline{1,N} .
\end{split}
\end{equation}
We propose using this set to represent material $X$. Furthermore, in order to apply data-driven techniques, the representation of material $X$ needs to be transformed into a vector or matrix. Therefore, the mean and standard deviation matrices are raveled (flattened) and then combined with the number of chemical bonds in order to form a vector. In other words, material $X$ is represented by a vector as follows:
\begin{equation}
X=(N,\bar{u}_{1,1},\bar{u}_{1,2},...,\bar{u}_{32,31},\bar{u}_{32,32},s_{1,1},s_{1,2},...,s_{32,31},s_{32,32}).
\end{equation}
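A minimal sketch of building this $1 + 2 \times 32 \times 32 = 2049$-dimensional descriptor from a list of bond matrices might look as follows (our own illustration; \texttt{np.std} with its default $1/N$ normalization matches the definition of $S$ above):
\begin{verbatim}
import numpy as np

def material_descriptor(bond_matrices):
    # bond_matrices: list of N arrays of shape (32, 32)
    u = np.stack(bond_matrices)        # shape (N, 32, 32)
    n_bonds = u.shape[0]
    u_mean = u.mean(axis=0)            # mean matrix
    u_std = u.std(axis=0)              # std matrix S (1/N normalization)
    return np.concatenate(([n_bonds], u_mean.ravel(), u_std.ravel()))
\end{verbatim}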
Let us consider two materials represented as $X=\{x_i\}$ and $Y=\{y_i\}$, respectively. One can employ various types of distance measures for quantifying the similarity between these two materials, such as those listed below (a short implementation sketch follows the list):
\begin{itemize}
\item[-] Euclidean distance \cite{Deza2009}, denoted by $d_{eucl}$:
\begin{equation}
d_{eucl}(X, Y) = \sqrt{\sum_{i}(x_{i} - y_{i})^2}.
\end{equation}
\item[-] Manhattan distance \cite{Krause1987}, denoted by $d_{man}$:
\begin{equation}
d_{man}(X, Y) = \sum_{i}|x_{i} - y_{i}|.
\end{equation}
\item[-] Cosine distance \cite{Singhal2001}, denoted by $d_{cos}$:
\begin{equation}
d_{cos}(X, Y) = 1 - \frac{\sum_{i}{x_{i} \times y_{i}}}{\sqrt{\sum_{i}x_{i}^2} \times \sqrt{\sum_{i}y_{i}^2}}.
\end{equation}
\item[-] Bray-Curtis distance \cite{Bray1957}, denoted by $d_{bar}$:
\begin{equation}
d_{bar}(X, Y) = \frac{\sum_{i}|x_{i} - y_{i}|}{\sum_{i}(|x_{i}| + |y_{i}|)}.
\end{equation}
\item[-] Canberra distance \cite{Lance1966}, denoted by $d_{can}$:
\begin{equation}
d_{can}(X, Y) = \sum_{i}\frac{|x_{i} - y_{i}|}{|x_{i}| + |y_{i}|}.
\end{equation}
\end{itemize}
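These five measures can be implemented directly, as in the following sketch of our own that transcribes the equations above; SciPy's \texttt{scipy.spatial.distance} module also provides them as \texttt{euclidean}, \texttt{cityblock}, \texttt{cosine}, \texttt{braycurtis}, and \texttt{canberra}.
\begin{verbatim}
import numpy as np

def d_eucl(x, y):
    return np.sqrt(np.sum((x - y) ** 2))

def d_man(x, y):
    return np.sum(np.abs(x - y))

def d_cos(x, y):
    return 1.0 - np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))

def d_bar(x, y):
    return np.sum(np.abs(x - y)) / np.sum(np.abs(x) + np.abs(y))

def d_can(x, y):
    num, den = np.abs(x - y), np.abs(x) + np.abs(y)
    mask = den > 0                 # skip 0/0 coordinates
    return np.sum(num[mask] / den[mask])
\end{verbatim}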
\section{Experiment}
To evaluate the new representation method, we applied it to a materials informatics application that aims at predicting atomization energies by using machine learning \cite{Murphy2012}. For analyzing the materials data in this application, we selected two regression algorithms \cite{Murphy2012}: k-nearest neighbors (KNN) \cite{Murphy2012} and kernel ridge (KR) \cite{Murphy2012}. Additionally, we selected the QM7 data set \cite{Matthias2012} for the application. This data set contains 7165 materials (molecules), each of which is composed of a maximum of 23 atoms including C, N, O, S, and H. Atomic coordinates are given in a Cartesian coordinate system. Information about the Coulomb matrix and atomization energies of the materials is available in the data set; the atomization energies range from -2000 to -800 $kcal/mol$. To determine the chemical bonds between atoms in materials, we employed pymatgen \cite{pymatgen2013}, an open-source library for analyzing materials; however, Voronoi polyhedra \cite{pymatgen2013} could not be determined for 250 materials, so they were eliminated from the data set. As a consequence, 6195 materials were actually used in the experiments.
For comparison, we selected two state-of-the-art representation methods, orbital-field matrix \cite{Lam2017} and Coulomb matrix (eigenspectrum) \cite{Montavon2012,Matthias2012}, as baselines. For measuring the performance of predicting atomization energies, we used three well-known assessment metrics \cite{Murphy2012}: mean absolute error ($MAE$), root-mean-square error ($RMSE$), and coefficient of determination ($R^2$). Moreover, we applied 5 repetitions of 10-fold cross-validation in the experiments.
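The evaluation protocol can be sketched with scikit-learn as below. This is our own reconstruction of the setup described in the text, not the authors' code; the descriptor matrix \texttt{X}, the target vector \texttt{y}, and the file names are hypothetical placeholders assumed to be prepared beforehand.
\begin{verbatim}
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import RepeatedKFold, cross_validate

X = np.load("descriptors.npy")  # (n_materials, 2049), hypothetical file
y = np.load("energies.npy")     # atomization energies in kcal/mol

models = {
    "KNN": KNeighborsRegressor(n_neighbors=5, metric="euclidean"),
    "KR": KernelRidge(kernel="laplacian"),
}
cv = RepeatedKFold(n_splits=10, n_repeats=5, random_state=0)
for name, model in models.items():
    scores = cross_validate(model, X, y, cv=cv,
        scoring=("neg_mean_absolute_error",
                 "neg_root_mean_squared_error", "r2"))
    print(name,
          -scores["test_neg_mean_absolute_error"].mean(),      # MAE
          -scores["test_neg_root_mean_squared_error"].mean(),  # RMSE
          scores["test_r2"].mean())                            # R^2
\end{verbatim}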
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth]{n.png}
\caption{\label{fig:n_R_2} The impact of the bond-length exponent $n$ on prediction performance according to the $R^2$ metric.}
\end{figure}
In order to measure the impact of the distances between atoms in chemical bonds (i.e., the lengths of chemical bonds, weighted through the exponent $n$) on the performance of predicting atomization energies, we chose the KNN learning algorithm with the number of nearest neighbors $K=5$ and the Euclidean distance. The performance according to the $R^2$ metric is presented in Figure \ref{fig:n_R_2}. As we can see in this figure, the performance increases when $n<5$ and then decreases when $n \geq 5$. It can also be seen that the application achieves high prediction accuracy for values of $n$ from 3 to 5.
\begin{figure}[ht]
\centering
\includegraphics[width=0.49\textwidth]{knn_chemical_bond}
\includegraphics[width=0.49\textwidth]{krr_chemical_bond}
(a)\hspace{0.45\textwidth}(b)
\caption{\label{fig:knn_krr_chemical_bond} Comparison of predicted atomization energies by using KNN (part a) and KR (part b) learning algorithms and reference atomization energies calculated by using DFT.}
\end{figure}
Next, we measured the prediction performance of the proposed representation method with $n=4$ and both learning algorithms, KNN and KR. For KNN, we selected $K=5$ and the Euclidean distance, and for KR, we selected the Laplacian kernel \cite{Murphy2012}. The results of the prediction are illustrated in Figure \ref{fig:knn_krr_chemical_bond}. In this figure, parts (a) and (b) show the prediction performance using the KNN and KR learning algorithms, respectively. As we can observe, the performance of KR is better than that of KNN.
\begin{table}[ht]
\centering
\small
\caption{Cross-validated $MAE$, $RMSE$ and $R^2$ in the prediction of the atomization energies obtained by using learning algorithm KNN with the selected distance measurement methods.}
\label{tab:KNN}
\def\arraystretch{1.6}
\begin{threeparttable}
\begin{tabular}{c||c|c|c||c|c|c||c|c|c} \hline
Distance & \multicolumn{3}{c||}{$MAE$} & \multicolumn{3}{c||}{$RMSE$} & \multicolumn{3}{c}{$R^2 $}\\ \cline{2-10}
measure & (*) & (**) & (***)& (*) & (**) & (***) & (*) & (**) & (***) \\ \hline \hline
Euclidean & \textbf{12.877} & 14.411 & 78.721 & \textbf{24.071} & 30.015 & 102.528 & \textbf{0.988} & 0.981 & 0.790 \\ \hline
Manhattan & \textbf{11.447} & 14.102 & 68.664 & \textbf{22.967} & 30.218 & 90.181 & \textbf{0.989} & 0.980 & 0.838 \\ \hline
Cosine & \textbf{26.690} & 42.836 & 85.885 & \textbf{55.503} & 97.061 & 111.835& \textbf{0.934} & 0.798 & 0.751 \\ \hline
Bray-Curtis & \textbf{11.684} & 14.346 & 68.829 & \textbf{23.665} & 30.839 & 90.347 & \textbf{0.988} & 0.980 & 0.837 \\ \hline
Canberra & 71.527 & 47.010 & \textbf{18.832} & 110.528& 72.887 & \textbf{25.526} & 0.738 & 0.886 & \textbf{0.987} \\ \hline
\end{tabular}
\begin{tablenotes}\footnotesize
\item[(*) ] Chemical bond-based
\item[(**) ] Orbital-field matrix
\item[(***) ] Coulomb matrix (eigenspectrum)
\end{tablenotes}
\end{threeparttable}
\end{table}
To compare the proposed representation method with the two selected baselines, we also selected $n=4$. The results of the comparison are summarized in Tables \ref{tab:KNN} and \ref{tab:KRR}. In these tables, each assessment metric for a representation method is given in a column, and the bold values indicate the best performance in each row according to the corresponding metric. As detailed in Table \ref{tab:KNN}, the proposed representation method outperforms the two baselines for the first four distance measures, while the Coulomb matrix representation is more effective than the proposed method and the other baseline for the Canberra distance. In addition, as can be seen in Table \ref{tab:KRR}, the proposed method achieves the best performance according to the $MAE$ criterion, and the Coulomb matrix representation obtains the best performance according to the $RMSE$ and $R^2$ criteria. Nevertheless, as observed in Table \ref{tab:KRR}, the performance of the proposed method is comparable with that of the Coulomb matrix representation.
\begin{table}[ht]
\centering
\caption{Cross-validated $MAE$, $RMSE$ and $R^2$ in the prediction of the atomization energies obtained by learning algorithm KR.}
\label{tab:KRR}
\def\arraystretch{1.6}
\begin{threeparttable}
\begin{tabular}{c||c|c|c||c|c|c||c|c|c} \hline
\multirow{2}{*}{Kernel} & \multicolumn{3}{c||}{$MAE$} & \multicolumn{3}{c||}{$RMSE$} & \multicolumn{3}{c}{$R^2 $}\\ \cline{2-10}
& (*) & (**) & (***)& (*) & (**) & (***) & (*) & (**) & (***) \\ \hline \hline
Laplacian & \textbf{9.934}& 13.942 & 9.960 & 15.106 &24.769 & \textbf{13.886} &0.995 & 0.987 & \textbf{0.996} \\ \hline
\end{tabular}
\begin{tablenotes}\footnotesize
\item[(*) ] Chemical bond-based
\item[(**) ] Orbital-field matrix
\item[(***) ] Coulomb matrix (eigenspectrum)
\end{tablenotes}
\end{threeparttable}
\end{table}
\section{Conclusion}
In this paper, we have proposed a new method for representing materials in materials informatics applications. This method focuses on exploiting information about chemical bonds among atoms in materials and also inherits the benefit of the orbital-field matrix representation, which is based on the distribution of valence shell electrons. Additionally, we have demonstrated that different similarity measures can be integrated with the proposed method. Note that the proposed method can be applied to a large diversity of atomic compositions and structures, and it facilitates learning and predicting targeted properties of molecular and crystalline systems.
In the experiment, the proposed method was tested with an application that aims to predict atomization energies; the results indicate that the proposed method is more effective in most cases than the two selected baselines. In the near future, we plan to further evaluate the proposed method by using different materials data sets as well as other materials informatics applications.
\bibliographystyle{plain} |
2302.11600 | \section{Introduction} \label{intro}
From the gravitational form factors (GFF) of the energy-momentum tensor, it is found that there is a correspondence between the decomposition of the rest energy and the equilibrium condition.
We shall explore the roles the trace anomaly plays in such a correspondence and point out the analogy among hadrons, vortices in type-II superconductors and the cosmological constant. The different ways pressure acts in gauge theories and in general relativity are also discussed. Finally, the surprising spatial distribution of the trace anomaly in the pion resolves a pion mass puzzle and, at the same time, confirms that the conformal (scale) symmetry breaking is intricately linked to chiral symmetry breaking.
\section{Mass and Rest Energy} \label{mass}
Einstein's equation $E_0 = mc^2$ shows that the mass and the rest energy are equal. However, this does
not imply that their expressions in terms of their components or their origins are the same. In fact, many of their attributes are different. For one thing, the mass is a Lorentz scalar
while the energy is a component of the 4-momentum vector. In the example of $e^+ e^-$ annihilation to two photons, $e^+ e^- \longrightarrow \gamma\gamma$, it is pointed out that the mass of the two-photon system deduced from the rest energy is $2 m_e$, not the sum of the two photon masses~\cite{Okun:1991nr,Okun:2000kf}. This shows that, while momentum and energy are additive, the mass of a system is not in general the sum of its constituents' masses. When there is mass there is energy, but not vice versa.
The distinction between the mass and rest energy in QCD can be demonstrated through the energy-momentum tensor (EMT). From the forward matrix element of the EMT
\begin{equation}
\langle P|T^{\mu\nu}|P\rangle = 2 P^{\mu}P^{\nu},
\end{equation}
the hadron mass can be obtained from the trace of the EMT. It is known that the trace of the EMT in QCD has an anomaly after renormalization~\cite{Chanowitz:1972vd,Crewther:1972kn,Chanowitz:1972da,Collins:1976yq},
\begin{equation} \label{trace}
T^{\mu}_{\mu} = T_{a\,\, \mu}^{\mu} + T_{q\,\, \mu}^{\mu},
\end{equation}
where $T_{a\,\, \mu}^{\mu}$ is the trace anomaly and $T_{q\,\, \mu}^{\mu}$ the sigma term from the quark condensates. They have the
expressions
\begin{eqnarray} \label{trace_q,g}
T_{a\,\, \mu}^{\mu} = \frac{\beta(g)}{2g} G^{\alpha\beta} G_{\alpha\beta}
\label{trace_g} + \sum_f \gamma_m (g)\, m_f \bar{\psi}_f \psi_f, \hspace{1cm}
T_{q\,\, \mu}^{\mu} = \sum_f m_f \bar{\psi}_f \psi_f, \label{trace_q}
\end{eqnarray}
and both are renormalization group invariant. Thus,
\begin{equation} \label{invariant_M}
M=\frac{\langle P| \int d^3 \vec{x}\, \gamma T^{\mu}_{\mu}(x)|P\rangle}{\langle P|P\rangle}
= \frac{\langle P| \int d^3 \vec{x}\, \gamma (T_{a\,\, \mu}^{\mu}(x) + T_{q\,\, \mu}^{\mu}(x))|P\rangle}{\langle P|P\rangle}
\end{equation}
This shows that the hadron mass is both scale and frame independent, just as expected for the mass which is a scalar.
For QCD with 2+1 light flavors, the $u$ and $d$ contribution to the $T_{q\,\, \mu}^{\mu}$, i.e., the pion-nucleon sigma
term, is $\sigma_{\pi N} = \frac{m_u+m_d}{2} \langle P|\bar{u}u + \bar{d}d|P\rangle_{\vec{P} =0}/2M$ = 39.7 (3.6) MeV from the FLAG (Flavor Lattice Averaging Group) average~\cite{FlavourLatticeAveragingGroupFLAG:2021npn} and the strangeness sigma term is $\sigma_s = m_s \langle P|\bar{s}s |P\rangle_{\vec{P}=0}/2M$ = 40.2(3.9) MeV from a lattice calculation~\cite{Yang:2015uis}. Together, they account for $\sim$ 8.5(8)\% of the nucleon mass. The rest of the nucleon mass is due to the trace anomaly.
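As a quick arithmetic check of the quoted fraction (our own back-of-envelope, taking the average nucleon mass to be $\sim 939$ MeV):
\begin{verbatim}
sigma_piN, sigma_s = 39.7, 40.2     # MeV, quoted above
M_N = 938.9                         # MeV, average nucleon mass
print((sigma_piN + sigma_s) / M_N)  # ~0.085, i.e. ~8.5%
\end{verbatim}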
On the other hand, the decomposition of the rest energy can be obtained from the Hamiltonian and the gravitational form factor (GFF). The Belinfante form of the EMT is a symmetric rank two tensor,
which can be separated into traceless and trace components in irreducible representations~\cite{Ji:1994av,Ji:1995sv}
\begin{equation} \label{trace_separation}
T^{\mu\nu} = \overline{T}^{\mu\nu} + \frac{1}{4} g^{\mu\nu} T_{\rho}^{\rho}.
\end{equation}
So far, this separation is
scale and scheme independent. $\bar{T}^{\mu\nu}$ can be further split into the quark and glue parts. In this case,
the Hamiltonian, being the spatial integral of $T^{00}$, i.e., $ H = \int d^3\vec{x}\, T^{00}(x)$, can be written as
\begin{equation} \label{H_3-term}
H = H_q + H_g + \frac{1}{4} (H_{a} + H_{m}),
\end{equation}
where
\begin{eqnarray}
H_q &=& \int d^3\vec{x}\, (\frac{i}{4} \sum_f \bar{\psi}_f \gamma^{\{0}\!\stackrel{\leftrightarrow}{D}\!{}^{0\}}\psi_f
- \frac{1}{4} T_{q\, \mu}^{\mu}), \label{H_q}\\
H_g &=& \int d^3\vec{x}\, \frac{1}{2} (B^2 + E^2), \label{H_g} \\
H_{a} &=& \int d^3\vec{x}\, T_{a\,\, \mu}^{\mu} = \int d^3\vec{x}\, \Big[ \frac{\beta(g)}{2g} G^{\alpha\beta} G_{\alpha\beta} + \sum_f \gamma_m (g) \, m_f\bar{\psi}_f \psi_f \Big], \\
H_m &=& \int d^3\vec{x}\, T_{q\,\, \mu}^{\mu} = \int d^3\vec{x}\, \sum_f m_f \bar{\psi}_f \psi_f. \label{H_m}
\end{eqnarray}
%
$H_q$ and $H_g$ are the Hamiltonian operators for the quark energy and glue field energy.
They are in the form of bare operators with the understanding that
they need to be renormalized in a scheme in order to obtain the results for the
rest energy at a certain scale. A practical and physical scheme to compute their matrix elements and compare with experiments is the lattice approach, where the bare operators are discretized, renormalized and mixed in the non-perturbative RI/MOM scheme and matched to the $\overline{\rm MS}$ scheme at 2 GeV~\cite{Yang:2018nqn}. The traceless operators in Eqs.~(\ref{H_q}) and (\ref{H_g}) are calculated with the $T^{0i}$ or $T^{00} - T^{ii}/3$ operators~\cite{Yang:2018nqn,Constantinou:2020hdm,Wang:2021vqy}.
On the other hand, $H_a$ and $H_m$ from the trace part of $T^{00}$ are the renormalization group invariant operators from the trace anomaly and the sigma terms as shown in Eq.~(\ref{trace_q,g}).
The $\sigma$ terms are calculated directly or through the Feynman-Hellmann theorem~\cite{FlavourLatticeAveragingGroupFLAG:2021npn} on the lattice.
It is shown that the trace anomaly emerges with the lattice regularization after renormalization~\cite{Caracciolo:1989pt,Makino:2014taa,DallaBrida:2020gux}. The only complication in the lattice calculation is that it can mix with the lower-dimensional operator $\bar{\psi}\psi$ with a $1/a$ power divergence. The trace anomaly has been calculated in charmonium, where the $\beta/2g$ and $\gamma_m$ are fitted with several valence quark masses~\cite{He:2021bof}. They can be used for light hadrons on the same lattice. The $1/a$ term has been included in the fit, but the signal is too weak to be isolated in this calculation at the lattice spacing $a = 0.114$ fm~\cite{He2022}. We should note that
the traceless operators $H_q$ and $H_g$ do not mix with the trace operators $H_a$ and $H_m$ on the lattice. They are in different irreducible representations of the $O(4)$ group.
From the operator product expansion of the deep inelastic scattering (DIS), the forward matrix elements of the traceless EMT for the nucleon are 3/4 of the second moments of the parton distribution functions (PDFs) which are the momentum fractions on the light front, i.e., $\langle x\rangle_q(\mu)$ and
$\langle x\rangle_g(\mu)$. They incorporate renormalization and mixing of the operators of $H_q$ and $H_g$ at the scale $\mu$. Thus, the rest energy for the proton with $2+1$ flavors is
\begin{equation} \label{E0H}
E_0 = \frac{3}{4} \,[\langle x\rangle_q (\mu)+ \langle x\rangle_g (\mu)] M + \frac{1}{4} [\langle H_a\rangle + \sigma_{\pi N} + \sigma_s].
\end{equation}
We shall define $\langle H_{...}\rangle = \langle P|H_{...}|P\rangle/\langle P|P\rangle$ at $\vec{P} = 0$.
We see that 3/4 of $E_0$ is due to the quark and glue momentum fractions (or quark and glue field energies) and 1/4 is from the trace anomaly and the $\sigma$ terms~\cite{Ji:1994av,Ji:1995sv}.
We plot the fractional contributions of these components in the pie chart for the nucleon in Fig.~\ref{rest_energy}.
The fractions in the figures include $f_{f,g}^{H} = \langle H_{f,g}\rangle/M = 3/4\,\langle x\rangle_{f,g}$, where the quark flavor $f = u+d\, (\pi N),s,c,b,t$, $f^N_{\rm trace\,\,anomaly} = \langle H_a\rangle$/4M, and
$f_f^N = \langle H_{m_f}\rangle$/4M.
For the fractions of the traceless part $f_{f,g}^H$, we shall use the second moments of the PDFs from
CT18~\cite{Hou:2019efy,T.J.Hou} in the global analysis of experiments. The $\sigma$ terms are from the lattice calculations~\cite{Yang:2015uis,Gong:2013vja} using the overlap fermions. Since the separation of the quark and glue
momentum fractions is scale dependent, we plot the fractions at the hadronic scale of $\mu = 2 $ GeV in Fig.~\ref{RE-3f}
and consider $2+1$ flavors for the $\sigma$ terms. In Fig.~\ref{RE-6f}, we plot them at the weak scale of
$\mu = 250$ GeV with 6 flavors for the $\sigma$ terms. As we see from Fig.~\ref{RE-3f} and Fig.~\ref{RE-6f}, when the scale is increased, the valence parton (i.e., $u$ and $d$) fractions are shifted more toward the sea and gluon partons, but their total
contribution stays at 3/4 of the proton rest energy. Similarly, in the 1/4 contribution from the trace, the inclusion of heavy quarks ($c,b,$ and $t$) shrinks the trace anomaly contribution. This reflects the fact that, to leading order in the heavy quark expansion, the matrix element for the heavy quark $\bar{Q}Q$ is proportional to the glue $G^{\alpha\beta}G_{\alpha\beta}$ matrix element in the nucleon~\cite{Shifman:1978zn}, i.e., $m_Q \langle P | \bar{Q} Q | P \rangle \xrightarrow{m_Q \rightarrow \infty} - \frac{1}{3}(\frac{\alpha_s}{4 \pi})\langle P | G^2 |P \rangle$. The coefficient $- \frac{1}{3}(\frac{\alpha_s}{4 \pi})$ corresponds to the $n_f$ term in the leading $\alpha_s/4\pi$ expansion of $\frac{\beta(g)}{2g}$ with
a negative sign. This shows that for $n_f$ heavy enough quarks, the introduction of their sigma terms is absorbed by
the trace anomaly with a change in the $\beta$ function. The net heavy quark contribution of
a heavy quark with mass $M_H$ is $\mathcal{O} (1/M_H)$, in accordance with the decoupling theorem~\cite{Appelquist:1974tg,Kaplan:1988ku}. To study the quark mass dependence, a lattice
calculation with the overlap fermion has been carried out~\cite{Gong:2013vja}. It is found that the sigma terms for quark masses heavier than $\sim 1/2$ of the charm mass are the same within errors. We take this finding to mean that the sigma terms for the charm, beauty and top quarks are the same. For the charm, it is found~\cite{Gong:2013vja} that $f_c^N = 0.024(8)$, which is taken to be the same for $f_b^N$ and $f_t^N$.
\begin{figure}[htbp] \centering
\subfigure[]
{\includegraphics[width=0.445\hsize]{figures/energy_pie_2GeV.png}
\label{RE-3f}}
\subfigure[]
{\raisebox{4ex}
{\includegraphics[width=0.465\hsize]{figures/energy_pie_250GeV.png}}
\label{RE-6f}}
\caption{Proton rest energy decomposition in terms of quark sigma terms of different flavors, the trace anomaly and the quark and
glue momentum fractions. They are plotted as the percentage fractions of the proton mass. The fractions $f_{f,g}^H$ are from the second moments (3/4 $\langle x \rangle_{f,g}$) of the PDFs from the CT18 global analysis~\cite{Hou:2019efy,T.J.Hou}. The sigma terms are obtained from lattice calculations~\cite{Yang:2015uis,Gong:2013vja}. (a) is for the 2+1 flavor case at the scale of
$\mu = 2$ GeV and (b) for the case including the charm, bottom and top momentum fractions and their sigma terms
at the weak interaction scale $\mu = 250$ GeV. \label{rest_energy}
}
\end{figure}
Not all the components in the rest energy have been associated with experimental observables so far, notably the $\sigma$ terms other than the $\pi N$ sigma term and the trace anomaly. However, all of these components are amenable to lattice calculations and they have been calculated on the lattice.
From the above discussion of the mass and rest energy, we realize that the question `Since the $u$ and $d$ quark masses are only $\sim 1$\% of the proton mass, where does the rest of proton mass come from?' is misguided. As we see from the above $e^+ e^- \rightarrow \gamma\gamma$ example and
the bound states like the hydrogen atom, the mass is not the sum of the masses of its constituents. If one attempts to separate the quark and the glue contributions in the mass expression in Eq.~(\ref{trace}), they become scheme and scale dependent~\cite{Hatta:2018sqd,Metz:2020vxd}. Instead, one should have asked `Since the nucleon sigma terms are small ($\sim 10$\% of the proton mass), where does the rest of the nucleon rest energy come from?'.
\subsection{Gravitational Form Factors} \label{sec:GFF}
The nucleon mass and rest energy can also be obtained from the gravitational form factors (GFF) of the EMT.
They contain the following terms~\cite{Kobzarev:1962wt,Pagels:1966zza,Ji:1996ek}
for the quarks and gluons
\begin{eqnarray} \label{GFF}
\langle P'| T_{q, g}^{\mu\nu}|P\rangle /2 M&=& \bar{u}(P')[A_{q,g}(q^2,\mu) \gamma^{(\mu} \bar{P}^{\nu)} +
B_{q,g}(q^2,\mu) \frac{\bar{P}^{(\mu} i \sigma^{\nu)\alpha} q_{\alpha}}{2M} \nonumber \\
&+& D_{q,g}(q^2,\mu)\frac{q^{\mu}q^{\nu} - g^{\mu\nu}q^2}{M} + \bar{C}_{q,g}(q^2, \mu) M g^{\mu\nu} ] u(P)
\end{eqnarray}
where the forward matrix element $A_{q,g}(0)= \langle x\rangle_{q,g} (\mu)$ is the momentum fraction and $A_{q,g} (0) + B_{q,g} (0) = 2J_{q,g} (\mu)$ the angular momentum fraction~\cite{Ji:1996ek}.
By making a connection to the stress tensor of the continuous medium, it is shown ~\cite{Polyakov:2002yz,Polyakov:2018zvc} that $D_{q,g}(q^2)$ is related to the internal force of the hadron and
encodes the shear forces and pressure distributions of the quarks and glue in the nucleon. $\bar{C}(0)$ is shown
to be related to the normal stress $T^{ii}(0)$. Thus, it is the pressure-volume work~\cite{Lorce:2017xzd,Lorce:2018egm,Liu:2021gco}.
From the GFF in Eq.~(\ref{GFF}), one can get the total forward trace and $T^{00}$ matrix elements
\begin{equation} \label{Tmumu}
\langle T_{\mu}^{\mu}\rangle = (\langle x\rangle_q + \langle x\rangle_g) M + 4 (\bar{C}_q(0) + \bar{C}_g (0)) M,
\end{equation}
\begin{equation} \label{T00}
\langle T^{00}\rangle = (\langle x\rangle_q + \langle x\rangle_g) M + (\bar{C}_q(0) + \bar{C}_g (0)) M
\end{equation}
and $\bar{C}(0)$ is related to the forward matrix element of the normal stress, i.e., $\langle T^{ii}\rangle$
\begin{equation} \label{Tii}
\langle T^{ii} \rangle = - 3 \,(\bar{C}_q(0) + \bar{C}_g (0))M
\end{equation}
Since $T^{ii} = T^{00} - T_{\mu}^{\mu}$, Eqs.~(\ref{trace}), (\ref{trace_q,g}), (\ref{T00}) and (\ref{Tii}) can be combined to solve for $\bar{C}(0)$, which gives
\begin{equation} \label{barC}
\bar{C}_q(0) + \bar{C}_g (0) = \frac{1}{4} (f_a^N + f_q^N - (\langle x\rangle_q + \langle x\rangle_g))
\end{equation}
where $f_a^N$ and $f_q^N = \sum_f f_f^N$ are the fractions of the trace anomaly and sigma term contributions to the nucleon mass, and $\langle x\rangle_q = \sum_f \langle x\rangle_f$ is the total momentum fraction of the quarks.
From this expression of $\bar{C}(0)$, one obtains from Eqs.~(\ref{Tmumu}) and ~(\ref{T00})
\begin{eqnarray} \label{Tmumu00}
\langle T_{\mu}^{\mu}\rangle &=& M = (f_a^N + f_q^N) M, \nonumber \\
\langle T^{00}\rangle &=& E_0 = 3/4\, (\langle x\rangle_q + \langle x\rangle_g) M + 1/4\, (f_a^N + f_q^N) M,
\end{eqnarray}
which are in agreement with the mass expression from the trace in Eq.~(\ref{invariant_M}) and the rest energy decomposition from the Hamiltonian in Eq.~(\ref{E0H}), as a cross check. This is easy to understand. The GFFs are organized by Lorentz covariance and CPT symmetry, not by the irreducible representations of the EMT. As such, the $A_q$ and $A_g$ terms include both the traceless and
trace contributions. By subtracting the trace contributions through the $\bar{C}$ term in Eq.~(\ref{barC}), one obtains the traceless contribution to $E_0$ from $\langle T^{00}\rangle$, which is $3/4 (\langle x\rangle_q + \langle x\rangle_g) M$; the trace part is the remainder of $\bar{C}$, which is $1/4 (f_a^N + f_q^N) M$ as shown in Eq.~(\ref{Tmumu00}). By the same token, the $A_q$ and $A_g$ terms in the trace matrix element $\langle T_{\mu}^{\mu}\rangle$ are cancelled by the same terms from $\bar{C}$, so that the traceless part vanishes and the remainder of $\bar{C}$ is just the trace $(f_a^N + f_q^N) M$, as given in Eq.~(\ref{Tmumu00}).
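The algebra connecting Eqs.~(\ref{Tmumu})--(\ref{Tmumu00}) can be verified symbolically. The following sympy sketch (our own check, with $x \equiv \langle x\rangle_q + \langle x\rangle_g$ and $C \equiv \bar{C}_q(0) + \bar{C}_g(0)$) reproduces Eq.~(\ref{barC}) and the normal stress in Eq.~(\ref{Tii}):
\begin{verbatim}
import sympy as sp

x, fa, fq, C, M = sp.symbols('x f_a f_q C M')
T00 = (x + C) * M          # Eq. (T00)
Ttr = (x + 4 * C) * M      # Eq. (Tmumu), the trace
Tii = T00 - Ttr            # T^{ii} = T^{00} - T^{mu}_{mu}
# Demand that the trace equals the mass decomposition (f_a + f_q) M:
sol = sp.solve(sp.Eq(Ttr, (fa + fq) * M), C)[0]
print(sp.simplify(sol))    # -> (f_a + f_q - x)/4, i.e. Eq. (barC)
print(sp.simplify(Tii))    # -> -3*C*M, as in Eq. (Tii)
\end{verbatim}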
Since the EMT is conserved, i.e.,
$\partial_{\nu} T^{\mu \nu}= 0$, this leads to the sum $\bar{C}_q(0) + \bar{C}_g (0) = 0$ as can be readily verified
in Eq.~(\ref{barC}). In view of this, $\bar{C}$ has been dropped from the GFF in some of the recent
literature. As a consequence, one finds that
$\langle T_{\mu}^{\mu}\rangle = \langle T^{00}\rangle = (\langle x\rangle_q + \langle x\rangle_g) M$. This is not a
correct physical decomposition, as $\langle T_{\mu}^{\mu}\rangle$ does not have the trace anomaly and the
rest energy from $\langle T^{00}\rangle$ does not have all the relevant physical contents as in Eq.~(\ref{E0H}). This will be explained in more detail in Sec.~\ref{EEC}.
As we mentioned, $\bar{C}(0)$ is the negative of $T^{ii}(0)$ in Eq.~(\ref{Tii}), which is the pressure-volume work. $\bar{C}_q(0,\mu)$ and $\bar{C}_g(0,\mu)$ separately have $\mu$ dependence, but their combination
in Eq.~(\ref{barC}) does not have scale dependence. Since the total $\bar{C}$ is zero, it expresses the
equilibrium condition, i.e.,
\begin{equation} \label{equilibrium}
PV = - \frac{d E}{dV} V = -( \bar{C}_q(0) + \bar{C}_g (0)) M = - \frac{1}{4} (f_a^N + f_q^N) M + \frac{1}{4} (\langle x\rangle_q + \langle x\rangle_g) M = 0,
\end{equation}
where the positive pressure from the quark energies and the glue field energy is balanced by the negative pressure from the trace anomaly and quark condensates. Furthermore, it is important to note that the equilibrium condition in Eq.~(\ref{equilibrium}) and the rest energy in Eq.~(\ref{E0H}) (same as Eq.~(\ref{Tmumu00})) involve the
same matrix elements, but with different coefficients. Since
$PV = - dE(V)/d\log V$, the coefficients in Eq.~(\ref{equilibrium}) are the exponents of the volume dependence in the equation of state. In particular, the unit coefficient of the trace term indicates that
it is linear in the volume and, thus, yields a constant negative pressure which confines the hadrons. In contrast, the positive pressure-volume work is $ - \frac{1}{3}$ of the quark and glue energies in $E_0$. This implies that their volume dependence is $V^{-1/3}$. Further examination of the significance of this rest energy and equilibrium correspondence is presented in the next section, Sec.~\ref{EEC}.
\subsection{Energy -- equilibrium correspondence} \label{EEC}
All bound or confined states have characteristic sizes. In certain cases, the rest energies are
expressed in terms of these sizes, i.e., as an equation of state $E (V)$. For a stable state, this would inevitably involve at least two types of energies with different size dependences, so that there can be an equilibrium size satisfying the condition
$\frac{d E}{d V}|_{V_0} = 0$ and the stability condition $\frac{d^2 E}{d V^2}|_{V_0} > 0$. In particular, when these energies
have power-law dependence on the volume (or radius),
\begin{equation}
E (V) = \sum_i \epsilon_i V^{p_i},
\end{equation}
then the equilibrium condition is
\begin{equation} \label{power}
PV = - \frac{d E}{dV} V = - \sum_i p_i\, \epsilon_i V^{p_i}|_{V_0} = 0.
\end{equation}
It involves the same $\epsilon_i V^{p_i}$ terms as in $E(V)$, but weighted with the power of the volume dependence
$p_i$ with a negative sign.
We can give a few examples of this rest energy -- equilibrium correspondence.
\begin{enumerate}
\item MIT bag model:
In this model, with relativistic quarks and gluons confined in a bag with a certain bag boundary condition~\cite{Chodos:1974je,Chodos:1974pn},
the rest energy of a hadron is expressed as
\begin{equation}
E(V) = BV + \frac{\Sigma_{q,g}}{R}
\end{equation}
where $B$ is the confining bag constant and $\frac{\Sigma_{q,g}}{R}$ are the eigenenergies of the quarks and
gluons with the boundary condition. The equilibrium radius $R$ is determined from $\frac{d E(V)}{dV}|_{V_0}= 0$ and
the PV equilibrium is
\begin{equation} \label{bagPV}
PV = - \frac{dE}{dV} V|_{V_0} = - (BV_0 - \frac{1}{3} \frac{\Sigma_{q,g}}{R_0}) =0,
\end{equation}
where $R_0 = (3 V_0/4 \pi)^{1/3}$. The rest energy is then $E_0 = BV_0 + \frac{\Sigma_{q,g}}{R_0}$.
The unit and $ - \frac{1}{3}$ factors in Eq.~(\ref{bagPV}) in front of the $BV$ and $\frac{\Sigma_{q,g}}{R}$ terms simply reflect their volume dependences in $E (V)$, as demonstrated in Eq.~(\ref{power}).
\item Non-relativistic potentials:
For one-body non-relativistic potential problems with a potential as a power of the radius, i.e., $V(r) = k r^n$,
the rest energy is the sum of the kinetic energy and the potential energy
\begin{equation} \label{T+V}
E_0 = \langle T\rangle + \langle V\rangle
\end{equation}
With a variational wavefunction characterized by its size $R$, the kinetic energy would scale as $R^{-2}$ and
the potential energy scales as $R^n$. Therefore the equation of state for $E(R)$ is
\begin{equation} \label{NRE}
E(R) = \frac{\epsilon_T}{R^2} + \epsilon_V R^n,
\end{equation}
where $\epsilon_T = \langle T\rangle (R) R^2$ and $\epsilon_V = \langle V\rangle (R)/ R^n$ are constant matrix elements.
Upon differentiation, one obtains
\begin{equation} \label{FR}
FR = - R\, dE/dR|_{R_0} = 2 \langle T\rangle (R_0)- n \langle V\rangle (R_0)= 0
\end{equation}
To the extent that the matrix elements $\langle T\rangle (R_0)$ and $\langle V\rangle (R_0)$ from the variational approach are good approximations of those from the solution of the Schr\"{o}dinger equation, Eq.~(\ref{FR}) is just the
well-known virial theorem (a symbolic check of this relation is sketched after this list). Thus, the virial theorem for bound states in the non-relativistic potential
$r^n$ can be understood in terms of the equilibrium condition. Again, the factors in front of the scaled matrix elements
in the equilibrium condition reflect the exponents of the size dependence of $E(R)$ in Eq.~(\ref{NRE}).
It is also clear that the equilibrium condition ($PV = 0 $ or $FR = 0$) is owing to the cancellation between the
pressures from the potential energy and the kinetic energy. In this sense, it is more physical and meaningful to
have both the kinetic and potential energies present in the decomposition of the rest energy, as given in the above
examples. On the other hand, from the virial theorem, one can obtain for the case of the Coulomb potential,
$E_0 = - \langle T\rangle$ or $E_0 = \langle V\rangle/2$. But neither is an acceptable physical interpretation of
the bound state energy of the hydrogen atom, as they do not have the full physical contents as in Eq.~(\ref{T+V}).
Consequently, expressing the rest energy as $E_0 = \langle T^{00}\rangle = (\langle x\rangle_q + \langle x\rangle_g) M$
by dropping the $\bar{C}$ terms in the GFF, as discussed in Sec.~\ref{sec:GFF}, is not a
valid physical decomposition. It lacks the potential energy from the trace part of the EMT.
\end{enumerate}
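The virial-theorem relation in Eq.~(\ref{FR}) can be checked symbolically with sympy (a sketch of our own):
\begin{verbatim}
import sympy as sp

R, n = sp.symbols('R n', positive=True)
eps_T, eps_V = sp.symbols('epsilon_T epsilon_V', positive=True)

E = eps_T / R**2 + eps_V * R**n   # Eq. (NRE)
FR = -R * sp.diff(E, R)           # Eq. (FR)
print(sp.expand(FR))              # 2*eps_T/R**2 - n*eps_V*R**n
# i.e. FR = 2<T> - n<V>; setting FR = 0 gives the virial theorem.
\end{verbatim}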
From Eq.~(\ref{power}) and the above examples, it is clear that the rest-energy -- equilibrium correspondence
between the rest energy in Eqs.~(\ref{E0H}) and (\ref{Tmumu00}) and the $PV =0$ condition in Eq.~(\ref{equilibrium}) implies that the equation of state for the hadron energy $E_H(V)$ as a function of $V$ is
\begin{equation} \label{EoS}
E_H (V) = E_S + E_T = \epsilon_S V + \epsilon_T V^{-1/3}
\end{equation}
where $\epsilon_S = E_S/V$ is the density of the singlet trace energy $E_S = 1/4 (f_a^N + f_q^N) M$ and
$\epsilon_T = E_T V^{1/3}$ is the scaled density of the triplet traceless energy
$E_T = 3/4 (\langle x\rangle_q + \langle x\rangle_g) M$. Incidentally, Eq.~(\ref{EoS}) has the same volume dependence
as in the MIT bag model and the bag constant has been suggested to be from the gluon condensate~\cite{Jacobs:2004qv}.
The equilibrium condition $PV = - (E_S - \frac{1}{3} E_T) = 0$
corresponds to Eq.~(\ref{equilibrium}). Thus, the trace anomaly that emerges is the negative of the energy density of the gluon condensate times the volume of the hadron and, thereby, the volume can be defined as
%
\begin{equation} \label{volume}
V = \int d^3 r \, \rho_{a} (r)/|\langle (\beta/2g) \,G^{\alpha\beta}G_{\alpha\beta}\rangle|,
\end{equation}
where $\rho_{a} (r)$ is the radial distribution of the glue part of the trace anomaly in the hadron which can
be obtained from the Fourier transform of the trace anomaly form factor.
$\langle \beta/2g \, G^{\alpha\beta}G_{\alpha\beta}\rangle$ is the density of the gluon condensate in the vacuum, which is negative. Here we have neglected the $\sigma$ terms which are small in the nucleon.
It has been suggested~\cite{Shifman:1978by,Shuryak:1978yk,Ji:1995sv,Teryaev:2016edw,Ji:2021pys} that the trace anomaly comes about because there is a gluon condensate in the vacuum.
The trace anomaly in the hadron is measured relative to this vacuum background.~\footnote{In the calculation of
the trace anomaly (or the disconnected insertion of quark loops) in the hadron in the Euclidean path-integral, one
takes the correlated insertion in the ensemble averages, i.e., $\langle O G_2\rangle - \langle O\rangle\, \langle G_2\rangle$
where $G_2$ is a hadron propagator and the uncorrelated vacuum condensate $\langle O\rangle$ is subtracted.}
Thus, the following physical picture emerges. Due to the conformal symmetry breaking, there is a gluon condensate in the vacuum. A hadron is formed as a bubble in this gluon condensate sea, taking up a volume $V$, and quarks and gluons are put in the volume like air molecules inside a bubble. Displacing the gluon condensate costs energy, which is the trace anomaly in the hadron. The bubble is in equilibrium due to the balance between the negative pressure from the trace anomaly and the positive pressure due to the quark energy and the glue field energy, as in Eq.~(\ref{equilibrium}).
What is learned further from the energy-equilibrium correspondence as revealed in the GFF is that the
trace anomaly has a linear volume dependence. This not only verifies the suggestion about the origin of the trace anomaly, it also demonstrates that the trace anomaly yields a constant negative pressure and is thus the source of confinement. This is consistent with
the finding that the large Wilson loop with a spatial distance $r$ in the presence of the trace anomaly in the quenched approximation gives the potential between the infinitely heavy quarks in the form of
$V(r) + r \frac{dV(r)}{dr}$~\cite{Dosch:1995fz,Rothe:1995hu}. Using a lattice calculation of the glue part of the trace
anomaly in the charmonium~\cite{Sun:2020pda} and assuming a linear potential between the charm quarks, the deduced string tension agrees very well~\cite{Liu:2021gco} with that in the Cornell potential model used to fit the charmonium
spectrum~\cite{Mateu:2018zym}.
\section{Vortices in Type II Superconductor}
How external magnetic fields penetrate through the core region and beyond, with a London penetration depth $\lambda_L$, in a type II superconductor in the vortex phase is illustrated in Fig.~\ref{vortex}
in the radial direction. $n_c$ is the local density of the superconducting electrons, and the coherence length $\xi$ is the
characteristic length scale of the density variation of the superconducting component. Type II is the case where the Ginzburg-Landau parameter $\kappa = \frac{\lambda_L}{\xi} > \frac{1}{\sqrt{2}}$.
\begin{figure}[htbp] \centering
{\includegraphics[width=0.4\hsize,angle=270]{figures/Coherent_Length.pdf}
}
\caption{Illustration of a vortex in a type II superconductor, between the normal phase,
where the external magnetic field $B$ in the core extends out with the London penetration depth $\lambda_L$, and the superconducting phase, where $n_c$ is the superconducting electron density and $\xi$ is the coherence length. \label{vortex}
}
\end{figure}
Type II superconductors can be described by the Ginzburg-Landau equation, which solves for the superconducting electron wavefunction $\psi(r)$ with $n_c = |\psi(r)|^2$ being the local density. Here we shall dwell on the energetics, in particular on the origin of the various contributions to the energy of the core vortex~\cite{Clem1975}. There are several contributions to the Ginzburg-Landau free energy of a vortex relative to that of the Meissner state,
\begin{equation} \label{F_energy}
F = F_B + F_{sc} + F_c,
\end{equation}
where $F_B$ and $F_{sc}$ are the magnetic field energy and the energy of supercurrents
\begin{eqnarray}
F_B &=& \frac{1}{2\mu_0} \int dv\, B^2, \label{FB}\\
F_{sc} &=& \frac{\mu_0}{2} \int dv \, \lambda_L^2 \vec{J_s} \cdot \vec{J_s}, \label{Fsc}
\end{eqnarray}
and $F_c$ is the cost of depleting the Cooper pair condensate by the critical magnetic field $H_{c1}$
\begin{equation} \label{Fc}
F_c = \frac{\kappa \phi_0\, H_{c1}}{8\pi} \int dl \rho' d\rho' \, (1 - n_c^2)^2
\end{equation}
where $\rho' = \rho/\lambda_L$, $\phi_0 = hc/2e$ is the flux quantum, and $H_{c1}$ is the critical magnetic field between
the Meissner and the vortex states. $n_c$ is the normalized superconducting electron density. We see that the free energy decomposition is closely analogous to that of the hadron rest energy. The magnetic field energy $F_B$ in Eq.~(\ref{FB}) corresponds to the chromo-electric and -magnetic field energies in Eq.~(\ref{H_g}). The supercurrent energy
in Eq.~(\ref{Fsc}) corresponds to the quark energy in Eq.~(\ref{H_q}). The $F_c$ in Eq.~(\ref{Fc}) is the cost of energy to
remove or reduce the superconducting pairing condensation in the vortex region. This is analogous to the trace anomaly which is the cost of energy to deplete the glue condensate in the QCD vacuum inside the hadron. There is a quark condensate term in Eq.~(\ref{H_m}) which appears to be missing in the free energy in Eq.~(\ref{F_energy}). This is because the matrix element of $ \bar{\psi}\psi$ in the non-relativistic limit is the same as $ \bar{\psi}\gamma_0\psi$
which is the fermion number. Thus, $\langle H_m\rangle$ in condensed matter just measures the total electron mass,
which is a constant and can be factored out from the problem.
A variational approach is carried out~\cite{Clem1975}, where the wavefunction is assumed to have the form
$\Psi(\rho, \phi) = f(\rho) e^{-i\phi}$ where $f(\rho) = \frac{\rho}{\sqrt{\rho^2 + R^2}}$. $\rho$ is the cylindrical radial coordinate
and $R$ is the variational parameter for the core radius. This defines the density in Eq.~(\ref{Fc}) as $n_c = |\Psi|^2 = f(\rho)^2$.
The free energy per unit length per vortex line, in units of $\phi_0 H_c/(2\sqrt{2}\pi)$, is obtained as
\begin{equation}
\frac{F}{l\,\phi_0 H_c/(2\sqrt{2}\pi)}= \frac{1}{8} \kappa R'^2 + \frac{1}{8\kappa} + \frac{K_0(R')}{2\kappa\,K_1(R') R'},
\end{equation}
where $R' = R/\lambda_L$ and $K_0$ and $K_1$ are the modified Bessel functions of the second kind.
The first term is due to the cost of condensation energy in Eq.~(\ref{Fc}) and the second and third terms are from Eqs.~(\ref{FB}) and (\ref{Fsc}). From the equilibrium condition of the variation with respect to the area $-\frac{dF}{dA}A = 0$,
we see that the potential energy in Eq.~(\ref{Fc}) gives a constant negative two-dimensional pressure which is balanced by
the positive pressures from the energies of the magnetic field and the supercurrent.
The scripts for the confinement of hadrons and vortices in the type II superconductors are basically the same. Both have
condensates from symmetry breaking. The gluon condensate is due to
the conformal (scale) symmetry breaking, and the Cooper pair condensate is due to the gauge symmetry breaking. When the condensates are depleted to make room for the quarks and glue field in a hadron in QCD, and for the supercurrents and magnetic field in a vortex in QED, they provide constant negative pressures for confining the systems.
There are many facets of color confinement in QCD~\cite{Greensite:2003bk,Shifman:2010jp,Brodsky:2014yha}. In this
manuscript, we consider the role of the trace anomaly, a color-singlet operator, which is responsible for the realization of volume confinement in light hadrons and linear confinement in heavy quarkonia.
\section{Cosmological Constant} \label{cosmological_constant}
For the purpose of obtaining a static Universe, Einstein introduced a cosmological constant $\Lambda$ in his equation of general relativity~\cite{Einstein:1917ce}
\begin{equation} \label{Einstein-eq}
R^{\mu\nu} - \frac{1}{2} R\, g^{\mu\nu} = 8\pi G\, T^{\mu\nu} + \Lambda\, g^{\mu\nu},
\end{equation}
where $R^{\mu\nu}$ is the Ricci curvature tensor and $R$ is the scalar curvature. $G$ is Newton's constant and
the source $T^{\mu\nu}$ is the energy-momentum tensor. The positive constant $\Lambda$ is introduced through the $g^{\mu\nu}$ term, which can be considered an extra term in the EMT, so that it balances the gravitational pull of a static uniform matter density $\rho$. Einstein
found the solution of $\Lambda$ to be~\cite{Einstein:1917ce,ORaifeartaigh:2017uct}
\begin{equation}
\Lambda = 4\pi G \rho.
\end{equation}
This can be seen from the Friedmann equation for the Friedmann-Robertson-Walker scale parameter $a(t)$
\begin{equation} \label{Friedman}
\frac{\ddot a}{a} = - \frac{4\pi G}{3} (\rho + \rho_{\Lambda}+ 3 (P + P_{\Lambda})),
\end{equation}
where the energy density $\rho_{\Lambda} = \Lambda/8\pi G$ and the pressure density $P_{\Lambda} = - \Lambda/8\pi G$ have opposite signs due to the
metric $g^{\mu\nu}$. In a matter-dominated Universe, Einstein's solution is consistent with ${\ddot a}/a = 0$.
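Einstein's static solution can be recovered directly from Eq.~(\ref{Friedman}); the following sympy sketch (our own check) sets $P = 0$ for matter and solves $\ddot a/a = 0$:
\begin{verbatim}
import sympy as sp

G, rho, Lam = sp.symbols('G rho Lambda', positive=True)
rho_L = Lam / (8 * sp.pi * G)   # energy density of the Lambda term
P_L = -Lam / (8 * sp.pi * G)    # its pressure, opposite in sign
acc = -(4 * sp.pi * G / 3) * (rho + rho_L + 3 * (0 + P_L))
print(sp.solve(sp.Eq(acc, 0), Lam))   # -> [4*pi*G*rho]
\end{verbatim}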
Much like in the case of general relativity, the trace anomaly in the $g^{\mu\nu}$ term of the QCD EMT in Eq.~(\ref{trace_separation}) serves as the hadron's cosmological constant, which provides a negative pressure for confinement.
However, the cosmological constant works differently from the confinement mechanism in hadrons and type II
superconductors, even though all share negative pressures. Contrary to gauge theories, the source in the equation
of motion in general relativity (i.e., Eq.~(\ref{Einstein-eq})) is the energy-momentum tensor. As such, the pressures from $T^{ii}$ and $g^{ii} \Lambda$ also gravitate, and a negative pressure, like a negative mass, bestows anti-gravity. Therefore, if
$3P_{\Lambda}$ from the cosmological constant (dark energy) is negative enough, i.e., $\rho + \rho_{\Lambda}+ 3 (P + P_{\Lambda}) < 0$, the Universe experiences an accelerated expansion as has been observed. Furthermore, the
negative pressures for the hadrons and vortices are from the reaction of the respective condensates, while the
cosmological constant is the vacuum energy itself~\cite{Zeldovich:1967gd,Weinberg:1988cp}.
\section{Pion mass puzzle and trace anomaly form factor}
As pointed out in Eq.~(\ref{invariant_M}), the pion mass can be obtained from the trace of the EMT
%
\begin{equation} \label{pion_m}
m_{\pi}= \frac{\langle \pi | \int d^3 \vec{x}\, \gamma \big [\frac{\beta(g)}{2g} G^{\alpha\beta} G_{\alpha\beta} +
\sum_f \gamma_m (g)\, m_f \bar{\psi}_f \psi_f \big ] |\pi\rangle}
{\langle \pi|\pi\rangle} + \frac{\langle \pi | \int d^3 \vec{x}\, \gamma \sum_f m_f \bar{\psi}_f \psi_f |\pi\rangle}
{\langle \pi|\pi\rangle}
\end{equation}
Ignoring the strangeness (it is found to be negligibly small for light pions in a lattice calculation~\cite{Yang:2014xsa}), the second term (sigma term) gives half of the pion mass. This can be proven
from the Gell-Mann-Oakes-Renner relation $f_{\pi}^2 m_{\pi}^2 = - (m_u \langle \bar{u}u \rangle + m_d \langle \bar{d}d \rangle)$ and the Feynman-Hellman theorem $m_u \partial m_{\pi}/\partial m_u + m_d \partial m_{\pi}/\partial m_d
= m_u \langle \pi |\bar{u}u| \pi \rangle + m_d \langle \pi |\bar{d}d |\pi\rangle$. Furthermore, it is
proportional to $\sqrt{m_q}$ for the SU(2) case where $m_q = m_u = m_d$. The puzzle of the mass relation
in Eq.~(\ref{pion_m}) is why the trace anomaly in the first term should also decrease with the quark mass as
$\sqrt{m_q}$. There is no obvious symmetry reason why the trace anomaly in the pion should approach zero in
the chiral limit. Does it mean that the size (e.g., root-mean-square radius) from the effective volume defined in Eq.~(\ref{volume}) vanishes at the chiral limit? Also, in the analysis of the pion rest energy in a lattice calculation~\cite{Yang:2014xsa}, all the terms are positive and approach zero in the chiral limit. There is no cancellation among these contributions.
%
\begin{figure}[hbtp]
\centering
{\includegraphics[width=0.5\hsize]{figures/TA_density.pdf}
}
\caption{The distribution of the glue part of the trace anomaly $\bar{\rho}_H$ in the nucleon, $\rho$ and $\pi$ as a function of the distance between the glue operator and the sink position of the respective hadron propagator. This is from Ref.~\cite{He:2021bof}.
Its Fourier transform would give the corresponding form factor of the glue operator. \label{pion_ta}
}
\end{figure}
In light of this puzzle, a lattice calculation has been carried out to examine the spatial distribution $\bar{\rho}_H$ in the nucleon, the $\rho$, and the pion~\cite{He:2021bof}. The spatial coordinate is the distance between the glue part of the trace anomaly operator and the sink position
of the interpolation field of the hadron so that they are the Fourier transform of the form factors of the glue
operator in the respective hadron. The results for $\bar{\rho}_H$ are plotted in Fig.~\ref{pion_ta}. We see that the density distributions for the nucleon and the $\rho$ are monotonic, as in the electric and axial charge distributions. However, the distribution for the pion is unusual. When the quark mass is small, the distribution changes sign, such that the integral of the distribution vanishes in the chiral limit. It is verified~\cite{He2022} that the matrix element $\langle \pi | \int d^3 \vec{x}\, \gamma \frac{\beta(g)}{2g} G^{\alpha\beta} G_{\alpha\beta} |\pi\rangle/
\langle \pi|\pi\rangle $ is proportional to $\sqrt{m_q}$ in the partially-quenched calculation with different
valence quark masses.
This solves the puzzle we put forward above. The pion still has a finite size even though the
effective volume defined in Eq.~(\ref{volume}) diminishes as the chiral limit is approached, due to the cancellation
in the glue part of the trace anomaly. This is achieved by modifying the structure of the vacuum condensate -- making
the glue condensate more negative than that in the vacuum in the inner core of the pion and more positive than that of the vacuum in the outer shell, so that it takes no energy to create a pion with massless quarks.
The glue condensate is the consequence of conformal symmetry breaking, and chiral symmetry breaking leads to the Gell-Mann-Oakes-Renner relation. The finding of this trace anomaly behavior in the pion is concrete evidence that
the conformal (scale) symmetry breaking and chiral symmetry breaking are intricately coupled in QCD.
Since the form factor is the Fourier transform of the spatial distribution in Fig.~\ref{pion_ta}, we predict that the glue trace anomaly form factor of the pion will change sign. It would be interesting to detect this experimentally, such as via the suggested $J/\Psi$ production at the photoproduction threshold~\cite{Kharzeev:1995ij,Hatta:2018ina,Duran:2022xag}.
There are efforts to look for conformal windows with multiflavor simulations~\cite{DelDebbio:2010zz}. One could examine the relation between chiral symmetry and conformal symmetry in these studies. Also, using the trace anomaly as an indicator, one could calculate it in nuclei to see if conformal symmetry is partially restored.
\section{Summary} \label{summary}
We found that the components of the rest energy of hadrons have a one-to-one correspondence with those of the free energy of the vortices in type II superconductors. Even the scripts for their confinement are basically the same -- the respective vacuum condensates from symmetry breaking are depleted to accommodate the hadrons and the vortices, with positive energies which are proportional to the volume and area, respectively. This results in a constant negative confining pressure to balance the positive pressures from the fermion and gauge field energies in both cases. Heavy quarkonia show a similar picture, where the glue part of the trace anomaly is proportional to the volume of a flux tube. With a constant transverse electric field distribution in the cross section of the tube, as found in lattice calculations~\cite{Bali:1997am,Baker:2018mhw}, the potential energy is linear in the distance between the heavy quark and antiquark, leading to linear confinement.
We have also drawn an analogy between the trace anomaly and the cosmological constant as a metric term, which provides a constant negative pressure to balance the gravitational pull of the matter in Einstein's static Universe. Thus, the QCD trace anomaly behaves like a hadron cosmological constant~\cite{Liu:2021gco}. However, there is a fundamental difference between gauge theories and general relativity. In general relativity, the source of the equation of motion is the EMT, of which the cosmological constant can be considered a part. Thus a negative pressure from the cosmological constant (dark energy) anti-gravitates. This gives a repulsive effect which is opposite to the effect of a negative pressure in gauge theories. When it is more negative than the matter and radiation densities, the expansion of the Universe will accelerate according to the Friedmann equation in Eq.~(\ref{Friedman}).
The role of the glue part of the trace anomaly is illustrated further in regard to the pion mass. An intriguing structure of the trace anomaly in the pion is found to be responsible for resolving the puzzle regarding the trace anomaly part of the pion mass. As the quark mass approaches zero, the spatial distribution of the glue part of the trace anomaly changes sign so that the trace anomaly is proportional to $\sqrt{m_q}$. In the chiral limit, the glue condensate is distorted in such a way that it takes no energy to create a pion with massless quarks. This change of sign will be reflected in the pion trace anomaly form factor and should be verified experimentally. This is clear, direct evidence that the conformal (scale) symmetry breaking in the pion is linked to chiral symmetry breaking.
Since the trace anomaly in the hadron is an indicator of confinement at zero temperature, it should serve as the confinement-deconfinement order parameter. This opens up a number of issues on the nature of phase transitions that can be studied on the lattice. It has been shown that the gauge field tensor and the gauge action, to order $\cal{O}$$(a^2)$, can be derived from the color-spin trace of the diagonal overlap Dirac operator~\cite{Liu:2007hq,Liu:2006wa,Alexandru:2008fu}. Through the study of the spectral density in terms of the overlap Dirac eigenvalues, there is evidence of a phase above the crossover temperature that displays infrared scale invariance~\cite{Alexandru:2019gdm}. It would be useful to find out what bearing it may have on the glue condensate as a function of the temperature and chemical potential.
\section{Acknowledgment}
The author is indebted to P. Boyle, S. Brodsky, V. Burkert, M. Chanowitz, S. Das, T. Draper, W. Gannon, I. Horv\'{a}th, T. Hatsuda, F. He, Y. Hatta, X. Ji, D.E. Kharzeev, D. Lin, C. Lorc\'{e}, A. Metz, \mbox{Z. Meziani,} G. Murthy, J.C. Peng, M. Peshkin, O.V. Teryaev, B. Wang, Y.B. Yang, and F. Yuan for fruitful discussions. He also thanks T.J. Hou for providing the CT18 data and B. Wang for help with the figures. This work is partially supported by the U.S. DOE Grant No. DE-SC0013065 and No.\ DE-AC05-06OR23177 which is within the framework of the TMD Topical Collaboration.
This research used resources of the Oak Ridge Leadership Computing Facility at the Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No.\ DE-AC05-00OR22725. This work used Stampede time under the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation Grant No. ACI-1053575.
We also thank the National Energy Research Scientific Computing Center (NERSC) for providing HPC resources that have contributed to the research results reported within this paper.
We acknowledge the facilities of the USQCD Collaboration used for this research in part, which are funded by the Office of Science of the U.S. Department of Energy. |
1310.3732 | \section{Introduction}
\label{Sec:Intro}
Geo-neutrinos are electron antineutrinos that come from radioactive decays in the Earth's interior. Their sources are natural $\beta^{-}$-decays of nuclides, including the three most heat-producing elements, the $^{238}$U and $^{232}$Th families and $^{40}$K, following the schemes:
\begin{equation}
^{238}\mathrm{U} \rightarrow ^{206}\mathrm{Pb} + 8\alpha + 8 e^{-} + 6 \bar{\nu}_e + 51.7 ~~~\mathrm{MeV}
\label{Eq:geo1}
\end{equation}
\begin{equation}
^{232}\mathrm{Th} \rightarrow ^{208}\mathrm{Pb} + 6\alpha + 4 e^{-} + 4 \bar{\nu}_e + 42.7 ~~~\mathrm{MeV}
\label{Eq:geo2}
\end{equation}
\begin{equation}
^{40}\mathrm{K} \rightarrow ^{40}\mathrm{Ca} + e^{-} + \bar{\nu}_e + 1.31 ~~~\mathrm{MeV}
\label{Eq:geo3}
\end{equation}
The geo-neutrino flux and the radiogenic heat released during radioactive decays are in a well-fixed ratio. Therefore, by measuring the total geo-neutrino flux, it is possible, in principle, to determine the contribution of the radiogenic heat released in the radioactive decays quoted in Eqs.~\ref{Eq:geo1}, \ref{Eq:geo2}, and \ref{Eq:geo3} to the total terrestrial surface heat flux ($\sim$46\,TW). The energy spectra of geo-neutrinos released in these reactions are shown in Fig.~\ref{Fig:GeonuSpectrum}. The U, Th, and K spectra are normalized to 6, 4, and 1 antineutrino, respectively, according to Eqs.~\ref{Eq:geo1}, \ref{Eq:geo2}, and \ref{Eq:geo3}.
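As a rough illustration of the connection between decay energy and radiogenic power, the sketch below (our own back-of-envelope, not from the text) converts the chain energy of Eq.~\ref{Eq:geo1} into heat per kilogram of $^{238}$U, using the standard half-life of $4.468 \times 10^9$ yr. Treating the full 51.7 MeV as heat gives an upper bound of about $10^{-4}$ W/kg, since the antineutrinos actually carry off a few MeV per chain.
\begin{verbatim}
import math

N_A = 6.022e23        # atoms per mole
MEV_TO_J = 1.602e-13  # joules per MeV
YEAR = 3.156e7        # seconds per year

def heat_per_kg(q_mev, half_life_yr, molar_mass_g):
    lam = math.log(2) / (half_life_yr * YEAR)  # decay constant, 1/s
    n_atoms = 1000.0 / molar_mass_g * N_A      # atoms per kg
    return lam * n_atoms * q_mev * MEV_TO_J    # watts per kg

# 238U chain: Q = 51.7 MeV (quoted above), t_1/2 = 4.468e9 yr
print(heat_per_kg(51.7, 4.468e9, 238))  # ~1.0e-4 W/kg (upper bound)
\end{verbatim}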
\begin{figure}[tb]
\begin{center}
\centering{\epsfig{file=geonuspc.pdf, angle = -90,scale=0.5}}
\begin{minipage}[t]{16.5 cm}
\caption{Energy spectra of geo-neutrinos released in the reactions Eq.~\ref{Eq:geo1} ($^{238}$U chain, solid black line), Eq~\ref{Eq:geo2} ($^{232}$Th chain, dashed-dotted red line), and Eq.~\ref{Eq:geo3} ($^{40}$K, dashed blue line). The vertical dashed line shows the kinematic threshold (1.806 MeV) of the inverse beta decay interaction.
\label{Fig:GeonuSpectrum}}
\end{minipage}
\end{center}
\end{figure}
Even though the geo-neutrino flux at the Earth's surface is some $10^6$\,cm$^{-2}$\,s$^{-1}$, their detection is challenging, as antineutrinos interact with matter only through the weak interaction; thus the probability of such interactions, and hence of detection, is very small. The cross section of the main detectable interaction of electron-flavor antineutrinos, the inverse beta decay interaction:
\begin{equation}
\bar{\nu}_e + p \rightarrow e^+ + n,
\label{Eq:InvBeta}
\end{equation}
is $3.3 \times 10^{-44}$\,cm$^2$ at 2\,MeV~\cite{strumia} and increases by about an order of magnitude at 3\,MeV. The kinematic threshold of this interaction is 1.806\,MeV. Thus, the geo-neutrinos produced in $^{40}$K decays cannot be detected, since the end-point of their energy spectrum, $\sim$1.31\,MeV (see Fig.~\ref{Fig:GeonuSpectrum}), lies below this threshold. A fraction of the antineutrinos from the $^{232}$Th decay chain, with end-point energies of 2.1\,MeV ($^{228}$Ac) and 2.3\,MeV ($^{212}$Bi), and those from the $^{238}$U chain, with end-points of 1.9, 2.7, and 3.3\,MeV ($^{214}$Bi) and 2.2\,MeV ($^{234}$Pa$^{m}$), can be detected via the reaction in Eq.~\ref{Eq:InvBeta}.
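The scale of these numbers can be reproduced with the standard leading-order approximation to the inverse beta decay cross section; the short sketch below uses the textbook Vogel-Beacom form (an assumption of this illustration, whereas the value quoted above comes from the fuller calculation of~\cite{strumia}).
\begin{verbatim}
# Leading-order inverse beta decay cross section (Vogel-Beacom form).
import math

ME    = 0.511   # electron mass [MeV]
DELTA = 1.293   # neutron-proton mass difference [MeV]

def sigma_ibd(e_nu):
    """Cross section [cm^2] for nu-bar_e + p -> e+ + n at e_nu [MeV]."""
    e_e = e_nu - DELTA                # positron total energy
    if e_e <= ME:                     # below the 1.806 MeV threshold
        return 0.0
    p_e = math.sqrt(e_e**2 - ME**2)   # positron momentum
    return 0.0952e-42 * e_e * p_e     # prefactor in cm^2 MeV^-2

print(sigma_ibd(2.0))   # ~3.3e-44 cm^2, as quoted above
print(sigma_ibd(3.0))   # ~2.6e-43 cm^2, about an order of magnitude more
\end{verbatim}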
Geo-neutrinos travel almost undisturbed through the Earth, with a finite, albeit small, probability of interacting in the scintillation liquids contained in kiloton-scale detectors installed in underground laboratories. Therefore, these particles are unique direct probes, bringing information from the Earth's internal regions that is not accessible by any other means. Importantly, the measured flux at a detector is proportional to the abundance and distribution of U and Th in the Earth, critical inputs for many geological, geophysical, and geochemical models that describe the complex processes taking place inside the Earth.
The geo-neutrino signal is especially useful for providing insights into the radiogenic power of the deep mantle, which is not directly obtainable by other methods. Some information on the chemical composition of the upper mantle can be obtained from samples brought to the surface through volcanic and tectonic processes. The chemical composition of such samples, however, can be altered during their transport, particularly so for mobile elements like K, Th and U. The lower mantle is completely inaccessible by means of direct sampling. A systematic study of geo-neutrinos can provide constraints on a broad range of questions in Earth Sciences: defining the energy available to drive plate tectonics; critically testing compositional models of the present-day mantle; determining the contribution of radiogenic heat to the total terrestrial surface heat flux; providing insights into the power generating the geo-dynamo, the source of the Earth's magnetic field; and establishing the relative abundance of Th and U in the silicate Earth and its Th/U ratio, as compared to that in some meteorites (a useful contribution to the understanding of the formation of the Solar system and the Earth). As currently understood, observations of the chemical behavior of Th and U over a wide range of conditions inside the Earth are consistent with radioactive elements being absent from the Earth's core. However, some authors~\cite{herndon} suggest the existence of a georeactor active in the Earth's central inner core, and such theories can also be tested by means of detecting electron antineutrinos.
The very low neutrino cross section and the relatively low energy of geo-neutrinos require detectors with special properties, such as a large size and a very low radioactive background. These requirements necessitate advanced technologies and considerable effort. Therefore, the experiments capable of studying the Earth's geo-neutrino flux are few, and it is not easy to set up new detectors in different regions of the world. Such detectors are placed in underground laboratories to shield the experimental setup from cosmic radiation, which can mimic antineutrino interactions.
The results of geo-neutrino measurements can be expressed in several ways. One of them is a normalized event rate, expressed in the so-called Terrestrial Neutrino Unit (TNU): the number of antineutrino events detected during one year on a target of $10^{32}$ protons ($\sim$1\,kton of liquid scintillator) with 100\% detection efficiency. Conversion between the signal $S$ expressed in TNU and the oscillated, electron-flavor flux $\phi$ is straightforward and requires knowledge of the geo-neutrino energy spectrum and of the interaction cross section, which scales with the energy of the electron antineutrino:
\begin{equation}
S(^{232}\rm{Th}) [\rm{TNU}] = 4.07 \times \phi (^{232}\rm{Th})~~[10^6 \rm{cm}^{-2} \rm{s}^{-1}]
\label{Eq:TNUFluxTh}
\end{equation}
\begin{equation}
S(^{238}\rm{U}) [\rm{TNU}] = 12.8 \times \phi (^{238}\rm{U})~~ [10^6 \rm{cm}^{-2} \rm{s}^{-1}]
\label{Eq:TNUFluxU}
\end{equation}
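Since the two conversions above are linear, a small helper suffices in practice; the sketch below simply encodes Eqs.~\ref{Eq:TNUFluxTh} and \ref{Eq:TNUFluxU}, with fluxes understood as oscillated electron-flavor fluxes in units of $10^6$\,cm$^{-2}$\,s$^{-1}$.
\begin{verbatim}
# Signal in TNU from oscillated electron-flavor fluxes [1e6 cm^-2 s^-1].
def signal_tnu(phi_u, phi_th):
    """Return (S_U, S_Th) in TNU."""
    return 12.8 * phi_u, 4.07 * phi_th

s_u, s_th = signal_tnu(1.0, 1.0)   # unit flux from each chain
print(s_u, s_th)                   # -> 12.8 TNU and 4.07 TNU
\end{verbatim}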
This paper provides a state-of-the-art perspective on the new interdisciplinary field of Neutrino Geoscience, which brings together the communities of Earth scientists and particle physicists. In Sec.~\ref{Sec:GeoModels} of this paper geological and geophysical models of the Earth are reviewed; in Sec.~\ref{Sec:GeoSignal} we describe the models of the continental and oceanic crusts and the expected geo-neutrino signal at the sites where the experiments are placed; in Sec.~\ref{Sec:detectors} the presently running geo-neutrino detectors are presented; in Sec.~\ref{Sec:Results} the results already achieved and their impact on the Earth models are discussed; and finally in Sec.~\ref{Sec:Future} future experiments are highlighted.
\section{Models of the Earth}
\label{Sec:GeoModels}
This paper examines the nature of geo-neutrinos and how they relate to the Earth, its composition and energy budget, and how they define the power available to drive the Earth's engine. Geo-neutrinos are electron antineutrinos that are naturally emitted during beta decays and their detection can in principle tell us about the amount of thorium and uranium inside the Earth. In turn, measuring the Earth's flux of geo-neutrinos will constrain the nature of materials that were available to construct the planet some 4.5\,billion years ago at one astronomical unit (AU) out from the Sun.
Collaboration between geologists and physicists has at least a 150\,year history that has yielded exciting new science discoveries. When Lord Kelvin began to consider the age of the Earth and the rate of heat dissipation from the planet, a constrained but unsolved problem, he treated it as one of simple conductive dissipation of heat. A central question in geology today is what proportion of the present-day surface heat flux of the Earth is due to radiogenic heating, and how much is due to the release of primordial heat left over from accretion and core differentiation.
The assembly of the Earth from the initial solar nebula involved the accretion of many cosmic gas-dust fragments into planetesimals and then into an ever increasing hierarchical accumulation of mass. A considerable amount of accretional energy accompanied planet assembly. Likewise, the settling of metal into the center of the Earth during core formation released gravitational energy that was later dissipated as thermal energy. Collectively, this formative period of the Earth produced a highly energetic thermal state, whereby heat dissipation is regulated by the structure of the Earth: a thermally conductive, metallic core surrounded by an insulating oxide shell.
Understanding the age of the Earth from the perspective of a simple cooling solid, as was assumed by Lord Kelvin, is not an accurate reference frame. The dissipation of heat from the Earth is in large part controlled by heat loss across a thermal boundary layer that surrounds at least two convective shells, the plastically deforming convecting mantle and the liquid outer core, shells with markedly different thermal properties. As later recognized by Ernest Rutherford, radiogenic heating plays an additional, albeit minor, role in this story. Thus, understanding the Earth's energy budget requires an assessment of the relative contributions of primordial and radiogenic heat and a determination of the rate of heat dissipation.
Fortunately, geo-neutrino studies offer yet another opportunity for a fruitful collaboration between geology and physics to address the issue of how much radiogenic heat is contained in the Earth. Early results from this field are overwhelmingly positive, in that they define the Earth as containing a complement of both primordial and radiogenic power, and are beginning to resolve the absolute amount of radiogenic power inside the Earth. This latter information is important for critically evaluating models of the building blocks available to construct the Earth. The new, interdisciplinary field of Neutrino Geoscience is now providing critical insights into the bulk composition of the Earth, as well as into the energy that powers mantle convection, plate tectonics and the geodynamo. Additional recent reviews of this field, particularly from the perspective of the geological inputs and how different models of the Earth influence the prediction of the detected geo-neutrino signal, are also available~\cite{Sramek2013, Dye2012}.
\subsection{The origin of the Earth}
\label{SubSec:Origin}
There is considerable debate about the age, origin and composition of the Earth. Increasingly, as we gain observational information from stellar nurseries, accretion disks, extra-solar planetary systems and meteorites, we are presented with a variety of potential materials and processes that can be envisaged as the building blocks and mechanisms of planetary growth.
\subsubsection{Chondrites: the building blocks of the planets}
\label{SubSubSec:Chondrites}
Today questions remain about whether or not the Earth has a chondritic composition and, if so, which of the chondrites were the essential building block of the Earth. Chondrites are primitive, undifferentiated meteorites (i.e., a chaotic assemblage of rock and metal) that are a collection of the earliest formed material in the solar system. Studies of meteorites add much to our understanding of the age of the solar system and the nature of the building blocks that make up the planets. The earliest formed fragments of the solar system (i.e., calcium-aluminum inclusions), found in some of these chondrites, are high temperature ceramic grains up to a cm across and represent nebular condensates, which experienced a series of post-formational histories that include chemical exchange in the nebula between gas and condensate, shock heating and melting events, and oftentimes secondary mineral formation. From these fragments and other observations we know that the solar system formed 4.568 billion years ago~\cite{bouvier} due to the gravitational collapse of a portion of a large molecular cloud that formed a pre-solar nebula, a rotating and collapsing disk where much of the mass collects into the central portion that becomes increasingly hotter. Further out from the center of the disk the planets formed by collisional accretion and gravitational attraction, ultimately coalescing to form ever larger bodies.
How and when the planets formed remains a subject of considerable debate. Did Jupiter form first and the other planets later? Did the inner rocky and outer gas planets form simultaneously or sequentially? The Grand Tack model~\cite{walsh} envisages Jupiter and the outer gas giants as having accreted in the first few million years of the solar system: these bodies then moved inward towards the Sun and perturbed the region of the inner solar system, which led to the formation of the inner rocky bodies; later these gas giants returned to their more distant positions, after having gravitationally filtered this inner pool of planetesimals and initiated the accretion of the rocky planets. Theoretical studies envisage the hierarchical growth of the inner rocky planets from the aggregation of planetesimals into planets, with the exchange of different components (refractory versus volatile components, metals versus silicates, etc.) across different heliocentric zones. These processes can be modeled, and the results have attributes consistent with the compositional heterogeneities seen in the inner solar system and with acceptable solutions for the distribution of planetary masses~\cite{chambers}.
Chondritic meteorites are a mixture of silicate and metal materials in proportions similar to those found in the terrestrial planets (i.e., the mass fractions of the metallic core and of the rocky shell of the mantle and crust). It has also been shown that the composition of the most primitive of the chondritic meteorites, the C1 carbonaceous chondrite, matches that of the solar photosphere~\cite{lodders}, the outer ``surface'' layer of the Sun, and that this match extends over five orders of magnitude in element concentration (Fig.~\ref{Fig:SunCI}). The comparison with the solar photosphere is significant, given that most of the mass of the solar system resides in the Sun, with Jupiter being a thousand times less massive and the Earth another thousand times smaller still. Thus, chondrites represent a guide to the building blocks of the solar system.
\begin{figure}[tb]
\begin{center}
\centering{\epsfig{file=Figure_1.pdf,scale=0.5}}
\begin{minipage}[t]{16.5 cm}
\caption{Comparison of the composition of the solar photosphere and that of C1 carbonaceous chondrites. Mass abundance data, $A$(El) [kg/kg], from Palme and Jones~\cite{palme}; the number of atoms of an element, $N$(El) [kg$^{-1}$], was derived for chondrites as $N(\rm{El}) = 10^{(A(\rm{El}) - 1.55)}$ and for the solar photosphere as $N(\rm{El}) = 10^{(A(\rm{El}) - 1.54)}$. Both derived values are normalized such that the number of silicon atoms is $N$(Si) = $10^6$, consistent with all other presentations of these data. Augmented data, when necessary, for the solar photosphere came from Lodders~\cite{lodders}, which were given directly as $N$(El) and did not have to be derived from $A$(El) data, while those for CI chondrites came from Asplund~\cite{asplund}, whose derived relation was $N(\rm{El}) = 10^{(A(\rm{El}) - 1.51)}$, normalized to $N$(Si) = $10^6$.
\label{Fig:SunCI}}
\end{minipage}
\end{center}
\end{figure}
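For reference, the normalization described in the caption of Fig.~\ref{Fig:SunCI} is a one-line computation; the sketch below assumes an illustrative $A$(Si) value rather than one taken from the cited compilations.
\begin{verbatim}
# N(El) = 10**(A(El) - offset), with the offset chosen per data set so
# that N(Si) = 1e6. The A(Si) value used here is illustrative only.
def n_atoms(a_el, offset):
    return 10 ** (a_el - offset)

print(n_atoms(7.55, 1.55))   # -> 1e6 by construction of the offset
\end{verbatim}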
There are various types of chondritic meteorites, with their classification established on petrographic grounds relating to the oxidation state of iron, the amount of iron, and the metamorphic grade (or degree of aqueous alteration) of their constituent minerals. The three dominant groups are the carbonaceous chondrites (e.g., with sub-groups labeled CI, CM, CV, CO, CR, CK, with distinctions due to mineral attributes), the ordinary chondrites (i.e., H, L and LL, distinguished by their high, low and very low iron contents) and the enstatite chondrites (i.e., EH and EL, again distinguished by their high and low iron contents); these varieties are notable for the redox state of their iron, being oxidized, intermediate and reduced, respectively. This redox state of iron also couples to a number of chemical and isotopic attributes of these chondrites~\cite{warren}, where iron is either in oxide form and associated with silicates (the major mineral family) or in reduced or sulfide form, with enstatite chondrites having negligible iron in their silicates (Fig.~\ref{Fig:UreyCraig}). A simple compositional model for the Earth (red star in Fig.~\ref{Fig:UreyCraig}) is plotted for comparison with the chondrites, given the mass of the core, the amount of iron in the core, and the Bulk Silicate Earth (the BSE, i.e., the crust plus the mantle, the primitive undifferentiated silicate fraction of the Earth after core subtraction).
\begin{figure}[tb]
\begin{center}
\centering{\epsfig{file=Figure_2.pdf,scale=0.5}}
\begin{minipage}[t]{16.5 cm}
\caption{A Urey-Craig diagram that separates chondritic meteorites according to the ratio of oxidized Fe to total silicon ($x$-axis) relative to the ratio of reduced iron (including that in sulfide) to total silicon ($y$-axis). The Earth has a large metallic core and thus its bulk composition plots close to the reduced end of the redox scale.
\label{Fig:UreyCraig}}
\end{minipage}
\end{center}
\end{figure}
\subsubsection{Element behavior: differentiation and redox potentials in the disk and planets}
\label{SubSubSec:Elements}
Importantly for the Earth, the redox state of iron controls the size of the core, which is 1/3 the mass of the planet. Moreover, the differentiation of the Earth into the core, silicate Earth (mantle plus crust) and hydrosphere/atmosphere appears to have been a relatively early process (i.e., mostly completed in the first 50\,million years or thereabouts). Evidence for the timing of core formation comes from the short-lived $^{182}$Hf-$^{182}$W isotope system ($\beta^-$, with $t_{1/2}$ = 8.9\,million years), with both chondrites and iron meteorites having distinctly lower $^{182}$W/$^{184}$W isotopic compositions than the silicate Earth, implying a young formation age (of the order of ten to a few tens of millions of years after the formation of the solar system), which is based on the number and timing of separation steps extracting W into the core and leaving Hf in the silicate Earth~\cite{kleine, yin}. Although the models differ on the exact timing and the number of multi-stage steps involved in its evolution, there is increasing consensus that core formation effectively occurred as early as 11\,million years after solar system formation or as late as $\sim$4.50\,Ga, with the latter being constrained by the ages of the earliest minerals on Earth and the time of Moon formation, which created a Moon having an isotopic composition identical to that of the Earth.
The segregation of iron (and Ni) into the core is the single most important chemical differentiation event that occurred on Earth, and it established to first order the distribution of elements in the planet. Historically, Emil Wiechert, a German physicist and geophysicist, envisaged the first order structure of the Earth in 1897, with the Earth having a metallic core surrounded by a silicate shell. By 1913, his PhD student Beno Gutenberg defined the depth to the core-mantle boundary at 2900\,km, roughly the same depth to which it is known today (2893 $\pm$ 5\,km). With this perspective, coupled with the planet's moment of inertia and an understanding of meteorites, scientists compared the Earth to the heavenly objects falling upon it, the chondrites. However, these comparisons led to much speculation about the appropriate analogs of planets.
At a simple level the bulk of the Earth can be described by four elements (i.e., O, Fe, Si and Mg), which make up about 93\% by mass of the planet. In combination with Al, Ca and Ni, these seven elements describe more than 98\% of the mass of the Earth and thus define the bulk of the system. Of these seven elements, Ca and Al play a crucial role and their abundances can be directly linked to that of Th and U, as these four elements, along with some thirty other elements are considered refractory elements.
The refractory elements are those that condense out of a nebular disk at high temperatures and are empirically observed in constant relative proportions in the chondrites. Thus, chondritic ratios are conserved for the refractory elements, whereas the relative abundances of the other five abundant elements in the Earth (i.e., O, Fe, Si, Mg, Ni) and the remaining non-refractory elements vary markedly between different types of chondrites. Consequently, if we can establish the absolute abundance of Th and U in the planet, we can use chondritic ratios of refractory elements to set their abundances and, from that, model the remaining abundances of the other elements. Importantly, if we could determine the absolute abundance of potassium, a moderately volatile element, in the Earth, we could establish the volatility curve for the planet. (Geo-neutrinos from $^{40}$K have energies below the kinematic threshold of the current detection interaction, the inverse beta decay.)
Collectively, the abundance of the volatile lithophile elements establishes the planet's volatility scale and provides a constraint on the time-integrated nature of the material accreted at 1\,AU. Relative to C1 carbonaceous chondrites the Earth is strongly depleted in volatile elements (Fig.~\ref{Fig:Volatility}). This includes not only the ices (i.e., compounds of H, C, N and O), but also the alkali metals, S, and other non-refractory elements. The short-lived ($t_{1/2}$ = 3.7\,million years) radioactive system $^{53}$Mn, which decays to $^{53}$Cr, has been used to document that the Earth's volatile depletion signature, like that of various meteorites, was established within two million years of the start of the solar system~\cite{shukolyukov,moynier,trinquier}. Thus, the Earth's building blocks were likely volatile depleted, and so much so that we do not have an analogous example among the chondritic meteorites.
\begin{figure}[tb]
\begin{center}
\centering{\epsfig{file=Figure_3.pdf,scale=0.5}}
\begin{minipage}[t]{16.5 cm}
\caption{The abundances of elements in the silicate Earth (i.e., the primitive mantle, which was later differentiated into the present-day crust and mantle) divided by their relative abundances in C1 carbonaceous chondrites and plotted against the half-mass condensation temperatures for the gas density of the solar system's nebular disk at approximately 1\,AU. See~\cite{mcdonough} for details.
\label{Fig:Volatility}}
\end{minipage}
\end{center}
\end{figure}
In addition to considering the behavior of elements (i.e., refractory versus volatile) in the nebular disk, geologists classify elements according to their geochemical affinities during geological processes, with elemental affinities cast according to partitioning into metal ({\it siderophile}), silicate ({\it lithophile}), sulfides ({\it chalcophile}), and water and gases ({\it atmophile}). Hence, inventories of siderophile elements are stored in the Earth's core, with minor amounts in the mantle, while the chalcophile elements were divided between the core and mantle~\cite{mcdonough}. Core formation was likely protracted over a one to a few million year time scale and occurred over a range of conditions but, on average, appears to have been established at mid-mantle pressure and temperature and a dominantly reduced oxygen fugacity~\cite{oneil,rubie,wood}. This scenario of core formation is derived by combining data on the absolute and relative abundances of elements in the silicate Earth and in chondrites with experimental observations that establish the thermodynamic behavior of elements in analog material at controlled pressure, temperature and gas fugacity conditions in the laboratory. The depletion of siderophile and chalcophile elements in the silicate Earth is accounted for by their sequestration into the core. To different degrees the siderophile and chalcophile elements have dissimilar geochemical affinities, as can be observed (Fig.~\ref{Fig:Volatility}) from the distinctive depletions of the two refractory elements Mo and W and of the highly siderophile noble metals.
The lithophile elements, those that partition into silicates and other oxides, are excluded from the core-forming metals due to their chemical affinities for oxygen. However, these potentials are established by the ambient oxygen fugacity at the time of metal-silicate equilibrium, and it is possible that during core formation some nominally lithophile elements were reduced to their metallic state. Therefore, a fraction of the inventory of some lithophile elements may be found in the core. Experiments that mimic a range of core forming conditions, even in the presence of sulfide bearing metal, find that U has negligible affinity for a core forming metal~\cite{wheeler}; the conditions for forcing Th into a metal phase are more extreme than those for U~\cite{jones}, and Th is thus even less likely to be in the core. Although debate surrounds the potential for Th and/or U being partitioned into the core~\cite{malvergene}, three significant observations are inconsistent with such assertions, given the Earth has a chondritic Th/U value of 3.9 $\pm$ 0.3: (1) the average mantle Th/U ratio is $\sim$3, based on ocean island basalts~\cite{arevalo2013} and mid-ocean ridge basalts~\cite{arevalo2009,arevalo2010}; (2) the continents, the complementary reservoir to the mantle, have an average crustal Th/U ratio of between 4 and 5, based on studies of crustal rocks~\cite{rudnick, huang}; and (3) the time-integrated $\kappa$ value (a measure of $^{232}$Th/$^{238}$U from the slope of $^{208}$Pb/$^{206}$Pb) of mantle and crustal rocks is $\sim$4~\cite{galer,elliot,paul}.
\subsection{Structure of the Earth}
\label{SubSec:Structure}
The Earth is a differentiated planet made up of three shells: a metallic core, overlain by a rocky layer of mantle and crust, which is in turn surrounded by an outermost fluid layer of hydrosphere and atmosphere (see Table~\ref{tab:Earth} for further details). This structure is a consequence of the physical and chemical processes that occurred early in Earth's history, particularly the initial core formation. The fundamental result of planetary differentiation is that the element distribution in the Earth is not random, but controlled by a combination of chemical and physical potentials. Although dense iron is at the Earth's gravitational center, other heavy elements like uranium and thorium are concentrated upwards, in the mantle and more so in the continental crust, due to their chemical properties.
\begin{table}
\begin{center}
\begin{minipage}[t]{16.5 cm}
\caption{Properties of the Earth. Data from~\cite{yoder,masters,mcdonough, huang}.}
\label{tab:Earth}
\end{minipage}
\begin{tabular}{ll}
\hline
{\bf Radii} [m] & \\
Mean radius of the Earth & 6,371,010 $\pm$ 20 \\
Equatorial radius & 6,378,138 $\pm$ 2 \\
Polar axis & 6,356,752 \\
Inner (solid) core radius & $(1.220 \pm 0.010) \times 10^6$\\
Outer (liquid) core radius & $(3.483 \pm 0.005) \times 10^6$ \\
\hline
{\bf Thickness} [m] & \\
Continental crust & $(34 \pm 4) \times 10^3$\\
Oceanic crust & $(8.0 \pm 2.7) \times 10^3$\\
\hline
{\bf Mass} [kg] & \\
Earth & $5.9736 \times 10^{24}$ \\
Inner (solid) core & $9.675 \times 10^{22}$\\
Outer (liquid) core & $1.835 \times 10^{24}$ \\
Core & $1.932 \times 10^{24}$\\
Mantle & $4.043 \times 10^{24}$ \\
Oceanic crust & $(0.67 \pm 0.23) \times 10^{22}$ \\
Continental crust & $(2.06 \pm 0.25) \times 10^{22}$\\
Bulk crust & $(2.73 \pm 0.48) \times 10^{22}$ \\
Ocean & $1.4 \times 10^{21}$ \\
Atmosphere & $5.1 \times 10^{18}$ \\
\hline
{\bf Fractional mass contributions } & \\
{\it -Bulk silicate Earth} & \\
Oceanic crust & 0.17\% \\
Continental crust & 0.51\% \\
Mantle & 99.32\% \\
{\it - Earth} & \\
Silicate Earth & 67.7\% \\
Core & 32.3\% \\
Inner core to bulk core & 5.0\% \\
\hline
{\bf Volume} [m$^3$] & \\
Earth & $1.083 \times 10^{21}$ \\
Inner (solid) core & $7.606 \times 10^{18}$ \\
Outer (liquid) core & $1.694 \times 10^{20}$ \\
Bulk core & $1.770 \times 10^{20}$ \\
Bulk silicate Earth & $9.138 \times 10^{20}$ \\
\hline
\end{tabular}
\end{center}
\end{table}
\subsubsection{Seismic data and mineral constitution of the mantle}
\label{SubSubSec:Seismic}
The first order structure of the Earth's interior is defined by the 1D seismological profile, called PREM: Preliminary Reference Earth Model~\cite{dziewonski}. From the core outwards the Earth has a series of major, seismically defined features and discontinuities (Fig.~\ref{Fig:PREM}). This seismic profile of the Earth describes its constitution and make up~\cite{birch}, given the equation of state of Earth materials at appropriate temperatures and conditions of the interior. The seismic discontinuities or jumps are the result of mineralogical phase changes and/or crossing compositional boundaries.
\begin{figure}[tb]
\begin{center}
\centering{\epsfig{file=Figure_4.pdf,scale=0.5}}
\begin{minipage}[t]{16.5 cm}
\caption{The 1D seismological model of the Earth as reported in PREM: Preliminary Reference Earth Model~\cite{dziewonski}. $\rho$: density [g/cm$^3$]; $V_p$: velocity [km/s] of the primary, longitudinal waves; $V_s$: velocity [km/s] of the secondary, shear, transverse waves (note that S-waves do not propagate through the liquid outer core); ICB: Inner Core Boundary; CMB: Core Mantle Boundary. The right panel, which focuses on the top 800\,km of the mantle, compares PREM data to other global models, STW105 from~\cite{kustowski}, shown in red, and SEMum 1D from~\cite{lekic}, shown in blue. The darker (orange) filled area, shown in both panels and delimited by the discontinuities at 410\,km and 660\,km depth, is the transition zone described in the text.
\label{Fig:PREM}}
\end{minipage}
\end{center}
\end{figure}
The most significant compositional boundary in the Earth is at the core-mantle boundary (Earth scientist's CMB) and defines the first discovered (circa 1907) and the most dramatic seismic discontinuity. Here compressional wave velocities ($V_p$) drop substantially from the mantle values and then increase throughout the liquid outer core and solid inner core. The absence of a shear wave in the outer core is consistent with it being liquid. By the 1920s seismologists~\cite{jeffreys} mapped out a series of discontinuities in the mantle and began discussing the nature of the seismic jumps as being either isochemical phase changes or compositional layering in the mantle.
The major, seismically-defined boundaries in the mantle are at 410\,km and 660\,km depth and at the D'' boundary near the CMB. The D'' (pronounced D-double prime) layer is recognized as the region near the core-mantle boundary (100-300\,km above the CMB) where there is a decrease in the vertical gradient of both compressional and shear wave velocities. There is considerable community debate about the physical and chemical state of this irregularly shaped mantle domain, which is a thermal boundary layer between the hot core and the cooler mantle. Seismology provides an instantaneous picture of the Earth, and so it is difficult to define the age of the D'' region. There are several suggestions regarding its history, ranging from a long term (age of the Earth) to a shorter term ($10^8$ to $10^9$\,years) feature of the mantle that gets refreshed through time. The nature of the D'' region has been a source of great intellectual speculation: it has been interpreted as a cumulate pile from an early-Earth, global magma-ocean differentiation event~\cite{labrosse,lee}, as an early surface crust that was gravitationally sequestered to the base of the mantle and is representative of an Early Enriched Reservoir (EER from~\cite{boyet}), or as the final resting place of subducting slabs (surface tectonic plates) of recycled oceanic lithosphere (i.e., crust plus underlying mantle that is mechanically coupled to the plate).
The 660\,km and 410\,km boundaries are well documented phase-change boundaries (Fig.~\ref{Fig:geotherm}). It is uniformly agreed that the relatively sharp ($\leq$10\,km) 410\,km seismic boundary is due to the isochemical, positive Clapeyron slope ($dP/dT$) phase change of olivine to wadsleyite ($\alpha$-olivine to $\beta$-spinel structure, (Mg$_{0.9}$Fe$_{0.1}$)$_2$SiO$_4$), olivine being the dominant ($\geq$55 mole \%) mineral in the upper mantle. Consequently, the depth to the 410\,km seismic boundary appears to be thermally controlled, with the phase transition occurring at shallower depth in cold regions (e.g., areas with subducting slabs) and deeper in higher temperature regions (e.g., upwelling hot mantle regions, as in the Hawaiian plume).
\begin{figure}[tb]
\begin{center}
\centering{\epsfig{file=Figure_5.pdf,scale=0.5}}
\begin{minipage}[t]{16.5 cm}
\caption{A phase diagram of the mineralogy and thermal gradient of the top 1000\,km of the Earth's mantle. The mode proportion of mantle minerals is presented on the top $x$-axis, whereas the temperature scale is shown on the bottom $x$-axis. The Transition Zone (see also Fig.~\ref{Fig:PREM}) is marked by the two major, seismic discontinuities at 410\,km and 660\,km, which are coincident with major phase changes. Assuming a Mg to Fe mole proportion of 9 to 1 for the bulk composition, the olivine to wadsleyite transition occurs at $\sim$1670\,K and 410\,km and the disproportionation of ringwoodite to Mg-perovskite and ferropericlase occurs at $\sim$1870\,K and 660\,km~\cite{katsura, akaogi}.
\label{Fig:geotherm}}
\end{minipage}
\end{center}
\end{figure}
The 660\,km seismic discontinuity is a somewhat broader ($\leq$60\,km depth variation) transition that is also recognized as a phase change. At these depths the bi-mineralic (majorite garnet and ringwoodite $\gamma$-spinel) assemblage breaks down into Mg-perovskite ((Mg$_{0.9}$Fe$_{0.1}$)SiO$_3$), ferropericlase ((Mg,Fe)O) and Ca-perovskite (CaSiO$_3$), with Al being distributed between the two perovskite phases and Fe$^{3+}$ distributed between all the phases. At this depth the disproportionation of $\gamma$-spinel to Mg-perovskite and ferropericlase has a negative Clapeyron slope, so the topography of this transition should anti-correlate with that of the 410\,km discontinuity. Overall, however, there is little evidence for this anti-correlation~\cite{houser}. There has been considerable debate in the community, however, regarding whether or not this phase change also defines a marked compositional change in the mantle. In large part, models differ on the amount of ferropericlase in the lower mantle or, alternatively, on the difference in the amount of SiO$_2$ between the lower and upper mantle.
The topography on the 410\,km and 660\,km discontinuities can be obtained from high resolution seismic images, which define the thickness of the mantle's Transition Zone~\cite{lawrence}. The average thickness of this zone is 242 $\pm$ 9\,km, with regions in the western Pacific (and also from the Red Sea to the Aegean region) being as much as 35\,km thicker, and regions northwest of Hawaii and beneath Central Africa up to 35\,km thinner. Thinning and thickening of the Transition Zone is mostly due to topography on the 660\,km discontinuity. There are regions with large amplitudes in boundary heights over narrow horizontal scales, which correlate with subducting slabs. Hot regions of the mantle (e.g., upwelling plumes, like Hawaii) are correlated with anomalously thin transition zones and are also laterally narrow. Overall, topography on the 410\,km and 660\,km discontinuities is generally correlated with temperature variations on small lateral scales (slabs and plumes).
There are two significant boundary layer structures in the Earth that are associated with its cooling. At the base of the mantle, the D'' layer is a structure that in part reflects the conductive thermal boundary between the hotter core and the cooler mantle. At the top of the mantle, the lithosphere is the mechanical plate that translates with mantle convection and forms its outermost conductive cooling layer. The oceanic lithosphere, made of oceanic crust (8 $\pm$ 2\,km) and its subjacent lithospheric mantle (up to 80\,km thick), forms at mid-ocean spreading centers, where adiabatic decompression leads to melting, crust production and basal accretion of residual mantle. Later, and further afield from the spreading ridge, the base of this lithosphere continues to accrete ambient mantle due to conductive cooling processes. The lithosphere beneath the continents is made up, on average, of 34\,km of crust underlain by lithospheric mantle that is estimated to reach down to 175 $\pm$ 75\,km~\cite{huang}. Fragments of this deep continental lithosphere are brought up in selected magma types and demonstrate that the roots of continents have age distributions comparable to those seen in their overlying crust. On average the oceanic lithosphere is about 50\,million years old (and everywhere $<$200\,million years old), whereas the continental lithosphere is on average about 2\,billion years old.
The oceanic crust is made up almost exclusively of basalts (SiO$_2$ $\sim$50\,wt\%) with limited compositional variation, whereas the continental crust is markedly different, with a large diversity of rock types (igneous, metamorphic, and sedimentary) and a complete range of compositions (e.g., sandstones with $\sim$100\,wt\% SiO$_2$ and carbonates with $\sim$0\,wt\% SiO$_2$). On average the continents were made by extracting basaltic lavas from the mantle, followed by the burial, metamorphism and melting of these lavas and by granite formation, which leads to granites floating to the top of the crust and to the loss of the mafic residue from the continents. This multi-stage, complex history produces a heterogeneous composition, which is on average andesitic (SiO$_2$ $\sim$60\,wt\%)~\cite{rudnick}. Consequently, these differentiation processes ultimately lead to a simplified distribution of U in the Earth, with $\sim$1,000\,ng/g in the continental crust, $\sim$100\,ng/g in the oceanic crust and $\sim$10\,ng/g in the present-day mantle.
\subsubsection{Composition of the mantle and the existence of compositional layers}
\label{SubSubSec:mantle}
The observed variation in seismic velocity in the mantle and core can be used to interpret their composition, based on the equation of state of materials at specific pressures and temperatures~\cite{birch}. In a pioneering study, using the bulk sound velocity information from seismic data and assuming a mantle geotherm, Birch~\cite{birch} concluded that the Transition Zone of the mantle was either a region of phase changes (Fig.~\ref{Fig:geotherm}), a compositional gradient, or both, and emphasized that the lower mantle would have high-pressure modifications of the ferro-magnesian silicates that are characteristic of upper mantle minerals. He also speculated, rightly, that the Transition Zone would be ``key to a number of major geophysical problems''.
We can define a series of mineralogical and compositional models for the Earth, the core, and the mantle that are consistent with the range of compositions seen in chondritic meteorites. However, given that the results are non-unique, debate continues on the constitution of the deeper portions of the mantle. There are two broad end-member mineralogical and compositional models for the Earth's mantle, based on (1) a homogeneous model, with an upper and lower mantle of similar compositions (Hart and Zindler, 1986~\cite{hart}; McDonough and Sun, 1995~\cite{McDonoughSun}; All\'egre et al., 1995~\cite{allegre95}; Palme and O'Neill, 2003~\cite{palme}), and (2) a layered model, with an upper and lower mantle of distinctly different compositions (e.g., Anderson, 2002~\cite{anderson02}; Javoy et al., 2010~\cite{javoy}; Murakami et al., 2012~\cite{murakami}). Variants on these models that envisage lesser degrees of layering involve only parts of the lower mantle. These concepts include basal mantle cumulate layers resulting from early-Earth magma ocean conditions (Labrosse et al., 2006~\cite{labrosse}; Lee et al., 2007~\cite{lee}), or gravitationally sequestered layers of early-enriched crust (Boyet and Carlson, 2005~\cite{boyet}).
\begin{figure}[tb]
\begin{center}
\centering{\epsfig{file=Figure_6.pdf,scale=0.5}}
\begin{minipage}[t]{16.5 cm}
\caption{A plot of the weight ratios of some major elements in chondrites (red, green and blue circles) and in various models of the silicate Earth. The Mg/Si value establishes the relative proportion of olivine (atomic Mg/Si of 2) to pyroxene (atomic Mg/Si of 1) in the upper mantle. The present day upper mantle has a Mg/Si weight ratio of about 1.1. Silicate Earth models that fall below this Mg/Si value of 1.1 (e.g., Javoy et al., 2010~\cite{javoy} and an assumed CI model of Murakami et al., 2012~\cite{murakami}) predict a lower mantle that is enriched in pyroxene and chemically distinct from the upper mantle.
\label{Fig:MgAloverSi}}
\end{minipage}
\end{center}
\end{figure}
Recently, Murakami et al.~\cite{murakami} reported new developments in obtaining sound velocity data for lower mantle minerals held at pressure and temperature conditions appropriate for the lower mantle and went on to critically evaluate analog compositional models of the deep Earth. Examining a mixed assemblage of silicate perovskite and ferropericlase (see Fig.~\ref{Fig:geotherm} for an example of possible mineral proportions), they found that the best fit to their data was a lower mantle model containing 93 volume percent silicate perovskite, i.e., about half the ferropericlase shown in Fig.~\ref{Fig:geotherm}. This higher proportion of perovskite requires that the lower mantle be enriched in silicon relative to the upper mantle and that the bulk composition of the silicate Earth follow that of a CI-type chondrite (i.e., Mg/Si of about 0.9 in Fig.~\ref{Fig:MgAloverSi}). The implication of this model is that the mantle is chemically stratified and that there is limited mass transport across the 660\,km seismic boundary layer.
One of the grand challenges in Earth sciences is the integration of data from a wide variety of new technologies into a coherent picture that is constrained by the uncertainties of both the data and the models. The example of the Murakami et al.~\cite{murakami} data is a case in point. This exciting new technological development provides unparalleled opportunities for mineral physicists to characterize sound wave speed in minerals at appropriate lower mantle pressures. Results from this study, however, (1) were for shear wave velocities ($V_s$) only (not compressional wave velocities ($V_p$)), (2) were extrapolated to modeled lower mantle temperatures, (3) were conducted on simple analog compositional materials (e.g., not considering Ca, Al, and Fe$^{3+}$ contributions) and (4) were compared without uncertainties to the 1D PREM model, which is based on seismological data that have their own uncertainties. It is also worth noting that constraints on mantle composition from $V_s$ data are significantly weaker than that based on $V_p$ data. Likewise, models using density and bulk modulus data from other mineral physics experiments encounter similar issues with the full propagation of uncertainties in these systems.
Similarly, claims by isotope geochemists that the compositions of Earth materials match a specific family of chondrites need to be placed in a greater context. Physically observable differences in shapes and sizes of components and the redox states of minerals lead to the classification of meteorite groups. Recent strides in mass spectrometry have allowed us to identify small isotopic differences (i.e., a few parts in $10^4$ to $10^6$) between these petrographic groups of chondritic meteorites (e.g., Boyet and Carlson, 2005~\cite{boyet}; Gannoun et al., 2011~\cite{gannoun}; Warren, 2011~\cite{warren}; Zhang et al., 2012~\cite{zhang}; Fitoussi and Bourdon, 2012~\cite{fitoussi}). Javoy, 1995~\cite{javoy1995} highlighted the shared oxygen isotopic composition of the Earth and enstatite chondrites and later Javoy et al., 2010~\cite{javoy} suggested a compositional model for the Earth based on the same enstatite chondrites. More recently some scientists have continued to support the match between the Earth and enstatite chondrites (e.g., Gannoun et al., 2011~\cite{gannoun}; Warren, 2011~\cite{warren}; Zhang et al., 2012~\cite{zhang}), whereas others have highlighted differences between these bodies (Fitoussi and Bourdon, 2012~\cite{fitoussi}). The Javoy et al., 2010~\cite{javoy} model utilizing the enstatite chondrites postulates a gross chemical distinction between the upper and lower mantle, thus requiring convecting layering of the mantle.
A major constraint on the nature of the deep mantle has come from studies of the noble gas isotopic composition of basalts. Basaltic lavas erupted at deep ocean ridges, the boundaries of tectonic plates in the ocean, have chemical and isotopic compositions that are distinctly different from those erupted at places like Hawaii and Iceland, volcanic edifices distributed randomly at the Earth's surface without regard to tectonic plate boundaries. The latter, ocean island basalts (often labeled OIB), have noble gas isotopic compositions (i.e., enrichments in primordial $^3$He and low $^{40}$Ar/$^{36}$Ar) that are indicative of un-degassed source regions. The former, mid-ocean ridge basalts (often labeled MORB), have noble gas isotopic compositions (i.e., enriched in $^4$He and $^{40}$Ar) consistent with their sources being degassed and enriched in the radiogenic products of $^{40}$K and $\alpha$ decays. Consequently, these observations have led many to conclude that the mantle is chemically layered, with the lower mantle (its depth not being constrained by these chemical arguments) containing a more primordial, under-degassed noble gas component that is tapped by focused upwelling plumes, while the degassed upper mantle is tapped by the wholesale sampling of 40,000\,km of mid-ocean ridges~\cite{allegre96}.
Increasingly, the end-member layered mantle models, which were greatly in favor some thirty years ago, have come under considerable scrutiny and disfavor. This is due mostly to seismic tomographic observations that show oceanic lithospheric plates plunging into the lower mantle, with some projecting steeply down from the subducting trench and others showing a stair-step transition of ponding in the Transition Zone and later laterally becoming unstable and then plunging into the deep mantle~\cite{grand,hilst}. Collectively, these seismic images of subducting lithospheric plates traversing the mantle are taken as evidence of whole mantle convection with mass transfer occurring over the depth and breadth of the mantle. Importantly, however, the interpretation of geological information requires a 4D integration of data, which has led some to accept this seismological evidence, while postulating that the mass transport condition is a relatively recent development in the Earth~\cite{allegre02, allegre04}. Most recently, however, noble gas models of the mantle have reconciled the observational data with whole mantle convection~\cite{gonnerm}.
\subsection{Earth's thermal budget, heat producing elements, and geo-neutrinos}
\label{SubSec:heat}
Models that predict the major element composition of the silicate Earth (sometimes referred to as the bulk silicate Earth, or BSE) also predict the Th and U content, given the assumption that the refractory lithophile elements are in chondritic proportions and were excluded from the core. This assumption is applied to all of the terrestrial planets; it implies, for example, that given an Al content and a chondritic Ca/Al value (1.1), the Ca content is predicted to be 10\% greater. Likewise, the chondritic Al/Th ($(2.9 \pm 0.2) \times 10^5$) and Th/U (3.9 $\pm$ 0.3) ratios can be used to establish the abundances of Th and U in models of the silicate Earth, particularly when considering differences between model compositions. Table~\ref{Tab:EarthComp} presents a range of compositional models for the silicate Earth and includes model U contents ranging between 12 and $26 \times 10^{-9}$\,kg/kg. A more enriched model by Wasserburg et al., 1963~\cite{wasserburg}, based on chemical and isotopic observations of oceanic and continental rocks, proposed that the Bulk Silicate Earth has a U content of $33 \times 10^{-9}$\,kg/kg. The most U and Th depleted model was presented by O'Neill and Palme, 2008~\cite{ONeill} and Campbell and O'Neill, 2012~\cite{campbell} and was based on a concept of collisional erosion in the early Earth, whereby a fraction of the Earth's surface crust ($\sim$2\% of the mass of the silicate Earth), enriched in the heat producing elements, is lost to space. In this model the Earth has lost about half of its budget of heat producing elements, leaving it with only $\sim$$10^{-8}$\,kg/kg of U.
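As a concrete illustration of this chondritic-ratio bookkeeping, the short sketch below (a minimal example, assuming the Al content of the McDonough and Sun, 1995 model from Table~\ref{Tab:EarthComp}) recovers, to within rounding, that model's tabulated Th and U abundances:
\begin{verbatim}
# Th and U from a model Al content via the chondritic ratios quoted above.
AL    = 2.34e-2    # Al mass fraction of the silicate Earth (model value)
AL_TH = 2.9e5      # chondritic Al/Th mass ratio
TH_U  = 3.9        # chondritic Th/U mass ratio

th = AL / AL_TH    # ~8.1e-8 kg/kg
u  = th / TH_U     # ~2.1e-8 kg/kg
print(f"Th ~ {th*1e9:.0f} ppb, U ~ {u*1e9:.0f} ppb")   # ~81 and ~21 ppb
\end{verbatim}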
\begin{table}
\begin{center}
\begin{minipage}[t]{16.5 cm}
\caption{Compositional models of the silicate Earth. Abundances of major elements are given in weight percent, those of Th and U in $10^{-9}$ kg/kg.}
\label{Tab:EarthComp}
\end{minipage}
\begin{tabular}{l|lllll|ll|ll}
\hline
Compositional Models & Al & Ca & Mg & Si & Fe & Al/Si & Mg/Si & Th & U \\
\hline
Ringwood, 1979~\cite{ringwood} & 1.75 & 2.22 & 23.0 & 21.1 & 6.22 & 0.083 & 1.09 & - & - \\
Jagoutz et al., 1979\cite{jagoutz} & 2.12 & 2.50 & 23.1 & 21.1 & 6.06 & 0.100 & 1.10 & 94 & 26 \\
Taylor, 1980~\cite{taylor} & 1.75 & 1.89 & 24.1 & 21.3 & 6.22& 0.082 & 1.13 & 70& 18 \\
Wanke et al., 1984~\cite{wanke} & 2.17 & 2.50 & 22.2 & 21.3 & 5.83& 0.102 & 1.04 & - & - \\
Palme and Nickel, 1985~\cite{PalmeNickel} & 2.54 & 3.14 & 21.4 & 21.6 & 5.99 & 0.118 & 0.99 & - & -\\
Hart and Zindler, 1986~\cite{hart} & 2.15 & 2.34 & 22.8 & 21.5 & 6.22 & 0.100 & 1.06 & 79 & 21\\
Anderson, 1989~\cite{anderson89} & 1.69 & 2.43 & 19.7 & 21.0 & 12.2 & 0.080 & 0.94&- & -\\
McDonough and Sun, 1995~\cite{McDonoughSun}& 2.34 & 2.52 & 22.8 & 21.0 & 6.25 & 0.111 & 1.08 & 80 & 20\\
All\'egre et al., 1995~\cite{allegre95} & 2.16 & 2.31& 22.8 & 21.6 & 5.82 & 0.100 & 1.06 & - & -\\
Palme and O'Neill, 2003~\cite{palme} & 2.38 & 2.61 & 22.2 & 21.2 & 6.30& 0.112 & 1.04& 83 & 22\\
Anderson, 2007~\cite{anderson07} & 2.02 & 2.20 & 20.5 & 22.4 & 6.11 & 0.090 & 0.92 & 77 & 20\\
Lyubetskaya and Korenaga, 2007~\cite{lyub} & 1.86 & 1.99 & 23.8 & 21.0& 6.20 & 0.089 & 1.13 & 63 & 17\\
Javoy et al., 2010, Table 4 in~\cite{javoy} & 1.28 & 1.28 & 19.1 & 24.1 & 8.63 & 0.053 & 0.79 & 43 & 12\\
Javoy et al., 2010, Table 6 in~\cite{javoy}& 1.28 & 1.34 & 22.0 & 23.3 & 6.87& 0.055 & 0.95 &43 &12\\
\hline
\end{tabular}
\end{center}
\end{table}
Compositional models for the Earth thus predict a factor of $\sim$3 difference in the amount of U in the Earth. Assuming the Earth has a chondritic Th/U of 4 and a planetary K/U of $1.4 \times 10^4$~\cite{wasserburg,jochum,arevalo}, one can calculate the total heat production from the decay of these elements, as well as the surface geo-neutrino flux. \v{S}r\'amek et al., 2013~\cite{sramek} recently modeled this range of compositional space and showed that the heat production power ranges from 10 to more than 30\,TW, to be compared with an estimated surface heat flux of 46-47\,TW~\cite{jaupart,davies}. Recent work by Huang et al.~\cite{huang} finds that the continental crust has $6.8^{+1.4}_{-1.1}$\,TW of radiogenic power and that the remaining power resides in the mantle, not the core. This then translates to as little as 3\,TW and as much as 23\,TW of radiogenic power in the mantle to drive convection and plate tectonics.
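The arithmetic behind such estimates is simple; the following rough sketch (assuming the 20\,ppb U content of the McDonough and Sun, 1995 model, the Th/U and K/U ratios quoted above, and standard per-element heat production constants, which are not taken from this paper) reproduces the $\sim$20\,TW scale of the intermediate models:
\begin{verbatim}
# Radiogenic power for a model silicate Earth; heat production constants
# [W per kg of element] are standard literature values assumed here.
M_BSE = 4.07e24        # bulk silicate Earth mass [kg] (mantle + crust)
A_U   = 20e-9          # U mass fraction [kg/kg]

heat = {"U": 98.1e-6, "Th": 26.3e-6, "K": 3.5e-9}
mass = {"U": A_U * M_BSE,
        "Th": 4.0  * A_U * M_BSE,    # Th/U = 4
        "K": 1.4e4 * A_U * M_BSE}    # K/U = 1.4e4

total_tw = sum(mass[el] * heat[el] for el in heat) / 1e12
print(f"radiogenic power ~ {total_tw:.1f} TW")   # ~20 TW for this model
\end{verbatim}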
As mentioned above, it has been proposed that the core contains U and/or K~\cite{rama}, but these arguments have been addressed here and in~\cite{mcdonough}, where it is recognized that positing such models requires corroborating geochemical evidence that is also free of negating consequences. Studies have shown that reducing U into the metallic state has a far greater effect on other elements (e.g., Ti) that show no evidence for a core extraction. Likewise, there is limited potential for CaS and various REE to accompany a potassium sulfide into a core forming phase; again, there is no evidence for such a process. Finally, Herndon~\cite{herndon} has suggested a U-driven georeactor in the Earth's core as a consequence of his model of Earth's formation, which involves a highly reduced Earth. Herndon's compositional model is inconsistent with the chemical and isotopic observations of the Earth's mantle presented in McDonough, 2003~\cite{mcdonough}, particularly given a core containing significant quantities of Ca, Mg, U, Th, and other lithophile elements based on analogies with enstatite chondrites (highly reduced meteorites).
Estimates over the last 40 years for the Earth's surface heat flow are between 41 and 47\,TW, with recent estimates being $46 \pm 3$\,TW~\cite{jaupart} and $47 \pm 2$\,TW~\cite{davies}. The latter estimate considers data from $>$38,000 heat flow measurements from around the globe. Measuring the temperature and temperature gradient in the Earth and then projecting this temperature condition into the body is a considerable challenge, and it was this matter that confounded Lord Kelvin when he folded his observations into an estimate of the age of the Earth. Determining the Earth's heat flow from measurements of gradients, heat production, and conductivity is a conceptually simple but practically complex task (see also the discussions in~\cite{Sramek2013} and~\cite{Dye2012}). The recent recognition that the near surface gradient in a heat flow measurement also records climate change effects on millennial time scales has many going back to the original heat flow measurements to de-convolve this effect from the estimated surface flux.
Combining the present day surface flux ($46 \pm 3$\,TW) with estimates of the radiogenic heat production allows one to estimate the amount of primordial heat remaining in the Earth. Models envisaging ``low-$Q$'' heat production for the Earth, with as little as 10\,TW of radiogenic power (e.g.,~\cite{ONeill,javoy,campbell}), require that the Earth retain a significant amount of primordial heat ($\leq$36\,TW), whereas the ``high-$Q$'' models (e.g.,~\cite{wasserburg,turcotte2001,turcotte2002}) project a limited primordial heat budget left in the Earth (e.g., $\leq$16\,TW). Geophysical models of the Earth seek satisfactory solutions to the planetary thermal evolution by fitting the relative contributions of primordial heat and heat production, while being consistent with the Earth's secular cooling record~\cite{schubert}. These models parameterize mantle convection in terms of the force balance between buoyancy and viscosity versus thermal and momentum diffusivities, while recognizing that the convective state of the mantle greatly exceeds its critical Rayleigh number, which marks the onset of convection. Typically these are high-$Q$ models, requiring that more than 50\% of the present heat flow be due to radiogenic heating.
\section{Geo-neutrinos from the crust and mantle}
\label{Sec:GeoSignal}
\subsection{Geo-neutrino signal from the crust}
\label{SunSec:GeoCrust}
Analyzing the arrival times of the refracted and reflected elastic waves produced by an earthquake with its epicenter close to Zagreb, Andrija Mohorovi\v{c}i\'c in 1909 provided the first evidence of a discontinuity between crust and mantle. Further measurements of seismic waves confirmed the presence of this boundary, which separates rocks having P-wave velocities of 6-7\,km/s from those having velocities of about 8-9\,km/s. This change in the mechanical properties of Earth materials is due to a compositional transition from the mafic rocks of the lower crust to the ultramafic rocks of the upper mantle~\cite{ref2.1,RudnickFountain}. The crust is the part of the Earth with the highest concentration of heat-producing elements (i.e., $\sim$$1 \times 10^{-6}$\,kg/kg); their abundances drop rapidly (to $\sim$$1 \times 10^{-8}$\,kg/kg) below the Mohorovi\v{c}i\'c discontinuity, often referred to as the Moho.
The crust is divided into two main reservoirs: the continental and the oceanic crust. Although the mass of the continental crust is only about 0.34\% of the Earth's mass, it contains approximately 40\% of the Earth's inventory of U and Th~\cite{huang}. Therefore, rocks of the continental crust produce the highest rate of geo-neutrinos per unit mass, and they give the largest contributions to the geo-neutrino signals in the existing detectors.
The first estimates of the expected geo-neutrino signal from U and Th in the crust were published in~\cite{ref2.4}. In this model uranium and thorium are distributed uniformly in a shell 30\,km thick having a mass of $2\times 10^{22}$\,kg, with abundances $A$(U) = $4 \times 10^{-6}$\,kg/kg and $A$(Th) = $19 \times 10^{-6}$\,kg/kg. The expected signal, $S$(U+Th) $\sim$ 32\,TNU, is independent of the position of the detector on the Earth's surface.
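The surface flux implied by such a uniform-shell model can be checked with a short numerical integration; the sketch below (a back-of-the-envelope estimate assuming the shell parameters quoted above, standard decay constants, and secular equilibrium, not the full calculation of~\cite{ref2.4}) yields an unoscillated flux of order $10^6$-$10^7$\,cm$^{-2}$\,s$^{-1}$, consistent with the scale quoted in the Introduction.
\begin{verbatim}
# Unoscillated antineutrino flux at the surface of a uniform 30-km shell.
import math
from scipy.integrate import quad

R, t = 6.371e6, 30e3                     # Earth radius, shell thickness [m]
rho  = 2e22 / (4 * math.pi * R**2 * t)   # mean shell density [kg/m^3]

N_A = 6.022e23
def activity(t_half_yr, molar_mass):     # [Bq per kg of element]
    lam = math.log(2) / (t_half_yr * 3.156e7)
    return lam * 1e3 / molar_mass * N_A

# Source density [nu m^-3 s^-1]: 6 nu-bar per 238U chain decay, 4 per
# 232Th chain decay, with the abundances of the model quoted above.
S = rho * (4e-6  * activity(4.468e9, 238)  * 6 +
           19e-6 * activity(1.405e10, 232) * 4)

# Integrate 1/(4 pi r^2) over the shell; the angular integral is analytic,
# leaving phi = S/(2R) * Int_{R-t}^{R} s ln((R+s)/(R-s)) ds.
integral, _ = quad(lambda s: s * math.log((R + s) / (R - s)), R - t, R)
phi = S / (2 * R) * integral             # [nu m^-2 s^-1]
print(f"surface flux ~ {phi / 1e4:.1e} nu cm^-2 s^-1")
\end{verbatim}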
Further estimates, based on different geophysical crustal models, have since been published. The authors of~\cite{ref2.5} take into account the spatial distribution of the continental and oceanic crust following the global crustal map CRUST5.1~\cite{ref2.6}, but without considering any crustal sub-layers. After 2004 a new generation of models for estimating geo-neutrinos from the crust was published by many authors~\cite{ref2.7,enomoto, ref2.9, ref2.10}, who all adopted the global crustal model on a $2^{\circ} \times 2^{\circ}$ grid published by Laske et al.~\cite{ref2.11}. This geophysical model is made up of 16,200 tiles and describes 360 key 1D profiles. The thickness, the density, and the velocities of the compressional ($V_p$) and shear ($V_s$) waves traveling through them are given explicitly for seven layers (ice, water, soft sediments, hard sediments, upper, middle and lower crust) in each tile. The accuracy of this model is not specified and varies from place to place, since vast continental regions (large portions of Africa, South America, Antarctica and Greenland) lack direct measurements.
Table~\ref{Tab:GeonuSignal} shows the expected geo-neutrino signal in TNU at the Earth's surface from U and Th in the crust only, according to three different models published in the last decade~\cite{huang,ref2.7,ref2.10}. Kamioka and Gran Sasso are the locations where the KamLAND and Borexino experiments have been running since 2002 and 2007, respectively. The SNO+ experiment is the follow-up of the Sudbury Neutrino Observatory (SNO) and is under construction. The Hawaii site is included because of its low crustal geo-neutrino signal.
\begin{table}
\begin{center}
\begin{minipage}[t]{16.5 cm}
\caption{Expected geo-neutrino signals in TNU from U and Th in the crust according to three different geophysical and geochemical models. All calculations are normalized to a survival probability $<P_{ee}> = 0.55$. The uncertainties of Mantovani et al.~\cite{ref2.10} correspond to the full range of the crustal models, while for Dye~\cite{ref2.7} and Huang et al.~\cite{huang} the $1\sigma$ errors are reported.}
\label{Tab:GeonuSignal}
\end{minipage}
\begin{tabular}{l|c|c|c}
\hline
Site & Mantovani et al.~\cite{ref2.10} & Dye~\cite{ref2.7} & Huang et al.~\cite{huang} \\
\hline
Kamioka & $24.7^{+4.3}_{-10.3} $ & $23.1 \pm 5.5$ & $20.6^{+4.0}_{-3.5}$ \\
Gran Sasso & $29.6^{+5.1}_{-12.4}$ & $ 28.9 \pm 6.9$ & $29.0^{+6.0}_{-5.0}$ \\
Sudbury & $38.5^{+6.7}_{-16.1}$ & $34.9 \pm 8.4$ & $34.0^{+6.3}_{-5.7}$ \\
Hawaii & $3.3^{+0.6}_{-1.4}$ & $3.2 \pm 0.6$ & $2.6^{+0.5}_{-0.5}$ \\
\hline
\end{tabular}
\end{center}
\end{table}
In Mantovani et al., 2004~\cite{ref2.10}, the radioactivity content of each layer of a $2^{\circ} \times 2^{\circ}$ global crustal model was calculated by averaging the U and Th abundances available in the GERM database (2003). The reported spread is obtained by using the maximal and minimal abundances of the compilations. The geo-neutrino signal from the crust reported in~\cite{ref2.7} differs from that of~\cite{ref2.10} in the composition of the crystalline crust. In the model of~\cite{ref2.7} the authors assign to each identifiable layer (upper, middle and lower crust) the U and Th abundances presented in the comprehensive review published by Rudnick and Gao~\cite{rudnick}. The uncertainty of the geo-neutrino signal for this model is the sum of the uncertainties due to the $1\sigma$ errors of the U and Th abundances assigned to the crustal layers.
In reference~\cite{huang} the uncertainties of the expected geo-neutrino flux are calculated for the first time by taking into account the Th and U content of the crust and considering the geochemical and geophysical uncertainties associated with the input data. Observing log-normal distributions of the U and Th concentrations in crustal rocks, the median values are taken as the most representative values of the probability functions. The asymmetrical uncertainties are propagated from the non-Gaussian distributions of the abundances in the deep continental crust using a Monte Carlo simulation. The signals from U and Th in the crust estimated in this study are reported in Table~\ref{Tab:GeonuSignal}; all values overlap within the quoted uncertainties.
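The Monte Carlo propagation of such non-Gaussian abundance distributions can be sketched in a few lines. The log-normal parameters below are hypothetical; the point is that the median and the 16th-84th percentile range give the central value and the asymmetric $1\sigma$ errors.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def median_with_asymmetric_errors(mu_log, sigma_log, n=100_000):
    # Sample a log-normal abundance distribution; report the median
    # with asymmetric 1-sigma (16th-84th percentile) uncertainties.
    s = rng.lognormal(mean=mu_log, sigma=sigma_log, size=n)
    lo, med, hi = np.percentile(s, [15.87, 50.0, 84.13])
    return med, med - lo, hi - med

# Hypothetical deep-crust U abundance: median ~1e-6 kg/kg, sigma_log = 0.5
med, minus, plus = median_with_asymmetric_errors(np.log(1.0e-6), 0.5)
print(f"A(U) = {med:.2e} (-{minus:.2e}, +{plus:.2e}) kg/kg")
\end{verbatim}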
Due to the inverse-square distance dependence of the neutrino flux, the local and global reservoirs can provide comparable contributions to the geo-neutrino signal, at least for detectors sited in the continental crust. The boundaries of the local crust are a matter of convention. Following reference~\cite{huang}, the crustal U and Th content in the 24 closest $1^{\circ} \times 1^{\circ}$ crustal voxels surrounding KamLAND, Borexino and SNO+ contributes 65\%, 53\% and 56\% of the total signal, respectively. Refined geochemical and geophysical models of the Earth have been developed to identify with greater precision and accuracy the local contribution (within a radius of circa 500\,km) surrounding each detector.
\subsection{Local geological model near the Kamioka site}
\label{subsec:kamioka}
The Japan island arc sits on a continental shelf situated close to the eastern margin of the Eurasian plate, one of the most seismically active areas of our planet. The Philippine tectonic plate is moving towards the Eurasian plate at about 40\,mm/year and is subducting beneath the southern part of Japan. The Pacific plate is moving in roughly the same direction at about 80\,mm/year and is subducting beneath the northern half of Japan. Both subducting plates form deep submarine trenches and uplift areas parallel to the trench, and generate igneous activity, in particular the production of the volcanic island chain. The Sea of Japan, situated between the Japan island arc and the Asian continent, is a typical marginal sea, incompletely bordered by islands, with expanded basins on the back-arc side (back-arc basin). The geochemical and geophysical features of the Japanese crust, the effects of the subducting slab, and the intricate back-arc opening tectonics have been studied by Fiorentini et al.~\cite{fiorentini05} and Enomoto et al.~\cite{enomoto}, with the aim of estimating their effects on the geo-neutrino signal.
The six $2^{\circ} \times 2^{\circ}$ tiles around KamLAND produce $S$(U+Th) = 13.3\,TNU~\cite{huang}. A refined local model of the crust identifies two layers: an upper crust extending down to the Conrad discontinuity, and a lower part down to the Moho discontinuity. In~\cite{fiorentini05}, the map of Conrad and Moho depths beneath the Japanese islands is derived from Zhao et al.~\cite{zhao}, with an estimated standard error of $\pm1$\,km over most of the Japanese territory, see Fig.~\ref{Fig:Kamioka}. A detailed grid based on $0.25^{\circ} \times 0.25^{\circ}$ cells provided a sampling density for the study of the upper crust in the region near Kamioka equivalent to about one specimen per 400\,km$^2$. The vertical distribution of Th and U abundances in the crust poses even greater challenges because of the limited information on the chemical composition at depths down to the Conrad discontinuity, which is generally about 20\,km deep. The chemical composition of the upper crust of Japan was estimated by Togashi et al., 2002~\cite{togashi} on the basis of 166 representative specimens, which can be associated with 37 geological groups based on ages, lithologies, and provinces. In Fiorentini et al., 2005~\cite{fiorentini05}, a map of the uranium abundance in the upper crust was built under the assumption that the composition of the whole upper crust is the same as that inferred in~\cite{togashi} from the study of the exposed portion.
\begin{figure}[tb]
\begin{center}
\centering{\epsfig{file=KL.pdf,scale=0.6}}
\begin{minipage}[t]{16.5 cm}
\caption{Moho depth of the local refined model around KamLAND~\cite{fiorentini05}.
\label{Fig:Kamioka}}
\end{minipage}
\end{center}
\end{figure}
The composition of the Japanese lower crust was assumed to be homogeneous, with $A_{LC}$(U) = $(0.85 \pm 0.23) \times 10^{-6}$\,kg/kg and $A_{LC}$(Th) = $(5.19 \pm 2.08) \times 10^{-6}$\,kg/kg, based on the model of the lower continental crust reported in an extensive study of the Eastern China crust~\cite{gao98}. The expected U and Th geo-neutrino signals from the region surrounding KamLAND are $S$(U) = $11.17 \pm 0.65$\,TNU and $S$(Th) = $3.20 \pm 0.37$\,TNU, respectively (Table~\ref{Tab:KamiokaSignal})~\cite{fiorentini2012}. The maximal and minimal excursions of the various inputs and uncertainties provide an estimate of the $3\sigma$ error range. Consequently, and considering the measurement errors of the chemical analysis of the representative samples, a $3\sigma$ uncertainty of $\pm10$\% has been associated with the U and Th abundances in the Japanese upper crust. The full range of uncertainty due to the unknown chemical composition of the lower crust is the half-difference between the signals obtained for the extreme values of the estimated uranium abundances in the lower crust. All these effects are summarized in Table II of Fiorentini et al.~\cite{fiorentini2012}, together with the errors associated with the constraints of the model (discretization of the upper crust and the crustal depth).
\begin{table}
\begin{center}
\begin{minipage}[t]{16.5 cm}
\caption{Contributions to the geo-neutrino signal in KamLAND from the local geology. Quoted errors correspond to $1\sigma$~\cite{fiorentini2012}.}
\label{Tab:KamiokaSignal}
\end{minipage}
\begin{tabular}{l|c|c}
\hline
& $S$(U) [TNU] & $S$(Th) [TNU] \\
\hline
Six-tiles & $11.17 \pm 0.65$ & $3.20 \pm 0.37$ \\
Subducting slab & $2.02 \pm 0.61$ & $0.90 \pm 0.27$ \\
Sea of Japan & $0.34 \pm 0.10$ & $0.09 \pm 0.03$ \\
Local total & $13.53 \pm 0.90$ & $4.19 \pm 0.46$ \\
\hline
\end{tabular}
\end{center}
\end{table}
During the subduction of the Philippine and Pacific plates, U and Th are carried down in marine sediments and in the oceanic crust. The potential exists for the lower part of the continental crust of Japan to be enriched in large-ion lithophile elements via dehydration of the top of the subducting plate~\cite{ref2.18}. The degree of enrichment of U and Th in the overlying Japanese crust is still debated. Two extreme cases can be modeled: one assumes that the slab keeps its trace elements throughout the subduction process; in the other, all the uranium from the subducting crust is dissolved in fluids and transported to the base of the lower crust of the Japan arc. Considering a single slab penetrating below Japan with a velocity $v$ = 60\,mm/year (the average of the two plates) on a time scale $T \sim$$10^8$\,year, the U and Th abundances in the lower crust can increase in the two extreme cases by factors of 1.06 and 2.57, respectively~\cite{ref2.18}. Encompassing both scenarios at the $3\sigma$ level, the contribution to the geo-neutrino signal in KamLAND from the subducting slab can be estimated as $S$(U) = $2.02 \pm 0.61$\,TNU and $S$(Th) = $0.90 \pm 0.27$\,TNU (Table~\ref{Tab:KamiokaSignal}).
Although in the global crustal model CRUST 2.0~\cite{ref2.11} the crust beneath the Sea of Japan is classified as oceanic, its true nature remains uncertain. Tamaki et al.~\cite{tamaki} identify four distinctive crustal types: continental, rifted continental, extended continental, and oceanic. The minimal geo-neutrino signal from the Sea of Japan, $S$(U+Th) = 0.06\,TNU, was obtained assuming a homogeneous thin oceanic crust. On the other hand, a model based on a thick crust (up to 19\,km for the Oki bank) with U and Th abundances typical of continental crust, overlain by a few km of sedimentary layer (up to 4\,km for the Ulleung basin), maximizes the geo-neutrino production at approximately $S$(U+Th) = 0.82\,TNU~\cite{fiorentini05}. In Table~\ref{Tab:KamiokaSignal} we report the central values of these two extreme cases, with $1\sigma$ uncertainties chosen so as to encompass the extreme values at $3\sigma$.
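The prescription used here and elsewhere in this section (central value of two extreme model predictions, with a $1\sigma$ error chosen so that the extremes lie at $3\sigma$) reduces to a two-line computation; the sketch below reproduces the summed U+Th Sea of Japan entry of Table~\ref{Tab:KamiokaSignal}.
\begin{verbatim}
def central_and_sigma(s_min, s_max):
    # Central value of two extreme predictions, with the 1-sigma error
    # chosen so that the extremes are encompassed at the 3-sigma level.
    return 0.5 * (s_min + s_max), (s_max - s_min) / 6.0

# Sea of Japan, S(U+Th): extremes 0.06 and 0.82 TNU
print(central_and_sigma(0.06, 0.82))   # -> (0.44, ~0.13) TNU
\end{verbatim}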
\subsection{Local geological model near the Gran Sasso site}
\label{subsec:lngs}
The Borexino experiment is located under the highest mountains of the Apennines in Italy, in the Gran Sasso range. This massif is the northern part of the Latium-Abruzzi carbonates, which are over-thrust onto the Umbria-Marche basin in the Apennine chain. The orogenesis of the Apennine belt began in the early Neogene (about $20 \times 10^6$\,years ago) and developed through the deformation of two major paleo-geographic domains: the Liguria-Piedmont Ocean and the Adria-Apulia passive margin. In particular, the central Apennines are an arc-shaped fold-and-thrust belt, with north-eastward convexity and vergence, that plunges north-westward~\cite{carmignani}. These structures are clearly visible at the Gran Sasso massif, where a northern block of Jurassic limestone overthrusts Miocene limestone and marls. The normal stratigraphic sequence is observed in the southern block, where limestones and marls are stacked from Jurassic to Miocene in age.
The refined reference model for the Gran Sasso area was developed by Coltorti et al., 2011~\cite{coltorti} and is based on two zones with different degrees of resolution: the central tile, a three-dimensional geological model of the $2^{\circ} \times 2^{\circ}$ area centred at the Gran Sasso National Laboratories, and the rest of the region, i.e. what remains of the six tiles after the central tile is subtracted. In both areas, the crust is separated from the mantle by a well-defined Moho surface (Fig.~\ref{Fig:LNGS}) and is divided into three reservoirs: sediments, upper crust, and lower crust.
\begin{figure}[tb]
\begin{center}
\centering{\epsfig{file=BX.pdf,scale=0.6}}
\begin{minipage}[t]{16.5 cm}
\caption{Moho depth of the local refined model around Borexino~\cite{coltorti}.
\label{Fig:LNGS}}
\end{minipage}
\end{center}
\end{figure}
In the central tile, the litho-stratigraphic framework of the 13\,km thick sedimentary deposit is geologically complex, being composed of numerous sedimentary units characterized by different thicknesses, ages and depositional environments. Following the approach proposed by Plank and Langmuir (1998)~\cite{PlankLangmuir}, Coltorti et al.~\cite{coltorti} assumed that sediments formed in similar depositional environments have similar and somewhat homogeneous compositions. In particular, the sedimentary cover is modeled as four types, characterized by different contents of radioactive elements. Uranium and thorium abundances were measured on representative samples of the geological formations using inductively coupled plasma mass spectrometry and gamma spectroscopy.
The most massive sedimentary reservoir ($\sim$77\% of the sedimentary mass) is composed of Mesozoic carbonate units deposited in a shallow-water environment. The 12 samples of limestone, dolomite and evaporite all show low and uniform U and Th abundances: $A$(U) = $(0.3 \pm 0.2) \times 10^{-6}$\,kg/kg and $A$(Th) = $(0.2 \pm 0.2) \times 10^{-6}$\,kg/kg, respectively. In the central tile approximately 16\% of the mass of the sedimentary cover consists of terrigenous deposits that progressively overlay the earlier carbonate depositional systems, from the late Miocene onward. The main lithologies of these units are sandstones, silts and clays, characterized by average U and Th abundances of $A$(U) = $(2.3 \pm 0.6) \times 10^{-6}$\,kg/kg and $A$(Th) = $(8.3 \pm 2.5) \times 10^{-6}$\,kg/kg, respectively.
Neglecting the Meso-Cenozoic basinal carbonate units (less than 2\% of the sedimentary mass), the Permian clastic units (sandstones and conglomerates) are the result of the dismantling and erosion of the ancient Paleozoic crust. Since these units rarely outcrop within the Italian Peninsula and their geochemical nature is similar to that of the basement rocks, in the model of~\cite{coltorti} this reservoir is assigned the same U and Th content as the upper crust (see below).
The mass-weighted average abundances of the sediments of the central tile, also adopted for the rest of the region, are $A_{Sed}$(U) = $0.8 \times 10^{-6}$\,kg/kg and $A_{Sed}$(Th) = $2.0 \times 10^{-6}$\,kg/kg. These are significantly lower than the world averages for sediments from the global crustal model, $A_{Sed}$(U) = $1.7 \times 10^{-6}$\,kg/kg and $A_{Sed}$(Th) = $6.9 \times 10^{-6}$\,kg/kg~\cite{PlankLangmuir}, as a consequence of the large volumes of U- and Th-poor carbonates. The expected geo-neutrino signal from sediments in the six tiles is $S$(U+Th) = $2.93 \pm 0.25$\,TNU, corresponding to $\sim$30\% of the local contribution.
In Coltorti et al.~\cite{coltorti}, the three-dimensional structure of the crystalline basement is constrained by seismic profiles from the CROP Project~\cite{finetti} and by a Moho isopach map obtained from seismic and gravity data~\cite{finetti}. Approximately 62\% of the central tile volume is occupied by upper and lower crust, with average thicknesses of 13 and 9\,km, respectively. Since basement rocks do not outcrop in Central Italy, an accurate sampling was performed on representative outcrops of the upper crust in the Southern Alps and of the lower crust in the Ivrea-Verbano Zone, the most classic and extensively studied deep crustal section in the Alps. In particular, two rock types were analyzed: U- and Th-enriched felsic rocks and intermediate/mafic rocks depleted in heat-producing elements. For the upper crust, the average U and Th abundances of the two groups are combined by fixing the relative proportion of the two lithologies using seismic data. For the lower crust, the fractions of felsic and mafic rocks were estimated on the basis of geophysical and geochemical information, yielding felsic and mafic percentages of 40\% and 60\%, respectively, in agreement with what was proposed by~\cite{wedepohl, RudnickFountain, gao98}.
Outside the central tile, the crystalline basement is modeled by Coltorti et al.~\cite{coltorti} as two separate, homogeneous layers of upper and lower crust. The U and Th abundances calculated for the crust of the central tile are adopted for the rest of the region. With the maximal and minimal excursions of the various input values and uncertainties taken as a proxy for the $3\sigma$ error range, the estimated geo-neutrino signals from the local upper and lower crust are $S_{UC}$(U+Th) = $6.15 \pm 1.2$\,TNU and $S_{LC}$(U+Th) = $0.59 \pm 0.22$\,TNU. Table~\ref{Tab:LNGSSignal} summarizes the contributions to the geo-neutrino signal from U and Th within the three reservoirs of the local geological model near the Gran Sasso massif.
\begin{table}
\begin{center}
\begin{minipage}[t]{16.5 cm}
\caption{Contributions to the geo-neutrino signal in Borexino from the local geology (six tiles). Quoted errors are at $1\sigma$~\cite{fiorentini2012}.}
\label{Tab:LNGSSignal}
\end{minipage}
\begin{tabular}{l|c|c}
\hline
& $S$(U) [TNU] & $S$(Th) [TNU] \\
\hline
Sediments & $2.53 \pm 0.21$ & $0.40 \pm 0.04$ \\
Upper crust & $4.94 \pm 0.97$ & $1.21 \pm 0.24$\\
Lower crust & $0.34 \pm 0.11$ & $0.25 \pm 0.11$\\
Local total & $7.81 \pm 0.99$ & $1.86 \pm 0.27$\\
\hline
\end{tabular}
\end{center}
\end{table}
\subsection{Geo-neutrinos from the mantle}
\label{subsec:mantle}
The contribution to the geo-neutrino signal from the mantle depends on the total amount of heat-generating elements as well as on their distribution deep inside the Earth, since sources closer to the detector contribute more to the signal. For each value of the total mass of Th and U in any Earth model, we construct the distributions of abundances that provide the maximal and minimal signals, under the condition that they are consistent with the geochemical and geophysical information on the globe~\cite{fiorentini2007}.
We can test mantle models using uranium in the mantle, assuming either a spherically symmetric distribution of uranium throughout the mantle or a layered one. It follows that, for a fixed uranium mass in the mantle, $m_{M}$(U), the extreme predictions for the signal are obtained by: (1) placing the uranium in a thin layer at the bottom of the mantle and (2) distributing it with uniform abundance over the mantle (Fig.~\ref{Fig:MantleSignal}). These two cases give, respectively:
\begin{equation}
S_{M}^{min} = 11.3 ~ m_{M}({\rm U}) ~~ {\rm [TNU]}
\label{Eq:mantleMin}
\end{equation}
\begin{equation}
S_{M}^{max} = 16.2 ~ m_{M}({\rm U}) ~~ {\rm [TNU]}
\label{Eq:mantleMax}
\end{equation}
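Eqs.~(\ref{Eq:mantleMin}) and (\ref{Eq:mantleMax}) bracket the mantle signal linearly in the mantle uranium mass. A minimal sketch follows, assuming $m_{M}$(U) is expressed in units of $10^{17}$\,kg (the normalization is not spelled out above, so this is an assumption):
\begin{verbatim}
def mantle_signal_bounds(m_u):
    # Extreme mantle signals [TNU] for a mantle uranium mass m_u,
    # assumed to be in units of 1e17 kg (coefficients from the two
    # equations above: bottom layer vs. uniform distribution).
    return 11.3 * m_u, 16.2 * m_u

s_min, s_max = mantle_signal_bounds(0.8)
print(f"mantle signal between {s_min:.1f} and {s_max:.1f} TNU")
\end{verbatim}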
\begin{figure}[tb]
\begin{center}
\centering{\epsfig{file=Earth.pdf,scale=0.6}}
\begin{minipage}[t]{16.5 cm}
\caption{Cartoons of two extreme distributions of U and Th in the Earth. The minimal geo-neutrino signal (a) is obtained by placing the heat-producing radiogenic elements as far inward as possible (i.e. in a thin layer at the bottom of the mantle), with as little radiogenic material as possible in the crust. The opposite condition, a homogeneous mantle combined with the maximal amount of heat-producing radiogenic elements in the crust, produces the highest geo-neutrino signal (b).
\label{Fig:MantleSignal}}
\end{minipage}
\end{center}
\end{figure}
The relative geo-neutrino contributions from the crust and mantle can be combined so as to obtain predictions of the surface flux. For a total U mass fixed by a BSE model, the highest geo-neutrino signal is obtained by assigning to the crust as much material as is consistent with observational data and putting the rest in the mantle with a uniform distribution. Similarly, the minimal signal for that mass is obtained with the minimal crustal contribution and the rest in a thin layer at the bottom of the mantle. The total amount of radioactive elements should not produce more heat flow than $47\pm 2$\,TW~\cite{davies}; this limit represents the upper bound, corresponding to a fully radiogenic Earth model. On the other hand, the minimal geo-neutrino signal is obtained with the minimal mass of uranium in the crust and a negligible amount in the mantle. On the basis of these arguments, the two extreme total signals ($S_{high}$ and $S_{low}$) expected at a given site can be plotted as functions of the heat power produced by uranium and thorium in the Earth, for a fixed chondritic Th/U ratio. The resulting graph of signal versus U+Th heat power (the $S$-$H$ plane) is site dependent: the intercept depends on the site, while the slope is universal.
On the $S$-$H$ plane we can identify the regions corresponding to three classes of BSE models, extensively studied in \v{S}r\'amek et al., 2013~\cite{sramek}. The cosmochemical model~\cite{javoy} is based on an Earth composition similar to that observed in enstatite chondrites, which among the different types of chondrites show the closest isotopic similarity with the mantle rocks and have a sufficiently high iron content to explain the metallic core. This model is characterized by a relatively low amount of U and Th, producing a total radiogenic power of $11 \pm 2$\,TW. Taking into account that U and Th in the crust contribute 7\,TW~\cite{huang}, the U and Th heat power of the mantle is approximately 4\,TW.
At the opposite end of the spectrum of proposed Earth compositions, the geodynamical model is based on the energetics of mantle convection and the observed surface heat loss~\cite{turcotte2002}. This model requires a high mantle Urey ratio so as to prevent extremely high temperatures in the Earth's early history. Assuming a mantle Urey ratio of $0.7 \pm 0.1$, the present radiogenic heat power produced by U and Th in the Earth is $33 \pm 3$\,TW, mainly (80\%) generated in the mantle.
A class of intermediate BSE models~\cite{hart,McDonoughSun, allegre95, palme} is based on the relative abundances of refractory lithophile elements in mantle samples. These models project back to an initial starting composition constrained by the relative abundances of the refractory lithophile elements in chondritic meteorites. Adopting the U and Th abundances of the primitive mantle from McDonough and Sun~\cite{McDonoughSun}, the power produced by these two elements corresponds to $16.6 \pm 3.0$\,TW.
\section{The detectors}
\label{Sec:detectors}
Only two geo-neutrino detectors, Borexino and KamLAND, are operating and have successfully measured the Earth's geo-neutrino flux. Both experiments were primarily developed and constructed to achieve goals other than the measurement of geo-neutrinos. Borexino, located in the underground laboratory of Gran Sasso in central Italy, was designed to study the low-energy components of the solar neutrino flux. To this purpose, great care has been devoted to keeping the background due to natural radioactivity at ultra-trace levels. KamLAND, installed in the Kamiokande-Mozumi mine in central-western Japan, was, on the other hand, designed to study the antineutrinos emitted by nuclear reactors. The physics justification for both experiments was focused on the study of the neutrino oscillation phenomenon, via two different approaches.
\subsection{Borexino}
\label{SubSec:BorexDetector}
The active detection medium of Borexino is a liquid scintillator which, with its intrinsically high light output ($\sim$50 times more than in the Cherenkov technique), is an ideal choice for massive calorimetric low-energy spectroscopy. In Borexino, the scintillator is a two-component liquid: the solvent is pseudocumene (PC, 1,2,4-trimethylbenzene) and the solute is a fluorescent dye (PPO, 2,5-diphenyloxazole) at a concentration of 1.5\,g/l.
The Borexino design has been driven mostly by the need to keep the internal background as low as possible and to shield the external background as much as possible. Its layout (Fig.~\ref{Fig:DetectorBorex}) is based upon the principle of graded shielding: the detector consists of a set of concentric shells, and the more internal the shell, the higher its radio-purity.
\begin{figure}[tb]
\begin{center}
\centering{\epsfig{file=DetectorBorex.pdf,scale=0.4}}
\begin{minipage}[t]{16.5 cm}
\caption{Layout of the Borexino detector~\cite{Alimonti1}.
\label{Fig:DetectorBorex}}
\end{minipage}
\end{center}
\end{figure}
Starting from the outside, a cylindrical water tank (WT) with a radius of $\sim$9\,m and a height of 16.9\,m, filled with 2100\,m$^3$ of ultra-pure water, contains a stainless steel sphere (SSS) with a radius of 6.85\,m. The SSS supports the photomultiplier tubes (PMTs) and contains a spherical nylon Inner Vessel (IV), 4.2\,m in radius and 300\,m$^3$ in volume, surrounded by 1050\,m$^3$ of PC. The nylon wall of the IV is only 125\,$\mu$m thick and, as a consequence, the buoyancy acting on it has to be very small: the density of the external shielding liquid (buffer liquid) is therefore only slightly different from that of the internal scintillator. The buffer liquid is doped with 5\,g/l (later reduced to $\sim$2.5\,g/l) of a quencher (dimethylphthalate, DMP) in order to suppress the residual scintillation of the pure PC. The spectroscopic signals therefore arise, in the vast majority of cases, only from the interior of the IV.
Two detectors are active in Borexino: an Inner Detector (ID) and an Outer Detector (OD). The ID consists of 2212 8-inch PMTs (ETL9351) distributed uniformly on the inner walls of the SSS. All but 371 of these PMTs are equipped with aluminum light concentrators that increase the light collection, so that the optical coverage is $\sim$30\%. The light concentrators are designed to collect only the light produced inside the IV, minimizing the detection of light originating in the buffer~\cite{Alimonti1, Alimonti2}. The OD consists of 208 PMTs in the water tank, which detect the Cherenkov light produced by muons in the shielding water. Despite the reduction of the cosmic muon flux by a factor of $10^6$, thanks to the 3800\,m.w.e. of rock overburden, the muon flux in Hall C is still significant (1.2 muons m$^{-2}$\,h$^{-1}$): about 8000 muons per day cross the detector.
The water contained in the water tank (at least $\sim$2\,m around the SSS in all directions; 2400\,m$^3$ in total) provides good shielding against gammas and neutrons emitted by the rocks and by the surrounding laboratory environment.
The buffer liquid ($\sim$2.6\,m everywhere around the IV) serves to shield the nylon IV from the radiation emitted by the PMTs and by the stainless steel of the SSS, and from the residual external gammas that survive the shielding water of the water tank. The contribution to the background from the PMTs is relevant, even though they were built keeping the radioactivity of their components, especially the glass and ceramic parts, as low as possible.
However, this shielding is still not enough: the radiation emitted by the IV nylon walls, thin as they are, requires the interposition of a certain thickness of scintillator between the detection volume and the IV walls. A wall-less Fiducial Volume (FV) is therefore defined in software. In general, in the analysis of the low-energy neutrino interactions, the FV is defined as a sphere of 3.0\,m radius with additional cuts along the vertical $z$-axis, due to the emanations from the IV reinforcements at the top and bottom. In the case of geo-neutrinos, however, where the interactions are well tagged by a delayed coincidence, a larger FV is used, defined as the volume whose walls lie $\sim$20\,cm inward from the IV walls.
A second nylon balloon, with a radius of 5.5\,m, is installed in the buffer liquid. It functions as a barrier against radon, emitted mostly by the PMTs, and against other gaseous contaminants originating in the SSS (for more details see~\cite{Alimonti1}).
\subsubsection{The radioactivity issue}
\label{subsection:radioactivityBorex}
The most likely sources of contamination for the active detection core of Borexino can be summarized as follows: the $^{238}$U and $^{232}$Th families and $^{40}$K, present in all materials and in dust and particulate residues; $^{39}$Ar, $^{85}$Kr and $^{222}$Rn, present in the air, which can penetrate through possible air leaks during the filling operations; $^{210}$Pb and $^{210}$Po (produced by $^{222}$Rn decays), probably plated onto the vessel and plumbing surfaces. A careful study of these possible backgrounds was carried out in the preparation phase of Borexino by means of the CTF (Counting Test Facility), a prototype developed as a benchmark for Borexino.
All these contaminants are a source of background. They can reside directly in the scintillator or on the materials used to construct Borexino and to shield its active part. In addition, residual radiation emitted by the environment and crossing the shielding materials may contribute.
The tools used in order to keep the background produced by the radioactive contaminants as low as possible are the following:
\begin{enumerate}
\item {selection of the materials and components;}
\item {purification of the shielding liquids;}
\item {purification of the scintillator.}
\end{enumerate}
The materials were selected for very low radioactivity. This is the case for the stainless steel of the water tank and of the SSS. Samples of the construction materials were checked in the Gran Sasso Low Radioactivity Laboratory. The PMTs were constructed with low-radioactivity glass and ceramics. Selected components, such as valves, pumps and lines, were used for the liquid handling plant.
Special attention was given to the nylon used in constructing the vessels, and especially that of the IV, whose surface is in direct contact with the ultra-pure scintillating core of the detector. The raw nylon material was carefully selected, measuring the pellet radioactivity with an inductively coupled plasma mass spectrometer (ICP-MS). The pellets were then extruded in a controlled area. The construction of the vessel and its assembly were finally carried out in a special clean room whose atmosphere was strictly controlled, not only for the reduction of dust (class 100) but also for $^{222}$Rn; for this purpose it was equipped with cryogenic systems able to keep the radon present in the air at a very low level. Finally, the assembled IV was covered with a shroud to stop alphas, produced by the decay of the $^{222}$Rn present in the atmospheric air, and low-energy electrons. The installation of the vessel in the SSS was done in an atmosphere obtained by mixing N$_2$, purified via purpose-built cryogenic systems (see below), and bottled O$_2$.
In order to avoid dust and particulate, the surfaces of the SSS and of the liquid handling pipes and components were treated with a process involving electro-polishing, pickling and passivation, followed by precision cleaning performed with detergents and high-purity water. All plant assembly was done in an N$_2$ atmosphere.
All the components and devices of the detector were previously cleaned in clean rooms of class 100, while the SSS itself was converted into a clean room of class 10,000.
The shielding water was purified via an "ad hoc" plant installed in Hall C. The plant includes a high-purity deionizer, a water softener, reverse osmosis systems, ion exchange beds, ultrafiltration, and nitrogen stripping in a vertical column. The water treated in this way showed a resistivity of $\sim$18\,M$\Omega\cdot$cm. This high-purity deionized water was used to fill the water tank and the inner zones of the detector in a preliminary filling, to ensure a further cleaning before the introduction of pseudocumene and the other components (fluor and quencher). In addition, it was also used for cleaning and rinsing the surfaces of the plants.
\subsubsection{The purification of the scintillator components}
\label{subsec:purificationBX}
Special care was devoted to cleaning the liquid scintillator components, PC and PPO. A system was developed to purify the PPO in a concentrated master solution, obtained by dissolving the PPO in PC at high concentration. This pre-purification is connected with the property of PPO to solidify below 70$^{\circ}$C and, as a consequence, the possibility that it blocks the transfer lines. The pre-purification was done first with a water extraction process and then with a distillation. In addition, nitrogen degassing (stripping) was carried out directly in the PPO storage tank to remove noble-gas impurities.
The PC was produced by Polimeri Europa at its Sardinia plant, treating crude oil extracted from very old Libyan layers. The aim of this choice was to keep at a very low level the contamination of $^{14}$C, which cannot be removed by purification: crude oil in very old layers has remained shielded for millions of years from the cosmic rays that produce $^{14}$C in collisions with the oil molecules.
Once produced, the PC was shipped to the Gran Sasso underground laboratory with specially designed transport tanks that were pre-cleaned and treated before use. The PC transport time was kept as short as possible ($\sim$20 h), avoiding long exposure to cosmic rays, thus minimizing the cosmogenic $^7$Be production.
The PC was stored underground in special vessels, with internal surfaces treated similarly to the detector surfaces, and was subjected to cleaning processes during the filling of the SSS and the IV. The cleaning involved three steps: distillation, ultrafiltration, and N$_2$ stripping. The distillation was carried out with a six-stage column operating at 100\,mbar and a temperature of 100$^{\circ}$C. The process was run at this relatively low temperature precisely to avoid stripping contaminants embedded in the column components into the distilled liquid.
Nitrogen stripping, using very pure nitrogen gas, removes noble gases and other gaseous contaminants. A special nitrogen production facility was developed specifically to produce nitrogen with only ultra-traces of the $^{222}$Rn, $^{39}$Ar and $^{85}$Kr present in the atmosphere. The purified nitrogen was cryogenically generated with a sub-boiling system; the distribution and storage of this very pure nitrogen supply were decoupled from the other nitrogen circuits. In addition, we emphasize that all plants of the Borexino subsystems were constructed to reach a leak tightness of $10^{-8}$\,scc/s.
The radiopurity of the Borexino detector is summarized in Table~\ref{tab:BXradiopurity}, where it is compared with the typical radio-purity of the various materials. The low contamination levels achieved are unprecedented and surpass design specifications.
\begin{table}
\begin{center}
\begin{minipage}[t]{16.5 cm}
\caption{Radiopurity of the Borexino detector after the filling in May 2007.}
\label{tab:BXradiopurity}
\end{minipage}
\begin{tabular}{l|l|l|l|l}
\hline
Name & Source & Typical & Required & Achieved \\
\hline
$^{14}$C & intrinsic PC & $\sim 10^{-12}$ g/g & $\sim$$10^{-18}$ g/g & $\sim$$2 \times 10^{-12}$ g/g \\
\hline
$^{238}$U & dust & $10^{-5} - 10^{-6}$ g/g & $< 10^{-16}$ g/g & $ (5.0 \pm 0.9) \times 10^{-18}$ g/g \\
$^{232}$Th & & & & $ (3.0 \pm 1.0) \times 10^{-18}$ g/g \\
\hline
$^{7}$Be & cosmogenic & $\sim$$3 \times 10^{-2}$ Bq/ton & $ < 10^{-6}$ Bq/ton & not observed \\
\hline
$^{40}$K & dust, PPO & $\sim$$2 \times 10^{-6}$ g/g (dust) & $ < 10^{-18}$ g/g & not observed \\
\hline
$^{210}$Po & surface contamination & & $<7$\,cpd/ton & May 2007: 70\,cpd/ton\\
& & & & May 2009: 5\,cpd/ton \\
\hline
$^{222}$Rn & emanation, rock & 10\,Bq/l (air, water) & $<10$\,cpd/100\,ton & $<1$\,cpd/100\,ton\\
& & 100 - 1000\,Bq/kg (rock) & & \\
\hline
$^{39}$Ar & air, cosmogenic & 17\,mBq/m$^3$ (air) & $<$1\,cpd/100\,ton & $<<$ $^{85}$Kr \\
\hline
$^{85}$Kr & air, nuclear weapons & $\sim$1\,Bq/m$^3$ (air) & $<$ 1\,cpd/100\,ton & $ 30 \pm 5$\,cpd/100\,ton \\
\hline
\end{tabular}
\end{center}
\end{table}
A further purification campaign, carried out in 2010 - 2011, was specifically devoted to reducing $^{85}$Kr and $^{210}$Bi in the liquid scintillator. The campaign reduced the $^{85}$Kr contamination to a negligible level and $^{210}$Bi to $18 \pm 1.5$\,cpd/100\,ton. The contamination from $^{210}$Po was not reduced but, with the detector left untouched and undisturbed, it decays naturally ($\tau$ = 200\,days); as of April 2013, its contribution is already less than 180\,cpd/100\,ton.
\subsubsection{The antineutrino detection and the resolution in Borexino}
\label{Subsec:resolutionBX}
In Borexino antineutrinos are detected via the inverse beta-decay reaction of Eq.~\ref{Eq:InvBeta}, which has a kinematic threshold of 1.806\,MeV. This reaction is very well tagged because (see Sec.~\ref{Sec:Results}) it produces a prompt signal from the positron and a delayed signal from a 2.2\,MeV gamma.
The radiopurity of the detector significantly influences the study of antineutrinos as well, despite the well-tagged reaction, mostly through two contributions: neutron production from $\alpha$'s (in particular via the reaction $^{13}$C($\alpha$, n)$^{16}$O) and accidental coincidences due to the background rate. As discussed in Sec.~\ref{Sec:Results}, these backgrounds are negligible in Borexino.
Three energy estimators are used in Borexino: $N_p$, the number of PMTs that detected one or more hits; $N_h$, the number of hits; and $N_{pe}$, the number of photoelectrons (p.e.) for each event. Each of these estimators has pros and cons: the first two perform better only at low energy, where 1\,p.e./PMT dominates, while the last is generally used above 2\,MeV of released energy.
A calibration campaign was carried out in 2009 and 2010, placing 11 different sources in the center and in many off-center positions ($\sim$300) in the IV, including $^{57}$Co, $^{139}$Ce, $^{203}$Hg, $^{85}$Sr, $^{54}$Mn, $^{65}$Zn, $^{60}$Co, $^{40}$K, $^{222}$Rn and $^{14}$C. An AmBe source, producing about 10 neutrons per second with energies up to 10\,MeV, was deployed in twenty-five positions to study the detector response to neutrons and to protons recoiling off neutrons. In addition, a $^{228}$Th source was placed in the buffer region of the detector to study the detector response to the main external background source, the 2.615\,MeV gamma rays of $^{208}$Tl, a daughter isotope of $^{228}$Th. These calibrations reduced the systematic error associated with all Borexino results and helped to optimize the Monte Carlo simulation of the detector response~\cite{Back}.
The light yield is $\sim$500\,p.e./MeV; the energy resolution is $5\% / \sqrt{E\,{\rm [MeV]}}$ in the range 200 - 2000\,keV, while above 2\,MeV it is slightly worse, because in that range the calibration was less accurate. The position reconstruction accuracy is $\sigma(x,y,z)$ = 10 - 12\,cm.
The stability of the ID is continuously checked by means of internal signals, such as $^{210}$Po, $^{14}$C, $^{11}$C, the shoulder of the $^{7}$Be spectrum and, just after the filling when $^{222}$Rn was present, the fast $^{214}$Bi-$^{214}$Po sequence, while the PMTs are monitored via pulsed laser light distributed through optical fibers~\cite{Back}.
\subsection{KamLAND}
\label{subsec:kamland}
The geometrical structure of KamLAND is very similar to that of Borexino (Fig.~\ref{Fig:DetectorKL}), but KamLAND is larger and uses compositionally different scintillator and buffer liquids. The KamLAND detector was designed to study nuclear reactor antineutrinos, with a mean prompt-signal energy of $\sim$3\,MeV, and therefore the requirements on low radioactivity are less stringent than in Borexino.
\begin{figure}[tb]
\begin{center}
\centering{\epsfig{file=DetectorKL.pdf,scale=1.4}}
\begin{minipage}[t]{16.5 cm}
\caption{Layout of the KamLAND detector~\cite{GandoT}.
\label{Fig:DetectorKL}}
\end{minipage}
\end{center}
\end{figure}
The liquid scintillator consists of 80\% dodecane and 20\% pseudocumene, plus $1.36 \pm 0.03$\,g/l of the fluor PPO. About 1\,kton of this liquid scintillator is contained in a 6.5\,m radius spherical balloon made of a transparent nylon-EVOH (ethylene vinyl alcohol copolymer) composite; this Inner Vessel (IV), supported by a network of Kevlar ropes, is the active core of the KamLAND detector. The IV has 135\,$\mu$m thick walls and a total volume of $1171 \pm 25$\,m$^3$.
The IV is surrounded by a buffer liquid consisting of 57\% isoparaffin and 43\% dodecane oils, which fills a 9\,m radius stainless-steel sphere (SSS) that also functions as a support for the PMTs. The specific gravity of the buffer liquid is 0.04\% lower than that of the liquid scintillator, whose density is 0.78\,g/cm$^3$. A 3\,mm thick acrylic balloon, 8.3\,m in radius, functions as a barrier against the radon emitted by the PMTs. Finally, 3.2\,kton of water surround the SSS, contained in a cylindrical Water Tank (WT).
As in the case of Borexino, this sequence of layers and different liquids is designed to shield the innermost detector from the radiation emitted by the rocks and by the materials that make up the detector~(Fig.~\ref{Fig:DetectorKL}).
The signals produced in the Inner Detector (ID) are read out by an array of 1325 17-inch fast PMTs and 554 20-inch PMTs; this array, supported by the SSS, provides an optical coverage of $\sim$34\%. The Outer Detector (OD) detects the water-Cherenkov light produced in the Water Tank, read out by 225 PMTs mounted on the internal WT walls (for more details see~\cite{Abe2008, Abe2010}).
Having started data acquisition in January 2002, KamLAND has studied important aspects of neutrino oscillations using antineutrinos produced by nuclear reactors. In September 2011 KamLAND began the study of $0\nu\beta\beta$ decay. To this purpose a transparent nylon balloon, 3.08\,m in diameter, containing 13\,tons of Xe-loaded liquid scintillator was inserted into the center of the ID (KamLAND-ZEN, see Fig.~\ref{Fig:DetectorKLZen}).
\begin{figure}[tb]
\begin{center}
\centering{\epsfig{file=DetectorKLZen.pdf,scale=1.3}}
\begin{minipage}[t]{16.5 cm}
\caption{Structure of the KamLAND-ZEN detector~\cite{Gando}.
\label{Fig:DetectorKLZen}}
\end{minipage}
\end{center}
\end{figure}
An important evolution of the geo-neutrino study with the KamLAND-ZEN setup started in 2012, when the nuclear power plants in Japan were switched off for checks and maintenance (the last plant was switched off at the beginning of May 2012). This opened a favorable period for geo-neutrino studies in Japan, because the near-field reactor antineutrino background was suppressed.
KamLAND is installed under the Ikenoyama peak, which assures an overburden of $\sim$2700\,m water equivalent.
\subsubsection{The radiopurity in KamLAND}
\label{subsec:KLradiopurity}
The purification of the liquid scintillator and of the buffer oil was done in two steps: initially the liquids were submitted to a water extraction process during the detector filling period in March 2002; later, each component of the liquid scintillator was distilled in three separate towers and, after mixing, submitted to a high-purity N$_2$ purging process (2007 - 2009 campaign). Each component was distilled at optimal temperature and reduced pressure in order to prevent changes to the properties of the liquid~\cite{GandoT}. After the purification, $^{238}$U was reduced to $(1.5\pm1.8) \times 10^{-19}$\,g/g, $^{232}$Th to $(1.9 \pm 0.2) \times 10^{-17}$\,g/g, and $^{40}$K to a limit of $< 4.5 \times 10^{-18}$\,g/g; $^{210}$Po is $\sim$2\,mBq/m$^3$, $^{210}$Bi $<$1\,mBq/m$^3$, and $^{85}$Kr $\sim$0.1\,mBq/m$^3$. In addition, an accidental background is due to the vessel walls (Inoue, private communication).
The 2002 - 2007 range of data taking defines Period 1, while the time from the purification campaign until fall 2011 defines Period 2. Since September 2011 the events have been collected with the Xe balloon installed: this is Period 3.
A Fiducial Volume (FV), a sphere of 6\,m radius, is defined in software during the KamLAND analysis. In addition, in Period 3, only events within the FV and well outside the Xe balloon are considered~\cite{Gando}.
The Kamioka mine underground water, used in the water extraction operations and in the shielding of the detector, was cleaned by means of a purification plant consisting of pre-filters, a UV sterilizer, an ion exchanger, a vacuum degasser and reverse osmosis (RO), in order to remove dust, metal ions and natural radioisotopes, such as $^{222}$Rn, and to eliminate bacteria: the $^{238}$U and $^{232}$Th in the water were reduced to the order of $10^{-13}$\,g/g.
The two most important background sources due to the radioactivity internal to the detector are the reaction $^{13}$C($\alpha$,n)$^{16}$O and accidental coincidences, which can mimic the reaction of Eq.~\ref{Eq:InvBeta}. In addition, cosmogenic nuclides represent a further possible background: the rate of cosmic muons in KamLAND is $0.198 \pm 0.014$\,Hz (all these issues are discussed in Sec.~\ref{Sec:Results}).
\subsubsection{The antineutrino detection and the resolution in KamLAND}
\label{Subsec:resolutionKL}
KamLAND uses the reaction given in Eq.~\ref{Eq:InvBeta} for the detection of antineutrinos. A difference with respect to Borexino is the mean time interval between the prompt and delayed signals, which in KamLAND is $207.5 \pm 2.8$\,$\mu$s~\cite{Abe2008}.
The energy assessment processes the information stored in the digitized PMT signals by identifying the individual PMT pulses in the waveforms. The integrated area ("charge") is then computed from the individual pulses.
Energy and spatial calibrations of the detector are done with six sources: $^{203}$Hg, $^{68}$Ge, $^{65}$Zn, $^{60}$Co, $^{241}$Am + $^9$Be and $^{210}$Po + $^{13}$C. In KamLAND too, as in Borexino, the residual background contaminants function as a continuous monitor of the detector response. The achieved vertex resolution is $\sim$$12$\,${\rm cm} / \sqrt{E\,{\rm [MeV]}}$, while the energy resolution is $6.5\% / \sqrt{E\,{\rm [MeV]}}$~\cite{Abe2008}.
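For a quick comparison of the two detectors, the quoted stochastic terms can be evaluated at a typical geo-neutrino prompt energy; the small non-stochastic contributions are neglected in this sketch.
\begin{verbatim}
import math

def energy_resolution(e_mev, coeff):
    # Stochastic resolution: sigma_E / E = coeff / sqrt(E [MeV]).
    return coeff / math.sqrt(e_mev)

# At a ~1 MeV prompt energy:
print(energy_resolution(1.0, 0.050))   # Borexino: ~5.0%
print(energy_resolution(1.0, 0.065))   # KamLAND: ~6.5%
\end{verbatim}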
\section{Analysis and results obtained by Borexino and KamLAND}
\label{Sec:Results}
\subsection{Measuring geo-neutrinos}
\label{subsec:measuring}
The flux of geo-neutrinos at a given location on the Earth's surface depends on the distribution of the heat-producing elements ($^{238}$U, $^{232}$Th, and $^{40}$K) within the mantle and the crust, as described in Sec.~\ref{Sec:GeoSignal}. Geo-neutrinos can travel as much as 12,000\,km to reach a detector location such as Kamioka or Gran Sasso. It has been established by a number of observations~\cite{GonzalesGarcia2010} that neutrinos traveling from a source to a detection point can oscillate from one flavor to another. Therefore, there is a finite survival probability, $P_{ee}$, for a given electron neutrino, for example, to be detected as such at some distance from its production point. The origin of neutrino oscillations lies in the difference between the mass and flavor eigenstates. In the scenario with three flavors $(\nu_e, \nu_{\mu}, \nu_{\tau})$ and three mass eigenstates $(\nu_1, \nu_2, \nu_3)$, the two sets of eigenstates are related by a mixing matrix which depends on three angles and one CP phase~\cite{GonzalesGarcia2010}. The survival probability is determined by the antineutrino energy, the distance traveled, and the neutrino oscillation parameters: namely, the mixing angles and the mass-squared differences between mass eigenstates. In particular, for geo-neutrinos (thus, considering their energy spectra), the present determination of the neutrino oscillation parameters establishes a neutrino oscillation length of the order of 140\,km. Since the average distance traveled by geo-neutrinos is much longer than this oscillation length, the survival probability averages to:
\begin{equation}
P_{ee} = \cos^4\theta_{13}\left(1-\frac{1}{2}\sin^2 2\theta_{12}\right)+\sin^4\theta_{13} \sim 0.55\pm0.01
\label{Eq:Pee}
\end{equation}
where $\theta_{12}$ and $\theta_{13}$ are the mixing angles. As a consequence of the oscillations, the flux of geo-neutrinos at an experimental location is reduced by about 45\%.
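Eq.~\ref{Eq:Pee} can be checked numerically; the oscillation-parameter values below ($\sin^2\theta_{12} \simeq 0.31$, $\sin^2\theta_{13} \simeq 0.02$) are typical of global fits and are used here only as illustrative inputs.
\begin{verbatim}
def survival_probability(sin2_t12, sin2_t13):
    # Distance-averaged survival probability of the equation above.
    cos2_t13 = 1.0 - sin2_t13
    sin2_2t12 = 4.0 * sin2_t12 * (1.0 - sin2_t12)
    return cos2_t13**2 * (1.0 - 0.5 * sin2_2t12) + sin2_t13**2

print(survival_probability(0.31, 0.02))   # ~0.55
\end{verbatim}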
Geo-neutrinos, being electron antineutrinos, interact with matter only through the weak interaction, and thus the probability of such an interaction is very low. In order to detect the weak geo-neutrino signal it is therefore important to shield the experimental setup from cosmic radiation and to place the detector in an underground laboratory, as described in Sec.~\ref{Sec:detectors}. In order to increase the number of detected events, large-volume detectors are required. Borexino's liquid scintillator target amounts to about 280\,tons, while that of KamLAND is about 1\,kton.
Electron antineutrinos are detected in liquid scintillator detectors by means of the inverse beta-decay interaction shown in Eq.~\ref{Eq:InvBeta}, which has a kinematic threshold of 1.806\,MeV. This means that only the high-energy tails of the $^{238}$U and $^{232}$Th geo-neutrino spectra above this threshold can be detected. In particular, the fraction of detectable geo-neutrinos is 6.3\% for $^{238}$U and 3.8\% for $^{232}$Th, while $^{40}$K geo-neutrinos are all below this threshold and cannot be detected. The emitted positron promptly comes to rest and annihilates, emitting two 511\,keV $\gamma$-rays. The deposited kinetic energy of the positron and the energy of the two $\gamma$-rays are detected as a single, so-called {\em{prompt event}}, with a visible energy $E_{prompt}$ related to the energy of the incident geo-neutrino $E_\nu$ by the simple relation
\begin{equation}
E_{prompt} = E_\nu - 0.784 ~~~~~~{\rm [MeV]}
\label{Eq:Evisible}
\end{equation}
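Eq.~\ref{Eq:Evisible} and the 1.806\,MeV threshold are easily encoded; a minimal sketch:
\begin{verbatim}
IBD_THRESHOLD_MEV = 1.806   # kinematic threshold of inverse beta decay

def prompt_energy(e_nu_mev):
    # Visible prompt energy (Eq. above); None below threshold.
    if e_nu_mev < IBD_THRESHOLD_MEV:
        return None
    return e_nu_mev - 0.784

print(prompt_energy(2.5))   # 1.716 MeV
print(prompt_energy(1.5))   # None: below threshold, undetectable
\end{verbatim}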
The emitted free neutron is thermalized and then typically captured on a proton, with a mean capture time in the range 200 - 300\,$\mu$s, depending on the scintillator type. The neutron capture results in the emission of a 2.2\,MeV de-excitation $\gamma$-ray, which provides the so-called {\em{delayed event}}. In the thermalization process the neutron loses the memory of its original direction, so no directionality information about the incident antineutrino can be obtained by either of the two experiments measuring geo-neutrinos. The cross section of this detection interaction (see also Sec.~\ref{Sec:Intro}) is known with high precision, below 1\%, and can be found for example in~\cite{strumia}.
The space and time coincidence of the prompt and delayed events provides a unique tool to strongly suppress possible background sources that mimic antineutrino interactions. Of course, electron antineutrinos of origin other than geo-neutrinos represent a possible background to the geo-neutrino measurement as well. Antineutrinos emitted from nuclear power plants are the only relevant antineutrino background.
The Borexino experiment was designed to measure solar neutrinos, which are detected through simple scattering off electrons, a process that does not provide a coincidence tag. Thus, a neutrino interaction cannot be distinguished from an event due to the radioactive contaminants of the detector. Borexino succeeded in achieving an extreme level of radiopurity of the scintillator and of the construction materials, with the consequence that backgrounds other than reactor antineutrinos are at almost negligible levels for the geo-neutrino measurement (see Sec.~\ref{subsection:radioactivityBorex}). In addition, there are no nuclear power plants in Italy, and the mean weighted distance of the reactors from the LNGS site is more than 1000\,km.
The KamLAND experiment was designed to study reactor antineutrinos, which are detected by the same inverse beta-decay interaction and thus provide the same coincidence tag as geo-neutrinos. Therefore, an extreme radiopurity like Borexino's is not strictly required in the KamLAND experiment, and some non-antineutrino backgrounds, such as accidental coincidences or ($\alpha$, n) interactions, represent a non-negligible background source for the KamLAND geo-neutrino measurements. In addition, the proximity of nuclear power plants causes an increased rate of events due to reactor antineutrinos. Recently, after the Fukushima nuclear accident of March 2011, all Japanese nuclear reactors were temporarily switched off, providing a unique opportunity for the geo-neutrino measurement in KamLAND.
\subsection{Reactor antineutrino background}
\label{subsec:reactors}
Antineutrinos from nuclear power plants are a relevant background source for geo-neutrino measurements. It is therefore crucial to be able to calculate the expected number of events and the corresponding spectral shape of the electron antineutrinos from nuclear plants. There are four principal isotopes used as fuel in the cores of nuclear power plants: $^{235}$U, $^{238}$U, $^{239}$Pu, and $^{241}$Pu. They contribute to the total thermal power of the plant in different, so-called power fractions, which depend on the reactor type and on the burn-up stage of the individual core. The most recent parametrization can be found in~\cite{mueller}. Its overall spectral shapes are very similar to those of the older parametrization given in~\cite{huber}, while it predicts a 3.5\% higher total flux. The energy spectrum of reactor antineutrinos overlaps with that of geo-neutrinos and extends to about 8\,MeV. In the analysis, the antineutrino candidates with energies above the geo-neutrino end point strongly constrain the contribution of the reactor antineutrinos.
The oscillation phenomenon shapes the energy spectrum of the electron antineutrinos arriving at the detector site from an individual core. The survival probability depends on the neutrino oscillation parameters and is also a function of the antineutrino energy and of the distance from the source to the detector. The oscillation length for antineutrinos of a few MeV is at the level of 100\,km, and therefore in the calculation of the expected signal rate and spectral shape it is important to consider all reactors individually. About 450 nuclear power plants exist in the world, concentrated mostly in Europe, North America, Japan, and Korea.
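The core of such a reactor-by-reactor calculation is the distance- and energy-dependent survival probability. The sketch below uses the two-flavor approximation (neglecting the $\theta_{13}$ terms) with illustrative oscillation parameters; a full calculation would further fold in the reactor spectra, power fractions and detection cross section.
\begin{verbatim}
import math

def pee_two_flavor(e_mev, l_km, sin2_2t12=0.85, dm2_ev2=7.5e-5):
    # Two-flavor survival probability; the oscillation phase is
    # 1.27 * dm^2 [eV^2] * L [m] / E [MeV].
    phase = 1.27 * dm2_ev2 * (l_km * 1.0e3) / e_mev
    return 1.0 - sin2_2t12 * math.sin(phase)**2

# A 3 MeV reactor antineutrino over a few baselines:
for l_km in (50.0, 180.0, 1000.0):
    print(f"L = {l_km:6.0f} km : Pee = {pee_two_flavor(3.0, l_km):.2f}")
\end{verbatim}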
\subsection{Non-antineutrino background sources}
\label{subsec:bgr}
Non-antineutrino background sources can be divided into two main categories: cosmogenic background, and background due to the radioactive contaminants of the scintillator and of the construction materials of the detector, including the scintillator itself, the containment vessels, and the photomultipliers.
The cosmogenic background is dominated by cosmic muons and their spallation products, such as fast neutrons and the $^9$Li and $^8$He isotopes, which decay in ($\beta$ + neutron) branches perfectly mimicking antineutrino interactions. Fast neutrons can penetrate through the construction materials of the detector and, before their capture on protons (and thus before the 2.2\,MeV $\gamma$-ray emission), can scatter off a proton, which can provide a prompt signal. Real coincidences of a scattered proton and a 2.2\,MeV gamma ray can therefore mimic a geo-neutrino interaction. The proton is a highly ionizing particle and, in principle, can be distinguished from electron/positron or gamma-ray interactions in liquid scintillators by pulse-shape identification techniques.
The overburden rocks above the underground laboratories in which the detectors are placed reduce the muon flux by several orders of magnitude, see Sec.~\ref{Sec:detectors}. The remaining muons are detected by the Cherenkov Outer Detectors with high efficiency. A time veto after each detected muon suppresses the muon-produced cosmogenic backgrounds. For muons passing only through the Outer Detector, it is sufficient to veto only the fast neutrons, which have a capture time of 200 - 300\,$\mu$s, since charged nuclei such as $^9$Li and $^8$He cannot penetrate into the scintillator volume. Muons passing through the scintillator, instead, can produce such isotopes directly there. The isotopes $^9$Li and $^8$He have decay times of 26\,ms and 173\,ms, respectively, and longer vetoes are typically applied after such internal muons.
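The suppression achieved by such a veto follows the exponential decay law; the sketch below uses the decay times quoted above and a hypothetical 2\,s veto length.
\begin{verbatim}
import math

def surviving_fraction(tau_ms, veto_ms):
    # Fraction of cosmogenic decays surviving a veto of length
    # veto_ms applied after each internal muon (exponential law).
    return math.exp(-veto_ms / tau_ms)

for name, tau_ms in (("9Li", 26.0), ("8He", 173.0)):
    print(name, surviving_fraction(tau_ms, 2000.0))
\end{verbatim}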
The level of background due to non-detected muons can be estimated from the known muon detection inefficiency. The residual contribution of fast neutrons and of $^9$Li and $^8$He isotopes produced by muons passing through the detector, which remains present in the tails even after the applied vetoes, can be estimated from the background events observed during the veto window after the muons. A critical point is the estimation of the fast-neutron background due to muon interactions within the rock walls surrounding the detector; this contribution has to be estimated by a careful Monte Carlo simulation.
An important background component is due to accidental coincidences of uncorrelated events from the interactions of radioactive contaminants of the construction materials and/or the scintillator. An optimized fiducial-volume selection can strongly suppress this background type. An optimization of the coincidence time window, $dt$, between the prompt and the delayed candidate also helps to maximize the signal-to-background ratio. The contribution of the accidental background in the time-correlated window can be determined by searching for coincidences (with the same energy, position, and pulse-shape cuts) in an off-time window, typically much longer than $dt$ in order to improve the statistical precision.
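The scaling behind this off-time technique can be made explicit with a short sketch; all rates and window lengths below are placeholder values, not experimental ones.
\begin{verbatim}
# Sketch: accidental coincidence rate of two uncorrelated event
# classes and the statistical gain from an off-time window.
R_p, R_d = 1.0e-2, 1.0e-4   # assumed singles rates [1/s] after cuts
dt = 1.28e-3                # on-time coincidence window [s], assumed
T_off = 2.0                 # off-time window [s], assumed

R_acc = R_p * R_d * dt      # expected accidental coincidences per second
# Counts found in T_off are rescaled by dt/T_off; the relative
# statistical error improves by ~sqrt(dt/T_off) over on-time counting.
scale = dt / T_off
\end{verbatim}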
The ($\alpha$, n) interactions are another important source of background, namely the interaction $^{13}$C($\alpha$, n)$^{16}$O, as first investigated by KamLAND~\cite{Abe2008}. The $\alpha$ particles originate mostly from the $^{210}$Po contamination of the liquid scintillator. The prompt event can be due to three different processes:
\begin{enumerate}
\item{a 6.1\,MeV de-excitation $\gamma$ ray of $^{16}$O produced in an excited state;}
\item{a 4.4\,MeV $\gamma$ ray from the de-excitation of $^{12}$C excited by the neutron;}
\item{a proton scattered by the thermalizing neutron, which, in principle, can be at least partially identified by pulse-shape identification techniques.}
\end{enumerate}
The neutron, produced with energies up to 7.3\,MeV, thermalizes and is captured on a proton, producing a 2.2\,MeV $\gamma$ ray detected as the delayed candidate. The probability of the $^{210}$Po nucleus giving an ($\alpha$, n) interaction in pure $^{13}$C is discussed in~\cite{mckee}. The $^{210}$Po contamination of the scintillator is easily measurable through the identification of the $\alpha$ peak in the energy spectrum of single events. In Borexino, the $^{210}$Po contamination is much lower than in KamLAND. The isotopic abundance of $^{13}$C in organic compounds is at the level of 1.1\%.
\subsection{KamLAND geo-neutrino analysis}
\label{subsec:ResultsKL}
KamLAND provided in 2005~\cite{araki2005} the first experimental investigation of geo-neutrinos. In Fig.~\ref{Fig:GeoFirstKL} we show the energy spectrum of the collected data together with the expectations. This measurement clearly shows that the main background sources for geo-neutrino detection in KamLAND are electron antineutrinos from nuclear reactors near the detector and the $\alpha$-particle induced neutron background from radioactive contaminants within the detector active mass. In particular, for this latter background source the reaction $^{13}$C($\alpha$,n)$^{16}$O with the $\alpha$ particle from $^{210}$Po is the dominant contribution. For the 2005 data set, the live time corresponds to $749.1 \pm 0.5$\,days after selection cuts, with a total exposure of $(7.09\pm0.35)\times10^{31}$ target proton-year. In this first analysis the overall detection efficiency for geo-neutrino candidates with energy between 1.7 and 3.4\,MeV is determined to be $0.687 \pm 0.007$. The total number of observed electron antineutrinos is 152. A rate-only analysis gives 25$^{+19}_{-18}$ geo-neutrino candidates from U and Th. An unbinned maximum likelihood fit assuming the shape of the signal and a Th/U mass ratio of 3.9 gives a best fit consistent with the rate analysis. The significance of the geo-neutrino observation is at the level of 95\% C.L.
\begin{figure}[tb]
\begin{center}
\centering{\epsfig{file=SpectrumKL2005.pdf,scale=0.5}}
\begin{minipage}[t]{16.5 cm}
\caption{Electron antineutrino energy spectra measured by KamLAND in 2005, taken from~\cite{araki2005}. The thick solid line shows the total expected signal excluding geo-neutrinos. The dashed light blue line is the signal from nuclear reactor antineutrinos. The dotted brown line corresponds to the $^{13}$C($\alpha$,n)$^{16}$O background. In the inset, the expected spectra extended to 8\,MeV energy are shown.
\label{Fig:GeoFirstKL}}
\end{minipage}
\end{center}
\end{figure}
In 2008, KamLAND published new data corresponding to a total exposure of $2.44 \times 10^{32}$ target proton-year~\cite{Abe2008}. The geo-neutrino analysis is performed by again fixing the Th/U mass ratio at the chondritic value of 3.9. The combined U + Th geo-neutrino signal corresponds to $73 \pm 27$ events, or to a flux of $(4.4 \pm 1.6) \times 10^6$\,cm$^{-2}$s$^{-1}$, in agreement with the reference model used~\cite{enomoto}. This work shows again that the most important source of background comes from $^{210}$Po through $^{13}$C($\alpha$,n)$^{16}$O.
A more detailed study of geo-neutrinos is presented in Gando et al., 2011~\cite{gando2011}. The collected data correspond to a total live time of 2,135\,days and an exposure of $(3.49 \pm 0.07) \times 10^{32}$ target proton-year. The number of observed candidates in the geo-neutrino prompt energy range [0.9, 2.6]\,MeV is 841, against a predicted number of $729.4 \pm 32.3$\,events from reactors and background sources. Fixing the Th/U mass ratio to the chondritic value, the best fit gives a geo-neutrino signal of 106$^{+29}_{-28}$ events, which corresponds to a flux at the surface of 4.3$^{+1.2}_{-1.9} \times 10^6$\,cm$^{-2}$s$^{-1}$. The null hypothesis for geo-neutrinos is disfavored at the 99.997\% C.L.
In 2013, new KamLAND data~\cite{Gando} were published, including a reactor-off period following the Fukushima nuclear accident in March 2011. The reported data sum to a total live time of 2991\,days, from March 9, 2002 to November 20, 2012. The exposure is determined to be $(4.90 \pm 0.10) \times 10^{32}$ target proton-year. The data set is divided into three main periods: early KamLAND data-taking (1486\,days); after purification of the liquid scintillator (1154\,days); and the reactor-off period (351\,days). The number of reactor antineutrinos, which give the largest contribution to the signal in KamLAND, is predicted from reactor data including thermal power variations and fuel exchange. The antineutrino emission spectra from Japanese commercial reactors are determined considering relative fission yields averaged over the live-time period, given by ($^{235}$U, $^{238}$U, $^{239}$Pu, $^{241}$Pu) = (0.567, 0.078, 0.298, 0.057). The contribution of Korean reactors, Japanese research reactors, and other world reactors is also included (the overall effect of these contributions amounts to about 6\%). After all selection cuts, the expected number of events from reactors without oscillations is $3564 \pm 145$.
An unbinned rate + shape analysis is performed in the energy range [0.9, 8.5]\,MeV by means of the following $\chi^2$:
\begin{eqnarray}
\chi^2 & = & \chi^2 \left( \theta_{12}, \theta_{13}, \Delta m^2_{21}, N^{geo}_{U,Th}, N_{BG,1 \rightarrow 5},N_{sys,1 \rightarrow 4} \right) - \nonumber \\
& & 2 \ln{ L_{shape}\left(\theta_{12}, \theta_{13}, \Delta m^2_{21}, N^{geo}_{U,Th}, N_{BG,1 \rightarrow 5},N_{sys,1 \rightarrow 4} \right)} + \nonumber \\
& & \chi^2_{BG} \left( N_{BG,1 \rightarrow 5} \right) + \chi^2_{sys} \left( N_{sys,1 \rightarrow 4} \right) + \chi^2 \left(\theta_{12}, \theta_{13}, \Delta m^2_{21} \right)
\label{Eq:Chi2KL}
\end{eqnarray}
where $N_{BG}$ accounts for five different background sources (accidentals, cosmogenic $^9$Li-$^8$He, fast neutrons and atmospheric neutrinos, and two $^{13}$C($\alpha$,n)$^{16}$O contributions depending on the final state of $^{16}$O), and $N_{sys}$ accounts for four sources of systematic error (reactor spectrum, energy scale, event rate, and energy-dependent detection efficiency). The last three terms in the definition of the $\chi^2$ are penalty functions on the backgrounds, the systematics, and the neutrino oscillation parameters. The likelihood $L_{shape}$ takes into account the energy of each event and the spectral shape.
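A schematic implementation of such a rate + shape fit is sketched below; the pdfs, binning, and central values are placeholders, and the unbinned shape term is replaced by a binned Poisson stand-in for brevity.
\begin{verbatim}
# Sketch: chi^2 with free signal normalizations and Gaussian
# penalties pulling constrained backgrounds to their estimates.
import numpy as np

def chi2(params, counts, pdfs, bg_central, bg_sigma):
    n_geo, n_reac, *n_bg = params
    expected = n_geo * pdfs["geo"] + n_reac * pdfs["reactor"]
    for nb, pdf in zip(n_bg, pdfs["backgrounds"]):
        expected = expected + nb * pdf
    # binned Poisson stand-in for the -2 ln L_shape term
    shape = 2.0 * np.sum(expected - counts * np.log(expected))
    penalty = np.sum(((np.asarray(n_bg) - bg_central) / bg_sigma)**2)
    return shape + penalty
\end{verbatim}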
In KamLAND electron antineutrinos are searched for by applying a number of selection cuts. In particular,
\begin{enumerate}
\item{prompt energy $E_p$ cut: $0.9 < E_p {\rm ~[MeV]} < 8.5$ (2.6\,MeV for geo-neutrinos);}
\item{delayed energy $E_d$ cut: $1.8 < E_d {\rm ~[MeV]} < 2.6$ (capture on proton) or $4.4 < E_d {\rm ~[MeV]} < 5.6$ (capture on carbon); }
\item{spatial correlation of prompt and delayed event: $\Delta R < 2$\,m;}
\item{time correlation between the delayed and prompt candidate: $0.5 < \Delta t ~[\mu$s$] < 1000$; }
\item{fiducial-volume cut on the reconstructed radii $R_{p,d}$ of the prompt and delayed events.}
\end{enumerate}
Moreover, to maximize the sensitivity to electron antineutrinos, a discriminator is constructed from Monte Carlo pdf's for neutrinos, $f_\nu$, and for accidental coincidences, $f_{acc}$. For each candidate one computes $L = f_\nu / (f_\nu + f_{acc})$ and compares it with a selection value chosen to maximize the figure of merit $S/\sqrt{S+B_{acc}}$, where $S$ and $B_{acc}$ are the expected numbers of neutrino and accidental events, respectively. In KamLAND, the vertex and energy reconstructions are calibrated by means of radioactive sources, see also Sec.~\ref{Subsec:resolutionKL}.
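A sketch of this cut optimization, with toy arrays standing in for the Monte Carlo pdf's, could read:
\begin{verbatim}
# Sketch: scan the cut on L = f_nu/(f_nu + f_acc) and maximize
# the figure of merit S/sqrt(S + B_acc).
import numpy as np

def optimize_cut(L_signal, L_accidental, cuts=np.linspace(0, 1, 101)):
    best_cut, best_fom = None, -np.inf
    for c in cuts:
        S = np.sum(L_signal > c)      # surviving signal (toy counts)
        B = np.sum(L_accidental > c)  # surviving accidentals
        if S + B == 0:
            continue
        fom = S / np.sqrt(S + B)
        if fom > best_fom:
            best_cut, best_fom = c, fom
    return best_cut, best_fom
\end{verbatim}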
In the 2013 data set, the number of observed events passing all selection criteria is 2611. The overall background is determined to be $364.1 \pm 30.5$ events, of which some 57\% is due to $^{13}$C($\alpha$,n)$^{16}$O. After purification of the scintillator (distillation), this background was reduced by a factor of 10, and the accidental background by a factor of 5. The contribution of systematic uncertainties, due mainly to the energy scale, fiducial volume, reactor power, and fuel composition, is estimated to be 4\%. In Fig.~\ref{Fig:KL2013} we report the prompt-candidate energy spectrum in the geo-neutrino energy window from Gando et al., 2013~\cite{Gando}, together with the selection efficiency as a function of energy. Assuming a Th/U mass ratio of 3.9, the total number of U + Th geo-neutrino events is determined to be 116$^{+28}_{-27}$, which corresponds to a flux of $(3.4 \pm 0.8)\times 10^6$\,cm$^{-2}$s$^{-1}$. This flux corresponds to a geo-neutrino signal $S_{geo}$ = 29.8 $\pm$ 7.0\,TNU, which is in agreement with the expected signal $S_{geo}$(U+Th) = $31.5^{+4.9}_{-4.1}$\,TNU calculated in~\cite{huang} on the basis of the refined model of the region near KamLAND described in Sec.~\ref{subsec:kamioka}. The null hypothesis is disfavored at the $2\times 10^{-6}$ probability level.
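The TNU normalization behind these numbers (1\,TNU = 1 event per $10^{32}$ target protons per year at full efficiency) can be checked with a one-line estimate; the effective efficiency value below is an assumption introduced only for this illustration.
\begin{verbatim}
# Sketch: events -> TNU. 1 TNU = 1 event / (1e32 protons * year).
n_geo = 116.0           # quoted geo-neutrino events
exposure_1e32 = 4.90    # exposure in units of 1e32 proton-year
efficiency = 0.8        # assumed effective detection efficiency
S_geo = n_geo / (exposure_1e32 * efficiency)  # ~29.6, close to 29.8 TNU
\end{verbatim}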
\begin{figure}[tb]
\begin{center}
\centering{\epsfig{file=SpectrumKL2013.pdf,scale=0.5}}
\begin{minipage}[t]{16.5 cm}
\caption{Lower panel: The prompt candidate energy spectrum in the geo-neutrino energy window from KamLAND 2013 data~\cite{Gando}. Middle panel: observed geo-neutrino signal after subtraction of reactor antineutrinos and backgrounds. Top panel: detection efficiency as a function of energy.
\label{Fig:KL2013}}
\end{minipage}
\end{center}
\end{figure}
\subsection{Borexino geo-neutrino analysis}
\label{subsec:ResultsBX}
Borexino provided the first geo-neutrino observation at more than $4\sigma$ significance in 2010~\cite{BX2010} and recently updated the measurement with 2.4 times larger exposure~\cite{BX2013}. The first measurement was based on data from December 2007 to December 2009, corresponding to an exposure of 252.6\,ton-year after cuts, or $1.52 \times 10^{31}$ proton-year. The 2013 update is based on data from December 2007 to August 2012, corresponding to an exposure of $(613 \pm 26)$\,ton-year, or $(3.69 \pm 0.16) \times 10^{31}$ proton-year, after the selection cuts.
The geo-neutrino and reactor antineutrino spectra expected in Borexino are shown in Fig.~\ref{Fig:BXspectraAntinu}. The left part of this figure shows the energy spectrum of the prompt candidate, expressed in energy [MeV], without the effect of the detector resolution. In the construction of the geo-neutrino spectrum, the Th/U mass ratio was fixed to the chondritic value of 3.9. For comparison, the spectra are shown with and without neutrino oscillations. As can be seen, for geo-neutrinos the oscillations change only the absolute normalization of the spectrum, while the spectral shape is not affected. In contrast, the spectral shape of reactor antineutrinos is strongly modified by the oscillation phenomenon.
\begin{figure}[tb]
\begin{center}
\begin{minipage}[t]{8 cm}
\centering{\epsfig{file=spcBXantinu.pdf,scale=0.43, angle = 90}}
\end{minipage}
\begin{minipage}[t]{8 cm}
\centering{\epsfig{file=spcBXMC.pdf,scale=0.45, angle = 90}}
\end{minipage}
\begin{minipage}[t]{16.5 cm}
\caption{Left: the expected energy spectrum of the prompt event (positron) due to electron antineutrinos in Borexino~\cite{BX2010}. The dashed/solid black lines show the total geo-neutrino plus reactor antineutrino spectra without/with oscillations. The dotted red line shows the oscillated geo-neutrino spectrum, while the thin solid line is the oscillated reactor antineutrino spectrum. The dashed area isolates the contribution of geo-neutrinos in the total oscillated spectrum. Right: expected prompt positron event spectrum as obtained from the Monte Carlo simulation~\cite{BX2010}, expressed in the light yield, i.e., in the number of detected photoelectrons. The spectra from the left part of this figure are used as input for the Monte Carlo simulation. The event selection criteria described in the text were applied. Borexino detects approximately 500 photoelectrons/MeV.
\label{Fig:BXspectraAntinu}}
\end{minipage}
\end{center}
\end{figure}
The expected geo-neutrino and reactor antineutrino energy spectra with oscillations are used as input for the Geant4-based Monte Carlo. From the Monte Carlo output, the expected geo-neutrino and reactor antineutrino spectra, expressed in the number of measured photoelectrons (the so-called light-yield spectrum shown in the right part of Fig.~\ref{Fig:BXspectraAntinu}), automatically incorporate the detector response function. Borexino detects approximately 500\,photoelectrons/MeV. The detector response function was studied in an extensive calibration campaign~\cite{Back}, see also Sec.~\ref{Subsec:resolutionBX}. These calibration data have been used to reduce the systematic error associated with all Borexino results and to optimize the Monte Carlo simulation of the detector response.
The expected event rate and spectral shape of reactor antineutrinos were calculated by considering reactors all over the world. The time variation of the thermal power of individual cores was taken into account using the monthly mean load factor, the ratio of the actual to the nominal power, provided by the International Atomic Energy Agency. The power fractions of $^{235}$U : $^{238}$U : $^{239}$Pu : $^{241}$Pu used in the calculations were 0.56 : 0.08 : 0.30 : 0.06; for the European reactors using mixed-oxide (MOX) fuel they were 0.000 : 0.080 : 0.708 : 0.212, and for the reactors using a heavy-water moderator they were 0.542 : 0.411 : 0.022 : 0.0243. The different stages of the fuel burn-up process contribute a 3.2\% systematic error. The matter effect due to antineutrino propagation through the Earth was estimated to be +0.6\%, while the contribution of the long-lived fission products in the spent fuel was set to 1\%, based on Kopeikin et al., 2006~\cite{kopeikin}.
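Schematically, the reactor expectation is a sum over cores of the type sketched below; the mean energy per fission and all core data are placeholder inputs, and the spectrum, cross section, and survival probability enter as callables.
\begin{verbatim}
# Sketch: expected reactor events = sum over cores of thermal power
# times load factor, converted to a fission rate, diluted by
# 1/(4 pi L^2), weighted by sigma(E) and the survival probability.
import numpy as np

E_FISSION_J = 205.0 * 1.602e-13   # ~205 MeV per fission (assumed mean)

def reactor_events(cores, n_protons, t_s, spectrum, sigma, P_ee):
    E = np.linspace(1.8, 8.0, 200)            # MeV
    total = 0.0
    for c in cores:  # c: {"P_th": W, "lf": load factor, "L": m}
        fissions = c["P_th"] * c["lf"] / E_FISSION_J
        flux = fissions * spectrum(E) / (4 * np.pi * c["L"]**2)
        total += np.trapz(flux * sigma(E) * P_ee(c["L"], E), E)
    return total * n_protons * t_s
\end{verbatim}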
In Bellini et al., 2013~\cite{BX2013}, three-flavor neutrino oscillations were considered and the total systematic error on the expected signal was estimated to be 5.8\%. For the exposure of $(3.69 \pm 0.16) \times 10^{31}$ proton-year in the period from December 2007 to August 2012, the expected number of events from reactor antineutrinos was $N_{react}$ = $33.3 \pm 2.4$, corresponding to $90.2 \pm 6.5$\,TNU. A fraction of 33.3\% of the reactor antineutrino signal falls within the geo-neutrino energy window (below 1300\,photoelectrons when expressed in light yield). In the absence of neutrino oscillations, the number of expected events due to reactor antineutrinos would be $60.4 \pm 4.1$.
In Borexino, other (non-antineutrino) background sources are reduced to almost negligible levels. In total, Borexino expected $0.70 \pm 0.18$ background events among all antineutrino candidates detected during the exposure of $(3.69 \pm 0.16) \times 10^{31}$ proton-year in the period from December 2007 to August 2012. Of these, 65.7\% are expected in the geo-neutrino energy window below 1300 photoelectrons (about 2.6\,MeV). The dominant background sources are cosmogenic $^9$Li-$^8$He ($0.25 \pm 0.18$\,events), accidental coincidences ($0.206 \pm 0.004$ events), and events due to ($\alpha$, n) reactions ($0.13 \pm 0.01$ events). The background due to possibly untagged muons is at the level of $0.080 \pm 0.007$ events, considering different combinations of neutrons, multiple neutrons, and muons mostly passing through the buffer region and producing fake prompt and delayed signals of an antineutrino coincidence. Borexino has also identified a background correlated with $^{222}$Rn~\cite{BX2013}, which has $\tau$ = 5.52\,days. The $^{214}$Bi($\beta$) - $^{214}$Po($\alpha$) coincidence from the $^{222}$Rn chain has a time constant close to the neutron capture time. Highly ionizing particles, such as alpha particles, have their visible energies shifted towards lower values in liquid scintillators; thus, normally, the alpha particles from the $^{214}$Po decay are well below the neutron energy window, so that the $^{214}$Bi($\beta$) - $^{214}$Po($\alpha$) coincidences cannot fake the positron - neutron coincidences from antineutrino interactions. However, in $1.04 \times 10^{-4}$ and $6 \times 10^{-7}$ of the cases, $^{214}$Po decays to excited states of $^{210}$Pb and the alpha particle is accompanied by the emission of a prompt gamma of 799.7\,keV or of 1097.7\,keV, respectively. The signal from a gamma ray is less quenched than the signal from an alpha particle of the same energy, and thus the ($\alpha$ + $\gamma$) signal corresponds to a higher light yield than a pure alpha signal of the same $Q$-value. Such an increased visible energy can overlap with the low-energy tail of the neutron signal, especially at large radii of the detector. Thus, in the Borexino analysis of Bellini et al., 2013~\cite{BX2013}, the low-energy threshold of the neutron energy window was increased with respect to Bellini et al., 2010~\cite{BX2010}, since this analysis included a data set with increased $^{222}$Rn contamination due to the tests of the scintillator purification. To further suppress this background, a particle identification technique, the so-called Gatti filter~\cite{gatti}, was applied in order to distinguish the $^{214}$Po ($\alpha$ + $\gamma$) signal from the 2.2\,MeV gamma ray from neutron capture.
The selection criteria for the golden antineutrino candidates have been tuned as follows. First, all identified muons are removed from the analysis. A 2\,s veto is applied after each muon passing through the scintillator volume in order to suppress the $^9$Li-$^8$He background; after each muon passing only through the external water tank, a veto of 2\,ms is applied. The total loss of exposure due to these vetoes is about 11\%. The energy window of the prompt candidate starts at the kinematic threshold of the inverse beta decay interaction, allowing for the energy-resolution broadening; no upper energy cut is applied. The energy window of the delayed candidate was tuned to cover the peak of the 2.2\,MeV $\gamma$ ray from neutron capture in the whole fiducial volume used in the geo-neutrino analysis. A mild Gatti-filter cut was applied to the delayed signal, as explained above; no Gatti cut is applied to the prompt candidate. The time window between the prompt and the delayed signal was required to be between 20 and 1280\,$\mu$s, considering the neutron capture time of ($245.5 \pm 1.8$)\,$\mu$s~\cite{BXmuons}. The distance between the prompt and the delayed signal has to be below 1\,m. The total detection efficiency of these selection criteria was determined through the Monte Carlo simulation to be $0.84 \pm 0.01$. A minimal distance of 25\,cm from the inner vessel containing the scintillator is required for the prompt candidate, mostly in order to suppress the ($\alpha$, n) background due to alphas from the higher $^{210}$Po contamination in the buffer liquid surrounding the scintillator. The inner-vessel shape is reconstructed on a weekly basis by means of events from its radioactive contaminants. The systematic errors of the reconstruction of the vessel shape (1.6\%) and of the position of the prompt candidate (3.8\%) are included, together with the 1\% error on the efficiency of the other selection criteria, in the overall error on the total exposure.
\begin{figure}[t]
\begin{center}
\centering{\epsfig{file=RooFit.pdf,scale=0.5}}
\centering{\epsfig{file=DeltalnLAll.pdf,scale=0.6}}
\begin{minipage}[t]{16.5 cm}
\caption{Left: light-yield spectrum of the 46 prompt events measured by Borexino, from Bellini et al., 2013~\cite{BX2013}. The light yield is $\sim$500\,p.e./MeV. The geo-neutrino contribution in the total spectrum is shown in yellow, while the orange area (dashed red line) shows the reactor antineutrino spectrum. The dashed blue line shows the geo-neutrino spectrum. The remaining background contribution is almost negligible and is shown by the small red filled area in the lower left part. Right: the 68.27\%, 95.45\%, and 99.73\% C.L. contour plots from~\cite{BX2013} for the geo-neutrino and reactor antineutrino signals expressed in TNU, resulting from the fit shown in the left part of Fig.~\ref{Fig:BXSpc2013}. The black point represents the best fit. The vertical dashed lines represent the $\pm1\sigma$ band of the expected signal from reactor antineutrinos, while the horizontal dashed lines represent the expectations based on different BSE models.
\label{Fig:BXSpc2013}}
\end{minipage}
\end{center}
\end{figure}
Borexino identified 46 golden candidates passing all selection criteria (of these, 25 in the geo-neutrino energy window) during the exposure of ($613 \pm 26$)\,ton-year, or $(3.69 \pm 0.16) \times 10^{31}$ proton-year, after the selection cuts. (In the 2010 measurement, with 2.4 times smaller exposure, Borexino detected 21 candidates, of which 15 were in the geo-neutrino energy window.) The time and radial distributions of the candidates are compatible with the expectations. The distribution of the time difference between the delayed and the prompt candidate is compatible with the neutron capture time. All prompt events have a negative Gatti parameter, confirming that they are not due to $\alpha$s or protons.
In order to determine the relative contributions of geo-neutrinos, antineutrinos from nuclear power plants, and other background sources, an unbinned maximum likelihood fit of the light-yield spectrum of the prompt candidates in the whole energy range was performed. All 46 candidates were considered in the fit (the light yield of the prompt event of all detected candidates is below 3500 photoelectrons, i.e., below about 7\,MeV). The contributions of geo-neutrinos and of antineutrinos from nuclear power plants were left as free fit parameters without any constraints, using the Monte Carlo functions shown in Fig.~\ref{Fig:BXspectraAntinu} (with neutrino oscillations included) as the probability distribution functions. The Th/U mass ratio was fixed to the chondritic value of 3.9. The background components were constrained within the $\pm$1$\sigma$ range around the expected values using either measured energy spectra (accidental coincidences) or Monte Carlo ones ($^9$Li-$^8$He and ($\alpha$, n) backgrounds).
The light-yield spectrum of the 46 golden candidates with the best fit, and the 68.27\%, 95.45\%, and 99.73\% C.L. contour plots of the geo-neutrino signal $S_{geo}$ and the reactor antineutrino signal $S_{react}$ expressed in TNU, are shown in Fig.~\ref{Fig:BXSpc2013}, left and right, respectively. The best-fit values are $N_{geo} = (14.3 \pm 4.4)$\,events and $N_{react} = 31.2^{+7.0}_{-6.1}$\,events, corresponding to signals $S_{geo} = (38.8 \pm 12.0)$\,TNU and $S_{react} = 84.5^{+19.3}_{-16.9}$\,TNU. This experimental geo-neutrino result can be compared to the expected signal $S_{geo}$(U+Th) = 34.9 $\pm$ 4.7\,TNU calculated in~\cite{coltorti}, taking into account a refined model of the local geology of the Gran Sasso area described in Sec.~\ref{subsec:lngs}. The geo-neutrino signal obtained in this analysis with the fixed Th/U ratio corresponds to overall oscillated fluxes from the U and Th decay chains of $\phi({\rm U}) = (2.4 \pm 0.7) \times 10^6$\,cm$^{-2}$s$^{-1}$ and $\phi({\rm Th}) = (2.0 \pm 0.6) \times 10^6$\,cm$^{-2}$s$^{-1}$. From the $\ln{\cal{L}}$ profile, the null geo-neutrino hypothesis has a probability of $6 \times 10^{-6}$.
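A minimal sketch of such an extended unbinned fit, with placeholder pdf callables standing in for the Monte Carlo spectra, is given below.
\begin{verbatim}
# Sketch: extended unbinned maximum likelihood with free geo and
# reactor normalizations; x holds the candidates' light yields.
import numpy as np
from scipy.optimize import minimize

def nll(norms, x, pdf_geo, pdf_reac):
    n_geo, n_reac = norms
    density = n_geo * pdf_geo(x) + n_reac * pdf_reac(x)
    return (n_geo + n_reac) - np.sum(np.log(density))

# res = minimize(nll, x0=[15.0, 30.0], args=(x, pdf_geo, pdf_reac),
#                bounds=[(0.0, None), (0.0, None)])
\end{verbatim}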
\subsection{Geological implications of geo-neutrino measurements}
\label{subsec:geoImpl}
The KamLAND and Borexino results on geo-neutrinos have an impact on several aspects of our knowledge of the Earth's composition: the radiogenic heat, the Th/U ratio, the radio-nuclides in the mantle, and the existence of U in and around the Earth's core.
{\it {\underline{Comparison of measured geo-neutrino signal with the expectations and the Earth radiogenic heat.}}}
Both the Borexino and KamLAND geo-neutrino results are in fairly good agreement with the geological expectations. This is of extreme importance for this new interdisciplinary field, confirming both the validity of the geological models and the fact that a new tool to study the deep Earth has arisen. This holds even though the experimental results do not yet have sufficient precision (being mostly limited by statistics) to discriminate among different geological models.
It is not straightforward to extract the radiogenic heat power from U and Th decays from the measured geo-neutrino flux. As a matter of fact, the measured geo-neutrino signal depends not only on the absolute mass abundances of U and Th, but also on their distribution throughout the Earth. Therefore, the radiogenic heat power extracted from a measured $S_{geo}$ is model dependent. In Fig.~\ref{Fig:GeovsBSE} the expected geo-neutrino signals in Borexino (left) and KamLAND (right) are shown as functions of the produced radiogenic heat. The red and blue lines correspond to the high and low models described in Sec.~\ref{subsec:mantle}, where the error in the prediction of the crustal signal is taken into account, as well as different U and Th distributions through the mantle, as illustrated in Fig.~\ref{Fig:MantleSignal}. The three filled areas in Fig.~\ref{Fig:GeovsBSE} represent the three classes of BSE models: cosmochemical, geochemical, and geodynamical, according to the classification of \v{S}r\'amek et al., 2012~\cite{sramek}. The horizontal lines represent the 2013 results of Borexino~\cite{BX2013} and KamLAND~\cite{Gando}, respectively. Borexino is compatible with all BSE models within 1$\sigma$, while KamLAND is compatible within 2$\sigma$.
\begin{figure}[t]
\begin{center}
\begin{minipage}[t]{0.49 \textwidth}
\centering{\epsfig{file=Borexino_13.pdf,scale=0.37}}
\end{minipage}
\begin{minipage}[t]{0.49 \textwidth}
\centering{\epsfig{file=KamLAND_13.pdf,scale=0.37}}
\end{minipage}
\begin{minipage}[t]{16.5 cm}
\caption{The expected geo-neutrino signal in Borexino (left) and in KamLAND (right) from U and Th as a function of the radiogenic heat released in the radioactive decays of U and Th. The Borexino and KamLAND results from~\cite{BX2013} and \cite{Gando} are indicated by the horizontal lines. The three filled regions delimit, from left to right, the cosmochemical, geochemical, and geodynamical BSE models~\cite{sramek}, respectively.
\label{Fig:GeovsBSE}}
\end{minipage}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\centering{\epsfig{file=KLUTh.pdf,scale=0.35}}
\begin{minipage}[t]{16.5 cm}
\caption{Confidence level contours for the observed number of geo-neutrino events in KamLAND, taken from~\cite{Gando}. The small shaded region represents the prediction of the reference model of~\cite{enomoto}. The vertical dashed line represents the value of $(N_{\rm U} - N_{{\rm Th}}) / (N_{\rm U} + N_{{\rm Th}})$ expected for a Th/U mass ratio of 3.9.
\label{Fig:KLUTh}}
\end{minipage}
\end{center}
\end{figure}
\begin{figure}[t]
\begin{center}
\centering{\epsfig{file=UThfree_arrows.pdf,scale=0.5}}
\centering{\epsfig{file=NUTh.pdf,scale=0.6}}
\begin{minipage}[t]{16.5 cm}
\caption{Top: The light-yield spectrum of 46 Borexino antineutrino candidates as in Fig.~\ref{Fig:BXSpc2013}, taken from~\cite{BX2013}. The difference is that the Th and U contributions to geo-neutrino signals are left as free fit parameters and are shown in cyan and blue, respectively. Bottom: The 68.27\%, 95.45\%, and 99.73\% contour plots for the Th and U geo-neutrino signal expressed in TNU, from~\cite{BX2013}. The black dot shows the best fit point.
\label{Fig:BXUThfree}}
\end{minipage}
\end{center}
\end{figure}
{\it {\underline{Th/U ratio}}}
In order to study the individual contributions of U and Th to the total geo-neutrino signal, an unbinned maximum likelihood fit similar to the ones described above can be performed. The only difference is that the Th/U ratio is not fixed at the chondritic mass ratio; instead, both contributions are left as free individual fit components.
In Fig.~\ref{Fig:KLUTh} the result of such a KamLAND analysis is shown~\cite{Gando}. Here we show the confidence-level contours for the sum $N_{\rm U} + N_{{\rm Th}}$ and the asymmetry factor $(N_{\rm U} - N_{{\rm Th}}) / (N_{\rm U} + N_{{\rm Th}})$. The vertical dashed line corresponds to the chondritic Th/U mass ratio. The shaded region shows the prediction of the reference model~\cite{enomoto}. The fit determines an upper limit on the Th/U mass ratio of 19 at 90\% C.L.
In Fig.~\ref{Fig:BXUThfree} the result of a similar Borexino 2013 analysis~\cite{BX2013} is shown. The best-fit values are $N_{{\rm Th}} = 3.9 \pm 4.7$\,events and $N_{\rm U} = 9.8 \pm 7.2$\,events, corresponding to signals $S_{{\rm Th}} = 10.6 \pm 12.7$\,TNU and $S_{\rm U} = 26.5 \pm 19.5$\,TNU, and to oscillated total fluxes of $\phi({\rm Th}) = (2.6 \pm 3.1) \times 10^6$\,cm$^{-2}$s$^{-1}$ and $\phi({\rm U}) = (2.1 \pm 1.5) \times 10^6$\,cm$^{-2}$s$^{-1}$. Although these data are compatible within 1$\sigma$ with either a pure $^{238}$U signal (with $S_{{\rm Th}}$ = 0) or a pure $^{232}$Th signal (with $S_{{\rm U}}$ = 0), the best fit of the Th/U ratio is in very good agreement with the chondritic value.
{\it {\underline{Mantle geo-neutrinos}}}
The measured geo-neutrino signal has its component from the crust and mantle, while no contribution is expected from the core, as discussed in Sec.~\ref{Sec:GeoModels}. Therefore, by subtracting the relatively well known crustal contribution from the total measured signal, it is, in principle, possible to extract the mantle signal.
Borexino alone has inferred the mantle contribution to be $15.4 \pm 12.3$\,TNU by subtracting the crustal contribution of $23.4 \pm 2.8$\,TNU from its measured signal~\cite{BX2013}. By assuming a homogeneous mantle, and thus the same mantle geo-neutrino signal everywhere on the Earth's surface, the Borexino result and the KamLAND result from Gando et al., 2011~\cite{gando2011} were combined in Bellini et al., 2013~\cite{BX2013}, and a mantle geo-neutrino signal of $14.1 \pm 8.1$\,TNU was extracted. Using the KamLAND 2013 data and subtracting the crust contribution determined by the reference model of Enomoto et al., 2007~\cite{enomoto}, under the hypothesis that U and Th are uniformly distributed throughout the mantle, the total mantle radiogenic heat production is calculated to be $11.2^{+7.9}_{-5.1}$\,TW~\cite{Gando}.
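Explicitly, treating the two uncertainties as independent, the Borexino mantle estimate follows from a simple subtraction with the errors combined in quadrature:
\begin{equation}
S_{mantle} = (38.8 - 23.4)\,{\rm TNU} = 15.4\,{\rm TNU}, \qquad
\sigma_{mantle} = \sqrt{12.0^2 + 2.8^2}\,{\rm TNU} \simeq 12.3\,{\rm TNU}.
\end{equation}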
{\it {\underline{Georeactor}}}
The hypothesis of a georeactor present in the very innermost core of the Earth was described in Sec.~\ref{Sec:GeoModels}. Both Borexino and KamLAND experiments have been able to test this hypothesis based on their geo-neutrino data. No positive evidence of its existence has been found.
Borexino sets an upper limit of 4.5\,TW at 95\% C.L. on the power of a georeactor with the composition $^{235}$U : $^{238}$U $\simeq$ 0.76 : 0.23~\cite{herndon1}. The analysis is performed by adding a Monte Carlo spectrum corresponding to this hypothetical georeactor to yet another maximum likelihood fit and by constraining the signal from the nuclear power plants to the expected value of $33.3 \pm 2.4$\,events.
The KamLAND 2013 data have also been used to search for a georeactor, assuming a fission ratio $^{235}$U : $^{238}$U $\simeq 0.75:0.25$~\cite{Gando}. In particular, using a constraint on the oscillation parameters, that is, on the antineutrinos from nuclear power plants, and leaving free the contributions from geo-neutrinos and from the georeactor, an upper limit on the power of the latter is determined to be 3.7\,TW at 95\% C.L.
\section{Future prospects}
\label{Sec:Future}
Geo-neutrinos have been measured with high statistical significance by two different experiments placed in two different geological settings on two sides of the globe. Both experiments have seen a signal in agreement with the geological expectations. Unfortunately, the existing results are not sufficient to firmly discriminate among several geochemical and geophysical models. The first combined analyses have appeared and have shown the importance of multi-site measurements. The first indications of a measurement of geo-neutrinos from the mantle, the indicative exclusion of the fully radiogenic Earth model, the exclusion of a georeactor in the Earth's core with a power greater than a few TW, and the indication of a chondritic Th/U ratio are examples of the first geologically important results of this new interdisciplinary field of Neutrino Geoscience. All of these measurements need further confirmation with much higher statistical significance. This means that future projects having geo-neutrinos among their scientific goals should be detectors even bigger than the current ones, at the scale of several up to a few tens of kton. Another key point is the selection of the geological setting for a future experiment.
The most exciting prospect is the measurement of the geo-neutrino signal from the mantle, as discussed in Secs.~\ref{Sec:GeoModels} and \ref{Sec:GeoSignal}. Ideally, such an experiment should be placed where the crustal contribution is minimal and easily estimated. This is the case of the ocean floor, where the signal is expected to be largely dominated by mantle geo-neutrinos.
Another interesting point is to test whether the mantle is homogeneous or not. Recent seismic-tomographic measurements have shown that large inhomogeneities~\cite{wang, wen} do exist below Africa and below the Pacific. However, it is not clear whether they are also related to a compositional inhomogeneity. Geo-neutrinos are currently a unique tool able to provide information on this problem, if measured at several locations distributed around the globe~\cite{sramek}.
In this Section we briefly describe the main future projects having geo-neutrinos among their scientific goals.
\subsection{SNO+ (Sudbury Neutrino Observatory+)}
\label{subsec:SNO}
SNO+~\cite{maneira,chen} is a revised version of the SNO detector, which had an important role in resolving the Solar Neutrino Problem, and thus in studying neutrino oscillations. Its structure is based on an active volume, a 12\,m diameter acrylic sphere, which in SNO was filled with $\sim$1000\,tons of heavy water, replaced in SNO+ with $\sim$780\,tons of liquid scintillator. The active volume is shielded by ultra-pure water inserted between the acrylic sphere and the 9000 readout photomultipliers viewing the scintillator, and between the geodesic structure supporting the PMTs and the walls of a cylindrical water tank. The shielding water amounts in total to $\sim$7000\,tons; it suppresses or reduces the radiation emitted by the rocks and by the construction materials of the detector itself.
The liquid scintillator is CH$_2$-based linear alkylbenzene with the addition of the PPO fluor. The energy resolution is expected to be $\sim$5\% at 1\,MeV and 3.5\% at $\sim$3\,MeV. Because the scintillator density is 0.86 relative to water, a rope system will be installed to hold down the acrylic vessel, replacing the old support ropes. The scintillator will be purified to reach a good radiopurity (the goal is $\sim$$10^{-17}$\,g/g), but because the radiopurity of the vessel is not very high, the definition of a fiducial volume smaller than the scintillator sphere seems necessary. The detector is installed underground with an overburden of 6080\,m water equivalent. We do not discuss here the loading of the scintillator with a $0\nu \beta \beta$ decay candidate such as tellurium for the SNO+ double beta decay study. In Fig.~\ref{Fig:SNO+} a sketch of the SNO+ layout is shown~\cite{maneira}.
\begin{figure}[h]
\begin{center}
\centering{\epsfig{file=SNO+.pdf,scale=1.2}}
\begin{minipage}[t]{16.5 cm}
\caption{Sketch of the SNO+ detector~\cite{maneira}.
\label{Fig:SNO+}}
\end{minipage}
\end{center}
\end{figure}
The geo-neutrino capability of SNO+ can be summarized as follows: the detected geo-neutrino rate will be in principle about 20 events per year, efficiencies included and possible fiducial-volume effects excluded. The reactor antineutrino flux at the Sudbury site is limited, $\sim$44.3\,TNU (expected geo-neutrino rate: $51 \pm 10$\,TNU), to be compared with Kamioka, $\sim$152\,TNU ($34 \pm 14$\,TNU expected geo-neutrino rate), and with Gran Sasso, $\sim$23.1\,TNU ($41 \pm 8$\,TNU)~\cite{fiorentini2007}.
The expected geo-neutrino signal in SNO+ is dominated by the contributions from the crust and the lithospheric mantle of the Canadian Shield. According to Huang et al.~\cite{huang}, the lithosphere produces a total signal $S_{LS}({\rm U+Th})$ = 36.7$^{+7.5}_{-6.3}$\,TNU, which comes mainly from the U and Th of the local Precambrian rocks and Paleozoic sediments. \v{S}r\'amek et al.~\cite{sramek} recently proposed an exhaustive analysis of geo-neutrino rates from different mantle structures based on three classes of BSE models (i.e., cosmochemical, geochemical, and geodynamical). A present-day depleted mantle produces in Sudbury the minimum geo-neutrino signal, corresponding to 2.3 - 3.7\,TNU. This contribution is less than 10\% of the crustal geo-neutrino signal and is unlikely to be measurable in SNO+. On the other hand, a mantle model having a high Urey ratio (e.g., 0.6 - 0.8) could be an intense source of geo-neutrinos, reaching rates of 11.2 - 16.1\,TNU in Sudbury. Such a high contribution would not be hidden even in SNO+.
On these grounds, the regional contribution to the geo-neutrino flux and its uncertainties need to be determined, including the geological, geochemical, and geophysical information on the Canadian Shield. On the basis of the refined reference crustal model~\cite{huang}, the U and Th in the crust of the $6^{\circ} \times 4^{\circ}$ region surrounding the detector give a signal of $S_{LOC}({\rm U+Th})$ = $18.9^{+3.5}_{-3.3}$\,TNU, which is more than that expected from the whole mantle based on a cosmochemical BSE model. The main reason for such a high contribution is the crustal thickness, ranging between 44.2\,km and 41.4\,km: this reservoir is approximately 40\% thicker than the crust surrounding the Gran Sasso and Kamioka sites. Moreover, excluding a thin layer ($<$3\,km) of Paleozoic sedimentary rocks southward of the Grenville Front, a large portion of the crystalline basement (e.g., the Grenville Province and the Yavapai and Mazatzal Terranes) contains significant quantities of felsic rocks, which are enriched in U and Th compared to most other lithologies. A possible enhancement of crustal geo-neutrinos is due to a high concentration of U and Th in the Sudbury basin, as reported in~\cite{perry} on the basis of geothermal arguments. In particular, these authors focus on a heat-flux anomaly of 43 - 60\,mW m$^{-2}$, significantly higher than the average for the Canadian Shield (42\,mW m$^{-2}$). Assuming a homogeneous Moho heat flux throughout the Canadian Shield~\cite{mareschal}, the higher heat flux measured in this area can be explained by a local enrichment of crustal radio-elements within a 50\,km radius from SNOLAB, which could strongly affect the geo-neutrino signal expected in SNO+. In this framework, a detailed calculation of the local geo-neutrino flux, relying on a direct summation of the individual contributions of all geological units, is desirable and was partially anticipated by Huang et al.~\cite{huang}.
SNO+ is expected to start data taking in 2014--2015.
\subsection{LENA (Low Energy Neutrino Astronomy)}
\label{subsec:LENA}
LENA is an ambitious proposal for a large, 50\,kton liquid scintillator detector~\cite{laguna} having the geo-neutrino measurement among its scientific goals. The project is part of the European LAGUNA design study and identifies itself as a multipurpose neutrino observatory. The combination of an unprecedented volume with a radio-purity comparable to that reached by Borexino would provide a unique tool for a wide variety of measurements, both in neutrino physics and in testing possible physics beyond the Standard Model.
The detector would be a vertical cylinder, 100\,m in height and 26\,m in diameter, containing the target. The liquid scintillator would be separated from a 2\,m thick non-scintillating buffer liquid by a thin nylon vessel. The scintillation light would be viewed by 30,000 to 50,000\,PMTs mounted on a steel cylinder containing the organic liquids. A muon-veto system would surround the steel cylinder: a 100\,kton water Cherenkov detector equipped with about 3000\,PMTs, complemented by an array of plastic scintillator panels necessary for the reconstruction of muon tracks in such a huge volume. The most discussed possible future locations are Pyh\"asalmi in Finland and Fr\'ejus in France.
Considering the $2^{\circ} \times 2^{\circ}$ crustal map and the mean chemical compositions of the main crustal layers, and using BSE models for the estimation of the mantle contribution, the expected geo-neutrino signal at Pyh\"asalmi is $51.3 \pm 7.1$\,TNU, while for the Fr\'ejus site it is $41.4 \pm 5.6$\,TNU. In both cases, a detailed analysis of the contribution from the local crust surrounding the detector would be important in order to further constrain the expected signal and to better interpret a possible measured signal.
LENA would detect about 1000 geo-neutrino events per year. The main antineutrino background, namely that from nuclear power plants, would be several times smaller at the Finnish location, making it preferable from the point of view of the geo-neutrino measurement. In the geo-neutrino energy window, the expected reactor antineutrino signal would be about 20 to 37\,TNU (depending on the construction of several new power plants in Finland), while it would be about 145\,TNU at Fr\'ejus, based on the thermal power of nuclear power plants as reported in 2009.
Assuming $2.9 \times 10^{33}$ target protons, a light yield of 400\,photons/MeV, the chondritic Th/U mass ratio of 3.9, and the Borexino radiopurity (no-background approximation), the geo-neutrino flux would be determined with a few-percent precision within the first few years, an order of magnitude improvement over the current experimental results. Thanks to the high statistics, LENA would be able to measure the Th/U ratio of the local geo-neutrino signal with unprecedented precision, reaching, after 3 years, a 10-11\% precision at Pyh\"asalmi and a 20\% precision at Fr\'ejus.
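The statistical part of this claim can be checked with simple counting; the rate below is the approximate one quoted above, and backgrounds are neglected.
\begin{verbatim}
# Sketch: statistical precision ~ 1/sqrt(N) for an assumed rate of
# ~1000 detected geo-neutrinos per year, background neglected.
import numpy as np

rate_per_year = 1000.0
for years in (1, 3, 10):
    N = rate_per_year * years
    print(years, "yr:", 100.0 / np.sqrt(N), "% precision")
# ~3.2% after 1 yr and ~1.8% after 3 yr, consistent with the
# few-percent claim above.
\end{verbatim}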
\subsection{Hanohano}
\label{subsec:Hanohano}
Hanohano is a proposed 10\,kton liquid scintillator detector designed to be deployed in the deep ocean at 3 to 5\,km depth~\cite{learned}. A tank 26\,m in diameter and 45\,m tall would be placed vertically on a 112\,m long barge with a 32\,m beam. The proposal aims to measure geo-neutrinos and has the potential to measure the neutrino mass hierarchy, given that the mixing angle $\theta_{13}$ is relatively large (as was recently proven~\cite{daya}).
Since the oceanic crust is thin and depleted in U and Th with respect to the continental crust (on which all other existing and proposed projects able to measure geo-neutrinos are placed), the dominant contribution, about 75\% of the measured geo-neutrino signal, would come from the mantle. Thanks to its large volume, the expected rate would be about 100 detected geo-neutrinos per year. Since the ocean sites are far away from nuclear power plants, only about 12 reactor antineutrinos per year would be detected, so the signal-to-background ratio would be high. This in turn would make it possible to measure the Th/U ratio of the mantle signal at the level of 10\% precision within a few years. In addition, this detector should be portable, in the sense that after operation at one site it could be brought to the surface and transported to a new site, where it would again be lowered to the ocean floor. As extensively supported by recent papers~\cite{Sramek2013, Dye2012, jocher}, a movable deep-ocean detector could be the next challenge for measuring anthropic and terrestrial antineutrinos, especially for testing the lateral homogeneity of the mantle composition and the thermochemical evolution of the Earth.
\section*{Acknowledgments}
The authors wish to thank Ved Lekic for discussions on seismology and for his production of a great figure, Kristi Engel for the production of figures and support on edits, Kunio Inoue for discussions on KamLAND, and Mark Chen on the SNO+ project. We express our gratitude for useful discussions to G. Fiorentini, Y.~Huang, and O. \v{S}r\'amek. In addition, the authors acknowledge the Istituto Nazionale di Fisica Nucleare and the National Science Foundation (i.e., NSF EAR0855791, EAR-1067983, and EAR1321229) for their support. Finally, the authors are grateful to the Borexino and KamLAND collaborations, which kindly allowed the use of figures from their documents and publications in this work.
1310.3622 | \section{Introduction}
The interplay between mechanical and electronic effects in carbon nanostructures has been studied for a long time (e.g., \cite{Ando2002,GuineaNatPhys2010,castroRMP,Pereira1,Vozmediano,deJuanPRL2012,Asgari,r2,Peeters1,Peeters2,Peeters3}). The mechanics in those studies invariably enters within the context of continuum elasticity. One of the most interesting predictions of the theory is the creation of large, and roughly uniform pseudo-magnetic fields and deformation potentials under strain conformations having a three-fold symmetry \cite{GuineaNatPhys2010}. Those theoretical predictions have been successfully verified experimentally \cite{Crommie,Gomes2012}.
Nevertheless, different theoretical approaches to strain engineering in graphene possess subtle points and apparent discrepancies \cite{deJuanPRL2012,Kitt2012}, which may hinder progress in the field. This motivated us to develop an approach \cite{us} which does not suffer from the limitations inherent to continuum elasticity. The new formulation accommodates numerical verifications to determine when arbitrary mechanical deformations preserve sublattice symmetry. Contrary to the conclusions of Ref.~\cite{Kitt2012}, with this formulation one can also demonstrate explicitly the absence of $K$-point dependent gauge fields in a first-order theory (see Refs.~\cite{us} and \cite{arxiv,Kitt2} as well). The formalism takes as its only direct input {\em raw} atomistic data, such as the data obtained from molecular dynamics runs. The goal of this paper is to present the method, making the derivation manifest. We illustrate the formalism by computing the gauge fields and the density of states in a graphene membrane under central load.
\begin{figure}[tb]
\includegraphics[width=0.45\textwidth]{Fig1v2.pdf}
\caption{Gauge fields from first-order continuum elasticity are defined regardless of spatial scale. A unit cell is shown in (b) and (c) for comparison. In this work, we define the pseudospin Hamiltonian for each unit cell using space-modulated, low-energy expansions of a tight-binding Hamiltonian in reciprocal space. As a result, in our approach the gauge fields will become discrete.}\label{fig:F1}
\end{figure}
\subsection{Motivation}
The theory of strain-engineered electronic effects in graphene is semi-classical. One seeks to determine the effects of mechanical strain across a graphene membrane in terms of spatially-modulated pseudospin Hamiltonians $\mathcal{H}_{ps}$; these pseudospin Hamiltonians $\mathcal{H}_{ps}(\mathbf{q})$ are low-energy expansions of a Hamiltonian formally defined in reciprocal space. Under ``long-range'' mechanical strain (extending over many unit cells and preserving sublattice symmetry \cite{Ando2002,GuineaNatPhys2010,castroRMP}), $\mathcal{H}_{ps}$ also becomes a continuous and slowly-varying local function of strain-derived gauges, so that $\mathcal{H}_{ps}\to\mathcal{H}_{ps}(\mathbf{q},\mathbf{r})$. Within this first-order approach, the salient effect of strain is a local shift of the $K$ and $K'$ points in opposite directions, similar to the shift induced by a magnetic field \cite{GuineaNatPhys2010,castroRMP}. In the usual formulation of the theory \cite{Ando2002,GuineaNatPhys2010,castroRMP,Pereira1,Vozmediano,deJuanPRL2012}, this dependence on position leads to {\em continuous} strain-induced fields $\mathbf{B}_s(\mathbf{r})$ and $E_s(\mathbf{r})$. Such continuous fields are customarily superimposed on a discrete lattice, as in Figure \ref{fig:F1}~\cite{GuineasSSC2012}.
When expressed in terms of continuous functions, a pseudospin Hamiltonian $\mathcal{H}_{ps}$ is defined down to arbitrarily small spatial scales and spans zero area. In reality, however, the pseudospin Hamiltonian can only be defined per unit cell, so it should take a single value over an area of order $\sim a_0^2$ ($a_0$ is the lattice constant in the absence of strain).
This observation already tells us that the scale of the mechanical deformation with respect to a given unit cell is inherently lost in a description based on a continuum model. For this reason, it is important to develop an approach which is directly related to the atomic lattice, as opposed to its idealization as a continuum medium. In the present paper we show that in following this program one gains a deeper understanding of the interrelation between the mechanics and the electronic structure of graphene. Indeed, within this approach we are able to quantitatively analyze whether the proper phase conjugation of the pseudospin Hamiltonian holds at each unit cell. The approach presented here gives (for the first time) the possibility to check explicitly, for any given graphene membrane under arbitrary strain, whether the mechanical strain varies smoothly on the scale of interatomic distances. Consistency of the present formalism also leads to the conclusion that in such a scenario strain will not break the sublattice symmetry, but the Dirac cones at the $K$ and $K'$ points will be shifted in opposite directions \cite{GuineaNatPhys2010,castroRMP}.
Clearly, for a reciprocal space to exist one has to preserve crystal symmetry, so that when the crystal symmetry is strongly perturbed, the reciprocal-space representation starts to lose physical meaning, presenting a limitation of the semiclassical theory. The lack of sublattice symmetry --observed on actual unit cells in this formulation beyond first-order continuum elasticity-- may not allow the proper phase conjugation of pseudospin Hamiltonians at unit cells undergoing very large mechanical deformations. Nevertheless, this check cannot proceed --and hence has never been discussed-- within a description of the theory on a continuum medium, because by construction a continuum makes no direct reference to actual atoms.
As is well known, it is also possible to determine the electronic properties directly from a tight-binding Hamiltonian $\mathcal{H}$ in real space, without resorting to the semiclassical approximation and without imposing an {\em a priori} sublattice symmetry. That is, while the semiclassical $\mathcal{H}_{ps}(\mathbf{q},\mathbf{r})$ is defined in reciprocal space (thus assuming some reasonable preservation of crystalline order), the tight-binding Hamiltonian $\mathcal{H}$ in real space is more general and can be used for membranes with an arbitrary spatial distribution and magnitude of the strain.
In addition, contrary to the claim of Ref.~\cite{Kitt2012}, the purported $K$-point dependent gauge fields do not arise in a first-order formalism \cite{us,arxiv}. What we find instead is a shift of the $K$ and $K'$ points in opposite directions upon strain~\cite{GuineaNatPhys2010}.
\section{Theory}
\subsection{Sublattice symmetry}
The continuum theories of strain engineering in graphene, being semiclassical in nature, require sublattice symmetry to hold \cite{Ando2002,GuineaNatPhys2010}.
On the other hand, no measure exists in the continuum theories \cite{Ando2002,GuineaNatPhys2010,castroRMP,Pereira1,Vozmediano,deJuanPRL2012} to test sublattice symmetry on actual unit cells under a mechanical deformation. For this reason, sublattice symmetry is an implicit assumption embedded in the continuum approach.
\begin{figure}[h!]
\includegraphics[width=0.45\textwidth]{Fig2v2.pdf}
\caption{(a) Definitions of geometrical parameters in a unit cell. (b) Sublattice symmetry relates to how {\em pairs} of nearest-neighbor vectors (either in thick, or dashed lines) are modified due to strain. These vectors change by $\Delta \boldsymbol{\tau}_j$ and $\Delta \boldsymbol{\tau}_j'$ upon strain ($j=1,2$). Relative displacements of neighboring atoms lead to modified lattice vectors; the choice of renormalized lattice vectors will be unique {\em only} to the extent to which sublattice symmetry is preserved: $\Delta \boldsymbol{\tau}_j'\simeq \Delta \boldsymbol{\tau}_j$.}\label{fig:F2}
\end{figure}
To address the problem beyond the continuum approach, let us start by considering the unit cell before (Fig.~\ref{fig:F2}(a)) and after arbitrary strain has been applied (Fig.~\ref{fig:F2}(b)). For easy comparison of our results, we make the zigzag direction parallel to the $x-$axis, which is the choice made in Refs.~\cite{GuineaNatPhys2010} and \cite{Vozmediano}. (Arbitrary choices of relative orientation are clearly possible; in Ref.~\cite{us} we chose the zigzag direction to be parallel to the y-axis.)
The lattice vectors before the deformation are given by (Fig.~\ref{fig:F2}(a)):
\begin{equation}\label{eq:defa}
\mathbf{a}_1=\left(1/2,\sqrt{3}/2\right)a_0,\text{ }\mathbf{a}_2=\left(-{1}/{2},{\sqrt{3}}/{2}\right)a_0,
\end{equation}
\begin{equation}\label{eq:deft}
\boldsymbol{\tau}_1=\left(\frac{\sqrt{3}}{2},\frac{1}{2}\right)\frac{a_0}{\sqrt{3}},\text{ } \boldsymbol{\tau}_2=\left(-\frac{\sqrt{3}}{2},\frac{1}{2}\right)\frac{a_0}{\sqrt{3}},\text{ }
\boldsymbol{\tau}_3=\left(0,-1\right)\frac{a_0}{\sqrt{3}}.
\end{equation}
(Note that $\mathbf{a}_1=\boldsymbol{\tau}_1-\boldsymbol{\tau}_3$, and
$\mathbf{a}_2=\boldsymbol{\tau}_2-\boldsymbol{\tau}_3$.)
After mechanical strain is applied (Fig.~\ref{fig:F2}(b)), each local pseudospin Hamiltonian will only have physical meaning at the unit cells where:
\begin{equation}\label{eq:applicabilitycondition}
\Delta \boldsymbol{\tau}_j'\simeq\Delta \boldsymbol{\tau}_j \text{ (j=1,2)}.
\end{equation}
Condition (\ref{eq:applicabilitycondition}) can be re-expressed in terms of changes of angles $\Delta \alpha_j$ or lengths $\Delta L_j$ for pairs of nearest-neighbor vectors $\boldsymbol{\tau}_j$ and $\boldsymbol{\tau}_j'$
[$j=1$ is shown in thick solid and $j=2$ in thin dashed lines in Fig.~\ref{fig:F2}(b)]:
\begin{equation}\label{eq:beta}
\small(\boldsymbol{\tau}_j+\Delta \boldsymbol{\tau}_j)\cdot(\boldsymbol{\tau}_j+\Delta\boldsymbol{\tau}'_j)=
|\boldsymbol{\tau}_j+\Delta\boldsymbol{\tau}_j||\boldsymbol{\tau}_j+\Delta\boldsymbol{\tau}'_j|\cos(\Delta\alpha_j),
\end{equation}
\begin{equation}\label{eq:sign}
\small\text{sgn}(\Delta \alpha_j)=\text{sgn}\left([(\boldsymbol{\tau}_j+\Delta\boldsymbol{\tau}_j)
\times(\boldsymbol{\tau}_j+\Delta\boldsymbol{\tau}'_j)]\cdot \hat{k}\right),\end{equation}
where $\hat{k}$ is a unit vector along the $z$-axis, $\mathrm{sgn}$ is the sign function ($\mathrm{sgn}(a)=+1$ if $a\ge 0$ and $\mathrm{sgn}(a)=-1$ if $a <0$), and:
\begin{equation}\label{eq:L}
\small
\Delta L_j\equiv |\boldsymbol{\tau}_j+\Delta\boldsymbol{\tau}_j|-|\boldsymbol{\tau}_j+\Delta\boldsymbol{\tau}'_j|.
\end{equation}
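For readers wishing to evaluate Eqns.~(\ref{eq:beta})--(\ref{eq:L}) on atomistic data, we include a minimal sketch in Python (assuming NumPy; the function name and array layout are our own choices, not part of the formalism):
\begin{verbatim}
import numpy as np

def sublattice_measures(tau, dtau, dtau_p):
    """Delta alpha_j and Delta L_j for the pairs j = 1, 2.

    tau, dtau, dtau_p: (2, 2) arrays holding the in-plane components
    of tau_j, Delta tau_j and Delta tau'_j in rows j = 1, 2.
    """
    v, vp = tau + dtau, tau + dtau_p
    nv = np.linalg.norm(v, axis=1)
    nvp = np.linalg.norm(vp, axis=1)
    cosang = np.sum(v*vp, axis=1)/(nv*nvp)
    dalpha = np.arccos(np.clip(cosang, -1.0, 1.0))  # |Delta alpha_j|
    cross = v[:, 0]*vp[:, 1] - v[:, 1]*vp[:, 0]     # z-component of v x v'
    sign = np.where(cross >= 0.0, 1.0, -1.0)        # sgn(Delta alpha_j)
    dL = nv - nvp                                   # Delta L_j
    return sign*dalpha, dL
\end{verbatim}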
Even though in problems of practical interest the deviations from sublattice symmetry tend to be small \cite{us}, it is important to bear in mind that the sublattice symmetry {\it does not hold a priori} \cite{GuineaNatPhys2010}. It is therefore important to have a method to quantify such deviations and to check whether the sublattice symmetry holds for the problem at hand. Forcing the sublattice symmetry to hold from the start amounts to introducing an artificial mechanical constraint on the membrane which is not justified on physical grounds~\cite{Ericksen}. For this reason the method we propose is discrete and directly related to the actual lattice; it does not resort to the approximation of the membrane as a continuum medium \cite{Ando2002,GuineaNatPhys2010,castroRMP,Pereira1,Vozmediano,deJuanPRL2012,arxiv,Kitt2}. Being expressed in terms of the actual atomic displacements, our formalism holds beyond the linear elastic regime, where first-order continuum elasticity may fail. The continuum formalism is recovered as a special case of the one presented here in the limit $|\Delta\boldsymbol{\tau}_j|/a_0\to 0$.
\subsection{Renormalization of the lattice and reciprocal lattice vectors}\label{sec:3}
In the absence of mechanical strain, the reciprocal lattice vectors $\mathbf{b}_1$ and $\mathbf{b}_2$ are obtained by standard methods: We define $\mathcal{A}\equiv(\mathbf{a}_1^T,\mathbf{a}_2^T)$, with $\mathbf{a}_1$ and $\mathbf{a}_2$ given in Eq.~(\ref{eq:defa}) and shown in Fig.~\ref{fig:F2}(a). The reciprocal lattice vectors $\mathcal{B}\equiv(\mathbf{b}_1^T,\mathbf{b}_2^T)$ are related to the lattice vectors by \cite{MartinBook}:
\begin{equation}\label{eq:realreciprocal}
\mathcal{B}^T=2\pi\mathcal{A}^{-1}.
\end{equation}
With the choice we made for $\mathbf{a}_1$ and $\mathbf{a}_2$ we get:
\begin{equation}
\mathbf{b}_1=\left(1,\frac{1}{\sqrt{3}}\right)\frac{2\pi}{a_0} \text{, and }
\mathbf{b}_2=\left(-1,\frac{1}{\sqrt{3}}\right)\frac{2\pi}{a_0}.
\end{equation}
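As a numerical illustration (a minimal Python/NumPy sketch of ours, not part of the formalism), one can verify Eqn.~(\ref{eq:realreciprocal}) and the expressions above:
\begin{verbatim}
import numpy as np

a0 = 1.0
A = a0*np.column_stack(([0.5, np.sqrt(3)/2], [-0.5, np.sqrt(3)/2]))
B = 2*np.pi*np.linalg.inv(A).T                  # B^T = 2 pi A^{-1}
b1, b2 = B[:, 0], B[:, 1]

assert np.allclose(b1, (2*np.pi/a0)*np.array([ 1.0, 1/np.sqrt(3)]))
assert np.allclose(b2, (2*np.pi/a0)*np.array([-1.0, 1/np.sqrt(3)]))
assert np.allclose(A.T @ B, 2*np.pi*np.eye(2))  # a_i . b_j = 2 pi delta_ij
\end{verbatim}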
As seen in Fig.~\ref{fig:F3}(a) the $K-$points on the first Brillouin zone are defined by:
\begin{equation}
\mathbf{K}_1=\frac{2\mathbf{b}_1+\mathbf{b}_2}{3}, \text{ }\mathbf{K}_2=\frac{\mathbf{b}_1-\mathbf{b}_2}{3} \text{, and } \mathbf{K}_3=-\frac{\mathbf{b}_1+2\mathbf{b}_2}{3},
\end{equation}
and:
\begin{equation}
\mathbf{K}_4=-\mathbf{K}_1,\text{ } \mathbf{K}_5=-\mathbf{K}_2, \text{ and }\mathbf{K}_6=-\mathbf{K}_3.
\end{equation}
\begin{figure}[h!]
\includegraphics[width=0.45\textwidth]{Fig3v2.pdf}
\caption{First Brillouin zone (a) before and (b) after mechanical strain is applied. The reciprocal lattice vectors are shown,
as well as the changes of the high-symmetry points at the corners of the Brillouin zone. Note that independent $K$ points ($K$ and $K'$) move in the opposite directions. The dashed hexagon in (b) represents the boundary of the first Brillouin zone in the absence of strain.}\label{fig:F3}
\end{figure}
The relative positions between atoms change when strain is applied: $\boldsymbol{\tau}_j\to \boldsymbol{\tau}_j+\Delta\boldsymbol{\tau}_j$ ($j=1,2,3)$, and $-\boldsymbol{\tau}_j\to -\boldsymbol{\tau}_j-\Delta\boldsymbol{\tau}_j'$ ($j=1,2$).
For negligible curvature, one may assume that $\Delta\boldsymbol{\tau}_j\cdot\hat{z}=\Delta z_j\sim 0$ (and similarly for the primed displacements $\Delta \boldsymbol{\tau}_j'$). We present here a formulation of the theory that is strictly valid for in-plane strain; it also applies to membranes with negligible curvature.
We wish to find out how the reciprocal lattice vectors change, to first order in the atomic displacements, under mechanical load.
In order for reciprocal lattice vectors to make sense at each unit cell, Eqn.~(\ref{eq:applicabilitycondition}) must hold; in terms of numerical quantities, $\Delta \alpha_j$ and $\Delta L_j$ must all be close to zero. In that case we set $\Delta \boldsymbol{\tau}_j'\to \Delta \boldsymbol{\tau}_j$ for $j=1,2$, and proceed.
For this purpose we define:
\begin{equation}
\Delta \mathbf{a}_1\equiv\Delta \boldsymbol{\tau}_1-\Delta \boldsymbol{\tau}_3 \text{, and }
\Delta \mathbf{a}_2\equiv\Delta \boldsymbol{\tau}_2-\Delta \boldsymbol{\tau}_3,
\end{equation}
or in terms of (two-dimensional) components:
\begin{equation}
\Delta \mathcal{A}\equiv
\left(
\begin{matrix}
\Delta \tau_{1x}-\Delta \tau_{3x}& \Delta \tau_{2x}-\Delta \tau_{3x}\\
\Delta \tau_{1y}-\Delta \tau_{3y}& \Delta \tau_{2y}-\Delta \tau_{3y}
\end{matrix}
\right).
\end{equation}
The matrix $\mathcal{A}$ changes to $\mathcal{A}'=\mathcal{A}+\Delta\mathcal{A}$, and we must modify $\mathcal{B}$ so that Eqn.~\eqref{eq:realreciprocal} still holds under mechanical load. To first order in displacements $\mathcal{A}'^{-1}$ becomes:
\begin{equation}\label{eq:correction}
\mathcal{A}'^{-1}=(1+\mathcal{A}^{-1}\Delta\mathcal{A})^{-1}\mathcal{A}^{-1}\simeq \mathcal{A}^{-1}-\mathcal{A}^{-1}\Delta\mathcal{A}\,\mathcal{A}^{-1}.
\end{equation}
By comparing Eqn.~\eqref{eq:realreciprocal} with Eqn.~\eqref{eq:correction}, the reciprocal lattice vectors in Fig.~\ref{fig:F3}(b) must be renormalized by:
\begin{equation}
\Delta\mathcal{B}=-2\pi\left(\mathcal{A}^{-1}\Delta\mathcal{A}\mathcal{A}^{-1}\right)^T.
\end{equation}
We note that the existence of this additional term is quite evident when working directly on the atomic lattice, but it was missed in Ref.~\cite{Kitt2012}, where the theory was expressed on a continuum. Let us now calculate the shifts of the $K-$points due to strain. For example, $\mathbf{K}_2$ ($=K$ in Fig.~\ref{fig:F3}(a)) acquires an additional shift, which we find by explicit calculation to be:
$$
\Delta K=\Delta\mathbf{K}_2=-\frac{4\pi}{3a_0^2}
\left(\Delta\tau_{1x}-\Delta\tau_{2x},\frac{\Delta \tau_{1x}+\Delta \tau_{2x}-2\Delta \tau_{3x}}{\sqrt{3}}\right),
$$
and since $\mathbf{K}_5=-\mathbf{K}_2$, one immediately sees that $\Delta K'=-\Delta\mathbf{K}_2$, so that the $K$ ($\mathbf{K}_2$) and $K'$ ($-\mathbf{K}_2$) points shift in opposite directions, as expected \cite{GuineaNatPhys2010,castroRMP}.
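To make the bookkeeping above concrete, the following sketch (ours; Python with NumPy, with randomly chosen small displacements) verifies both the first-order renormalization $\Delta\mathcal{B}$ and the closed-form shift $\Delta\mathbf{K}_2$:
\begin{verbatim}
import numpy as np

a0 = 1.0
A = a0*np.column_stack(([0.5, np.sqrt(3)/2], [-0.5, np.sqrt(3)/2]))
Ainv = np.linalg.inv(A)

rng = np.random.default_rng(1)
dtau = 1e-5*rng.standard_normal((3, 2))   # small Delta tau_j, j = 1, 2, 3

dA = np.column_stack((dtau[0] - dtau[2], dtau[1] - dtau[2]))
dBt = -2*np.pi*(Ainv @ dA @ Ainv)         # rows: Delta b_1, Delta b_2

# first-order consistency with the exact change of 2 pi A^{-1}
assert np.allclose(2*np.pi*(np.linalg.inv(A + dA) - Ainv), dBt, atol=1e-8)

# shift of K_2 = (b_1 - b_2)/3 versus the closed-form expression
dK2 = (dBt[0] - dBt[1])/3
p = dtau[0, 0] - dtau[2, 0]
q = dtau[1, 0] - dtau[2, 0]
ref = -(4*np.pi/(3*a0**2))*np.array([p - q, (p + q)/np.sqrt(3)])
assert np.allclose(dK2, ref)
\end{verbatim}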
\subsection{Gauge fields}
Equation \eqref{eq:applicabilitycondition} gives the condition under which mechanical strain that varies smoothly on the scale of interatomic distances does not break the sublattice symmetry \cite{GuineaNatPhys2010}. On the other hand, arbitrary strain breaks the periodicity of the lattice to some extent, and ``short-range'' strain can be identified as occurring at unit cells where $\Delta \alpha_j$ and $\Delta L_j$ deviate from zero by significant margins.
This observation provides the rationale for expressing the gauge fields without ever leaving the atomic lattice: When $\Delta \boldsymbol{\tau}_j'\simeq\Delta \boldsymbol{\tau}_j$ at each unit cell a mechanical distortion can be considered ``long-range,'' and the first-order theory is valid. The process to lay down the gauge terms to first order is straightforward. Local gauge fields can be computed as low energy approximations to the following $2\times 2$ pseudospin Hamiltonian:
\begin{equation}\label{eq:tbh}
\left(
\begin{matrix}
E_{s,A} & g^*\\
g & E_{s,B}
\end{matrix}
\right),
\end{equation}
with $g\equiv -\sum_{j=1}^3(t+\delta t_j)e^{i(\boldsymbol{\tau}_j+\Delta\boldsymbol{\tau}_j)\cdot(\mathbf{K}_n+\Delta\mathbf{K}_n+\mathbf{q})}$, and $n=1,...,6$. We defer discussion of the diagonal terms for now.
Keeping exponents to first order we have:
$$
\small
(\boldsymbol{\tau}_j+\Delta\boldsymbol{\tau}_j)\cdot(\mathbf{K}_n+\Delta\mathbf{K}_n+\mathbf{q})\simeq
\boldsymbol{\tau}_j\cdot\mathbf{K}_n+\boldsymbol{\tau}_j\cdot\Delta\mathbf{K}_n+\Delta\boldsymbol{\tau}_j\cdot\mathbf{K}_n+
\boldsymbol{\tau}_j\cdot\mathbf{q}.
$$
The exponential is next expanded to first order:
\begin{eqnarray}
e^{i(\boldsymbol{\tau}_j\cdot\mathbf{K}_n+\boldsymbol{\tau}_j\cdot\Delta\mathbf{K}_n+\Delta\boldsymbol{\tau}_j\cdot\mathbf{K}_n+
\boldsymbol{\tau}_j\cdot\mathbf{q})}\simeq \nonumber\\
ie^{i\boldsymbol{\tau}_j\cdot\mathbf{K}_n}\boldsymbol{\tau}_j\cdot\mathbf{q}+
e^{i\boldsymbol{\tau}_j\cdot\mathbf{K}_n}[1+i(\boldsymbol{\tau}_j\cdot\Delta\mathbf{K}_n+\Delta\boldsymbol{\tau}_j\cdot\mathbf{K}_n)].
\end{eqnarray}
Carrying out explicit calculations, one can see that:
\begin{equation}\label{eq:cancellation}
\sum_{j=1}^3e^{i\boldsymbol{\tau}_j\cdot\mathbf{K}_n}[1+i(\boldsymbol{\tau}_j\cdot\Delta\mathbf{K}_n+\Delta\boldsymbol{\tau}_j\cdot\mathbf{K}_n)]=0.
\end{equation}
For example, at $K=\mathbf{K}_2$ we have:
$$
\left[1+\frac{4i\pi(\Delta \tau_{1x}+\Delta \tau_{2x}+\Delta \tau_{3x})}{9a_0}\right](1+e^{\frac{2\pi i}{3}}-e^{\frac{\pi i}{3}}),
$$
with phasors adding up to zero. Similar phasor cancelations occur at every other $K-$point.
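To see the cancellation explicitly: $\mathbf{K}_2=\left(\frac{4\pi}{3a_0},0\right)$, so that $\boldsymbol{\tau}_1\cdot\mathbf{K}_2=2\pi/3$, $\boldsymbol{\tau}_2\cdot\mathbf{K}_2=-2\pi/3$ and $\boldsymbol{\tau}_3\cdot\mathbf{K}_2=0$; hence
$$\sum_{j=1}^3e^{i\boldsymbol{\tau}_j\cdot\mathbf{K}_2}=1+e^{2\pi i/3}+e^{-2\pi i/3}=1+2\cos\left(\frac{2\pi}{3}\right)=0,$$
with $-e^{\pi i/3}=e^{-2\pi i/3}$ connecting this sum to the expression displayed above.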
The term linear in $\Delta \mathbf{K}_n$ in Eqn.~\eqref{eq:cancellation} cancels the fictitious $K-$point dependent gauge fields proposed in Ref.~\cite{Kitt2012}, which originated from the term linear in $\Delta \boldsymbol{\tau}_j$ in this same equation. This observation constitutes yet another reason for formulating the theory directly on the atomic lattice. Having demonstrated that the gauges do not depend explicitly on the $K-$points, we continue formulating the theory considering the $\mathbf{K}_2$ point only \cite{GuineaNatPhys2010,Vozmediano,castroRMP}.
Equation~\eqref{eq:tbh} takes the following form to first order at $\mathbf{K}_2$ in the low-energy regime:
\begin{eqnarray}\label{eq:ps1}
\mathcal{H}_{ps}=&
\left(
\begin{smallmatrix}
0 & t\sum_{j=1}^3ie^{-i\mathbf{K}_2\cdot\boldsymbol{\tau}_j}\boldsymbol{\tau}_j\cdot\mathbf{q}\\
-t\sum_{j=1}^3ie^{i\mathbf{K}_2\cdot\boldsymbol{\tau}_j}\boldsymbol{\tau}_j\cdot\mathbf{q} & 0
\end{smallmatrix}
\right)\nonumber\\
+&\left(
\begin{smallmatrix}
E_{s,A} & -\sum_{j=1}^3\delta t_je^{-i\mathbf{K}_2\cdot\boldsymbol{\tau}_j}\\
-\sum_{j=1}^3\delta t_je^{i\mathbf{K}_2\cdot\boldsymbol{\tau}_j} & E_{s,B}
\end{smallmatrix}
\right),
\end{eqnarray}
with the first term on the right-hand side reducing to the standard pseudospin Hamiltonian in the absence of strain. The change of the hopping parameter $t$ is related to the variation of length, as explained in Refs.~\cite{Ando2002} and \cite{Vozmediano}:
\begin{equation}
\delta t_j=-\frac{|\beta| t}{a_0^2} \boldsymbol{\tau}_j\cdot\Delta\boldsymbol{\tau}_j.
\end{equation}
This way Eqn.~\eqref{eq:ps1} becomes:
\begin{eqnarray}
\mathcal{H}_{ps}=
\hbar v_F\boldsymbol{\sigma}\cdot \mathbf{q}
+\left(
\begin{smallmatrix}
E_{s,A} & f_1^*\\
f_1 & E_{s,B}
\end{smallmatrix}
\right),
\end{eqnarray}
with $f_1^*=\frac{|\beta|t}{2a_0^2}
[2\boldsymbol{\tau}_3\cdot\Delta\boldsymbol{\tau}_3
-\boldsymbol{\tau}_1\cdot\Delta\boldsymbol{\tau}_1
-\boldsymbol{\tau}_2\cdot\Delta\boldsymbol{\tau}_2
+\sqrt{3}i(\boldsymbol{\tau}_2\cdot\Delta\boldsymbol{\tau}_2-\boldsymbol{\tau}_1\cdot\Delta\boldsymbol{\tau}_1)]$, and $\hbar v_F\equiv
\frac{\sqrt{3}a_0t}{2}$.
The parameter $f_1$ can be expressed in terms of a vector potential $A_s$: $f_1=-\hbar v_F\frac{eA_s}{\hbar}$. This way:
\begin{eqnarray}\label{eq:Asdiscrete}
\small
A_s&=-\frac{|\beta|\phi_0}{\pi a_0^3}[
\frac{2\boldsymbol{\tau}_3\cdot\Delta\boldsymbol{\tau}_3
-\boldsymbol{\tau}_1\cdot\Delta\boldsymbol{\tau}_1
-\boldsymbol{\tau}_2\cdot\Delta\boldsymbol{\tau}_2}{\sqrt{3}}\nonumber\\
&-i(
\boldsymbol{\tau}_2\cdot\Delta\boldsymbol{\tau}_2
-\boldsymbol{\tau}_1\cdot\Delta\boldsymbol{\tau}_1)].
\end{eqnarray}
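A minimal sketch (ours; Python with NumPy) of how Eqn.~\eqref{eq:Asdiscrete} may be evaluated per unit cell from atomistic data follows; the numerical values of $|\beta|$, $\phi_0$ and $a_0$ below are placeholders only:
\begin{verbatim}
import numpy as np

def gauge_field_As(tau, dtau, beta=3.0, phi0=1.0, a0=1.0):
    """Complex vector potential A_s for one unit cell.

    tau, dtau: (3, 2) arrays with tau_j and Delta tau_j in rows;
    the sublattice-symmetric case Delta tau'_j ~ Delta tau_j is assumed.
    """
    T = np.einsum('jk,jk->j', tau, dtau)   # tau_j . Delta tau_j
    real = (2*T[2] - T[0] - T[1])/np.sqrt(3)
    imag = -(T[1] - T[0])
    return -abs(beta)*phi0/(np.pi*a0**3)*(real + 1j*imag)
\end{verbatim}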
We finally analyze the diagonal entries in Eqn.~\eqref{eq:tbh}, which are given as follows \cite{us}:
\begin{equation}\label{eq:EsA}
E_{s,A}=-\frac{0.3\text{ eV}}{0.12}\frac{1}{3}\sum_{j=1}^3\frac{|\boldsymbol{\tau}_j+\Delta\boldsymbol{\tau}_j|-a_0/\sqrt{3}}{a_0/\sqrt{3}},
\end{equation}
and
\begin{equation}\label{eq:EsB}
E_{s,B}=-\frac{0.3\text{ eV}}{0.12}\frac{1}{3}\sum_{j=1}^3\frac{|\boldsymbol{\tau}_j+\Delta\boldsymbol{\tau}'_j|-a_0/\sqrt{3}}{a_0/\sqrt{3}}.
\end{equation}
These entries represent the scalar deformation potential which we take to linear order in the average bond increase \cite{YWSon}.
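In the same spirit, a sketch (ours) of Eqns.~(\ref{eq:EsA}) and (\ref{eq:EsB}):
\begin{verbatim}
import numpy as np

def deformation_potentials(tau, dtau, dtau_p, a0=1.0):
    """E_{s,A} and E_{s,B} in eV for one unit cell."""
    acc = a0/np.sqrt(3)                        # unstrained bond length
    lA = np.linalg.norm(tau + dtau, axis=1)    # bonds around an A site
    lB = np.linalg.norm(tau + dtau_p, axis=1)  # bonds around a B site
    coef = -0.3/0.12                           # eV per unit relative stretch
    return coef*np.mean((lA - acc)/acc), coef*np.mean((lB - acc)/acc)
\end{verbatim}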
\subsection{Relation to the formalism from first-order continuum elasticity}
We next establish how the theory based on a continuum relates to the present formalism. In the absence of significant curvature, the continuum limit is achieved when $\frac{|\Delta\boldsymbol{\tau}_j|}{a_0}\to 0$ (for $j=1,2,3$). We have then (Cauchy-Born rule):
$\boldsymbol{\tau}_j\cdot \Delta \boldsymbol{\tau}_j\to \boldsymbol{\tau}_j\left(
\begin{smallmatrix}
u_{xx}&u_{xy}\\
u_{xy}&u_{yy}
\end{smallmatrix}\right)\boldsymbol{\tau}_j^T$, where $u_{ij}$ are the entries of the strain tensor.
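Making this substitution explicit with the vectors of Eq.~(\ref{eq:deft}),
$$\boldsymbol{\tau}_{1,2}\cdot\Delta\boldsymbol{\tau}_{1,2}\to\frac{a_0^2}{4}u_{xx}\pm\frac{a_0^2}{2\sqrt{3}}u_{xy}+\frac{a_0^2}{12}u_{yy},\qquad
\boldsymbol{\tau}_{3}\cdot\Delta\boldsymbol{\tau}_{3}\to\frac{a_0^2}{3}u_{yy},$$
so that
$$\frac{2\boldsymbol{\tau}_3\cdot\Delta\boldsymbol{\tau}_3-\boldsymbol{\tau}_1\cdot\Delta\boldsymbol{\tau}_1-\boldsymbol{\tau}_2\cdot\Delta\boldsymbol{\tau}_2}{\sqrt{3}}\to\frac{a_0^2}{2\sqrt{3}}(u_{yy}-u_{xx}),\qquad
\boldsymbol{\tau}_2\cdot\Delta\boldsymbol{\tau}_2-\boldsymbol{\tau}_1\cdot\Delta\boldsymbol{\tau}_1\to-\frac{a_0^2}{\sqrt{3}}u_{xy}.$$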
This way Eqn.~\eqref{eq:Asdiscrete} becomes:
\begin{equation}\label{eq:limit}
A_s\to \frac{|\beta|\phi_0}{2\sqrt{3}\pi a_0}(u_{xx}-u_{yy}-2iu_{xy}),
\end{equation}
as expected \cite{GuineaNatPhys2010,Vozmediano}.
Equation \eqref{eq:limit} confirms that if the zigzag direction is parallel to the $x-$axis the vector potential we have obtained is consistent with known results in the proper limit \cite{GuineaNatPhys2010,Vozmediano}.
Besides representing a consistent first-order formalism, the present approach is exceptionally suited for the analysis of ``raw'' atomistic data (obtained, for example, from molecular dynamics simulations), as there is no need to determine the strain tensor explicitly: the relevant equations (\ref{eq:Asdiscrete}), (\ref{eq:EsA}) and (\ref{eq:EsB}) take as input the changes in atomic positions upon strain. Within the present approach, $N/2$ space-modulated pseudospin Hamiltonians can be built for a graphene membrane having $N$ atoms.
\section{Applying the formalism to rippled graphene membranes}
We finish the present contribution by briefly illustrating the formalism on two experimentally relevant case examples. The developments presented here are motivated by recent experiments where freestanding graphene membranes are studied by local probes \cite{usold,stmNanoscale2012,stroscio}. (One must keep in mind, nevertheless, that the theory provided up to this point is rather general.)
\subsection{Rippled membranes with no external mechanical load}
It is an established fact that graphene membranes will be naturally rippled due to a number of physical processes, including temperature-induced (i.e., dynamic) structural distortions \cite{Fasolino1}, and static structural distortions created by the mechanical and electrostatic interaction with a substrate, a deposition process \cite{Nature2007}, or line stress at the edges of finite-size membranes \cite{us}.
In Ref.~\cite{deJuanPRB} it is argued that the rippled texture of freestanding graphene leads to observable consequences, the strongest being a sizeable velocity renormalization. To assess such a statement, one must take a closer look at the underlying mechanics of the problem. The model of Ref.~\cite{deJuanPRB} assumes that the graphene membrane is originally pre-strained (by analogy, the membrane would be an ``ironed tablecloth''), so that curvature due to a single wrinkle directly leads to increases in interatomic distances. Those distance increases directly modify the metric on the curved space. In practice, an external electrostatic field can be used to realize such a pre-strained configuration \cite{Fogler}.
To improve the treatment of the mechanics beyond first-order continuum elasticity, let us consider what happens when this pre-strained assumption is relaxed (continuing our analogy, the rippled membrane in Fig.~\ref{fig:F4}(a) would then be akin to a ``wrinkled tablecloth prior to ironing''): How do the gauge fields look in such a scenario? With our formalism, we can probe the interrelation between the mechanics and the electronic structure directly. In Fig.~\ref{fig:F4}(a) we display a graphene membrane with three million atoms at 1 Kelvin, after relaxing strain at the edges. The strain relaxation proceeds by the formation of ripples or wrinkles on the membrane. This initial configuration is already different from the flat (``pre-strained'') configuration customarily enforced within the continuum formalism prior to the application of strain.
\emph{The ripples must be ``ironed out'' before any significant increase in interatomic distances can occur:} ``isometric deformations'' lead to curvature without any increase in interatomic distances \cite{us} (continuing our analogy, this is usually what happens with clothing). We believe that a local determination of the metric tensor from atomic displacements alone will be essential to make a definitive case for velocity renormalization \cite{deJuanPRL2012,arxiv,deJuanPRB}; this is presently work in progress \cite{us2}.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=0.48\textwidth]{Fig4v2.pdf}
\end{center}
\caption{A finite-size graphene membrane at 1 Kelvin. (a) The membrane forms ripples to relieve mechanical strain originating from its finite size. (b) We could not discern changes in the LDOS (which relates to the renormalization of the Fermi velocity) between a completely flat membrane and the membrane after line strain is relieved. (c) Measures of the changes in angles and lengths at individual unit cells (Eqns.~(\ref{eq:beta})--(\ref{eq:L})), displaying small-scale noise consistent with the formation of ripples. (d) The deformation potential and mass term, and (e) the pseudo-magnetic field, are inherently noisy as well.}\label{fig:F4}
\end{figure}
The local density of electronic states is obtained directly from the Hamiltonian of the membrane in configuration space $\mathcal{H}$, and is shown in Fig.~\ref{fig:F4}(b). When compared to the DOS of a completely flat membrane, no observable variation in the slope of the DOS appears and, hence, no renormalization of the Fermi velocity either.
One can determine the extent to which nearest-neighbor vectors preserve sublattice symmetry in terms of $\Delta\alpha_j$ and $\Delta L_j$, Eqns.~(\ref{eq:beta})--(\ref{eq:L}). We observe small and apparently random fluctuations in those measures in Fig.~\ref{fig:F4}(c): $\Delta L_j\lesssim 1\%$ and $\Delta \alpha_j\lesssim 2^{\circ}$.
We display the deformation potential in Figure \ref{fig:F4}(d) in terms of the average ($E_{def}$) and difference ($E_{mass}$)
between $E_{s,A}$ and $E_{s,B}$ (Eqns.~(\ref{eq:EsA}) and (\ref{eq:EsB})) at any given unit cell:
\begin{equation}
E_{def}=\frac{1}{2}(E_{s,A}+E_{s,B}), \text{ and } E_{mass}=\frac{1}{2}(E_{s,A}-E_{s,B}).
\end{equation}
Both quantities are of the order of tens of meVs.
The ripples lead to the random-looking pseudo-magnetic field shown in Fig.~\ref{fig:F4}(e), reminiscent of the electron density plots created by random charge puddles \cite{Rossi1,Rossi2}.
We next consider how strain by a sharp probe modifies the results in Fig.~\ref{fig:F4}.
\subsection{Rippled membranes under mechanical load}
In what follows we consider a central extruder creating strain on the freestanding membrane. For this, we placed the membrane shown in Fig.~\ref{fig:F4} on top of a substrate (shown in blue/light gray in Fig.~\ref{fig:F5}(a)) with a triangular-shaped hole (in green/dark gray in Fig.~\ref{fig:F5}(a)). The membrane is held fixed in position when on the substrate, and pushed down by a sharp tip at its geometrical center, down to a distance $\Gamma$=10 nm.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=0.49\textwidth]{Fig5v2.pdf}
\end{center}
\caption{Strained membrane: (a) The section in blue (light gray) is kept fixed, and strain is applied by pushing down the triangular section in green (dark gray) with a sharp extruder, located at the geometrical center. (b) Deviations from proper sublattice symmetry are concentrated at the section directly underneath the sharp tip, where the deformation is the largest and strain is the most inhomogeneous. (c-d) Gauge fields.}\label{fig:F5}
\end{figure}
As indicated earlier, sublattice symmetry is not exactly satisfied right underneath the tip, where $\Delta\alpha_j$ and $\Delta L_j$ take their largest values (Fig.~\ref{fig:F5}(b)). While $\Delta L_j$ still displays some fluctuations, this is not the case for $\Delta \alpha_j$ (the scale for $\Delta \alpha_j$ is identical to that of Fig.~\ref{fig:F4}(c)). The large white areas tell us that fluctuations in $\Delta\alpha_j$ are wiped out upon load, as the extruder removes wrinkles. This observation stems from the lattice-explicit treatment of the mechanics.
We have presented a detailed discussion of the problem along these lines \cite{us}. We found that for small magnitudes of load a rippled membrane will adapt to an extruding tip isometrically. This observation is important in the context of the formulation with curvature \cite{deJuanPRB,deJuanPRL2012}, because in that formulation there is the assumption that distances between atoms increase as soon as graphene deviates from a perfect 2-dimensional plate.
The gauge fields shown in Fig.~\ref{fig:F5}(c-d) reflect the symmetry imposed by the circular shape of the extruding tip~\cite{us}.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.48\textwidth]{Fig6v2.pdf}
\end{center}
\caption{Local density of states on the membrane under strain shown in Fig.~\ref{fig:F5}. The locations where the DOS
is computed are shown in the insets (the most symmetric line patterns are displayed in yellow).}\label{fig:F6}
\end{figure}
We finish the discussion by probing the local density of states at many locations in Fig.~\ref{fig:F6}, which may relate to the discussion of confinement by gauge fields \cite{Blanter}. $E_s$ was not included in computing the DOS curves.
Some generic features of the DOS are clearly visible: (i) Near the extruder the deformation is already beyond the linear regime, and the DOS is indeed renormalized at locations close to the mechanical extruder \cite{deJuanPRL2012,arxiv,deJuanPRB}. (ii) A sequence of features appears in the DOS farther away from the extruder. Because the field is not homogeneous, and perhaps due to energy broadening, we are unable to resolve a central peak. As indicated in the insets, the plots in Figs.~\ref{fig:F6}(b) and \ref{fig:F6}(d) are obtained along high-symmetry lines (the colors of the DOS subplots correspond to the colored lines in the insets). For this reason they look almost identical, and the three sets of curves (corresponding to the DOS along different lines) overlap. Due to lower symmetry, the LDOS in Figs.~\ref{fig:F6}(a) and \ref{fig:F6}(c) appear symmetric in pairs, with the exception of the plots highlighted in gray. (The light `v'-shaped curve in all subplots is the reference DOS in the absence of strain.)
LDOS curves complement the insight obtained from gauge field plots. Hence, they should also be reported in discussing strain engineering of graphene's electronic structure, particularly in situations where gauge fields are inhomogeneous.
\section{Conclusions}
We presented a novel framework to study the relation between mechanical strain and the electronic structure of graphene membranes. Gauge fields are expressed directly in terms of changes in atomic positions upon strain. Within this approach, it is possible to determine the extent to which the sublattice symmetry is preserved. In addition, we find that there are no $K-$dependent gauge fields in the first-order theory. We have illustrated the method by computing the strain-induced gauge fields on a rippled graphene membrane with and without mechanical load. In doing so, we have initiated a necessary discussion of mechanical effects falling beyond a description within first-order continuum elasticity. Such analysis is relevant for accurate determination of gauge fields and has not received proper attention yet.\\
\noindent{\bf Acknowledgments}\\
We acknowledge conversations with B. Uchoa, and computer support from HPC at Arkansas (\emph{RazorII}), and XSEDE (TG-PHY090002, \emph{Blacklight}, and \emph{Stampede}). M.V. acknowledges support by the Serbian Ministry of Science, Project No. 171027.
\section*{Introduction}
Let $(X,\omega)$ be a compact K\"ahler manifold of complex dimension $n \in \mathbb{N}^*$. Recall that a $(1,1)$-cohomology class is {\it big} if it contains a {\it K\"ahler current},
i.e. a positive closed current which dominates a K\"ahler form. Fix $\alpha \in H^{1,1}(X,\mathbb{R})$ a big class and
$\mu$ a non-negative Radon measure whose total mass $\mu(X)$ equals
$\rm{vol}(\alpha)$, the volume of $\alpha$.
The systematic study of complex Monge-Amp\`ere equations in big cohomology classes
has been initiated in \cite{BEGZ}. It has been shown there that there exists a unique positive closed current $T_{\mu} \in \alpha$ with full Monge-Amp\`ere mass
such that
$$
T_{\mu}^n=\mu
$$
if and only if $\mu$ does not charge pluripolar sets.
The purpose of this note is to study the stability properties of the solution $T_{\mu}$ to this complex Monge-Amp\`ere equation, i.e. to study the continuity properties of the mapping
$$
\mu \mapsto T_{\mu}.
$$
We cannot expect this mapping to be continuous for the weakest topologies, i.e. when the set of non pluripolar measures (resp. the set of positive currents
with full Monge-Amp\`ere masses)
is endowed with the weak topology of Radon measures (resp. of positive currents), as the Monge-Amp\`ere operator $T \mapsto T^n$
is not continuous either for this weak topology (this observation was made, in a local context, by Cegrell and Kolodziej in \cite{CK94}).
On the other hand we have the following:
\medskip
\noindent {\bf PROPOSITION A.}
{\it
Let $\mu_j,\mu$ be non pluripolar measures with total mass $\mu_j(X)=\mu(X)=\rm{vol}(\alpha)$. If $||\mu_j-\mu|| \rightarrow 0$, then
$$
T_{\mu_j} \rightarrow T_{\mu}
\text{ in the weak sense of currents}.
$$
}
\medskip
Here $||\mu_j-\mu||$ denotes the total variation of the signed measure $\mu_j-\mu$.
\smallskip
It follows from the $dd^c$-lemma that any positive closed current $T \in \alpha$ decomposes as $T=\theta+dd^c \varphi$, for some
$\theta$-plurisubharmonic function $\varphi$. We let ${PSH}(X,\theta)$ denote the set of all such functions and observe that
there is a unique $\varphi_{\mu} \in {PSH}(X,\theta)$ such that $\sup_X \varphi_{\mu}=0$ and $T_{\mu}=\theta+dd^c \varphi_{\mu}$. In the sequel
we let
$$MA(\varphi):=\langle (\theta+dd^c \varphi)^n \rangle
$$
denote the (non pluripolar) complex Monge-Amp\`ere measure of $\varphi \in PSH(X,\theta)$.
The equation $T_{\mu}^n=\mu$ is thus equivalent to the Monge-Amp\`ere equation
$$
MA(\varphi_{\mu})=\mu.
$$
Since the weak convergence of currents $T_{\mu}$ is equivalent to the $L^1$-convergence of their normalized potentials
$\varphi_{\mu}$, Proposition A can be reformulated as
$$
\left( || {\mu_j}-{\mu}|| \rightarrow 0 \right) \Longrightarrow \left( || \varphi_{\mu_j}-\varphi_{\mu}||_{L^1(X)} \rightarrow 0 \right).
$$
It is natural to try and estimate quantitatively how fast this convergence holds.
Our second result yields a quantitative stability property ``in energy'':
\medskip
\noindent {\bf THEOREM B.}
{\it
There exists $C_n > 0$ such that if $0 \geq \psi, \varphi_1, \varphi_2 \in \mathcal E^1 (X,\theta)$ are normalized by
$\sup_X \varphi_1=\sup_X \varphi_2$, then
$$
\int_X \vert \varphi_1 - \varphi_2 \vert \mathrm{MA} (\psi) \leq C_n \cdot B^2 \cdot I (\varphi_1,\varphi_2)^{2^{-n}},
$$
where $B=\max \{1, \vert E (\varphi_1)\vert,\vert E (\varphi_2)\vert, \vert E (\psi)\vert \}$.
}
\medskip
We refer the reader to the first section for the definition of the class ${\mathcal E}^1(X,\theta)$ of $\theta$-psh functions
$\varphi$ which have finite energy $E(\varphi)>-\infty$. We recall here that the symmetric expression
$$
I(\varphi_1,\varphi_2):=\int (\varphi_1-\varphi_2) (MA(\varphi_2)-MA(\varphi_1)) \geq 0
$$
is used to define the important notion of ``convergence in energy''. Theorem B implies in particular a
quantitative estimate on how ``convergence in energy'' implies ``convergence in capacity''.
Related results were previously obtained in \cite{BBGZ}, the latter article being a great source of inspiration for this note.
Let us also stress that when the underlying cohomology class is K\"ahler, a weaker but quite elegant stability result
was previously obtained by Blocki in \cite{Bl03}. We briefly explain in section \ref{sec:energy} how our result can be used to derive more standard stability estimates in this vein.
\smallskip
Our last result yields the strongest property of stability, assuming stronger properties on the corresponding measures.
\medskip
\noindent {\bf THEOREM C.}
{\it
Assume $\mu=MA(\varphi_{\mu})=f_{\mu} \omega^n, \nu=MA(\varphi_{\nu})=f_{\nu} \omega^n$,
where the densities $0 \leq f_{\mu},f_{\nu}$ are in $L^p(\omega^n)$ for some $p>1$ and $\varphi_{\mu},\varphi_{\nu} \in {PSH}(X,\theta)$
are normalized by $\sup_X \varphi_{\mu}=\sup_X \varphi_{\nu}=0$. Then
$$
\Vert \varphi_\mu - \varphi_\nu \Vert_{L^{\infty} (X)} \leq M_{\tau} \Vert f_{\mu}- f_{\nu} \Vert_{L^1 (X)}^{\tau},
$$
where $M_{\tau} > 0$ only depends on upper bounds for the $L^p$ norms of $f_{\mu},f_{\nu}$ and
$$
\tau<\frac{1}{2^n(n+1)-1}.
$$
}
\medskip
The existence of a unique normalized $\theta$-psh function $\varphi_{\mu}$ with minimal singularities such that $(\theta+dd^c \varphi_{\mu})^n=\mu$
when $\mu$ has $L^p$-density, $p>1$, has been established in \cite[Theorem 4.1]{BEGZ}, generalizing Kolodziej's celebrated result \cite{Kol98}.
It is likely that the exponent $\tau$ we obtain here is not sharp. When $\alpha$ is a K\"ahler class, a better exponent
was obtained by Kolodziej in \cite{Kol03} and later on improved by Dinew-Zhang in \cite{DZ} (see also \cite{Hiep}
for some other generalization).
\medskip
\noindent {\it Notations.}
In the whole article we fix
$\bullet$ $(X,\omega)$ a compact K\"ahler manifold equipped with a K\"ahler form $\omega$,
$\bullet$ $\alpha \in H^{1,1}(X,\mathbb{R})$ a big cohomology class,
$\bullet$ $\theta$ a smooth closed
$(1,1)$-form representing $\alpha$.
\section{Preliminary results on big cohomology classes}
We briefly recall here some material developed in full detail in \cite{BEGZ}.
\subsection{Quasi-psh functions}
Recall that an upper semi-continuous function $$\varphi:X\to[-\infty,+\infty[$$
is said to be \emph{$\theta$-psh} iff $\varphi$ is locally the sum of a smooth and a psh function, and $\theta+dd^c\varphi\ge 0$ in the sense of currents, where $d^c$ is normalized so that
$$
dd^c=\frac{i}{\pi}\partial\overline{\partial}.
$$
By the $dd^c$-lemma any closed positive $(1,1)$-current $T$ cohomologous to $\theta$ can conversely be written as $T=\theta+dd^c\varphi$ for some $\theta$-psh function $\varphi$ which is furthermore unique up to an additive constant.
The set of all $\theta$-psh functions $\varphi$ on $X$ will be denoted by ${PSH}(X,\theta)$ and endowed with the weak topology, which coincides with the $L^1(X)$-topology. By Hartogs' lemma $\varphi\mapsto\sup_X\varphi$ is continuous in the weak topology. Since the set of closed positive currents in a fixed cohomology class is compact (in the weak topology), it follows that the set of $\varphi\in{PSH}(X,\theta)$ normalized by $\sup_X\varphi=0$ is compact.
\smallskip
We introduce the extremal function $V_\theta$ defined by
\begin{equation}\label{equ:extrem}V_\theta(x):=\sup\{\varphi(x)|\varphi\in{PSH}(X,\theta),\sup_X\varphi\le 0\}.
\end{equation}
It is a $\theta$-psh function with \emph{minimal singularities} in the sense of Demailly, i.e.~we have
$\varphi\le V_\theta+O(1)$ for any $\theta$-psh function $\varphi$. In fact it is straightforward to see that the following ``tautological maximum principle'' holds:
\begin{equation}\label{equ:max}\sup_X\varphi=\sup_X(\varphi-V_\theta)
\end{equation}
for any $\varphi\in{PSH}(X,\theta)$.
\subsection{Ample locus and regularity of envelopes}
The cohomology class $\alpha=\{\theta\}\in H^{1,1}(X,\mathbb{R})$ is said to be \emph{big} iff there exists a closed $(1,1)$-current
$$
T_+=\theta+dd^c\varphi_+
$$
cohomologous to $\theta$ such that $T_+$ is \emph{strictly positive} (i.e. $T_+\ge \varepsilon_0 \omega$ for some $\varepsilon_0>0$).
By Demailly's regularisation theorem~\cite{Dem92} one can then furthermore assume that $T_+$ has \emph{analytic singularities}, that is there exists $c>0$ such that locally on $X$ we have
$$
\varphi_+=c\log\sum_{j=1}^N|f_j|^2\text{ mod }C^\infty
$$
where $f_1,...,f_N$ are local holomorphic functions. Such a current $T_+$ is then smooth on a Zariski open subset $\Omega$, and
the \emph{ample locus} $\mathrm{Amp}\,(\alpha)$ of $\alpha$ is defined as the largest such Zariski open subset (which exists by the Noetherian property of closed analytic subsets).
Note that \emph{any} $\theta$-psh function $\varphi$ with minimal singularities is locally bounded on the ample locus $\mathrm{Amp}\,(\alpha)$ since it has to satisfy $\varphi_+\le\varphi+O(1)$.
Note that $\varphi_+$ does not have minimal singularities unless $\alpha$ is a K\"ahler class.
\smallskip
In case $\alpha=\{\theta\}\in H^{1,1}(X,\mathbb{R})$ is a \emph{K{\"a}hler} class, plenty of \emph{smooth} $\theta$-psh functions are available.
When $\alpha$ is both big and nef (i.e. $\alpha$ belongs to the closure of the cone of K\"ahler classes), a good regularity theory is available thanks to \cite{BEGZ}.
However for a general \emph{big} class the existence of even a \emph{single} $\theta$-psh function with minimal singularities that is also $C^\infty$
on the ample locus $\mathrm{Amp}\,(\alpha)$ is unknown.
On the other hand we have the following regularity result of Berman-Demailly on the ample locus ~\cite{BD}:
\begin{thm}\label{thm:c11}
The function $V_\theta$ has locally bounded Laplacian on $\mathrm{Amp}\,(\alpha)$.
In particular the Monge-Amp{\`e}re measure $\mathrm{MA}(V_\theta)$ has $L^\infty$-density with respect to Lebesgue measure. More specifically we have $\theta\ge 0$ pointwise on $\{V_\theta=0\}$ and
$$
\mathrm{MA}(V_\theta)={\bf 1}_{\{V_\theta=0\}}\theta^n.
$$
\end{thm}
Since $V_\theta$ is quasi-psh, this result is equivalent to the fact that the current $\theta+dd^c V_\theta$
has $L^\infty_{loc}$ coefficients on $\mathrm{Amp}\,(\alpha)$ and shows in particular by Schauder's elliptic estimates that $V_\theta$ is in fact $C^{2-\varepsilon}$ on $\mathrm{Amp}\,(\alpha)$ for each $\varepsilon>0$.
\subsection{Full Monge-Amp\`ere mass}
In~\cite{BEGZ} the \emph{non-pluripolar product}
$$
(T_1,...,T_p)\mapsto\langle T_1\wedge...\wedge T_p\rangle
$$
of closed positive $(1,1)$-currents is shown to be well-defined as a closed positive $(p,p)$-current putting no mass on pluripolar sets.
In particular given $\varphi_1,...,\varphi_n\in{PSH}(X,\theta)$ we define their mixed Monge-Amp{\`e}re measure as
$$\mathrm{MA}(\varphi_1,...,\varphi_n)=\langle(\theta+dd^c\varphi_1)\wedge...\wedge(\theta+dd^c\varphi_n)\rangle.$$
It is a non-pluripolar positive measure whose total mass satisfies
$$
\int_X\mathrm{MA}(\varphi_1,...,\varphi_n)\le \rm{vol}(\alpha)
$$
where the right-hand side denotes the \emph{volume} of the cohomology class $\alpha$.
If $\varphi_1,...,\varphi_n$ have minimal singularities then they are locally bounded on $\mathrm{Amp}\,(\alpha)$, and the product
$$
(\theta+dd^c\varphi_1)\wedge...\wedge(\theta+dd^c\varphi_n)
$$
is thus well-defined by Bedford-Taylor~\cite{BT82}. Its trivial extension to $X$ coincides with $\mathrm{MA}(\varphi_1,...,\varphi_n)$, and we have
$$
\int_X\mathrm{MA}(\varphi_1,...,\varphi_n)=\rm{vol}(\alpha).
$$
In case $\varphi_1=...=\varphi_n=\varphi$, we simply set
$$
\mathrm{MA}(\varphi)=\mathrm{MA}(\varphi,...,\varphi)
$$
and say that $\varphi$ has \emph{full Monge-Amp{\`e}re mass} iff $\int_X\mathrm{MA}(\varphi)=\rm{vol}(\alpha)$. We let
$$
{\mathcal E}(X,\theta):=\left\{ \varphi \in PSH(X,\theta) \, | \, \int_X\mathrm{MA}(\varphi)=\rm{vol}(\alpha) \right\}
$$
denote the set of $\theta$-psh functions with full Monge-Amp\`ere mass.
We thus see that $\theta$-psh functions with minimal singularities have full Monge-Amp{\`e}re mass, but the converse is not true.
A crucial point is that the non-pluripolar Monge-Amp{\`e}re operator is continuous along monotonic sequences of functions with full Monge-Amp{\`e}re mass.
In fact we have (cf.~\cite{BEGZ} Theorem 2.17):
\begin{prop}\label{prop:cont}
The operator
$$
(\varphi_1,...,\varphi_n)\mapsto\mathrm{MA}(\varphi_1,...,\varphi_n)
$$
is continuous along monotonic sequences of functions with full Monge-Amp{\`e}re mass.
If $\int_X(\varphi-V_\theta)\mathrm{MA}(\varphi)$ is finite, then
$$
\lim_{j\to\infty}(\varphi_j-V_\theta)\mathrm{MA}(\varphi_j)=(\varphi-V_\theta)\mathrm{MA}(\varphi)
$$
for any monotonic sequence $\varphi_j\to\varphi$.
\end{prop}
\subsection{Weighted energies}
Let $\psi \in PSH(X,\theta)$ be a $\theta$-psh function with minimal singularities. Its \emph{Aubin-Mabuchi energy} is
$$
E(\psi):=\frac{1}{n+1}\sum_{j=0}^n\int_X(\psi-V_\theta) \langle (\theta+dd^c \psi)^j \wedge (\theta+dd^c V_{\theta})^{n-j} \rangle.
$$
One can check \cite{BEGZ} that its G{\^a}teaux derivatives are given by
$$
E'(\psi)\cdot v=\int_X v \mathrm{MA}(\psi)
$$
showing in particular that $E$ is non-decreasing.
\begin{defi}
We let ${\mathcal E}^1(X,\theta)$ denote the class of all $\theta$-plurisubharmonic functions $\varphi$ such that
$$
E(\varphi):=\inf_{\psi \geq \varphi} E(\psi)>-\infty
$$
where the infimum is taken over all functions $\psi$ with minimal singularities.
\end{defi}
Alternatively, a function $\varphi$ belongs to ${\mathcal E}^1(X,\theta)$ if and only if it belongs to ${\mathcal E}(X,\theta)$ and
$\varphi-V_\theta \in L^1(\mathrm{MA}(\varphi))$.
More generally, given $\chi: \mathbb{R} \rightarrow \mathbb{R}$ a convex increasing function such that $\chi(-\infty)=-\infty$,
one considers, for $\psi$ with minimal singularities,
$$
E_{\chi}(\psi):=\frac{1}{n+1}\sum_{j=0}^n\int_X \chi(\psi-V_\theta) \langle (\theta+dd^c \psi)^j \wedge (\theta+dd^c V_{\theta})^{n-j} \rangle.
$$
This weighted energy is again non-decreasing \cite[Proposition 2.8]{BEGZ}, hence the following:
\begin{defi}
We let ${\mathcal E}_{\chi}(X,\theta)$ denote the class of all $\theta$-plurisubharmonic functions $\varphi$ such that
$$
E_{\chi}(\varphi):=\inf_{\psi \geq \varphi} E_{\chi}(\psi)>-\infty
$$
where the infimum is taken over all functions $\psi$ with minimal singularities.
\end{defi}
One can easily check that these classes exhaust the class of functions with full Monge-Amp\`ere mass \cite[Proposition 2.11]{BEGZ},
$$
{\mathcal E}(X,\theta) =\bigcup_{\chi} {\mathcal E}_{\chi}(X,\theta).
$$
We finally introduce the \emph{symmetric} expression
$$
I(\varphi,\psi):=\int_X(\varphi-\psi)(\mathrm{MA}(\psi)-\mathrm{MA}(\varphi)) \geq 0
$$
where the non-negativity can be deduced from the following formula
\begin{equation}
\label{equ:I}
I(\varphi,\psi)=\sum_{j=0}^{n-1}\int_\Omega d(\varphi-\psi)\wedge d^c(\varphi-\psi)\wedge \langle (\theta+dd^c\varphi)^j\wedge(\theta+dd^c\psi)^{n-1-j} \rangle.
\end{equation}
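When $n=1$, for instance, this identity is a single (formal) integration by parts: $\mathrm{MA}(\psi)-\mathrm{MA}(\varphi)=dd^c(\psi-\varphi)$, so that $I(\varphi,\psi)=\int_X(\varphi-\psi)\,dd^c(\psi-\varphi)=\int_\Omega d(\varphi-\psi)\wedge d^c(\varphi-\psi)\ge 0$.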
\begin{defi}
A sequence of functions $\varphi_j \in \mathcal E^1 (X,\theta)$ \emph{converges in energy} towards $\varphi \in\mathcal E^1 (X,\theta)$ if $I(\varphi_j,\varphi)\to 0$ as $j\to\infty$.
\end{defi}
This notion is introduced in \cite{BBGZ} where it is shown that convergence in energy implies continuity of the complex Monge-Amp\`ere operator.
\subsection{Monge-Amp{\`e}re capacity}
As in~\cite{GZ05, BEGZ} we define the \emph{Monge-Amp{\`e}re (pre)capacity} in our setting as the upper envelope of all measures
$\mathrm{MA}(\varphi)$ with $\varphi\in{PSH}(X,\theta)$, $V_\theta-1\le\varphi\le V_\theta$, i.e.
\begin{equation}\label{equ:cap}
\rm{Cap}(B):=\sup \left\{\int_B\mathrm{MA}(\varphi),\,\varphi\in{PSH}(X,\theta),\,V_\theta-1\le\varphi\le V_\theta\text{ on }X \right\}
\end{equation}
for every Borel subset $B$ of $X$.
By definition, a positive measure $\mu$ is absolutely continuous with respect to the capacity $\rm{Cap}$ iff $\rm{Cap}(B)=0$ implies $\mu(B)=0$. This means exactly that $\mu$ is non-pluripolar, in the sense that $\mu$ puts no mass on pluripolar sets. Since $\mu$ is subadditive, this is in turn equivalent to the existence of a non-decreasing right-continuous function $F:\mathbb{R}_+\to\mathbb{R}_+$ such that
$$\mu(B)\le F(\rm{Cap}(B))$$
for all Borel sets $B$. Roughly speaking, the speed at which $F(t)\to 0$ as $t\to 0$ measures ``how non-pluripolar'' $\mu$ is.
\begin{defi}
Fix $\beta>0$.
We say that $\mu$ satisfies the condition ${\mathcal H}(\beta)$ if there exists $C_{\beta}>0$ such that
for all Borel sets $B \subset X$,
$$
\mu(B) \leq C_{\beta} \rm{Cap}(B)^{\beta+1}.
$$
If this holds for all $\beta>0$, we say that $\mu$ satisfies the condition ${\mathcal H}(\infty)$.
\end{defi}
Such conditions were introduced by Kolodziej in \cite{Kol98} who showed that measures $\mu=MA(\varphi)$ satisfying the condition
${\mathcal H}(\beta)$ are such that $\varphi$ is continuous if the cohomology class $\alpha$ is K\"ahler. He further observed that
if $\mu=f \omega^n$ has density in $L^p$ for some $p>1$, then $\mu$ satisfies condition ${\mathcal H}(\infty)$.
These results were later on extended
to the case of big cohomology classes in \cite{EGZ,BEGZ,EGZ11}.
\smallskip
Recall that the complex Monge-Amp\`ere operator $\varphi \mapsto MA(\varphi)$ is discontinuous for the $L^1$-topology.
One needs to require a stronger notion of convergence of potentials:
\begin{defi}
A sequence $(\varphi_j)$ of $\theta$-plurisubharmonic functions converges in capacity towards $\varphi$ if for all $\varepsilon>0$,
$$
\rm{Cap}\left(\{ |\varphi_j-\varphi| >\varepsilon \}\right) \rightarrow 0
\text{ as } j \rightarrow +\infty.
$$
\end{defi}
If a sequence $\varphi_j \in {\mathcal E}^1(X,\theta)$ converges to $\varphi \in {\mathcal E}^1(X,\theta)$ in capacity, then
$MA(\varphi_j)$ weakly converges towards $MA(\varphi)$ \cite{GZ07,DH}. This generalizes previous continuity statements, as monotonic
convergence implies convergence in capacity.
\section{Weak stability properties}
In this section we establish the weakest stability property, i.e. Proposition A stated in the introduction.
\subsection{Instability}
We start by observing that one cannot expect stability in general. Recall \cite{BEGZ} that if $\mu$ is a non-negative Radon measure which vanishes
on pluripolar sets and whose total mass equals $\rm{vol}(\alpha)$, then there exists a unique positive closed current $T_{\mu} \in \alpha$
with full Monge-Amp\`ere mass and such that
$$
\langle T_{\mu}^n \rangle=\mu.
$$
The current $T_{\mu}$ decomposes as $T_{\mu}=\theta+dd^c \varphi_{\mu}$, where $\varphi_{\mu} \in {PSH}(X,\theta)$ is uniquely determined, once normalized
by $\sup_X \varphi_{\mu}=0$.
One cannot expect the operator $\mu \mapsto \varphi_{\mu}$ (or equivalently $\mu \mapsto T_{\mu}$) to be continuous, as its inverse
operator $\varphi \mapsto \langle (\theta+dd^c \varphi)^n \rangle$ is not continuous either. Here is a variation on a classical local example \cite{Ceg83} of such discontinuous behavior:
\begin{exa}
The functions
$$
\psi_j(z_1,z_2):=\frac{1}{2j} \log \left[ |z_1^j+z_2^j|^2+1 \right]
$$
are smooth and plurisubharmonic in $\mathbb{C}^2$. They form a locally bounded sequence which converges in $L_{loc}^1(\mathbb{C}^2)$ towards
$$
\psi(z_1,z_2)=\log \max[1, |z_1|, |z_2| ].
$$
Observe that the Monge-Amp\`ere measures $(dd^c \psi_j)^2$ vanish identically (each $\psi_j$ is the pull-back, under the holomorphic map $z \mapsto z_1^j+z_2^j$, of a function on $\mathbb{C}$, so $dd^c \psi_j$ has rank at most one), while $(dd^c \psi)^2$ is the Lebesgue measure on the
real torus $\{ |z_1|=|z_2|=1 \}$.
One can globalize this example, working on $X=\mathbb{C}\mathbb{P}^2$ equipped with its Fubini-Study K\"ahler form $\theta=\omega_{FS}$. Set
$$
\varphi_j[z]=\frac{1}{2j} \log \left[ |z_1^j+z_2^j|^2+|z_0|^{2j} \right] -\log ||z||,
$$
where $[z]=[z_0:z_1:z_2]$ denotes the homogeneous coordinates in $\mathbb{C}\mathbb{P}^2$ and $(z_0=0)$ denotes the hyperplane at infinity,
$\mathbb{C}\mathbb{P}^2=\mathbb{C}^2 \cup (z_0=0)$.
The functions $\varphi_j$ are $\theta$-psh and smooth in $\mathbb{C}\mathbb{P}^2 \setminus S_j$, where
$S_j$ denotes the finite set of points at infinity $\{z_0=0=z_1^j+z_2^j\}$.
The $\varphi_j$'s converge in $L^1(\mathbb{C}\mathbb{P}^2)$ towards
$$
\varphi(z_1,z_2)=\log \max[|z_0|, |z_1|, |z_2| ] -\log ||z||,
$$
whose Monge-Amp\`ere measure is again the Lebesgue measure on the torus.
\end{exa}
This example is not so satisfactory since the Monge-Amp\`ere measures $MA(\varphi_j)$ are all supported on the
(pluripolar) hyperplane at infinity. We thus propose a slightly more elaborate construction where the approximants
are uniformly bounded:
\begin{exa}
Using the same notations as in previous example, we set
$$
\Phi_j:=\log \left[ e^{\varphi_j}+e^{-K} \right],
$$
where $K>0$ is a large constant. The reader will easily check that
$$
\theta+dd^c \Phi_j=\frac{e^{\varphi_j} \theta_{\varphi_j}+e^{-K} \theta}{e^{\varphi_j} +e^{-K}}+\frac{e^{\varphi_j-K} d \varphi_j \wedge d^c \varphi_j}{\left[ e^{\varphi_j}+e^{-K}\right]^2} \geq 0,
$$
so that $\Phi_j$ are uniformly bounded $\theta$-psh functions on $\mathbb{C}\mathbb{P}^2$. We use here the shortcuts $\theta=\omega_{FS}$
and $\theta_u:=\theta+dd^c u$.
A similar computation can be made for $\Phi:=\log \left[ e^{\varphi}+e^{-K} \right]$, showing in particular that
$$
MA(\Phi) \geq \frac{e^{ -2 \sqrt{3}}}{\left[ e^{-\sqrt{3}}+e^{-K} \right]^2} \sigma_{{\mathcal{T}}}
$$
dominates a multiple of the (normalized) Lebesgue measure $\sigma_{{\mathcal{T}}}$ on the real torus ${\mathcal{T}}=\{|z_0|=|z_1|=|z_2|\}$.
This multiple can be made arbitrarily close to $1$ by choosing $K$ large enough. On the other hand $MA(\Phi_j)$ can be computed explicitly
by using the fact that $(dd^c \psi_j)^2$, $dd^c \psi_j \wedge d \psi_j$ and $dd^c \psi_j \wedge d^c \psi_j$ all vanish in $\mathbb{C}^2$. One can verify in this way that
any cluster point of $MA(\Phi_j)$ is different from $MA(\Phi)$, although $\Phi_j$ converges towards $\Phi$.
\end{exa}
\subsection{Proof of Proposition A}
We now prove a qualitative property of stability under a weak domination assumption.
Let $\mu_j,\mu$ be non negative Radon measures on $X$ which do not charge pluripolar sets
and whose total mass equals $\rm{vol}(\alpha)$.
\medskip
\noindent {\bf PROPOSITION A'.}
{\it
If the measures $\mu_j=f_j \nu$ are all absolutely continuous with respect to a fixed non pluripolar measure $\nu$ and
$f_j \rightarrow f$ in $L^1(\nu)$, then
$$
T_{\mu_j} \rightarrow T_{\mu}
\text{ in the weak sense of currents},
$$
where $\mu=f \nu$.
}
\medskip
This result can be seen as a generalization of a local result of Cegrell-Kolodziej \cite{CK06}, who
required the densities $f_j$ to be uniformly bounded.
\medskip
\noindent {\it Proof.} We let $\varphi_j,\varphi$ denote the normalized Monge-Amp\`ere potentials,
$$
\mu_j=(\theta+dd^c \varphi_j)^n, \,
\mu=(\theta+dd^c \varphi)^n,
\text{ with } \sup_X \varphi_j=\sup_X \varphi=0.
$$
We assume that
$
\mu_j=f_j \nu, \, \mu=f \nu,
$
where $\nu$ vanishes on pluripolar sets and $f_j \rightarrow f$ in $L^1(\nu)$, and we are going to show that in this case
$(\varphi_j)$ converges in $L^1(X)$ towards $\varphi$.
By weak compactness, we can assume (up to extracting a subsequence) that $\varphi_j \rightarrow \psi \in {PSH}(X,\theta)$, with $\sup_X \psi=0$.
Extracting again, we can also assume that there exists $g \in L^1(\nu)$ such that
$$
f_j \leq g \text{ for all } j \in \mathbb{N}.
$$
Since the measure $g \nu$ does not charge pluripolar sets, it follows from \cite[Proposition 3.2]{BEGZ}
that there exist $\chi:\mathbb{R} \rightarrow \mathbb{R}$ a convex increasing weight and $C>0$ such that $\chi(-\infty)=-\infty$ and
for all $j \in \mathbb{N}$,
$$
\int (-\chi) (\varphi_j-V_{\theta}) g d\nu \leq C.
$$
This shows that
$$
\int (-\chi) (\varphi_j-V_{\theta}) MA(\varphi_j) \leq C,
$$
hence \cite[Proposition 2.10]{BEGZ} ensures that $\psi \in {\mathcal E}_{\chi}(X,\theta)$.
The functions $\psi_j:=(\sup_{l \geq j} \varphi_l)^* \in {PSH}(X,\theta)$ decrease to $\psi$ and satisfy
$$
MA(\psi_j) \geq (\inf_{l \geq j} f_l) \nu.
$$
We infer $MA(\psi) \geq \mu=f \nu$, whence equality since these measures have the same mass $\rm{vol}(\alpha)$.
This shows that $MA(\psi)=MA(\varphi)$, hence these normalized potentials have to be equal, by the uniqueness in \cite[Theorem 3.1]{BEGZ}.
$\Box$
\medskip
We finally observe that Proposition A and Proposition A' are equivalent. Indeed if $\mu_j=f_j\nu$ and $\mu=f \nu$, then
by definition
$$
||\mu_j-\mu||=||f_j-f||_{L^1(\nu)}
$$
so that Proposition A' is a particular case of Proposition A.
Conversely, if $\mu_j,\mu$ are non pluripolar measures of the same mass $\rm{vol}(\alpha)$ such that
$||\mu_j-\mu|| \rightarrow 0$, then
$$
\nu:=\mu+\sum_{j \geq 0} 2^{-j} \mu_j
$$
is a well defined non pluripolar Radon measure with respect to which $\mu_j,\mu$ are absolutely continuous, thus the
hypotheses of Proposition A' are satisfied.
\section{Stability in energy} \label{sec:energy}
\subsection{Case of a K\"ahler class}
Our starting point is the following result which is a refinement of \cite[Lemma 3.12]{BBGZ}:
\begin{lem} \label{lem:BBGZ}
There exists $\kappa_n > 0$ such that if
$0 \geq \varphi_1, \varphi_2, \psi_1, \psi_2 \in \mathcal E^1 (X,\theta)$ satisfy $E (\varphi_i) \geq - B$, $E (\psi_i) \geq - B$, then
\begin{equation}
\label{eq:P0}
\left\vert \int (\varphi_1 - \varphi_2) (\mathrm{MA} (\psi_1) - \mathrm{MA} (\psi_2)) \right\vert \leq \kappa_n B_+^{2} I (\varphi_1,\varphi_2)^{ 2^{-n}} I (\psi_1,\psi_2)^{2^{ - n}}
\end{equation}
and
\begin{equation}
\label{eq:P1}
\int d (\varphi_1 - \varphi_2) \wedge d^c (\varphi_1 - \varphi_2) \wedge T_{n - 1} \leq \kappa_n B_+^{2} I (\varphi_1,\varphi_2)^{2^{-(n-1)}},
\end{equation}
where $B_+ :=\max(1,B)$ and
$$
T_{n-1}:= \sum_{j = 0}^{n - 1} (\theta+dd^c\psi_1)^j\wedge(\theta+dd^c\psi_2)^{n-1-j}.
$$
\end{lem}
A particular case of the second inequality was obtained in \cite{GZ07} when $\alpha$ is a K\"ahler class (see also \cite{Bl03} for bounded functions).
\begin{proof}
Observe that the first inequality follows from the second one, using Stokes' formula and the Cauchy-Schwarz inequality.
We also note that it suffices to establish (\ref{eq:P1}) when $\psi_1=\psi_2=:\psi$; the general case follows
by considering $\psi=(\psi_1+\psi_2)/2$.
Set $u:=\varphi_1-\varphi_2$, $v:=(\varphi_1+\varphi_2)/2$ and for each $p=0,...,n-1$,
$$
b_p:=\int_X du\wedge d^c u\wedge\theta_v^{p}\wedge\theta_{\psi}^{n-p-1},
$$
where $\theta_v:=\theta+dd^c v$.
Our goal is to bound $b_0$ from above, since
$$
b_0=\frac{1}{n} \int d(\varphi_1-\varphi_2) \wedge d^c (\varphi_1-\varphi_2) \wedge T_{n-1},
$$
as $\psi=\psi_1=\psi_2$.
Using Stokes theorem we obtain
\begin{eqnarray*}
b_p &=& \int_X du\wedge d^c u\wedge\theta_v^{p+1}\wedge\theta_{\psi}^{n-p-2}
+\int_X du\wedge d^c u\wedge dd^c(\psi-v)\wedge\theta_v^p\wedge\theta_\psi^{n-p-2} \\
&=& b_{p+1}-\int_X du\wedge d^c(\psi-v)\wedge dd^c u\wedge\theta_v^p\wedge\theta_{\psi}^{n-p-2} \\
& = & b_{p+1}-\int_X du\wedge d^c(\psi-v)\wedge\theta_{\varphi_1}\wedge\theta_v^p\wedge\theta_{\psi}^{n-p-2} \\
& +& \int_X du\wedge d^c(\psi-v)\wedge\theta_{\varphi_2}\wedge\theta_v^p\wedge\theta_{\psi}^{n-p-2},
\end{eqnarray*}
where we have used $dd^c u = \theta_{\varphi_1}- \theta_{\varphi_2}$.
Recall that $\theta_{\varphi_i}\le 2\theta_v$, hence Cauchy-Schwarz inequality and (\ref{equ:I}) yield
$$
\left|\int_X du\wedge d^c(\psi-v)\wedge\theta_{\varphi_i}\wedge\theta_v^p
\wedge\theta_{\psi}^{n-p-2}\right|
\le 2 b_{p+1}^{1/2}I(\psi,v)^{1/2}.
$$
It follows from \cite[Lemma 2.7]{BBGZ} that $I(\psi,v) \leq a_n B_+$, where $a_n > 1$ is a uniform constant, thus
\begin{equation}\label{equ:boundb}
b_p \le b_{p+1}+ 2 (a_n B_+)^{1 \slash 2} \sqrt{b_{p+1}}=h(b_{p+1}),
\end{equation}
where $h (t) := t + 2 (a_n B_+)^{1 \slash 2} \sqrt{t},$ for $t \geq 0$, is monotone increasing in $t$. Thus
$$
b_0 \leq h^{n-1} (b_{n-1}) \leq h^{n-1} (I(\varphi_1,\varphi_2)),
$$
since
$$
b_{n-1} \leq \sum_{j=0}^{n-1} \int du \wedge d^c u \wedge \theta_{\varphi_1}^j \wedge \theta_{\varphi_2}^{n-1-j}=I(\varphi_1,\varphi_2).
$$
Here $h^{n-1}:=h \circ \cdots \circ h$ denotes the $(n-1)^{th}$-iterate of the function $h$.
Observe that $h (t) \leq C_1 \sqrt{t}$ for $0 \leq t\leq 1$, where $C_1 := 1 + 2 (a_n B_+)^{1 \slash 2}$.
We infer that if $0 \leq t \leq C_1^{-2^n}$ then $h^{n-1} (t) \leq C_1^2 t^{2^{-(n-1)}}$. Therefore
$$
b_0 \leq C_1^2 I(\varphi_1,\varphi_2)^{2^{-(n-1)}}
\text{ if } I(\varphi_1,\varphi_2) \leq C_1^{-2^n}.
$$
When $I(\varphi_1,\varphi_2)$ is relatively large, i.e. when $I(\varphi_1,\varphi_2)> C_1^{-2^n}$, we use
\cite[Lemma 2.7]{BBGZ} again to bound $b_0$ from above by $a_n B_+$, thus obtaining
$$
b_0 \leq a_n B_+ C_1^{2} I(\varphi_1,\varphi_2)^{2^{-(n-1)}}.
$$
In both cases we have thus bounded $b_0$ from above by $\kappa_n B_+^{2}\, I(\varphi_1,\varphi_2)^{2^{-(n-1)}}$ (note that $C_1^2 \leq 9\, a_n B_+$), which proves (\ref{eq:P1}).
\end{proof}
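The iteration bound $h^{n-1}(t)\le C_1^2\,t^{2^{-(n-1)}}$ for $0\le t\le C_1^{-2^n}$ used in the proof can be sanity-checked numerically; here is a short sketch of ours in plain Python (the values of $n$ and $a_nB_+$ below are arbitrary):
\begin{verbatim}
import math

def check_iteration(n=4, anB=2.5, samples=200):
    C1 = 1 + 2*math.sqrt(anB)          # C_1 = 1 + 2 (a_n B_+)^{1/2}
    h = lambda t: t + 2*math.sqrt(anB)*math.sqrt(t)
    for k in range(1, samples + 1):
        t = (k/samples)*C1**(-2**n)    # 0 < t <= C_1^{-2^n}
        v = t
        for _ in range(n - 1):         # v = h^{n-1}(t)
            v = h(v)
        assert v <= C1**2 * t**(2.0**-(n - 1)) + 1e-12

check_iteration()
\end{verbatim}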
When the underlying cohomology class $\alpha$ is K\"ahler, one can use the classical Poincar\'e inequality to deduce from Lemma \ref{lem:BBGZ}
a quantitative stability inequality.
Indeed assume that $\theta = \omega$ is a K\"ahler form on $X$ and, for simplicity, that $MA(\varphi_i)=f_i \omega^n$ are
absolutely continuous with respect to Lebesgue measure, with $L^2$-densities.
We can apply the inequality (\ref{eq:P1}) with $\psi_1=\psi_2=0$ and
obtain a gradient estimate in terms of the energy deviation: for any $\varphi_1, \varphi_2 \in \mathcal{E}^1 (X,\omega)$ satisfying
$E (\varphi_i) \geq - B$,
$$
\int_X d (\varphi_1 - \varphi_2) \wedge d^c (\varphi_1 - \varphi_2) \wedge \omega^ {n - 1} \leq \kappa_n B_+^2 I (\varphi_1,\varphi_2)^{1 \slash 2^{n-1}},
$$
where
$$
I(\varphi_1,\varphi_2)=\int (\varphi_1-\varphi_2)(f_2-f_1) \omega^n \leq ||\varphi_1-\varphi_2||_{L^2} ||f_1-f_2||_{L^2}
$$
if $MA(\varphi_i)=f_i \omega^n$ have $L^2$-densities.
We normalize the potentials $\varphi_i$ so that $\sup_X \varphi_1=\sup_X \varphi_2=0$. It follows then from elementary arguments
(see \cite{GZ07}) that the energies of the $\varphi_i$'s are uniformly bounded from below, since
$$
\int (-\varphi_i) MA(\varphi_i)=\int (-\varphi_i) f_i \omega^n \leq ||\varphi_i||_{L^2} ||f_i||_{L^2},
$$
while Poincar\'e's inequality yields
$$
\Vert \varphi_1 - \varphi_2\Vert_{L^2 (X)}^2 \leq \delta_n \int_X d (\varphi_1 - \varphi_2) \wedge d^c (\varphi_1 - \varphi_2) \wedge \omega^ {n - 1},
$$
for some uniform constant $\delta_n > 0$. We have thus proved the following stability property:
\begin{prop}
Let $(X,\omega)$ be a compact K\"ahler manifold. Let $\varphi_1,\varphi_2 \in {\mathcal E}^1(X,\omega)$ be solutions of
$(\omega+dd^c \varphi_i)^n=f_i \omega^n$, where $\int_X f_i \omega^n=\int _X \omega^n$, $f_i \in L^2(X)$ and $\int (\varphi_1-\varphi_2) \omega^n=0$. Then
$$
\Vert \varphi_1 - \varphi_2\Vert_{L^2 (X)} \leq C ||f_1-f_2||^{1/{(2^n -1)}}_{L^2(X)},
$$
where $C>0$ is a uniform constant.
\end{prop}
This result can be seen as a quantitative version of Proposition A' when $\nu=\omega^n$.
Its purpose is to illustrate, in a simple setting, how Lemma \ref{lem:BBGZ} can be used to obtain quantitative
stability properties.
As we shall see in the sequel, similar inequalities will continue to hold in more general contexts.
\subsection{The general case}
We now go back to our original situation, when the cohomology class $\{\theta\} \in H^{1,1}(X,\mathbb{R})$ is merely big.
We start by establishing an important particular case of Theorem B:
\begin{prop} \label{thm:general}
There exists $C> 0$ such that for every $0 \geq \varphi, \psi \in \mathcal E^1 (X,\theta)$ normalized by $\sup_X \varphi=\sup_X \psi$,
$$
\Vert \varphi - \psi \Vert_{L^1(X)} \leq C \cdot B^2 \cdot I (\varphi,\psi)^{1 \slash 2^{n}},
$$
where $B := \max \{1, \vert E (\varphi)\vert, \vert E (\psi)\vert \}$.
\end{prop}
\begin{proof}
In the following we let $\nu:=\omega^n$, normalized so that $\nu(X)=\int_X \omega^n=\rm{vol}(\alpha)$.
If $\varphi \equiv \psi$ there is nothing to prove, so we assume in the sequel that $\varphi \neq \psi$. Reversing the roles of $\varphi,\psi$,
we can assume that $\nu(\varphi<\psi)>0$.
Set $Q_t:=\{x \in X \, | \, \varphi(x) >\psi(x) -t\}$. We can find arbitrarily small $t>0$ such that $\nu(Q_t) <\rm{vol}(\alpha)$, otherwise
$\varphi \geq \psi$ on $X$. Observe also that $\nu(Q_t)>0$ for all $t>0$, otherwise $\varphi \leq \psi-t$ contradicting
our normalizing assumption, thus for arbitrarily small $t>0$,
$$
0< a:=\frac{\nu(Q_t)}{\rm{vol}(\alpha)} < 1.
$$
We also set $b:=1-a=\nu(X \setminus Q_t) / \rm{vol}(\alpha) \in ]0,1[$ and decompose
$$
||\varphi-\psi||_{L^1(\nu)}=\int_{Q_t} (\varphi-\psi) d\nu+\int_{X \setminus Q_t} (\psi-\varphi) d\nu+O(t).
$$
We are going to bound from above each of these integrals by establishing estimates that are independent of $t$ and
then let $t$ decrease to zero.
It follows from \cite{BEGZ} that there exist uniquely determined functions $u,v \in {PSH}(X,\theta)$ with minimal singularities such that
$$
MA(u)=a^{-1} \bold{1}_{Q_t} \, \nu, \,
MA(v)=b^{-1} \bold{1}_{X \setminus Q_t} \, \nu
\text{ and }
\sup_X u=\sup_X v=0.
$$
We also set
$$
U:=a^{1/n} u+(1-a^{1/n}) V_{\theta}
\text{ and }
V:=b^{1/n} v+(1-b^{1/n}) V_{\theta}.
$$
Observe that $U,V \in {PSH}(X,\theta)$ again have minimal singularities and are still normalized
by $\sup_X U=\sup_X V=0$ (by the tautological maximum principle). Moreover, since $\theta+dd^c U = a^{1/n}(\theta+dd^c u) + (1-a^{1/n})(\theta+dd^c V_{\theta})$ and the mixed terms in the resulting $n$-fold product are non-negative,
$$
MA(U) \geq a MA(u) \text{ while } MA(V) \geq b MA(v),
$$
therefore
$$
a \int (\varphi-\psi) MA(u)+b\int (\psi-\varphi) MA(v) \leq \int (\varphi-\psi) (MA(U)-MA(V)).
$$
It follows from Lemma \ref{lem:BBGZ} that the latter is bounded from above by
$$
\kappa_n B^2 I(\varphi,\psi)^{2^{-n}} I(U,V)^{2^{-n}},
$$
where $B=\max(1,-E(\varphi),-E(\psi),-E(U),-E(V))$.
Since $I(U,V)$ is controlled from above if we can bound from below the energies of $U$ and $V$
(see \cite[Lemma 2.7]{BBGZ}), it remains to estimate the latter.
This is in principle very easy, as $U$ and $V$ have minimal singularities, however we want to make clear that
the corresponding bounds are independent of $t$ (i.e. independent of $a$ and $b$). Since
$MA(u)=g \omega^n$ has density in $L^2$ (even in $L^{\infty}$), it follows from \cite[Theorem 4.1]{BEGZ} that
$$
||u-V_{\theta}||_{L^{\infty}(X)} \leq c ||g||_{L^2}^{1/n} \leq c' a^{-1/n},
$$
since $g=a^{-1} \bold{1}_{Q_t} $. Therefore
$$
||U-V_{\theta}||_{L^{\infty} (X)}=a^{1/n} ||u-V_{\theta}||_{L^{\infty}(X)} \leq c''.
$$
We similarly get a uniform bound from above on $||V-V_{\theta}||_{L^{\infty} (X)}$.
Therefore
$$
-c''' \leq E(U),E(V) \leq 0,
$$
and the proof is complete.
\end{proof}
We observe the following easy consequence of the previous estimates:
\begin{lem} \label{lem:corol}
There exists $C_n> 0$ such that for any $0 \geq \varphi, \psi , u \in \mathcal E^1 (X,\theta)$ normalized by $\sup_X \varphi=\sup_X \psi$,
$$
\int_X (\varphi - \psi) \mathrm{MA} (u) \leq C_n \cdot B^2 \cdot I (\varphi,\psi)^{1 \slash 2^{n}},
$$
where $B:=\max \{1, \vert E (\varphi)\vert, \vert E (\psi)\vert, \vert E (u)\vert \}$.
\end{lem}
\begin{proof}
We decompose
$$
\int_X (\varphi - \psi) \mathrm{MA} (u) = \int_X (\varphi - \psi) (\mathrm{MA} (u) - \mathrm{MA} (V_{\theta})) + \int_X (\varphi - \psi) \mathrm{MA} (V_{\theta})
$$
and observe that Lemma \ref{lem:BBGZ} allows us to bound the first term from above, while the second one is controlled by Proposition \ref{thm:general}, since
$\mathrm{MA} (V_{\theta})$ has a bounded density with respect to $\omega^n$ by Theorem \ref{thm:c11}.
\end{proof}
We can now prove Theorem B:
\begin{thm} \label{thm:B}
There exists $C_n > 0$ such that if $0 \geq \psi, \varphi_1, \varphi_2 \in \mathcal E^1 (X,\theta)$ are normalized by $\sup_X \varphi_1 = \sup_X \varphi_2$, then
$$
\int_X \vert \varphi_1 - \varphi_2 \vert \mathrm{MA} (\psi) \leq C_n \cdot B^2 \cdot I (\varphi_1,\varphi_2)^{2^{-n}},
$$
where $B=\max \{1, \vert E (\varphi_1)\vert,\vert E (\varphi_2)\vert, \vert E (\psi)\vert \}$.
\end{thm}
\begin{proof}
Set $\varphi := \sup \{\varphi_1,\varphi_2\}$. Observe that $\sup_X \varphi=\sup_X \varphi_1=\sup_X \varphi_2$
and $\vert \varphi_1 - \varphi_2\vert = 2 (\varphi - \varphi_1) - (\varphi_2 - \varphi_1)$, thus
$$
\int_X \vert \varphi_1 - \varphi_2\vert \mathrm{MA} (\psi) = 2 \int_X (\varphi - \varphi_1) \mathrm{MA} (\psi) - \int_X (\varphi_2 - \varphi_1) \mathrm{MA} (\psi).
$$
The second term on the right hand side is bounded from above by the desired quantity thanks to Lemma \ref{lem:corol}.
We estimate the first one by using the same lemma, obtaining
$$
\int_X (\varphi - \varphi_1) \mathrm{MA} (\psi) \leq C_n \cdot D^2 \cdot I (\varphi,\varphi_1)^{1 \slash 2^{n}},
$$
where $D:= \max \{1, \vert E (\varphi)\vert, \vert E (\varphi_1)\vert, \vert E (\psi)\vert \}$.
Now $\vert E (\varphi) \vert \leq \vert E (\varphi_1) \vert$, since $0 \geq \varphi \geq \varphi_1$.
It therefore suffices to show that $ I (\varphi,\varphi_1) \leq I (\varphi_2,\varphi_1)$.
Recall that
$$
I (\varphi,\varphi_1) = \int_X (\varphi - \varphi_1) (\mathrm{MA} (\varphi_1) - \mathrm{MA} (\varphi))
$$
and observe that $\mathrm{MA}(\varphi) = \mathrm{MA}(\varphi_1)$ on the plurifine open set $\{\varphi_1 > \varphi_2\}$ (see \cite{BT87,GZ05,BEGZ}).
Thus the measure $\mathrm{MA} (\varphi_1) - \mathrm{MA} (\varphi)$ is carried by the Borel set $\{\varphi_2 \geq \varphi_1\}$ where
$\varphi - \varphi_1 = \varphi_2 - \varphi_1$.
Therefore
$$
I (\varphi,\varphi_1) = \int_X (\varphi_2 - \varphi_1) (\mathrm{MA} (\varphi_1) - \mathrm{MA} (\varphi)).
$$
In the same way we get
$$
I (\varphi,\varphi_2) = \int_X (\varphi_1 - \varphi_2) (\mathrm{MA} (\varphi_2) - \mathrm{MA} (\varphi)).
$$
Adding the two identities (the $\mathrm{MA}(\varphi)$ terms cancel) yields
$$
I (\varphi,\varphi_1) + I (\varphi,\varphi_2) = I (\varphi_1,\varphi_2),
$$
hence $I (\varphi,\varphi_1) \leq I (\varphi_1,\varphi_2)$.
\end{proof}
\begin{rem}
We let the reader verify that Proposition \ref{thm:general} is a particular case of Theorem \ref{thm:B}.
The latter has the following interesting consequence: if we let $\psi$ be any $\theta$-psh function such that
$V_{\theta}-1 \leq \psi\leq V_{\theta}$, then Chebyshev's inequality, together with Theorem \ref{thm:B}, shows that for all $\varepsilon>0$,
$$
\mathrm{Cap}(\{|\varphi_1-\varphi_2|>\varepsilon\}) \leq \frac{C_n}{\varepsilon} B^2 I(\varphi_1,\varphi_2)^{2^{-n}}.
$$
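In brief, if one takes $\mathrm{Cap}(K) := \sup\{\int_K \mathrm{MA}(\psi) \, : \, V_{\theta}-1 \leq \psi \leq V_{\theta}\}$ (our reading of the capacity used here), the estimate follows by applying the pointwise bound $\mathbf{1}_{\{|\varphi_1-\varphi_2|>\varepsilon\}} \leq |\varphi_1-\varphi_2| \slash \varepsilon$ under each $\mathrm{MA}(\psi)$ and then invoking Theorem \ref{thm:B}.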
This yields a quantitative estimate on how ``convergence in energy'' implies ``convergence in capacity''.
\end{rem}
\section{Strong stability} \label{sec:strong}
Let $\mu = f_{\mu} \omega^n$ be a non-negative Radon measure which is absolutely continuous with respect
to a fixed volume form $\omega^n$, with density in $L^p$ for some $p>1$. When
$\mu(X)=\mathrm{vol}(\alpha)$, it has been shown in \cite{BEGZ} that the complex Monge-Amp\`ere equation
$$
\langle (\theta + dd^c \varphi_{\mu})^n \rangle = \mu=f_{\mu} \omega^n,
$$
has a unique solution $\varphi_{\mu} \in {PSH}(X,\theta)$ with minimal singularities such that $\sup_X \varphi_{\mu} = 0$. This is
a generalization to the case of big cohomology classes of a celebrated result of Kolodziej \cite{Kol98}
(which itself generalized Yau's ${\mathcal C}^0$ a priori estimate \cite{Yau78}).
In this section we prove Theorem C of the introduction, establishing a quantitative continuity property of the mapping $f_{\mu} \mapsto \varphi_{\mu}$. Since measures with $L^p$ densities, $p>1$, satisfy
conditions ${\mathcal H}(\beta)$ for all $\beta>0$, Theorem C is actually a consequence of the following
more general result:
\begin{thm} \label{thm:strong}
Fix $\beta>0$ and assume $\mu,\nu$ are non-negative Radon measures which satisfy the condition
${\mathcal H}(\beta)$ and are normalized so that
$$
\mu(X)=\nu(X)=\mathrm{vol}(\alpha).
$$
Let $\varphi_{\mu},\varphi_{\nu}$ be their normalized Monge-Amp\`ere potentials. Then
$$
||\varphi_{\mu}-\varphi_{\nu}||_{L^{\infty}(X)} \leq M_{\tau} ||\mu-\nu||^{\tau},
$$
where $\tau=\gamma/(2^n-\gamma)$ with $\gamma := \beta/[n+\beta(n+1)]$.
\end{thm}
When $\alpha$ is a K\"ahler class, Theorem C is due to Kolodziej \cite{Kol03} who obtained a better exponent $\tau$ (see \cite{DZ}
for a sharp improvement of the exponent).
\smallskip
We need the following refinement of a statement proved in \cite{EGZ} in the context of big and semi-positive cohomology classes:
\begin{prop}
Let $\nu$ be a non-negative Radon measure which satisfies the condition $\mathcal H (\infty)$.
Let $\mu = f \nu$, where $0 \leq f \in L^p (X, \nu)$ with $p > 1$ and $\mu(X)=\mathrm{vol}(\alpha)$.
Fix $\varphi,\psi \in {PSH}(X,\theta)$ such that $\sup_X \varphi=\sup_X \psi$ and $\mathrm{MA} (\varphi) = \mu$.
Then for any $0 < \gamma < \frac{1}{n q + 1}$,
$$
\sup_X (\psi - \varphi)_+ \leq M \Vert(\psi - \varphi)_+ \Vert_{L^1 (X, \nu)}^{\gamma},
$$
where $M > 0$ only depends on $\gamma$ and a bound on the $L^p$-norm of $f$.
\end{prop}
Here $u_+=\max(u,0)$ denotes as usual the maximum of $u$ and $0$.
\smallskip
Let us stress that this relatively technical statement has interesting applications (see e.g \cite{DDGHKZ} where
it is used to establish H\"older-continuity properties of Monge-Amp\`ere potentials).
It is an immediate consequence of the following slightly more general (and more technical) result:
\begin{prop} \label{pro:EGZ}
Let $\varphi,\psi$ be $\theta$-plurisubharmonic functions such that
$$
- M_0 + V_{\theta} \leq \sup \{\varphi , \psi\} \leq V_{\theta},
$$
for some $M_0 >0$. Assume that $\mu:=(\theta+dd^c \varphi)^n$ satisfies the condition
$\mathcal H (\beta)$ for some $\beta>0$.
Then there exists $A_0 = A_0(\beta,M_0)$ such that for any $r > 0$ we have
$$
\sup_X (\psi - \varphi)_+ \leq A_0 \Vert(\psi - \varphi)_+ \Vert_{L^r (\mu)}^{\gamma}
\text{ with }
\gamma = \frac{\beta r }{n + \beta (n + r)}.
$$
Moreover, if $\mu = f \nu$, where $\nu$ is a Borel measure and $f \in L^{p} (\nu)$, $p > 1$, then there exists $0< A_1 = A_1(\beta,M_0,p)$ such that
$$
\sup_X (\psi - \varphi)_+ \leq A_1 \Vert f\Vert_{L^p (\nu)}^{\gamma q} \Vert(\psi - \varphi)_+ \Vert_{L^1 (\nu)}^{\gamma'},
\text{ with }
\gamma' = \frac{\beta}{q n + \beta (n q + 1)},
$$
where $1/p+1/q=1$ and $(\psi-\varphi)_+:=\max(\psi-\varphi,0)$.
\end{prop}
\smallskip
Although the proof is very close to that of Propositions 2.6 and 3.1 in \cite{EGZ}, we briefly
sketch it for the convenience of the reader.
\begin{proof}
Observe first that $ (\psi - \varphi)_+ = \sup \{\varphi,\psi\} - \varphi$ on $X$. So up to replacing $\psi$ by $\sup \{\varphi,\psi\}$, we can assume that $\psi \geq \varphi$ and $\psi$ satisfies the condition
$ -M_0 + V_{\theta} \leq \psi \leq V_{\theta}$ on $X$.
Using the "big" comparison principle from \cite{BEGZ} and arguing exactly as in Proposition 2.6 in \cite{EGZ}, we conclude that
there is a constant $B_0 > 0$ such that for any $\varepsilon \in ]0,1]$
$$
\sup_X (\psi - \varphi) \leq \varepsilon + B_0 \left(\mathrm{Cap} (\{\psi - \varphi > \varepsilon\})\right)^{\beta \slash n}.
$$
The proof of \cite[Proposition 2.6]{EGZ} (cf. equation (3), p. 616) shows that
$$
\varepsilon^n \mathrm{Cap} (\{\psi - \varphi > \varepsilon\}) \leq (1 + M_0)^n \int_{ \{\psi - \varphi > \varepsilon\slash 2\} } d \mu.
$$
Chebyshev's inequality then yields
$$
\mathrm{Cap} (\{\psi - \varphi > \varepsilon\}) \leq 2^r \varepsilon^{- (n + r)} (1 + M_0)^n \int_X (\psi - \varphi)_+^r \, d \mu,
$$
for $r>0$ fixed.
Therefore
$$
\sup_X (\psi - \varphi) \leq \varepsilon + B_0 2^{\beta r \slash n} (1 + M_0)^{\beta} \varepsilon^{- \beta (n + r) \slash n} \left(\int_X (\psi - \varphi)_+^r \, d \mu \right)^{\beta \slash n}.
$$
Choosing $\varepsilon := (\Vert\psi - \varphi\Vert_{L^r (\mu)} \slash N)^{\gamma},$ where $N$ is an upper bound on $\psi - \varphi$ and $\gamma$ is
as in the statement of the proposition yields the desired inequality.
Now if $\mu = f \nu,$ where $f \in L^p (\nu)$ with $p>1$, H\"older's inequality yields
$$
\int_X (\psi - \varphi)_+^{r} \, d \mu \leq \Vert f \Vert_{L^p (\nu)} \left(\int_X (\psi - \varphi)_+^{r q} \, d \nu \right)^{1 \slash q}.
$$
The conclusion follows by taking $r := 1 \slash q$.
\end{proof}
\medskip
\noindent {\bf Proof of Theorem \ref{thm:strong}.}
Since $\varphi=\varphi_{\mu}$ and $\psi=\varphi_{\nu}$ have minimal singularities, $\varphi - \psi$ is bounded hence
\begin{eqnarray*}
I(\varphi,\psi) & = & \int_X(\varphi-\psi)(\mathrm{MA}(\psi)-\mathrm{MA}(\varphi)) = \int_X (\varphi - \psi) d (\nu - \mu) \\
& \leq & \Vert \varphi - \psi\Vert_{L^{\infty}(X)} \Vert \mu - \nu\Vert
\end{eqnarray*}
It follows from Proposition \ref{pro:EGZ} that
$$
\Vert \varphi - \psi\Vert_{L^{\infty} (X)} \leq C_{\beta} \left[ \Vert \varphi - \psi\Vert_{L^1 (X,\mu)}^{\gamma}
+\Vert \varphi - \psi\Vert_{L^1 (X,\nu)}^{\gamma} \right],
$$
with $\gamma:= \beta/[n+\beta(n+1)]$.
Now Theorem \ref{thm:B} implies
$$
\Vert \varphi - \psi\Vert_{L^{\infty} (X)} \leq C_{\beta}' \left(\Vert \varphi - \psi\Vert_{L^{\infty}} \Vert \mu - \nu\Vert\right)^{\gamma \slash 2^n},
$$
thus, raising both sides to the power $2^n \slash (2^n-\gamma)$ after absorbing $\Vert \varphi - \psi\Vert_{L^{\infty}(X)}^{\gamma \slash 2^n}$ into the left-hand side,
$$
\Vert \varphi - \psi\Vert_{L^{\infty} (X)} \leq C_{\beta}'' \Vert \mu - \nu\Vert^{\tau}
$$
where $\tau := \frac{\gamma}{2^n - \gamma}$.
\hfill $\Box$ |
1112.1025 | \section{Introduction}
\label{sec:Intro}
OH~231.8+4.2 (hereafter OH231) is a nebula hosting an O-rich, late spectral
type (M) central star (the Mira variable QX Pup), with bipolar high-velocity
dust and gas outflows~\citep{2001A&A...373..932A}, filamentary structures observed in
scattered light and in molecular line emission, and a large angular size
(10$^{\prime\prime}$$\times$60$^{\prime\prime}$).
Often labeled as a post-AGB object or pre-planetary nebula, the
presence of both a Mira central star and a main-sequence companion of spectral
type A \citep{2004ApJ...616..519S} seems to contradict this classification. OH231
is more likely a D-type bipolar symbiotic system
\citep[][]{2010PASA...27..129F}. However, in some cases morphological similarities do exist
between post-AGB and symbiotic objects, most strikingly the presence of
highly collimated bipolar nebulae. It is via fast-collimated outflows that these
stars shape their surrounding nebula.
Understanding the development and origin of these fast outflows
is critical for advancing
hydrodynamical models of wind interaction. Recent work by
\citet{2009ApJ...696.1630L} attempting to
reproduce the high velocity molecular emission in AFGL~618 using
collimated fast wind models, emphasises the need for further observations and
model development in this area.
OH231 has been the
subject of many studies spanning multiple wavelength ranges, for example:
\citet{1985ApJ...297..702C} were first to propose the existence of a binary companion;
\citet{2002A&A...389..271B} imaged the
shape of the shocks using H${\alpha}$ (reproduced in
Fig.~\ref{fig:oh231_naco_sin_wfpc2} [a]) detected with the {\em Hubble Space
Telescope (HST)}; \citet{2003ApJ...585..482M} report {\em HST}/NICMOS NIR
images of the dust distribution and hence a high resolution map of the
extinction through the nebula. \citet{2006ApJ...646L.123M} using the MIDI
and NACO instruments on the Very
Large Telescope (VLT) detected a compact circumstellar disc. The envelope of
OH231 is also known to be rich in molecular species (e.g. H$_2$O, OH,
and SiO);
however, previous studies in the NIR have all returned null detections of H$_2$
\citep[e.g.][]{1998ApJ...509..728W,2006ApJ...646L.123M}.
In this Letter, we present the results of preliminary observations of OH231 at
NIR ($K$-band) wavelengths showing for the first time the presence of shock-excited H$_2$.
Throughout this work we assume OH231 is a member of the open cluster M46 at a
distance of 1.3~kpc~\citep{1985ApJ...292..487J}. The origin of the
coordinate system used in Figures~1,3, and 4 is given by the location of the SiO maser emission at
RA=07$^h$42$^m$16$^s$.93,
Dec=-14$^\circ$42$^{\prime}$50$^{\prime\prime}$.2 (J2000)~\citep{2002A&A...385L...1S},
and the inclination angle of the bipolar axis is 36$^\circ$~to the plane
of the sky \citep{1992ApJ...398..552K}.
\section{Observations and Data Reduction}
\label{sec:obs}
The data were taken using the {\sc sinfoni}~\citep{2003SPIE.4841.1548E}
instrument located on UT4 at VLT at Paranal, Chile, on the 1$^{\rm st}$/2$^{\rm nd}$ Feb 2010.
We use the lowest resolution mode (LRM), corresponding to the widest
field-of-view (8$^{\prime\prime}$$\times$8$^{\prime\prime}$) with adaptive optics
(AO) and a natural guide star (NGS). A plate scale
of 250$\times$125 mas pixel$^{-1}$, and a spectral and velocity resolution of
4580 and 66 km~s$^{-1}$, respectively, are available at this resolution
(for a dispersion of 2.45~\AA/pix and line FWHM of 1.96 pixels).
All observations utilised the $K$-band (2.2~$\umu$m) filter which covers many
ro-vibrational H$_2$ emission lines.
The ambient seeing varied from $\sim$~0.6$^{\prime\prime}$ to 1.1$^{\prime\prime}$ during the observations.
The OH231 observations consisted of three fields focused on (1) the edge of
the Northern lobe, (2) the central region, and (3) the middle of the Southern
lobe (labelled N, C, S in Fig.~\ref{fig:oh231_naco_sin_wfpc2} [b]).
No H$_2$ was detected in the Southern field, which will therefore not be discussed further.
Table~\ref{tab:sum_obs} summarises exposures for
each of the three fields. Telluric
standard stars used for calibration are HD~75004 (G0V), and HD~63487 (G2V) for
night one and two, respectively. An average AO-corrected PSF of \simm 340 mas
FWHM is estimated from the standard stars. The data were reduced using the ESO common pipeline library to a
wavelength-calibrated datacube and further analysed using PyRAF\footnote{PyRAF is
a product of the Space Telescope Science Institute, which is operated by AURA
for NASA.}.
Wavelength calibration errors were corrected by comparison of OH emission lines
with a high resolution template. Quoted velocities were adjusted to local standard of
rest (LSR) velocities using 22.88 and 23.30~\kms~corrections for night one and
night two, respectively.
Line maps (Fig.~\ref{fig:oh231_naco_sin_wfpc2} [c,d]) were generated by
fitting the H$_2$ emission lines with a Gaussian profile; initial fit parameters
(FWHM$_0$, central wavelength$_0$, etc.) were determined by manually fitting an
individual H$_2$ line; only lines with FWHM $\approx$ FWHM$_0$ were included in the final
line maps. The signal-to-noise ratio (S/N) of the line maps was enhanced by smoothing the
data with a 2$\times$2 pixel boxcar filter.
Line rest wavelength information is from~\citet{1987ApJ...322..412B}.
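As an illustration of this per-spaxel fitting step, here is a schematic Python sketch (the \texttt{wave} and \texttt{flux} arrays standing in for one spaxel's spectrum are hypothetical):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, amp, mu, sigma, cont):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + cont

# hypothetical spectrum of one spaxel around the 1-0 S(1) line
wave = np.linspace(2.115, 2.130, 60)            # microns
flux = gauss(wave, 1.0, 2.1218, 5e-4, 0.1) \
       + 0.01 * np.random.randn(60)

p0 = (1.0, 2.1218, 5e-4, 0.0)          # initial fit parameters
popt, _ = curve_fit(gauss, wave, flux, p0=p0)
fwhm = 2.3548 * abs(popt[2])           # FWHM = 2 sqrt(2 ln 2) sigma
# a spaxel enters the line map only if fwhm ~ FWHM_0, as in the text
\end{verbatim}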
\begin{table}
\caption{Details of the VLT/{\sc sinfoni} observations for OH231 and
the telluric standard stars taken on 1$^{\rm st}$/ 2$^{\rm nd}$ February 2010.}
\begin{threeparttable}
\begin{tabular}{@{}l@{}cc@{}cc@{}ccc}
\hline
\hline
&\multicolumn{4}{c}{OH231} & &\multicolumn{2}{c}{HD}\\ \cline{2-5}\cline{7-8}\noalign{\smallskip}
& South &\multicolumn{2}{c}{Central} & North&&75004 & 63487 \\
\hline
Tot. Exp. (secs) & 600 & 600 & 480 & 800 && 1 & 10 \\
K (mag) & -- & -- & -- & -- && 7.268$^{\dagger}$ & 7.674$^{\dagger}$ \\
Night & 1 & 1 & 2 & 1 && 1 & 2 \\
\hline
\end{tabular}
\label{tab:sum_obs}
\begin{tablenotes}
\item[$\dagger$] 2MASS magnitude from SIMBAD.
\end{tablenotes}
\end{threeparttable}
\end{table}
\section{Results}
\label{sec:h2emis}
\begin{figure*}
\begin{center}
\includegraphics[width=0.87\textwidth,height=0.6\textwidth,angle=0]{oh_wfpc2_naco_sinfo.eps}
\caption{Narrow-band images and line maps for OH231; a star symbol indicates the
SiO maser position (see text), inset labels display the
telescope/instrument information. $(a)$ Continuum-subtracted~\halpha~image
with the \ozso H$_2$ contours overlaid in white, $(b)$ 2.12 $\umu$m image,
showing mainly scattered light,
with H$_2$ contours overlaid, {\sc sinfoni} North, Central, and South (labeled N, C, S) fields are marked by green squares, inset compass shows orientation,
$(c)$ \ozso H$_2$ line map showing the extent of the emission located
along the edge of the Northern lobe, two knots (A, B) of H$_2$
emission are indicated by arrows; superposed contours indicate the location of
the \halpha~emission,
$(d)$ \ozso H$_2$ line map for the central region showing the
position/extent of the region used for \rto~calculation (ellipse). Colour bar units are in W m$^{-2}$ px$^{-1}$.
The \halpha~data, from the Hubble Legacy Archive (see
Acknowledgments), has been continuum-subtracted using the procedure outlined in
\citet{2002A&A...389..271B}. The 2.12 $\umu$m data was first published in
\citet{2006ApJ...646L.123M} and has been reprocessed using the ESO pipeline.}
\label{fig:oh231_naco_sin_wfpc2}
\end{center}
\end{figure*}
We report the detection of several H$_2$ emission lines arising from both
the centre and Northern lobe of OH231.
In Figure~\ref{fig:oh231_naco_sin_wfpc2} (c,d) we present continuum-subtracted
line maps of the \oozso~
transition for both fields, clearly showing H$_2$ emission
arising from the centre of OH231 (Fig.~\ref{fig:oh231_naco_sin_wfpc2} [d]) and
from knots of material in the Northern
lobe (Fig.~\ref{fig:oh231_naco_sin_wfpc2} [c]).
In Fig.~\ref{fig:oh231_naco_sin_wfpc2} (a) the
contours of the detected H$_2$ are shown in relation to \halpha~emission, while
Fig.~\ref{fig:oh231_naco_sin_wfpc2} (b)
gives the location of the H$_2$ relative to the strong continuum emission as
imaged with a 2.12 $\umu$m filter.
An integrated spectrum of the Western region of the centrally located H$_2$ is shown in
Figure~\ref{fig:oh231_spec}, from which we note,
\begin{inparaenum}[a)]
\item{several S- and Q-branch ro-vibrational H$_2$ lines, and }
\item{a CO bandhead absorption feature (\simm 2.3 $\umu$m).}
\end{inparaenum}
Channel maps, extracted from the {\sc sinfoni} datacube, are presented in Figures~\ref{fig:oh231_ch_maps_cen} and
\ref{fig:oh231_ch_maps} for the central and Northern regions respectively, showing how the distribution of the
H$_2$ changes across the line profile.
\begin{table}
\caption{Line fluxes, $F$, (in units of 10$^{-19}$ W m$^{-2}$) of the observed
H$_2$ lines for the central and
Northern fields. Measured peak line wavelengths (in $\umu$m) are given for the central field.}
\begin{threeparttable}
\begin{tabular}{lll@{ $\pm$ }llll}
\hline
\hline
& \multicolumn{2}{c}{Central} &&&& \multicolumn{1}{c}{Northern} \\ \cline{2-4}\cline{6-7}
\multicolumn{1}{c}{Line}&\multicolumn{1}{c}{$\lambda_{\rm meas}$}& \multicolumn{2}{c}{$F$} &&\multicolumn{2}{c}{$F$} \\
\hline
\ozst & 1.9579 & 67.2&14.2 &&& \multicolumn{1}{c}{11.5 $\pm$ 1.10} \\
\ozstwo & 2.0342 & 9.71&3.36$^{\diamond}$ &&& \multicolumn{1}{c}{---} \\
\ozso & 2.1222 & 91.5&6.16 &&& 9.69 $\pm$ 0.76 \\
\toso & 2.2481 & 7.73&3.38$^{\ddagger}$ &&& \multicolumn{1}{c}{---} \\
\ozqo & 2.4071 & 46.4&14.5 &&& \multicolumn{1}{c}{11.9 $\pm$ 2.95} \\
\ozqt & 2.4241 & 42.2&24.0 &&& \multicolumn{1}{c}{13.3 $\pm$ 4.42} \\
\hline
\end{tabular}
\begin{tablenotes}
\item [$\ddagger$] Flux measured for Western H$_2$ region only.
\item [$\diamond$] Measurement confined to elliptical region
(Fig.~\ref{fig:oh231_naco_sin_wfpc2} [d]).
\end{tablenotes}
\label{tab:h2_fluxes}
\end{threeparttable}
\end{table}
Table~\ref{tab:h2_fluxes} presents the flux measurements for the central and
Northern fields, for lines with flux errors less than \simm 50 per cent.
Accurate \tost, \ozsoo, and \ozqtwo line flux measurements are not
possible due to the
presence of strong sky subtraction residuals at these wavelengths.
We confine the calculation of the \rto~ratio for the central field to
the area marked by the ellipse in Fig.~\ref{fig:oh231_naco_sin_wfpc2} (d), where
the \toso flux error is smallest. In this region we calculate a \rto~ratio of 8.3$\pm$1.9 prior to
extinction correction. We do not detect any \toso flux in the Eastern H$_2$
region, instead we estimate an upper \toso limit (3$\sigma$) for the flux in this region
of 1.3$\times$10$^{-19}$ W m$^{-2}$, which in turn places a lower limit of \simm
9.5 on the \rto~ratio for this region.
\section{Analysis and Discussion}
The \rto~ratio is a useful discriminator of excitation mechanisms.
Pure fluorescence will yield a value of $\approx$ 2 while a value $\approx$ 10
indicates the excitation of the gas is being driven by
shocks.
However, it is worth noting that these values depend on shock velocity and pre-shock gas
density, with shocked H$_2$ capable of producing values as low as
$\approx$4~\citep{1995A&A...296..789S}, while fluorescence can produce values
approaching those of shocks~\citep{1995ApJ...455..133H}.
The \rto~ratio values given
above suggest that shocks might be the main excitation mechanism, which agrees
with the detection of shock-excited \hco~in the centre of OH231
noted in the position-velocity diagrams of \citet{2000A&A...357..651S}.
In order to determine the intrinsic \rto~ratio, it is
necessary to remove the effects of extinction. In the \kband, it is sometimes possible to
derive the level of extinction via the comparison of S-/Q-branch H$_2$ emission
lines~\citep[see][]{2003ApJ...592..245S}. Unfortunately, due
to poor atmospheric transmission above 2.4 $\umu$m, it
was not possible to derive a sensible estimate for extinction using the
1$\to$0~S(1) and Q(3) lines.
Extinction values of $A_{\rm K}$=3--4 (mag) for the central region are estimated from
the $(K - L^{\prime})$ colour map of OH231 \citep{1998AJ....116.1412K}.
Although most likely an over-estimate, adjusting the Western \rto~ratio for these levels of
extinction yields, for example, a value of 11.7$\pm$2.6 ($A_{\rm K}$=4). It is clear that
any adjustment for extinction will increase the observed \rto~ratio, pushing it
further towards the shock regime.
Using a typical value of 5.3$\times$10$^{-22}$ mag cm$^{2}$ for the extinction
per unit column density of hydrogen, $A_{\rm V}/N({\rm H})$, one can estimate
the hydrogen column density implied by $A_{\rm K}$=4,
yielding a $N({\rm H})$ for the H$_2$ regions of \simm 7.2$\times$10$^{22}$ cm$^{-2}$.
Using a typical column length of the H$_2$ emitting regions of 1\arcsec
($\approx$2.0$\times$10$^{16}$ cm), as used in \citet{2002A&A...389..271B},
we estimate an average density for the H$_2$ regions of
3.5$\times$10$^{6}$ cm$^{-3}$. This value is in good agreement with \citet{2001A&A...373..932A}
who estimate an average central density of \simm 3.0$\times$10$^{6}$
cm$^{-3}$. The H$_2$ emission is most likely originating in the dense equatorial regions
surrounding the central star.
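As a sanity check, the arithmetic behind these estimates is reproduced below; the $A_{\rm V}/A_{\rm K}\simeq9.5$ conversion is our own illustrative assumption (standard NIR extinction laws give a comparable factor):
\begin{verbatim}
# back-of-envelope check of the column/number density estimates
A_K = 4.0                 # adopted K-band extinction (mag)
A_V = 9.5 * A_K           # assumed A_V/A_K ~ 9.5 (illustrative)
N_H = A_V / 5.3e-22       # N(H) in cm^-2, from A_V/N(H) above
L   = 2.0e16              # adopted column length (~1 arcsec), cm
n_H = N_H / L             # average density, cm^-3
print(N_H, n_H)           # ~7.2e22 cm^-2 and ~3.6e6 cm^-3
\end{verbatim}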
\subsection{Equatorial Region}
\label{sec:h2eq}
The \ozso H$_2$ line map (see Fig.~\ref{fig:oh231_naco_sin_wfpc2} [d]) shows two regions of H$_2$ oriented at a
position angle (PA) of 113$^\circ$, from brightest
to faintest peak.
If we assume that the distribution of the H$_2$ around the central region of
OH231 is in a disc configuration, then by measuring the major and minor axes we can
estimate the
inclination angle of the equatorial disc with respect to the plane
of the sky from the H$_2$ data.
Measurements for \Rmax/\Rmin~are determined by superposing a full ellipse onto
the H$_2$ line map. We find \Rmax=
4.06\arcsec~and \Rmin= 2.76\arcsec from the centre of the ellipse, and using the relation $i =$ \asin(\Rmin/\Rmax), we
find $i=43^\circ\pm8^\circ$.
This result is in good agreement with previously published values.
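For concreteness, the computation is simply:
\begin{verbatim}
import math
R_max, R_min = 4.06, 2.76                    # measured axes (arcsec)
i = math.degrees(math.asin(R_min / R_max))   # i = asin(Rmin/Rmax)
print(round(i))                              # ~43 degrees
\end{verbatim}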
The location and orientation of this H$_2$ adds to the already complex
picture of the equatorial region of OH231. Some previously reported
structures, from smallest to largest, include:
\begin{list}{\makelabel{-}}{\leftmargin=1em \itemsep=-0.2em}
\item an equatorial torus of SiO maser emission (\simm 2R$_{\star}$) that might lie
on the innermost edge of an expanding SO disc~\citep{2000A&A...357..651S,2002A&A...385L...1S};
\item a centrally located compact disc of circumstellar material with inner R=0.03--0.04\arcsec/40--50 AU \citep{2006ApJ...646L.123M};
\item an opaque flared disc with outer R=0.25\arcsec/330 AU revealed in mid-IR images~\citep{2002ApJ...574..963J};
\item a slowly expanding disc with characteristic R=0.9\arcsec/1160 AU, detected via the SO
($J$=2$_2$$\to$1$_1$) transition \citep{2000A&A...357..651S};
\item a torus of OH maser emission with outer R=2.5\arcsec/3250 AU~\citep{2001MNRAS.322..280Z};
\item an expanding hollow cylinder of \hco~with a characteristic radius equal to the OH torus radius
~\citep{2000A&A...357..651S}.
\item a halo of scattered light at R\simm 4\arcsec/5200~AU~\citep{2003ApJ...585..482M};
\end{list}
\noindent From our observations the geometry of the H$_2$ region is unclear; however, we
offer two possibilities:
(1) A disc of H$_2$: Figure~\ref{fig:oh231_naco_sin_wfpc2} (d) shows what might be interpreted as an incomplete disc
of H$_2$, which fits with the series of concentric disc/tori structures listed
above. To understand why we observe emission only from the edges and not from the
front of the disc, we compared the noise in the continuum at the front of the
putative disc with the amplitude of the \ozso line peak; both are of the same
order. We might then attribute the `missing' H$_2$ in this region to variations in
the continuum. We would not expect to observe the back of the disc due to the
high level of extinction through the nebula. (2) A shell of H$_2$: if
the H$_2$ is situated in an axisymmetric shell
surrounding the central star, we might explain the geometry of the H$_2$ regions
by assuming a density contrast between the poles and equator. This, combined with an increased column
depth at the edge of the shell would manifest itself as two arcs of H$_2$
emission situated equatorially~\citep[as noted by][for IRAS~19306+1407]{2005ASPC..343..282L}. This is
supported by the detection of a shell of
higher density gas and dust at the same location as the H$_2$~\citep{2003ApJ...585..482M}.
Both scenarios offer plausible explanations for the geometry of the H$_2$
emitting region, however further observations are needed in order to favour one.
\begin{figure}
\includegraphics[width=0.5\textwidth,height=0.3\textwidth,angle=0]{spec_single2.eps}
\caption{An integrated spectrum of the Western side of
the central H$_2$ showing the detected lines, the position of the Br$\gamma$
recombination line is shown for reference. Extracted spectra location marked in
Fig.~\ref{fig:oh231_naco_sin_wfpc2} (d).}
\label{fig:oh231_spec}
\end{figure}
We fit the \ozso line profile, yielding $V_{\rm {LSR}}$=36$\pm$17 \kms and
FWHM=100 \kms for both the Eastern and Western regions of H$_2$ emission; this velocity is
consistent with the systemic velocity.
Channel maps of the \ozso line (Fig.~\ref{fig:oh231_ch_maps_cen}) show no
significant change in the distribution of the H$_2$.
In an attempt to explain the lack of reported H$_2$ in this object, we note
two previous studies: (1) \citet{1998ApJ...509..728W} give a 3$\sigma$ upper limit of 10$^{-5}$
ergs cm$^{-2}$ s$^{-1}$ ster$^{-1}$ (1$\sigma$ limit =6.6$\times$10$^{-18}$ W m$^{-2}$)
for the surface brightness of the \ozso line towards OH231. This places their
measurement limit close to the \ozso line strength given in
Table~\ref{tab:h2_fluxes}, implying that the \ozso line in their observations
would be difficult to distinguish from noise, or possibly that their
slit position, aligned East-West across the inferred central star position, did not
include the H$_2$ sites;
(2) \citet{2006ApJ...646L.123M}, using the {\sc isaac} instrument on VLT, do not
report any detection of H$_2$. However, this can be explained by the orientation of
the slit, aligned from South-West to North-East along the major axis (private
comm.), with a slit-width of 0.8\arcsec, i.e., the central H$_2$ emission site was not covered.
\begin{figure}
\begin{center}
\includegraphics[width=0.45\textwidth,height=0.47\textwidth,angle=0]{oh231_cen_chanmaps.eps}
\caption{`Channel maps' of the four central
channels of the \ozso line from the central region of OH231. Inset labels show
datacube channel and corresponding $V_{\rm LSR}$ velocity.
Note: in ch794, the
central region (marked by a green box) is a residual from the continuum subtraction and not true H$_2$
emission.}
\label{fig:oh231_ch_maps_cen}
\end{center}
\end{figure}
\subsection{Northern Region}
\label{sec:h2Nor}
Figure~\ref{fig:oh231_naco_sin_wfpc2} (c) shows the line map for the
\ozso transition in the Northern region. Most notable are the two knots
of H$_2$, labeled A and B for the top and bottom knot, respectively.
The \halpha~emission contours
are superposed on the H$_2$ line map. The peak intensities of the H$_2$ emission knots are slightly offset
from the two \halpha~emission knots (Fig.~\ref{fig:oh231_naco_sin_wfpc2} [c]), however this small offset can be accounted
for by the motion of the outflow \citep[e.g.$V$\simm 150 \kms, from][]{2002A&A...389..271B} during the 10 years between both sets of
observations. It is most likely that the optical and NIR line emission arise from the same shock event.
Weaker \ozso emission
is noted tracing the \halpha~edge of the bipolar outflow in the lower portion of
the \ozso line map.
The NACO 2.12~$\umu$m image does not show any trace of H$_2$ emission in this
region (see contours in Fig.~\ref{fig:oh231_naco_sin_wfpc2} [b]).
\begin{figure}
\begin{center}
\includegraphics[width=0.45\textwidth,height=0.47\textwidth,angle=0]{chan_maps_north.eps}
\caption{\ozso line `channel maps' of the four central channels extracted
from the Northern region of OH231. Knot A and B are regions of most intense
H$_2$ emission. Labels as in Fig.\ref{fig:oh231_ch_maps_cen}.
}
\label{fig:oh231_ch_maps}
\end{center}
\end{figure}
The \ozso line is spread over \simm six spectral pixels; the four central channels are
presented in Fig.~\ref{fig:oh231_ch_maps}. Examination of the
line profile and channel maps allows us to probe the
kinematics of the H$_2$ in this region, revealing two main features:
\begin{inparaenum}[1)]
\item{knot B, which is persistent in all channel maps, and }
\item{the reduction of peak H$_2$ intensity in knot A at velocities $\leq$~-56
\kms.}
\end{inparaenum}
Both of these H$_2$ structures lie in the diffuse extended region
\citep[labeled B$_1$ in Fig.~4 of ][]{2002A&A...389..271B}
perpendicular to the axial flow, with a quoted \halpha~velocity of 150
\kms, which is in good agreement not only with our deprojected H$_2$
velocities
($V_{\rm {H_2}}$\simm 110 \kms)
but also with
\hco~velocities \citep{2000A&A...357..651S}.
In the case of knot A, the spectral line is strongly peaked in a single
channel, while in knot B the spectral line peak is spread over two channels.
Fitting the line profiles of knots A and B yields LSR velocities of
$V_{\rm {LSR}}$=-8 \kms, and -30 \kms, with deprojected velocities of -75 \kms and -110 \kms, respectively.
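A minimal sketch of the deprojection, assuming (our reading) that velocities are taken relative to the systemic $V_{\rm LSR}\simeq36$~\kms~found above and divided by $\sin36^{\circ}$ for the adopted inclination, which approximately reproduces the quoted values:
\begin{verbatim}
import math
v_sys = 36.0                     # systemic LSR velocity (km/s)
incl  = math.radians(36.0)       # inclination to plane of sky
for v_lsr in (-8.0, -30.0):      # fitted knot velocities (km/s)
    print((v_lsr - v_sys) / math.sin(incl))  # ~ -75, -112 km/s
\end{verbatim}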
This might indicate that further from the \halpha~bow
apex (Fig.~\ref{fig:oh231_naco_sin_wfpc2} [a]), we are starting to see emission originating from the front and back of
knot B, while knot A displays a narrower range of velocities, i.e., a single peak in its spectral profile.
It is worth noting that, given the slit-length of {\sc isaac}
(120\arcsec) and the null H$_2$ detection discussed in \S\ref{sec:h2eq}, the H$_2$ in
the Northern region might be confined to the wings of the bow shock.
The existence of fast moving shocked H$_2$ has
previously been noted in other objects,
for example, \citet{2003ApJ...586L..87C} detect high velocity H$_2$
\simm 220--340 \kms~(dependent on adopted inclination angle) in the outflows of AFGL~618. It is currently unknown exactly how
shocked H$_2$ can be travelling at this speed without complete dissociation.
Further high resolution mapping of the H$_2$ is necessary in order to
resolve the shock surfaces, more accurately measure the H$_2$ kinematics, and apply shock models to this region.
\section{Conclusions}
\label{sec:discuss}
We have presented VLT/{\sc sinfoni} integral field observations of OH231,
revealing the presence of several ro-vibrational H$_2$ lines.
The main conclusions are:
\begin{list}{\makelabel{-}}{\itemsep=-0.2em \leftmargin=1em}
\item The discovery of H$_2$ emission near the centre of OH231, possibly
located at the edge of an axisymmetric shell or an incomplete disc.
\item A \rto~value of 8.3$\pm$1.9 is found for the equatorial H$_2$, suggesting
a collisional excitation mechanism.
\item Our observations of the central shell/disc of H$_2$ show no velocity structure.
However, higher S/N and/or velocity resolution data are needed to accurately
probe the kinematics in this region.
\item We detect fast-moving H$_2$ (\simm 110 \kms, along the bipolar axis) via the \ozso transition along the
North-Western tip of the nebula, a region where a strong \halpha~bow shock exists.
Due to the small FOV of our observations, the full extent of this H$_2$ is unknown.
\end{list}
\section{Acknowledgments}
\label{sec:Acknow}
This research is funded by a UH studentship and is based on observations made
with ESO Telescopes at the Paranal Observatory under programme ID 084.D-0487(A)
and
072.D-0766(A). We thank Mikako Matsuura for providing {\sc isaac} observation information.
This research used the HLA (ID 8326) facilities of the STScI, the ST-ECF and
the CADC with the support of the following granting agencies: NASA/NSF, ESA,
NRC, CSA.
\bibliographystyle{mn2e_orig} |
1508.03106 | \section{Introduction}
Classification plays an important role in many aspects of our society.
In medical research, identifying pathogenically distinct tumor types is central to advances in cancer treatments \citep{Golub.99, alderton2014breast}.
In cyber security, spam messages and viruses make automatic categorical decisions a necessity. Binary classification is arguably the simplest and most important form of classification problems, and can serve as a building block for more complicated applications. We focus our attention on binary classification in this work. A few common notations are introduced
to facilitate our discussion.
Let $(X, Y)$ be a random pair where $X \in \mathcal{X} \subset {\rm I}\kern-0.18em{\rm R}^d$ is a vector of features and $Y \in \{0,1\}$ indicates $X$'s class label.
A \emph{classifier} $\phi : \mathcal{X} \to \{0,1\}$ is a mapping from $\mathcal{X}$ to $\{0,1\}$ that assigns $X$ to one of the classes.
A \emph{classification loss function} is defined to assign a ``cost" to each misclassified instance $\phi(X)\neq Y$,
and the \emph{classification error} is defined as the expectation of this loss function with respect to the joint distribution of $(X,Y)$.
We will focus our discussion on the 0-1 loss function ${\rm 1}\kern-0.24em{\rm I}\{\phi(X)\neq Y\}$ throughout the paper, where ${\rm 1}\kern-0.24em{\rm I}(\cdot)$ denotes the indicator function.
Denote by ${\rm I}\kern-0.18em{\rm P}$ and ${\rm I}\kern-0.18em{\rm E}$ the generic probability distribution and expectation, whose meaning depends on specific contexts.
The classification error is
$
R(\phi)={\rm I}\kern-0.18em{\rm E} {\rm 1}\kern-0.24em{\rm I}\{\phi(X)\neq Y\} = {\rm I}\kern-0.18em{\rm P}\left\{\phi(X)\neq Y\right\}
$.
The law of total probability allows us to decompose it into a weighted average of type~I error $R_0(\phi)={\rm I}\kern-0.18em{\rm P} \left\{\phi(X)\neq Y|Y=0\right\}$ and type~II error $R_1(\phi)={\rm I}\kern-0.18em{\rm P} \left\{\phi(X)\neq Y|Y=1\right\}$ as
\begin{equation}\label{EQ:risk break down}
R(\phi) \,=\, {\rm I}\kern-0.18em{\rm P}(Y=0)R_0(\phi)+{\rm I}\kern-0.18em{\rm P}(Y=1)R_1(\phi)\,.
\end{equation}
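In empirical terms the decomposition is immediate; here is a minimal illustration in Python (the label and prediction arrays are hypothetical):
\begin{verbatim}
import numpy as np

y    = np.array([0, 0, 1, 1, 1, 1])   # hypothetical labels
yhat = np.array([1, 0, 1, 1, 0, 1])   # hypothetical predictions

R0 = np.mean(yhat[y == 0] != 0)       # empirical type I  error
R1 = np.mean(yhat[y == 1] != 1)       # empirical type II error
R  = np.mean(y == 0) * R0 + np.mean(y == 1) * R1
assert np.isclose(R, np.mean(yhat != y))  # matches overall error
\end{verbatim}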
With the advent of high-throughput technologies,
classification tasks have experienced an exponential growth in the feature dimensions throughout the past decade.
The fundamental challenge of ``high dimension, low sample size" has motivated the development of a plethora of classification algorithms for various applications.
While dependencies among features are usually considered a crucial characteristic of the data \citep{Ackermann.Strimmer.2009}, and can effectively reduce classification errors under suitable models and relative data abundance \citep{Shao.Wang.ea.2011,Cai.Liu.2011,Fan.Feng.ea.2011,Mai.Zou.ea.2012, Witten.Tibshirani.2012},
independence rules, with their superb scalability, become a rule of thumb when the feature dimension grows faster than the sample size \citep{Hastie.Tibshirani.ea.2009,james2013introduction}.
Despite Naive Bayes models' reputation of being ``simplistic" by ignoring all dependency structure among features,
they lead to simple classifiers that have proven worthy on high-dimensional data with remarkably good performances in numerous real-life applications.
Taking the classical model setting of two-class Gaussian with a common covariance matrix, \citet{Bickel.Levina.2004} showed the superior performance of Naive Bayes models over (naive implementation of) the Fisher linear discriminant rule under broad conditions in high-dimensional settings.
\cite{Fan.Fan.2008} further established the necessity of feature selection for high-dimensional classification problems by showing that even independence rules can be as poor as random guessing due to noise accumulation.
Featuring both independence rule and feature selection, the (sparse) Naive Bayes model remains a good choice for classification when the sample size is \emph{fairly limited}.
\subsection{Asymmetrical priorities on errors}
Most existing binary classification methods target the optimization of the overall risk \eqref{EQ:risk break down} and may fail to serve the purpose when users' relative priorities over type I/II errors differ significantly from those implied by the marginal probabilities of the two classes.
A representative example of such a scenario is the diagnosis of serious disease.
Let $1$ code the healthy class and $0$ code the diseased class.
Given that usually $${\rm I}\kern-0.18em{\rm P}(Y=1) \gg {\rm I}\kern-0.18em{\rm P}(Y= 0)\,,$$
minimizing the overall risk \eqref{EQ:risk break down} might yield classifiers with small overall risk $R$ (as a result of small $R_1$) yet large $R_0$ --- a situation quite undesirable in practice, given that
flagging a healthy case incurs only the extra cost of additional tests while failing to detect the disease endangers a life.
The neuroblastoma dataset introduced by \cite{Oberthur.2006} provides a perfect illustration of such intuition.
The dataset contains gene expression profiles on $d=10707$ genes from 246 patients in a German neuroblastoma trial, among which 56 are high-risk (labeled as 0) and 190 are low-risk (labeled as 1).
We randomly selected 41 `$0$'s and 123 `$1$'s as our training sample (such that the proportion of `$0$'s is about the same as that in the entire dataset), and tested the resulting classifiers on the remaining 15 `$0$'s and 67 `$1$'s.
The average error rates of \text{PSN}$^2$\text{ } (to be proposed; implemented here at significance level 0.05), Gaussian Naive Bayes (nb), penalized logistic regression (pen-log), and Support Vector Machine (svm) over 1000 random splits are summarized in Table \ref{table:1}.
\begin{table}[h]
\caption{Average error rates over 1000 random splits for neuroblastoma dataset.\label{table:1} }
\begin{center}
\begin{tabular}{l r r r r }
{Error Type} & \text{PSN}$^2$ & nb & pen-log & svm\\
\hline
type I\phantom{I} ($0$ as $1$) & \underline{.038} & .308 & .529& .603\\
type II ($1$ as $0$) & .761 & .150 & .103& .573 \\
\end{tabular}
\end{center}
\end{table}
All procedures except {\text{PSN}$^2$} led to high type~I errors, and are thus considered unsatisfactory given the more severe consequences of missing a diseased instance than vice versa.
One existing solution to asymmetric error control is \emph{cost-sensitive learning}, which assigns two different costs as weights of the type~I/II errors \citep{Elkan01,ZadLanAbe03}.
Despite many merits and practical values of this framework, limitations arise in applications when there is no consensus over how much costs to be assigned to each class, or more fundamentally, whether it is morally acceptable to assign costs in the first place.
Also, when users have a specific target for type~I/II error control, cost-sensitive learning does not fit. Other methods aiming for small type~I error include the Asymmetric Support Vector Machine \citep{WuLinChenChen.2008}, and the $p$-value for classification \citep{DumbgenIglMunk.2005}.
However, the former has no theoretical guarantee on errors, while the latter treats all classes as of equal importance. %
\subsection{Neyman-Pearson (NP) paradigm and NP oracle inequalities}
Neyman-Pearson (NP) paradigm was introduced as a novel statistical framework for targeted type~I/II error control.
Assuming type~I error $R_0$ is the prioritized error type,
this paradigm seeks to control $R_0$ under a user specified level $\alpha$ with $R_1$ as small as possible.
The \emph{oracle} is thus
\begin{equation}
\label{eq::goal}
\phi^* \,\in\, \mathop{\mathrm{argmin}}_{R_0(\phi)\leq\alpha}R_1(\phi)\,,
\end{equation}
where the \emph{significance level} $\alpha$ reflects the level of conservativeness towards type I error.
Given that $\phi^*$ is unattainable in the learning paradigm, the best within our capability is to construct a data-dependent classifier $\hat{\phi}$ that mimics it.
Despite its practical importance, NP classification has not received much attention in the statistics and machine learning communities. \citet{CanHowHus02} initiated the theoretical treatment of NP classification.
Under the same framework, \citet{Sco05} and \citet{ScoNow05} derived several results for traditional statistical learning such as PAC bounds or oracle inequalities. By combining type I and type II errors in sensible ways, \citet{Sco07} proposed a performance measure for NP classification. More recently, \citet{BlaLeeSco10} developed a general solution to semi-supervised novelty detection by reducing it to NP classification.
Other related works include \citet{CasChe03} and \citet{HanCheSun08}.
A common issue with methods in this line of literature is that they all follow an empirical risk minimization (ERM) approach, and use some forms of relaxed empirical type I error constraint in the optimization program.
As a result, all type I errors can only be proven to satisfy some relaxed upper bound.
Take the framework set up by \citet{CanHowHus02} for example. Given $\varepsilon_0>0$, they proposed the program
$$
\min_{\phi\in\mathcal{H},\hat R_0(\phi)\leq \alpha + \varepsilon_0/2}\hat R_1(\phi)\,,
$$
where $\mathcal{H}$ is a set of classifiers with finite Vapnik-Chervonenkis dimension, and $\hat R_0$, $\hat R_1$ are the empirical type I and type II errors respectively. It is shown that with high probability, the solution $\hat{\phi}$ to the above program satisfies simultaneously: i) the type I error $R_0(\hat{\phi})$ is bounded from above by $\alpha + \varepsilon_0$, and ii) the type II error $R_1(\hat{\phi})$ is bounded from above by $R_1(\phi^*)+\varepsilon_1$ for some $\varepsilon_1>0$.
\citet{RigTon11} is a significant departure from the previous NP classification literature.
This paper argues that a good classifier $\hat{\phi}$ under the NP paradigm should respect the chosen significance level $\alpha$, rather than some relaxation of it.
More precisely, two \textbf{NP oracle inequalities} should be satisfied simultaneously with high probability:
\begin{itemize}
\item[(I)] the type~I error constraint is respected, i.e., $R_0(\hat{\phi})\leq\alpha$.
\item[(II)] the excess type~II error $R_1(\hat{\phi}) - R_1({\phi}^*)$ diminishes with explicit rates (w.r.t. sample size).
\end{itemize}
Recall that, for a classifier $\hat h$, the classical oracle inequality insists that with high probability
\begin{equation}
\label{classical}
\text{the excess risk $R(\hat h)- R(h^*)$ diminishes with explicit rates,}
\end{equation}
where $h^*(x)={\rm 1}\kern-0.24em{\rm I}(\eta(x)\geq 1/2)$ is the Bayes classifier, in which $\eta(x)=\mathbb{E}[Y|X=x]={\rm I}\kern-0.18em{\rm P}(Y=1|X=x)$ is the regression function of $Y$ on $X$ (see \citet{Koltchinskii.2008} and references within).
The two NP oracle inequalities defined above can be thought of as a generalization of \eqref{classical} that provides a novel characterization of classifiers' theoretical performances under the NP paradigm.
Using a more stringent empirical type I error constraint (than the level $\alpha$), \citet{RigTon11} established NP oracle inequalities for its proposed classifiers under convex loss functions (as opposed to the indicator loss). They also proved an interesting negative result: under the binary loss, ERM approaches (convexification or not) cannot guarantee diminishing excess type~II error as long as one insists type~I error of the proposed classifier be bounded from above by $\alpha$ with high probability.
This negative result motivated a plug-in approach to NP classification in \citet{Tong.2013}.
\subsection{Plug-in approaches}
Plug-in methods in classical binary classification have been well studied in the literature, where the usual plug-in target is the Bayes classifier ${\rm 1}\kern-0.24em{\rm I}(\eta(x)\geq 1/2)$. Earlier works gave rise to pessimism of the plug-in approach to classification. For example, under certain assumptions, \citet{Yang99} showed plug-in estimators cannot achieve excess risk with rates faster than $O(1/\sqrt{n})$, while direct methods can achieve rates up to $O(1/n)$ under \textit{margin assumption} \citep{MamTsy99, Tsy04, TsyGee05,TarGee06}. However, it was shown in \citet{Audibert05fastlearning} that plug-in classifiers ${\rm 1}\kern-0.24em{\rm I}(\hat \eta_n \geq 1/2)$ based on local polynomial estimators can achieve rates faster than $O(1/n)$, with a smoothness condition on $\eta$ and the margin assumption.
The oracle classifier under the NP paradigm arises from its close connection to the Neyman-Pearson Lemma in statistical hypothesis testing.
Hypothesis testing bears strong resemblance to binary classification if we assume the following model. Let $P_1$ and $P_0$ be two \textit{known} probability distributions on $\mathcal{X} \subset {\rm I}\kern-0.18em{\rm R}^d$. Assume that $Y\sim \text{Bern}(\zeta)$ for some $\zeta \in (0,1)$, and the conditional distribution of $X$ given $Y$ is $P_Y$.
Given such a model, the goal of statistical hypothesis testing is to determine if we should reject the null hypothesis that $X$ was generated from $P_0$.
To this end, we construct a randomized test $\phi:\mathcal{X} \to [0,1]$ that rejects the null with probability $\phi(X)$.
Two types of errors arise: type~I error occurs when $P_0$ is rejected yet $X\sim P_0$, and type~II error occurs when $P_0$ is not rejected yet $X\sim P_1$.
The Neyman-Pearson paradigm in hypothesis testing amounts to choosing $\phi$ that solves the following constrained optimization problem
$$
\text{maximize } {\rm I}\kern-0.18em{\rm E}[\phi(X)|Y=1]\,,
\text{ subject to } {\rm I}\kern-0.18em{\rm E}[\phi(X)|Y= 0 ]\leq\alpha\,,
$$
where $\alpha \in (0,1)$ is the significance level of the test. A solution to this constrained optimization problem is called \emph{a most powerful test} of level $\alpha$. The Neyman-Pearson Lemma gives mild sufficient conditions for the existence of such a test.
\begin{lem}[Neyman-Pearson Lemma]\label{lemma:NP}
Let $P_1$ and $P_0$ be two probability measures with densities $p
$ and $q$ respectively, and denote the density ratio as $r(x)=p(x)/q(x)$.
For a given significance level $\alpha$, let $C_{\alpha}$ be such that
$P_0\{r(X)>C_{\alpha}\}\leq\alpha$ and $P_0\{r(X)\geq C_{\alpha}\}\geq\alpha$.
Then,
the most powerful test of level $\alpha$ is
\begin{equation*}
\phi^*(X)=\left\{
\begin{array}{ll}
1 & \text{if $\,\,r(X)>C_{\alpha}$}\,,\\
0 & \text{if $\,\,r(X)<C_{\alpha}$}\,,\\
\frac{\alpha-P_0\{r(X)>C_{\alpha}\}}{P_0\{r(X)=C_{\alpha}\}} & \text{if $\,\,r(X)=C_{\alpha}$}\,.
\end{array} \right.
\end{equation*}
\end{lem}
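As a classical sanity check (a textbook example, not specific to this paper): if $P_0 = N(0,1)$ and $P_1 = N(\mu,1)$ with $\mu>0$, then $r(x)=\exp(\mu x - \mu^2/2)$ is increasing in $x$, so $\{r(X)>C_{\alpha}\}=\{X>z_{\alpha}\}$, where $z_{\alpha}$ is the upper-$\alpha$ quantile of $N(0,1)$; the most powerful test of level $\alpha$ simply thresholds $x$ at $z_{\alpha}$.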
Under a mild continuity assumption, we take the NP \emph{oracle}
\begin{align}\label{eq::oracle}
\phi^*(x) \,=\, \phi^*_{\alpha}(x) \,=\, {\rm 1}\kern-0.24em{\rm I}\{p(x)/q(x)\geq C_{\alpha}\} \,=\, {\rm 1}\kern-0.24em{\rm I}\{r(x)\geq C_{\alpha}\}
\end{align}
as our plug-in target for NP classification.
With kernel density estimates $\hat p$, $\hat q$, and a proper estimate of the threshold level $\widehat C_{\alpha}$, \cite{Tong.2013} constructed a plug-in classifier ${\rm 1}\kern-0.24em{\rm I}\{\hat p(x)/\hat q(x)\geq \widehat C_{\alpha}\}$ that satisfies both NP oracle inequalities with high probability when the dimensionality is small, leaving the high-dimensional case uncharted territory.
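To fix ideas, here is a minimal one-dimensional sketch of such a plug-in classifier (our own illustration, not the construction of \cite{Tong.2013}): kernel density estimates give $\hat r = \hat p/\hat q$, and a high empirical quantile of $\hat r$ over a held-out class-0 sample serves as $\widehat C_{\alpha}$. The naive $(1-\alpha)$-quantile used below only controls type I error on average; the procedures discussed in this paper select the order statistic more carefully so that $R_0(\hat\phi)\leq\alpha$ holds with high probability.
\begin{verbatim}
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
x1 = rng.normal(1.0, 1.0, 500)        # class-1 sample (density p)
x0 = rng.normal(0.0, 1.0, 500)        # class-0 sample (density q)
x0_held = rng.normal(0.0, 1.0, 300)   # held-out class-0 sample

p_hat, q_hat = gaussian_kde(x1), gaussian_kde(x0)
r_hat = lambda x: p_hat(x) / q_hat(x)           # density-ratio estimate

alpha = 0.05                          # naive threshold choice below
C_alpha = np.quantile(r_hat(x0_held), 1 - alpha)
phi = lambda x: (r_hat(x) >= C_alpha).astype(int)  # plug-in classifier
\end{verbatim}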
\subsection{Contribution}
In the big data era,
NP classification framework faces the same curse of dimensionality as its classical counterpart.
Despite its wide potential applications, this paper is the \emph{first attempt} to construct performance-guaranteed classifiers under the NP paradigm in high-dimensional settings.
Based on the Neyman-Pearson Lemma, we employ Naive Bayes models and propose a computationally feasible plug-in approach to construct classifiers that satisfy the NP oracle inequalities.
We also improve the \textit{detection condition}, a critical theoretical assumption first introduced in \cite{Tong.2013}, for effective threshold level estimation that grounds the good NP properties of these classifiers. Necessity of the new detection condition is also discussed.
Note that classifiers proposed in this work are not straightforward extensions of \cite{Tong.2013}:
kernel density estimation is now applied in combination with feature selection, and the threshold level is estimated in a more precise way by order statistics that require only moderate sample size --- while \citet{Tong.2013} resorted to the Vapnik-Chervonenkis theory and required sample size much bigger than what is available in most high-dimensional applications.
The rest of the paper is organized as follows. Two screening based plug-in NP-type classifiers are presented in Section \ref{sec::methods}, where theoretical properties are also discussed. Performance of the proposed classifiers is demonstrated in Section \ref{sec::numeric} by both simulation studies and real data analysis.
We conclude in Section \ref{sec::discussion} with a short discussion.
The technical proofs are relegated to the Appendix.
\section{Methods\label{sec::methods}}
In this section, we first introduce several notations and definitions, with a focus on the \textit{detection condition}. Then we present the plug-in procedure, together with its theoretical properties.
\subsection{Notations and definitions\label{sec::notations-definitions}}
We introduce here several notations adapted from \citet{Audibert05fastlearning}.
For $\beta>0$, denote by $\lfloor\beta\rfloor$ the largest integer strictly less than $\beta$. For any $x, x'\in \mathbb{R}$ and any $\lfloor\beta\rfloor$ times continuously differentiable real-valued function $g(\cdot)$ on $\mathbb{R}$, we denote by $g_x$ its Taylor polynomial of degree $\lfloor\beta\rfloor$ at point $x$.
For $L>0$, the $(\beta, L, [-1,1])$-H\"{o}lder class of functions, denoted by $\Sigma(\beta, L, [-1,1])$, is the set of functions $g:[-1,1]\rightarrow \mathbb{R}$ that are $\lfloor\beta\rfloor$ times continuously differentiable and satisfy, for any $x,x'\in[-1,1]$, the inequality $|g(x')-g_x(x')|\leq L |x-x'|^{\beta}.$
The $(\beta, L, [-1,1])$-H\"{o}lder class of density is defined as
$$
\mathcal{P}_{\Sigma}(\beta, L, [-1,1])=\left\{f\,:\,f\geq0, \int f = 1, f\in\Sigma(\beta, L, [-1,1])\right\}\,.
$$
We will use $\beta$-valid kernels (kernels of order $\beta$, \citet{Tsy09}) for all the kernel estimation throughout the theoretical discussion, the definition of which is as follows.
\begin{defin}\label{DEF:BETA kernel}
Let $K(\cdot)$ be a real-valued function on $\mathbb{R}$ with support $[-1,1]$. The function $K(\cdot)$ is a $\beta$-valid kernel if it satisfies $\int K = 1$, $\int |K|^v <\infty$ for any $v \geq 1$, $\int |t|^{\beta}|K(t)|dt<\infty$, and in the case $\lfloor\beta\rfloor\geq 1$, it satisfies $\int t^l K(t)dt = 0$ for any $l\in\mathbb{N}$ such that $1\leq l\leq \lfloor\beta\rfloor$.
\end{defin}
We assume that all the $\beta$-valid kernels considered in the theoretical part of this paper are constructed from Legendre polynomials, and are thus Lipschitz and bounded, satisfying the kernel conditions for the important technical Lemma \ref{lemma::A1_1-dim}.
\begin{defin}[margin assumption]\label{Def: margin}
A function $f(\cdot)$ is said to satisfy margin assumption of order $\bar\gamma$ with respect to probability distribution $P$ at the level $C^{*}$ if there exists a positive constant $M_0$, such that for any $\delta\geq0$,
$$
P\{|f(X)-C^*|\leq \delta\} \,\leq\, M_0\delta^{\bar\gamma}\,.
$$
\end{defin}
This assumption was first introduced in \citet{Polonik95}. In the classical binary classification framework, \cite {MamTsy99} proposed a similar condition named ``margin condition"
by requiring most data to be away from the optimal decision boundary.
In the classical classification paradigm, definition \ref{Def: margin} reduces to the ``margin condition" by taking $f = \eta$ and $C^*=1/2$, with $\{x: |f(x)-C^*| = 0\} = \{x: \eta(x)=1/2\}$ giving the decision boundary of the Bayes classifier.
On the other hand, unlike the classical paradigm where the optimal threshold level is known and does not need an estimate, the optimal threshold level $C_{\alpha}$ in the NP paradigm is unknown and needs to be estimated, suggesting the necessity of having sufficient data around the decision boundary to detect it well. This concern motivated the following condition improved from \citet{Tong.2013}.
\begin{defin}[detection condition]
\label{def::detection}
A function $f(\cdot)$ is said to satisfy detection condition of order $\uderbar \gamma$ with respect to $P$ (i.e., $X\sim P$) at level $(C^*,\delta^*)$
if there exists a positive constant $M_1$, such that for any $\delta \in (0,\delta^*)$,
$$P\{C^{*} \leq f(X) \leq C^{*} + \delta \} \,\geq\, M_1 \delta^{\uderbar\gamma}\,.$$
\end{defin}
A detection condition works as an opposite force to the margin assumption, and is basically an assumption on the lower bound of probability.
Though we take here a power function as the lower bound, so that it is simple and aesthetically similar to the margin assumption, any increasing function $u(\cdot)$ on ${\rm I}\kern-0.18em{\rm R}^+$ with $\lim_{x\rightarrow 0^+} u(x) = 0$ should be able to serve the purpose.
The version of detection condition we would use to establish the NP inequalities for the (to be) proposed classifiers takes $f = r$, $C^* = C_{\alpha}$, and $P = P_0$ (recall that $P_0$ is the conditional distribution of $X$ given $Y=0$).
Now we argue why such a condition is \textit{necessary} to achieve the NP oracle inequalities.
Consider the simpler case where the density ratio $r$ is known, and we only need a proper estimate of the threshold level $\widehat{C}_\alpha$. If there is nothing like the detection condition (Definition \ref{def::detection} involves a power function, but the idea is just to have any kind of lower bound), we would have, for some $\delta>0$,
\begin{align}\label{eq::detection-not-hold}
P_0\{ C_{\alpha} \leq r(X) \leq C_{\alpha}+\delta\} \,=\, 0\,.
\end{align}
In getting the threshold estimate $\widehat{C}_{\alpha}$ of $\hat \phi (x) = {\rm 1}\kern-0.24em{\rm I}\{r(x)\geq \widehat{C}_{\alpha}\}$, we cannot distinguish any threshold level between $C_{\alpha}$ and $C_{\alpha}+\delta$. In particular, it is possible that
$$
\widehat{C}_{\alpha} > C_{\alpha} + \delta/2\,.
$$
But then the excess type II error is bounded from below as follows
$$
R_1(\hat \phi) - R_1(\phi^*) = P_1\{C_{\alpha} < r(X) < \widehat{C}_{\alpha}\} > P_1\{C_{\alpha} < r(X) < C_{\alpha}+\delta/2\}\,,
$$
where the last quantity can be positive. Therefore, the second NP oracle inequality (diminishing excess type II error) does not hold for $\hat \phi$.
Since some detection condition is necessary in this simpler case, it is certainly necessary in our real setup.
Note that Definition \ref{def::detection} is a significant improvement of the detection condition formulated in \cite{Tong.2013}, which requires
$$P\{C^{*}-\delta \leq f(X) \leq C^{*} \} \wedge P\{C^{*} \leq f(X) \leq C^{*} + \delta \} \,\geq\, M_1 \delta^{\uderbar\gamma}\,.$$
We are able to drop the lower bound for the first piece due to an improved layout of the proofs. Intuitively, our new detection condition ensures an upper bound on $\widehat C_{\alpha}$. But we do not need an extra condition to get a lower bound of $\widehat C_{\alpha}$, because of the type I error bound requirement (see the proof of Proposition \ref{prop::R1} for details).
\subsection{Neyman-Pearson plug-in procedure}
Suppose the sampling scheme is fixed as follows.
\begin{assumption}\label{assumption::independence-split}
Assume the training sample contains
$n$ i.i.d. observations $\mathcal{S}^1=\{U_1,\cdots,U_n\}$ from class 1 with density $p$, and
$m$ i.i.d. observations $\mathcal{S}^0=\{V_1,\cdots,V_m\}$ from class 0 with density $q$.
Given fixed $n_1$, $n_2$, $m_1$, $m_2$ and $m_3$ such that $n_1+n_2 = n$, $m_1+m_2+m_3=m$,
we further decompose $\mathcal{S}^1$ and $\mathcal{S}^0$ into independent subsamples as:
$\mathcal{S}^1 = \mathcal{S}^1_1 \cup \mathcal{S}^1_2$, and $\mathcal{S}^0 = \mathcal{S}^0_1 \cup \mathcal{S}^0_2 \cup \mathcal{S}^0_3$, where $|\mathcal{S}^1_1| = n_1$, $|\mathcal{S}^1_2| = n_2$, $|\mathcal{S}^0_1| = m_1$, $|\mathcal{S}^0_2| = m_2$, $|\mathcal{S}^0_3| = m_3$.
\end{assumption}
The sample splitting idea has been considered in the literature, such as in \citet{Meinshausen.Buhlmann.2010} and \citet{robins2006adaptive}. Given these samples, we introduce the following plug-in procedure.
\begin{defin}\label{pro::np-plug-in}
{\bf Neyman-Pearson plug-in procedure}
\begin{itemize}
\item[\underline{Step 1}] Use $\mathcal{S}^1_1$, $\mathcal{S}^1_2$, $\mathcal{S}^0_1$, and $\mathcal{S}^0_2$ to construct a density ratio estimate $\hat r$. The specific use of each subsample will be introduced in Section \ref{sec::density-ratio-estimate}.
\item[\underline{Step 2}] Given $\hatr,$ choose a threshold estimate $\widehat{C}_{\alpha}$ from the set
${\hat r}(\mathcal{S}^0_3) = \{\hatr(V_{i+m_1+m_2}) \}_{i=1}^{m_3}$.
\end{itemize}
\end{defin}
Denote by ${\hat r}_{(k)}(\mathcal{S}^0_3)$ the $k$-th order statistic of ${\hat r}(\mathcal{S}^0_3)$, $k \in \{1,\cdots,m_3\}$.
The corresponding plug-in classifier by setting $\widehat{C}_{\alpha}={\hat r}_{(k)}(\mathcal{S}^0_3)$ is
\begin{eqnarray}\label{eq:threshold-generic}
\label{eq::psiK}
\hat{\phi}_k (x) = {\rm 1}\kern-0.24em{\rm I}\{ \hat r(x) \geq {\hat r}_{(k)}(\mathcal{S}^0_3)\}\,.
\end{eqnarray}
\noindent A generic procedure for choosing the optimal $k$ will be given in Section \ref{sec::threshold-estimate}.
\subsection{Threshold estimate $\widehat C_{\alpha}$\label{sec::threshold-estimate}}
For an arbitrary density ratio estimate $\hat r$,
we employ a proper order statistic ${\hat r}_{(k)}(\mathcal{S}^0_3)$ to estimate the threshold $C_{\alpha}$, and establish a probabilistic upper bound for the type I error of $\hat{\phi}_k$ for each $k \in \{1,\cdots,m_3\}$.
\begin{prop}
\label{prop::general delta}
For an arbitrary density ratio estimate $\hat r$,
let $\hat \phi_k(x)={\rm 1}\kern-0.24em{\rm I}\{ \hat r(x) \geq {\hat r}_{(k)}(\mathcal{S}^0_3)\}$.
It holds for any $\delta \in (0, 1)$ and $k\in \{1,\cdots, m_3\}$ that
\begin{eqnarray}
\label{eq::Bin-bound}
{\rm I}\kern-0.18em{\rm P}\{ R_0(\hat{\phi} _k) > \delta\}
\,\leq\, \text{Beta.cdf}_{k, \,m_3+1-k}\left( 1-\delta \right)\,,
\end{eqnarray}
where $\text{Beta.cdf}_{k, \,m_3+1-k}(\cdot)$ is the \textsc{cdf} of Beta$(k, \,m_3+1-k)$.
The inequality becomes an equality when $F_{0,\hat r}(t)=P_0\{ \hat r(X) \leq t \}$ is continuous almost surely.
\end{prop}
In view of the above proposition,
a sufficient condition for the classifier $\hat{\phi}_k$ to satisfy NP Oracle Inequality (I) at tolerance level $\delta_3 \in (0,1)$ is thus
\begin{eqnarray}
\label{cond::betaCDF}
\text{Beta.cdf}_{k, \,m_3+1-k}\left( 1-\alpha \right) \,\leq\, \delta_3\,.
\end{eqnarray}
Despite the potential tightness of \eqref{eq::Bin-bound}, we are not able to derive an explicit formula for the minimum $k$ that satisfies \eqref{cond::betaCDF}.
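In practice, however, the minimal such $k$ is straightforward to locate numerically; a sketch (our own illustration, assuming SciPy) follows, exploiting the fact that the Beta \textsc{cdf} at a fixed point decreases in $k$.
\begin{verbatim}
from scipy.stats import beta as beta_dist

def k_min_beta_cdf(alpha, delta3, m3):
    # Smallest k in {1, ..., m3} with
    # Beta(k, m3 + 1 - k).cdf(1 - alpha) <= delta3,
    # i.e. the sufficient condition (cond::betaCDF).
    for k in range(1, m3 + 1):
        if beta_dist.cdf(1.0 - alpha, k, m3 + 1 - k) <= delta3:
            return k
    return None  # no order statistic meets the tolerance
\end{verbatim}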
To get an explicit choice for $k$, we resort to concentration inequalities for an alternative.
\begin{prop}
\label{prop::general k}
For an arbitrary density ratio estimate $\hat r$,
let $\hat \phi_k(x)={\rm 1}\kern-0.24em{\rm I}\{ \hat r(x) \geq {\hat r}_{(k)}(\mathcal{S}^0_3)\}$.
It holds for any $\delta_3 \in (0, 1)$ and $k\in \{1,\cdots, m_3\}$ that
\begin{eqnarray}
\label{eq::Chebyshev-bound}
{\rm I}\kern-0.18em{\rm P}\{ R_0(\hat{\phi} _k) > g(\delta_3, m_3, k) \}
\,\leq \, \delta_3\,,
\end{eqnarray}
where
\begin{equation}
\label{eq::g}
g(\delta_3, m_3, k) =
\frac{ m_3+1-k}{m_3+1} +
\sqrt{\frac{k(m_3+1-k)}{\delta_3(m_3+2)(m_3+1)^2}}\,.
\end{equation}
\end{prop}
\medskip
Let
$
\mathcal{K} = \mathcal{K}(\alpha,\delta_3,m_3)
= \{k \in \{1,\cdots,m_3\}:
g(\delta_3, m_3, k) \leq \alpha \}
$.
Proposition \ref{prop::general k} implies that $k\in \mathcal{K}(\alpha,\delta_3,m_3) $ is a sufficient condition for the classifier $\hat{\phi}_k$ to satisfy NP Oracle Inequality (I). The next step is to characterize $\mathcal{K}$ and choose some $k\in\mathcal{K}$, so that $\hat \phi_k$ has small excess type II error. Clearly, we would like to find the smallest element in $\mathcal{K}$.
\begin{prop}
\label{prop::kmin}
The minimum $k \in \{1,\cdots,m_3+1\}$ that satisfies $g(\delta_3,m_3,k) \leq \alpha$ is
\begin{equation}
\label{eq::kmin}
k_{\min}(\alpha,\delta_3,m_3) \,=\, \left\lceil (m_3+1)A_{\alpha,\delta_3}(m_3) \right\rceil,
\end{equation}
where $\lceil z\rceil$ denotes the smallest integer larger than or equal to $z$, and
\begin{equation*}
\label{eq::A}
A_{\alpha,\delta_3}(m_3) = \frac{1+2\delta_3(m_3+2)(1-\alpha) + \sqrt{1+4\delta_3(1-\alpha)\alpha(m_3+2)}}
{2\left\{\delta_3(m_3+2)+1 \right\}}\,.
\end{equation*}
Moreover,
\begin{enumerate}
\item $A_{\alpha,\delta_3}(m_3) \in (1-\alpha,1)$.
\item ${\hat r}_{(k_{\min}(\alpha, \delta_3, m_3))}(\mathcal{S}_3^0)$ is asymptotically the empirical $(1-\alpha)$-th quantile of $F_{0,\hat r}$ in the sense that
\begin{equation*}
\label{eq::limit}
\lim_{m_3\to\infty}\frac{k_{\min}(\alpha,\delta_3,m_3)}{m_3}\,=\, \lim_{m_3\to\infty} A_{\alpha,\delta_3}(m_3)\,=\,1 - \alpha\,.
\end{equation*}
\item
For any $m_3 \geq 4/(\alpha\delta_3)$, we have
$k_{\min}(\alpha,\delta_3,m_3) \leq m_3$, and thus
$$\mathcal{K}(\alpha,\delta_3,m_3) \,=\, \left\{ k_{\min}(\alpha,\delta_3,m_3),k_{\min}(\alpha,\delta_3,m_3)+1,\ldots,m_3\right\}.$$
\end{enumerate}
\end{prop}
\noindent Introduce shorthand notations ${k_{\min}} = {k_{\min}}(\alpha, \delta_3, m_3)$, $\hat r_{(k)} =\hat r_{(k)}(\mathcal{S}_3^0)$, and $\widehat C_{\alpha} = \hat r_{(\min\{k_{\min},m_3\})}$.
We will take
\begin{align}\label{eq::phi-hat}
\hat{\phi} (x) \,=\, {\rm 1}\kern-0.24em{\rm I}\{ \hat r(x) \geq \widehat C_{\alpha}\} \,=\, \left\{
\begin{array}{ll}
{\rm 1}\kern-0.24em{\rm I}\{\hat r(x) \geq \hat r_{(k_{\min})}\}\,,&\text{if $\,\,k_{\min} \leq m_3$}\,,\\
{\rm 1}\kern-0.24em{\rm I}\{\hat r(x) \geq \hat r_{(m_3)}\}\,,& \text{if $\,\,k_{\min} = m_3+1$}
\end{array}\right.
\end{align}
as the default NP plug-in classifier for any arbitrary $\hat r$.
An alternative threshold estimate that also guarantees the type I error bound is derived in Appendix \ref{sec::alternative threshold}.
Assume $m_3\geq 4/(\alpha\delta_3)$ for the rest of the theoretical discussion.
It follows from Proposition \ref{prop::kmin} that $k_{\min} \leq m_3$, and thus $\widehat C_{\alpha} = \hat r_{(k_{\min})}$ and $\hat\phi = \hat \phi_{k_{\min}}$, with guaranteed type I error control.
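Putting the pieces together, the default classifier \eqref{eq::phi-hat} can be sketched as below. This is an illustration only; it reuses \texttt{k\_min\_explicit} from the sketch after Proposition \ref{prop::kmin} and assumes $\hat r$ is vectorized over rows of its input.
\begin{verbatim}
import numpy as np

def np_plugin_classifier(r_hat, S0_3, alpha, delta3):
    # Default NP plug-in classifier of eq. (phi-hat): threshold r_hat
    # at the k_min-th order statistic of r_hat(S^0_3); capping k at m3
    # reproduces the k_min = m3 + 1 branch of eq. (phi-hat).
    scores = np.sort(np.asarray(r_hat(S0_3)).ravel())
    m3 = len(scores)
    k = min(k_min_explicit(alpha, delta3, m3), m3)
    C_hat = scores[k - 1]  # k-th order statistic (1-indexed)
    return lambda x: (np.asarray(r_hat(x)) >= C_hat).astype(int)
\end{verbatim}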
\begin{rem}
\label{rem::quantileIntuition}
Note that
$\lim_{m_3\to\infty}{k_{\min}}/{\lceil m_3(1-\alpha)\rceil} = 1$.
Thus, choosing the $k_{\min}$-th order statistic of $\hat r(\mathcal{S}^0_3)$ as the threshold can be viewed as a modification to the classical approach of estimating the $1-\alpha$ quantile of $F_{0,\hat r}$ by the $\lceil m_3(1-\alpha)\rceil$-th order statistic of ${\hat r}(\mathcal{S}^0_3)$.
Recall that the oracle $C_\alpha$ is actually the $1-\alpha$ quantile of distribution $F_{0,r},$
so the intuition is that $\widehat C_{\alpha}$ is asymptotically (when $m_3\rightarrow \infty$) equivalent to the $1-\alpha$ quantile of $F_{0,\hat r},$ which in turn converges (when $n_1, n_2, m_1, m_2 \rightarrow \infty$) to $C_\alpha$ as the $1-\alpha$ quantile of $F_{0, r}$ under moderate conditions.
\end{rem}
\begin{lem}
\label{prop::R0}
Let $\alpha, \delta_3\in(0,1)$. In addition to Assumption \ref{assumption::independence-split}, suppose $\hat r$ is such that $F_{0,\hat r}$ is continuous almost surely.
Then for any $\delta_4 \in (0,1)$ and $m_3 \geq 4/(\alpha\delta_3)$,
the distance between $R_0( \hat{\phi})$ ($\hat{\phi}$ as defined in \eqref{eq::phi-hat}) and $R_0({\phi^*})$ can be bounded as
\begin{eqnarray*}
{\rm I}\kern-0.18em{\rm P}\{
|R_0( \hat{\phi} )
- R_0( {\phi^*}) | > \xi_{\alpha, \delta_3,m_3}(\delta_4) \}
\,\leq\, \delta_4\,,
\end{eqnarray*}
where
\begin{align}
\label{eq::xi}
\xi_{\alpha, \delta_3,m_3}(\delta_4) = \sqrt{\frac{{k_{\min}}(m_3+1-{k_{\min}})}{(m_3+2)(m_3+1)^2\delta_4}} + A_{\alpha,\delta_3}(m_3) - (1-\alpha) + \frac{1}{m_3+1}\,.
\end{align}
If $m_3 \geq \max(\delta_3^{-2}, \delta_4^{-2})$, we have
$
\xi_{\alpha, \delta_3,m_3}(\delta_4) \leq ({5}/{2}){m_3^{-1/4}}.
$
\end{lem}
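For reference, the bound $\xi_{\alpha,\delta_3,m_3}(\delta_4)$ of \eqref{eq::xi} is easy to evaluate numerically (a sketch, again reusing the helpers from the sketch after Proposition \ref{prop::kmin}).
\begin{verbatim}
import math

def xi_bound(alpha, delta3, m3, delta4):
    # xi_{alpha, delta3, m3}(delta4) from eq. (xi).
    k = min(k_min_explicit(alpha, delta3, m3), m3)
    dev = math.sqrt(k * (m3 + 1 - k)
                    / ((m3 + 2) * (m3 + 1) ** 2 * delta4))
    return dev + A_factor(alpha, delta3, m3) - (1 - alpha) + 1.0 / (m3 + 1)
\end{verbatim}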
\begin{prop}
\label{prop::R1}
Let $\alpha, \delta_3, \delta_4 \in (0,1)$. In addition to the assumptions of Lemma \ref{prop::R0}, assume that the density ratio $r$ satisfies the margin assumption of order $\bar\gamma$ at level $C_{\alpha}$ (with constant $M_0$) and the detection condition of order $\uderbar \gamma$ at
level $(C_{\alpha}, \delta^*)$ (with constant $M_1$), both with respect to distribution $P_0$.
\noindent
If $m_3 \geq \max\{ 4/{(\alpha\delta_3)}, \delta_3^{-2}, \delta_4^{-2},(\frac{2}{5}M_1{\delta^*}^{\uderbar\gamma})^{-4}\}$, the excess type II error of the classifier $\hat{\phi}$ defined in \eqref{eq::phi-hat} satisfies with probability at least $1-\delta_3-\delta_4$,
\begin{align*}
&R_1(\hat{\phi}) - R_1({\phi}^*)\\
&\leq\,
2M_0 \left[\left\{\frac{|R_0( \hat{\phi}) - R_0( \phi^*)|}{M_1}\right\}^{1/\uderbar{\gamma}} + 2 \| \hat r - r \|_{\infty} \right]^{1 + \bar\gamma}
+ C_{\alpha} |R_0( \hat{\phi}) - R_0( \phi^*)|\\
&\leq\,
2M_0 \left[\left(\frac{2}{5}m_3^{1/4}M_1\right)^{-1/\uderbar{\gamma}} + 2 \| \hat r - r \|_{\infty} \right]^{1 + \bar\gamma}
+ C_{\alpha} \left(\frac{2}{5} m_3^{1/4}\right)^{-1}\,.
\end{align*}
\end{prop}
Given the above proposition, we can control the excess type II error as long as the uniform deviation of the density ratio estimate, $\|\hat r-r\|_{\infty}$, is controlled.
In the following subsection, we will introduce estimates $\hat r$ and provide bounds for $\|\hat r-r\|_{\infty}$.
\subsection{Density ratio estimate $\hat r$\label{sec::density-ratio-estimate}}
Denote the marginal densities of classes 1 and 0 by $p_j$ and $q_j$ ($j=1,\cdots,d$) respectively.
Naive Bayes models for the density ratio take the form
\begin{align*}
r(x) = \prod_{j=1}^d \frac{p_{j}(x_j)}{q_{j}(x_j)}\,, \quad\text{where $x_j$ is the $j$-th component of $x$}\,.
\end{align*}
The subsamples $\mathcal{S}^1_1=\{U_i\}_{i={1}}^{n_1}$, $\mathcal{S}^1_2=\{U_{i+n_1}\}_{i={1}}^{n_2}$, $\mathcal{S}^0_1=\{V_i\}_{i={1}}^{m_1}$ and $\mathcal{S}^0_2=\{V_{i+m_1}\}_{i={1}}^{m_2}$ are used to construct (nonparametric/parametric) estimates of $p_j$ and $q_j$ for $j = 1,\cdots,d$.
\vskip 6pt
\noindent \textbf{Nonparametric estimate of the density ratio}.
For marginal densities $p_j$ and $q_j$, we apply kernel estimates
$\,\,\hat p_{j}(x_j) = \{(n_1+n_2) h_1\}^{-1}\sum_{i=1}^{n_1+n_2} K\left(\frac{U_{i,j} - x_j}{h_1}\right)$, and
$\,\,\hat q_{j}(x_j) = \{(m_1+m_2) h_0\}^{-1}\sum_{i=1}^{m_1+m_2} K\left(\frac{V_{i,j} - x_j}{h_0}\right)$,
where
$K(\cdot)$ is the kernel function,
$h_1,h_0$ are the bandwidths,
and $V_{i,j}$ and $U_{i,j}$ denote the $j$-th component of $V_{i}$ and $U_{i}$ respectively.
The resulting nonparametric estimate is
\begin{eqnarray}
\label{eq::rHatI}
\hat r_{\text{N}}( x)
= \prod_{j=1}^d \frac{\hat p_{j}(x_j)}{\hat q_{j}(x_j)}\,.
\end{eqnarray}
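A sketch of \eqref{eq::rHatI} follows (illustration only; \texttt{K} may be taken as a $\beta$-valid kernel constructed earlier, and \texttt{U}, \texttt{V} are the pooled class-1 and class-0 subsamples stored as $(N,d)$ arrays).
\begin{verbatim}
import numpy as np

def kde_marginal(col, x, K, h):
    # 1-D kernel density estimate: (1/(N h)) sum_i K((X_{i,j} - x) / h).
    col = np.asarray(col, dtype=float)
    x = np.atleast_1d(x).astype(float)
    return np.mean(K((col[:, None] - x[None, :]) / h), axis=0) / h

def r_hat_N(U, V, K, h1, h0):
    # Naive Bayes nonparametric density ratio of eq. (rHatI):
    # product over features of marginal kernel density ratios.
    def r(x):
        x = np.atleast_2d(x)
        out = np.ones(x.shape[0])
        for j in range(x.shape[1]):
            out *= (kde_marginal(U[:, j], x[:, j], K, h1)
                    / kde_marginal(V[:, j], x[:, j], K, h0))
        return out
    return r
\end{verbatim}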
\noindent \textbf{Parametric estimate of the density ratio}.
Assume the two-class Gaussian model
$
X|Y=0 \sim \mathcal{N}({\mu}^0,\Sigma) \text{ and } X|Y=1 \sim \mathcal{N}({\mu}^1,\Sigma ),
$ where $\Sigma=\mbox{diag}(\sigma_1^2,\cdots,\sigma_d^2)$.
We estimate $\mu^0$, $\mu^1$ and $\Sigma$ using their sample versions $\hat \mu^0$, $\hat \mu^1$ and $\hat \Sigma$.
Under this model, the density ratio function is given by
\begin{eqnarray*}
\label{eq::rI}
r_{\text{P}}( x)
\,=\, \exp\left\{\left(\mu^1-\mu^0\right)' \Sigma^{-1}x + \frac{1}{2} (\mu^0)^\prime\Sigma^{-1} \mu^0 - \frac{1}{2} (\mu^1)^\prime\Sigma^{-1}\mu^1 \right\}\,,
\end{eqnarray*}
and the corresponding parametric estimate is
\begin{equation}
\label{eq::rIhatVec}
\hat r_{\text{P}}( x)
\,=\, \exp\left\{\left(\hat\mu^1-\hat\mu^0\right)' \hat\Sigma^{-1}x + \frac{1}{2} (\hat\mu^0)^\prime\hat\Sigma^{-1} \hat\mu^0 - \frac{1}{2} (\hat\mu^1)^\prime\hat\Sigma^{-1}\hat\mu^1 \right\}\,.
\end{equation}
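A sketch of \eqref{eq::rIhatVec} under the diagonal two-class Gaussian model is given below. This is an illustration only; we use pooled sample variances for $\hat\Sigma$, one natural choice of ``sample version''.
\begin{verbatim}
import numpy as np

def r_hat_P(U, V):
    # Parametric density ratio of eq. (rIhatVec); U, V are (n, d) and
    # (m, d) arrays for classes 1 and 0. Sigma is diagonal, estimated
    # by pooled sample variances.
    mu1, mu0 = U.mean(axis=0), V.mean(axis=0)
    n, m = U.shape[0], V.shape[0]
    var = (((U - mu1) ** 2).sum(axis=0)
           + ((V - mu0) ** 2).sum(axis=0)) / (n + m - 2)
    w = (mu1 - mu0) / var
    b = 0.5 * (mu0 ** 2 / var).sum() - 0.5 * (mu1 ** 2 / var).sum()
    return lambda x: np.exp(np.atleast_2d(x) @ w + b)
\end{verbatim}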
\subsection{Screening-based density ratio estimate and plug-in procedures\label{sec::screen-density-ratio-estimate}}
For ``high dimension, low sample size'' applications, complex models that take into account all features usually fail; even Naive Bayes models that ignore feature dependency might lead to poor performance due to noise accumulation \citep{Fan.Fan.2008}. A common solution in these scenarios is to first study marginal relations between the response and each of the features \citep{Fan.Lv.2008, li2012feature}. By selecting the most important individual features, we greatly reduce the model size, and other models can be applied after this screening step. We now introduce screening-based variants of $\hat r_{\text{N}}$ and $\hat r_{\text{P}}$. Let $F^{0}_j$ and $F^{1}_j$ denote the \textsc{cdf}s of $q_j$ and $p_j$ respectively, for $j=1,\cdots,d$.
Step 1 of Procedure \ref{pro::np-plug-in} introduced in Section \ref{sec::notations-definitions} is now decomposed into a screening substep and an estimation substep.
\vskip 6pt
\noindent\textbf{\underline{N}onparametric \underline{S}creening-based \underline{N}P \underline{N}aive Bayes (\text{NSN}$^2$) classifier}
\begin{description}
\item [\underline{Step 1.1}] Select features using $\mathcal{S}^0_1$ and $\mathcal{S}^1_1$ as follows:
\begin{equation}
\label{eq::Atau}
\widehat{\mathcal{A}}_{\tau}
\,=\, \left\{ 1 \leq j \leq d:
\| \hat{F}^{0}_{j} -\hat{F}^{1}_{j} \| _{\infty} \geq \tau\right\},
\end{equation}
where $\tau>0$ is some threshold level, and
\begin{equation}
\label{eq::ecdf}
\hat{F}^{0}_{j}(x_j) \,=\, \frac{1}{m_1}\sum_{i=1}^{m_1} {\rm 1}\kern-0.24em{\rm I}(V_{i,j} \leq x_j)\,,\,\,\,
\hat{F}^{1}_{j}(x_j)\,=\,\frac{1}{n_1}\sum_{i=1}^{n_1} {\rm 1}\kern-0.24em{\rm I}(U_{i,j} \leq x_j)
\end{equation}
are the empirical \textsc{cdf}s.
\item [\underline{Step 1.2}] Use $\mathcal{S}^0_2$ and $\mathcal{S}^1_2$ to construct kernel estimates of $q_{j}$ and $p_{j}$ for $j \in \widehat{\mathcal{A}}_{\tau}$. The density ratio estimate is given by
$$
\hat{r}^S_{\text{N}}(x) \,=\, \prod_{j \in \widehat{\mathcal{A}}_{\tau}} \frac{\hat p_{j}(x_j)}{\hat q_{j}(x_j)}\,.
$$
\item [\underline{Step 2}] Given $\hat{r}^S_{\text{N}}$, use $\mathcal{S}^0_3$ to get a threshold estimate $(\hat{r}^S_{\text{N}})_{(k_{\min})}$ as in \eqref{eq::phi-hat}.
\end{description}
The resulting \text{NSN}$^2$\text{ } classifier is
\begin{align}\label{eq::nsn2}
\hat{\phi}_{\text{NSN}^2} (x) \,=\, {\rm 1}\kern-0.24em{\rm I}\left\{\hat r^S_{\text{N}}(x) \geq (\hat{r}^S_{\text{N}})_{(k_{\min})}\right\}.
\end{align}
\noindent\textbf{\underline{P}arametric \underline{S}creening-based \underline{N}P \underline{N}aive Bayes (\text{PSN}$^2$) classifier}
\noindent The \text{PSN}$^2$\text{ } procedure is similar to \text{NSN}$^2$, except for the following two differences. In Step 1.1, features are now selected based on $t$-statistics ($\widetilde{\mathcal{A}}_{\tau}$ denotes the index set of the selected features). In Step 1.2, $p_j$ and $q_j$ for $j \in \widetilde{\mathcal{A}}_{\tau}$ are estimated under the two-class Gaussian model, and the resulting parametric screening-based density ratio estimate is
$$
\hat r^S_{\text{P}}(x) = \prod_{j \in \widetilde{\mathcal{A}}_{\tau}} \frac{\tilde p_{j}(x_j)}{\tilde q_{j}(x_j)}\,.
$$
The corresponding \text{PSN}$^2$\text{ } classifier is thus given by
\begin{align}\label{eq::psn2}
\hat{\phi}_{\text{PSN}^2} (x) = {\rm 1}\kern-0.24em{\rm I}\left\{\hat r^S_{\text{P}}(x) \geq (\hat r^S_{\text{P}})_{(k_{\min})}\right\}.
\end{align}
We assume the domains of all $p_j$ and $q_j$ to be $[-1,1]$ for all the following theoretical discussion.
We will prove NP oracle inequalities for $\hat{\phi}_{\text{NSN}^2}$; those for $\hat{\phi}_{\text{PSN}^2}$ can be developed similarly. Recall that by Proposition \ref{prop::R1}, we need an upper bound for $\|\hat r^S_{\text{N}} - r\|_{\infty}$, so the performance of the screening step must necessarily be studied. To this end, we assume that only a small fraction of the $d$ features have marginal differentiating power.
\begin{assumption}\label{assumption::exact-recovery}
There exists a signal set $\mathcal{A}\subset \{1,\cdots,d\}$ with size $|\mathcal{A}|=s\ll d$ such that $\inf_{j\in\mathcal{A}}\| F^{0}_j - F^{1}_j\| _{\infty} \geq D$ for some positive constant $D$, and $F^{0}_j = F^{1}_j$ for $j\notin\mathcal{A}$.
\end{assumption}
The following proposition shows that Step 1.1 achieves exact recovery ($\widehat{\mathcal{A}}_{\tau} = \mathcal{A}$) with high probability for some properly chosen $\tau$.
\begin{prop}[exact recovery]
\label{prop::exact recovery epsilon}
Let $\delta_1\in(0,1)$. In addition to Assumptions \ref{assumption::independence-split} and \ref{assumption::exact-recovery}, suppose $n_1 \wedge m_1 \geq 8D^{-2}\log(4d/\delta_1)$.
Then for any $\tau \in \left[ \Delta_0, D -\Delta_0 \right],$
where $\Delta_0 = \sqrt{\frac{\log(4d/\delta_1)}{2n_1}} + \sqrt{\frac{\log(4d/\delta_1)}{2m_1}}$,
the screening substep $\mbox{Step 1.1}$ \eqref{eq::Atau} satisfies
$${\rm I}\kern-0.18em{\rm P}(\widehat{\mathcal{A}}_{\tau} =\mathcal{A})\geq 1-\delta_1\,.$$
\end{prop}
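The feasible range of screening thresholds in Proposition \ref{prop::exact recovery epsilon} is explicit and can be computed directly; a sketch (function name ours) follows.
\begin{verbatim}
import math

def tau_interval(d, delta1, n1, m1, D):
    # Feasible thresholds [Delta0, D - Delta0]; nonempty whenever
    # min(n1, m1) >= 8 * log(4 * d / delta1) / D**2, since then each
    # square-root term is at most D / 4.
    L = math.log(4 * d / delta1)
    Delta0 = math.sqrt(L / (2 * n1)) + math.sqrt(L / (2 * m1))
    return Delta0, D - Delta0
\end{verbatim}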
Now we are ready to control the uniform deviation of density ratio estimate given in Step 1.2.
\begin{assumption}\label{assumption::beta-valid}
The marginal densities $p_j$, $q_j\in \mathcal{P}_{\Sigma}(\beta, L, [-1,1])$ for all $j = 1, \cdots, d$, and there exists $\uderbar \mu > 0$ such that $p_j, q_j \geq \uderbar \mu$ for all $j \in \mathcal{A}$. There exists some constant $\bar C>0$, such that $\|r\|_{\infty}\leq \bar C$, and there is a uniform absolute upper bound for $\|p_j^{(l)}\|_{\infty}$ and $\|q_j^{(l)}\|_{\infty}$ for $j \in \mathcal{A}$ and $l\in [0, \lfloor \beta\rfloor]$. Moreover, the kernel $K$ in the nonparametric density estimates is $\beta$-valid and $L'$-Lipschitz.
\end{assumption}
Smoothness conditions (Assumption \ref{assumption::beta-valid}) and the margin assumption were used together in the classical classification literature. However, it is not entirely obvious why Assumption \ref{assumption::beta-valid} does not render the detection condition redundant. We refer interested readers to Appendix \ref{sec::assumption3 and detection condition} for more detailed discussion.
Let $C^1_j$ and $C^0_j$ be the constants $C$ in Lemma \ref{lemma::A1_1-dim} when applied to $p_j$ and $q_j$ respectively. Assumption \ref{assumption::beta-valid} ensures the existence of absolute constants $C^1 \geq \sup_{j\in\mathcal{A}}C^1_j$ and $C^0 \geq \sup_{j\in\mathcal{A}}C^0_j$.
\begin{prop}[uniform deviation of density ratio estimate]\label{prop::joint_r}
Under Assumptions \ref{assumption::independence-split} - \ref{assumption::beta-valid},
for any $\delta_1, \delta_2 \in (0,1)$, if
$n_1 \wedge m_1 \geq 8D^{-2}\log(4d/\delta_1)$,
$\sqrt{\frac{\log(2n_2 s/\delta_2)}{n_2{h_1}}} \leq \min(1, \underline{\mu}/C^1)$,
$\sqrt{\frac{\log(2m_2 s/\delta_2)}{m_2{h_0}}} \leq \min(1, \underline{\mu}/C^0)$,
and the screening threshold $\tau$ is specified as in Proposition \ref{prop::exact recovery epsilon},
we have
\begin{equation}
\label{eq::T}
{\rm I}\kern-0.18em{\rm P}\left( \| \hat r^S_{\text{N}} - r \|_{\infty}
\leq T \right) \,\geq\, 1-\delta_1 - \delta_2\,,
\end{equation}
where $T = {B} e^B \|r\|_{\infty}$ with
$$
B
\,=\, s
\left\{ \frac{C^1\sqrt{\frac{\log(2n_2 s/\delta_2)}{n_2{h_1}}}}{ \underline{\mu} - C^1\sqrt{\frac{\log(2n_2 s/\delta_2)}{n_2{h_1}}}}
+ \frac{C^0\sqrt{\frac{\log(2m_2 s/\delta_2)}{m_2{h_0}}}}{ \underline{\mu} - C^0\sqrt{\frac{\log(2m_2 s/\delta_2)}{m_2{h_0}}}} \right\}\,.
$$
\noindent Moreover, assume that $n_2\wedge m_2 \geq 1/\delta_2$, $|\mathcal{A}|=s \leq (n_2 \wedge m_2)^{\frac{\beta}{2(\beta+1)}}$, and the bandwidths $h_1= (\log n_2/n_2)^{\frac{1}{2\beta+1}}$ and $h_0=(\log m_2/m_2)^{\frac{1}{2\beta+1}}$, then there exists an absolute constant $C_2>0$ such that
$$
{\rm I}\kern-0.18em{\rm P}\left[\| \hat r^S_{\text{N}} - r \|_{\infty}\leq C_2 \, s\left\{\left(\frac{\log n_2}{n_2}\right)^{\frac{\beta}{2\beta+1}} + \left(\frac{\log m_2}{m_2}\right)^{\frac{\beta}{2\beta+1}}\right\} \right] \,\geq\, 1-\delta_1-\delta_2\,.$$
\end{prop}
The condition $|\mathcal{A}|=s \leq (n_2 \wedge m_2)^{\frac{\beta}{2(\beta+1)}}$ in the above proposition ensures that the upper bound of the uniform deviation diminishes as sample sizes $n_2$, $m_2$ go to infinity. Now we are in a position to present the theorem finale of \text{NSN}$^2$.
\begin{TH1}[NP Oracle Inequalities for $\hat \phi_{\text{NSN}^2}$]\label{theorem::2}
In addition to Assumptions \ref{assumption::independence-split} - \ref{assumption::beta-valid},
assume the density ratio $r$ satisfies the margin assumption of order $\bar\gamma$ at level $C_{\alpha}$ and detection condition of order $\uderbar \gamma$ at level $(C_{\alpha}, \delta^*)$, both with respect to $P_0$.
For any given $\delta_1, \delta_2, \delta_3, \delta_4 \in (0,1)$,
let the \text{NSN}$^2$\text{ }classifier $\hat \phi_{\text{NSN}^2}$ be defined as in \eqref{eq::nsn2},
with the screening threshold $\tau$ specified as in Proposition \ref{prop::exact recovery epsilon} and kernel bandwidths $h_1= (\log n_2/n_2)^{\frac{1}{2\beta+1}}$ and $h_0=(\log m_2/m_2)^{\frac{1}{2\beta+1}}$, and let $\hat r^S_{\text{N}}$ be such that $F_{0,\hat r^S_{\text{N}}}$ is continuous almost surely.
For subsample sizes that satisfy
$n_1 \wedge m_1 \geq 8D^{-2}\log(4d/\delta_1)$,
$n_2 \wedge m_2 \geq \max\{\delta_2^{-1}, s^{\frac{2(\beta+1)}{\beta}}\}$, $\sqrt{\frac{\log(2n_2 s/\delta_2)}{n_2{h_1}}} \leq \min(1, \underline{\mu}/C^1)$,
$\sqrt{\frac{\log(2m_2 s/\delta_2)}{m_2{h_0}}} \leq \min(1, \underline{\mu}/C^0)$, \\ and
$m_3 \geq \max\{ 4/{(\alpha\delta_3)}, \delta_3^{-2}, \delta_4^{-2},(\frac{2}{5}M_1{\delta^*}^{\uderbar\gamma})^{-4}\}$,
there exists an absolute constant $\tilde C>0$ such that with probability at least $1-\delta_1 - \delta_2-\delta_3-\delta_4$,
\begin{eqnarray*}
\label{eq::thm_finale_2}
&\text{(I\phantom{I})}&R_0(\hat\phi_{\text{NSN}^2}) \,\leq\, \alpha\,,\\
&\text{(II)}&R_1(\hat\phi_{\text{NSN}^2}) - R_1(\phi^*) \,\leq\, \tilde C \left\{m_3^{-(\frac{1}{4} \wedge \frac{1+\bar\gamma}{4\uderbar{\gamma}})} + s^{1+\bar\gamma}\left(\frac{\log n_2}{n_2}\right)^{\frac{\beta(1+\bar\gamma)}{2\beta+1}}+ s^{1+\bar\gamma}\left(\frac{\log m_2}{m_2}\right)^{\frac{\beta(1+\bar\gamma)}{2\beta+1}}\right\}\,.
\end{eqnarray*}
\end{TH1}
Theorem \ref{theorem::2} establishes the NP oracle inequalities for $\hat \phi_{\text{NSN}^2}$. To help understand the conditions of this theorem, recall that Assumption \ref{assumption::independence-split} is about sample splitting, Assumption \ref{assumption::exact-recovery} is on the minimal signal strength of the active feature set, Assumption \ref{assumption::beta-valid} is on the marginal densities and the kernels in the nonparametric estimates, and the margin assumption and detection condition describe the neighborhood of the oracle decision boundary. Note that the subsample sizes $n_1$ and $m_1$ do not enter the upper bound for the excess type II error explicitly. Instead, we have size requirements on them so that the important features are kept with high probability $1-\delta_1$ in the screening substep. The tolerance parameter $\delta_2$ arises from the nonparametric estimation of densities, the parameter $\delta_3$ is the tolerance on violation of the type I error bound,
and $\delta_4$ arises from controlling $|R_0(\hat \phi_{\text{NSN}^2}) - R_0(\phi^*)|$.
\section{Numerical investigation}\label{sec::numeric}
In this section, we analyze two simulated examples and two real datasets to demonstrate the performance of our newly proposed \text{NSN}$^2$\text{ } and \text{PSN}$^2$\text{ } classifiers, in comparison with their corresponding non-screening counterparts (denoted as NN$^2$ and PN$^2$ respectively) as well as three popular methods under the classical framework: Gaussian Naive Bayes (nb), penalized logistic regression (pen-log), and Support Vector Machine (svm). We use the R package ``e1071'' for nb and svm, and the R package ``glmnet'' for pen-log.
To facilitate the presentation, we summarize the four Neyman-Pearson Naive Bayes classifiers in Table \ref{tb::4variants}.
\begin{table}[h]
\centering
\caption{A summary of the four Neyman-Pearson Naive Bayes classifiers. \label{tb::4variants}}
{\renewcommand{\arraystretch}{1.5}
\begin{tabular}{l|c|c}
& Screening-based & Non-screening\\\hline
Non-parametric
& $\hat{\phi}_{\text{NSN}^2} (x) = {\rm 1}\kern-0.24em{\rm I}\left\{\hat r^S_{\text{N}}(x) \geq (\hat{r}^S_{\text{N}})_{(k_{\min})}\right\}$
& $\hat{\phi}_{\text{NN}^2} (x) = {\rm 1}\kern-0.24em{\rm I}\left\{\hat r_{\text{N}}(x) \geq (\hat{r}_{\text{N}})_{(k_{\min})}\right\}$\\\hline
Parametric
& $\hat{\phi}_{\text{PSN}^2} (x) = {\rm 1}\kern-0.24em{\rm I}\left\{\hat r^S_{\text{P}}(x) \geq (\hat{r}^S_{\text{P}})_{(k_{\min})}\right\}$
& $\hat{\phi}_{\text{PN}^2} (x) = {\rm 1}\kern-0.24em{\rm I}\left\{\hat r_{\text{P}}(x) \geq (\hat{r}_{\text{P}})_{(k_{\min})}\right\}$\\
\end{tabular}
}
\end{table}
To train the classifiers in Table \ref{tb::4variants}, we set $\alpha = 0.05$, $\delta_1=0.05$, and $\delta_3=0.05$ throughout this section unless specified otherwise.
In Assumption \ref{assumption::independence-split}, motivated by Proposition \ref{prop::exact recovery epsilon}, we take
$m_1 = \min\{10\log(4d/\delta_1), m/4\}{\rm 1}\kern-0.24em{\rm I}(\text{screening})$, $n_1=\min\{10\log(4d/\delta_1), n/2\} {\rm 1}\kern-0.24em{\rm I}(\text{screening})$,
$m_2 = \lfloor m/2 \rfloor - m_1$, $n_2 = n - n_1$, and
$m_3 = m - \lfloor m/2\rfloor$.
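In code, this splitting rule reads as follows (a sketch; rounding to integers is left implicit in the display above and is our own choice here).
\begin{verbatim}
import math

def split_sizes(n, m, d, delta1=0.05, screening=True):
    # Subsample sizes used in the experiments.
    c = 10 * math.log(4 * d / delta1)
    m1 = int(min(c, m / 4)) if screening else 0
    n1 = int(min(c, n / 2)) if screening else 0
    m2 = m // 2 - m1
    n2 = n - n1
    m3 = m - m // 2
    return n1, n2, m1, m2, m3
\end{verbatim}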
Due to the absence of information about the true $p$ and $q$, the theoretical screening cutoff that achieves exact recovery is not feasible in practice.
We resort to an empirical permutation-based approach \citep{Fan.Feng.ea.2011a} as a substitute. Specifically, the screening substep in \text{NSN}$^2$\text{ } is executed as follows:
\begin{enumerate}
\item Combine $\mathcal{S}^0_1$ and $\mathcal{S}^1_1$ into $\{( X_{i},Y_i)\}_{i=1}^{m_1+n_1},$
where $ X_{i} \in \mathcal{S}^0_1 \cup \mathcal{S}^1_1,$ and $Y_i$ is $ X_{i}$'s class label.
\item Calculate the marginal $D$-statistic for each feature:
$$
D_j = \|\hat{F}^{0}_{j}-\hat{F}^{1}_{j}\|_{\infty},
\quad j = 1,2,\cdots,d\,,
$$
where $\hat{F}^{0}_{j}(x_j)= \frac{1}{m_1}\sum_{i:Y_i = 0} {\rm 1}\kern-0.24em{\rm I}(X_{i,j} \leq x_j)$ and
$\hat{F}^{1}_{j}(x_j) = \frac{1}{n_1}\sum_{i:Y_i = 1} {\rm 1}\kern-0.24em{\rm I}(X_{i,j} \leq x_j)$ are the empirical \textsc{cdf}s as in \eqref{eq::ecdf}.
\item Let $\pi=\{\pi(1),\cdots,\pi(m_1+n_1)\}$ be a random permutation of $\{1,\cdots,(m_1+n_1)\}$.
For $j=1,\cdots,d$, compute
$
D_j^{\text{null}}= \|\hat{F}^{0,\text{null}}_{j}-\hat{F}^{1,\text{null}}_{j}\|_{\infty}$,
where $\hat{F}^{0,\text{null}}_{j}(x_j)= \frac{1}{m_1}\sum_{i:Y_{\pi(i)} = 0} {\rm 1}\kern-0.24em{\rm I}(X_{i,j}\leq x_j)$ and
$\hat{F}^{1,\text{null}}_{j}(x_j) = \frac{1}{n_1}\sum_{i:Y_{\pi(i)} = 1} {\rm 1}\kern-0.24em{\rm I}(X_{i,j}\leq x_j)$.
\item For some pre-specified $Q \in [0,1],$ let $\omega(Q)$ be the $Q$-th quantile of $\{D_j^{\text{null}}: j = 1,\cdots,d\}$ and select $\widehat{\mathcal{A}} = \{j: D_j \geq \omega(Q)\}$. Here, $Q$ is a tuning parameter that keeps the percentage of noise features that pass the screening around $1-Q$.
\end{enumerate}
The same permutation idea is applied to the screening substep of \text{PSN}$^2$.
$Q$ is set at 0.95 throughout this section.
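The permutation-calibrated screening substep above can be sketched as follows (illustration only; \texttt{X} is the combined $(m_1+n_1, d)$ sample and \texttt{y} its labels).
\begin{verbatim}
import numpy as np

def d_statistic(x0, x1):
    # Sup-norm distance between the two empirical cdfs; for step
    # functions the supremum is attained on the pooled sample.
    grid = np.concatenate([x0, x1])
    F0 = np.searchsorted(np.sort(x0), grid, side="right") / len(x0)
    F1 = np.searchsorted(np.sort(x1), grid, side="right") / len(x1)
    return np.max(np.abs(F0 - F1))

def permutation_screening(X, y, Q=0.95, seed=None):
    # Steps 1-4: keep features whose D-statistic exceeds the Q-th
    # quantile of D-statistics under one random label permutation.
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    D = np.array([d_statistic(X[y == 0, j], X[y == 1, j])
                  for j in range(d)])
    y_null = rng.permutation(y)
    D_null = np.array([d_statistic(X[y_null == 0, j], X[y_null == 1, j])
                       for j in range(d)])
    return np.where(D >= np.quantile(D_null, Q))[0]
\end{verbatim}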
\subsection{Simulation}
Samples in both simulated examples are generated from the model
\begin{equation*}
\label{eq::independence}
p(x) = \prod_{j=1}^d p_{j}(x_j),\quad q(x) = \prod_{j=1}^d q_{j}(x_j)
\end{equation*}
at 3 different dimensions: $d \in \{10, 100,1000 \}$.
Sparsity for $d=100$ and $1000$ is imposed by setting $p_{j} = q_{j}$ for all $j > 10$.
Seven training sample sizes $m=n \in \{200, 400, 800, 1600, 3200, 6400, 12800\}$ are considered. The number of replications for each scenario is 1000. Test errors are estimated by averaging over 1000 independent observations from each class in each replication.
\subsubsection{Example 1: normals with different means}
Assume the class-conditional densities are $p \sim \mathcal{N}(0.5(1^\prime_{10}, 0^\prime_{d-10})^\prime, {I}_{d})$
and $q \sim \mathcal{N}( 0_{d}, {I}_{d})$, where ${I}_{d}$ is the identity matrix.
\iffalse
The true log density ratio function $\log r(x) = 0.5 \sum_{j=1}^{10} x_j - 1.25$ satisfies
$$\log r(X) |Y=0 \sim \mathcal{N}( -1.25, 2.5), \quad \log r(X) |Y= 1 \sim \mathcal{N}( 1.25, 2.5).$$
\fi
At significance level $\alpha = 0.05,$ the oracle type I/II risks are
$R_0(\phi^*_{\alpha}) = 0.05$ and $R_1(\phi^*_{\alpha}) = 0.53$ respectively.
We first evaluate the screening performance of \text{PSN}$^2$\text{ } and \text{NSN}$^2$\text{ } with results presented in Table \ref{tb::screening1}.
Both the $t$-statistic (in \text{PSN}$^2$) and the $D$-statistic (in \text{NSN}$^2$) are able to pick up most of the true signals while keeping the false positive rates at around $1-Q$.
\begin{table}[h]
\caption{Average screening performance summarized over 1000 independent replications at sample sizes $m=n=400$ and $Q = 0.95$ with standard errors in parentheses. \label{tb::screening1}}
\centering
\scriptsize{
\begin{tabular}{lrr|rr|rr}
&\multicolumn{2}{c|}{\# of selected features} & \multicolumn{2}{c|}{\# of missed signals}& \multicolumn{2}{c}{\# of false positives} \\
\hline
$d$
& \multicolumn{1}{c}{$t$-stat } & \multicolumn{1}{c|}{$D$-stat}
& \multicolumn{1}{c}{$t$-stat } & \multicolumn{1}{c|}{$D$-stat}
& \multicolumn{1}{c}{$t$-stat } & \multicolumn{1}{c}{$D$-stat}
\\\hline
10
&9.11 (1.14)& 8.11 (1.63)
&0.89 (1.14)&1.89 (1.63)
&0 (0)&0 (0) \\
100
&14.64 (3.46)&12.43 (3.38)
&0.78 (0.90)&2.00 (1.39)
&5.43 (3.17)&4.43 (2.77)\\
1000
&59.99 (9.77)&58.82 (9.87)
&0.48 (0.66)&1.14 (1.05)
&50.47 (9.71)&49.96 (9.78) \\\end{tabular}}
\end{table}
\vspace{-0.2in}
\begin{figure}[!h]
\caption{Average errors of $\hat{\phi}$'s over 1000 independent replications for each combination of $(d,m,n)$.
\label{fig::Ex1_risks}}
\centering
\includegraphics[width=\textwidth]{figure1_modified.pdf}
\end{figure}
We then move on to evaluate the trend of type I and type II errors as the sample size increases in Figure \ref{fig::Ex1_risks}.
All the Neyman-Pearson based classifiers have type I errors approaching $\alpha$ from below as the sample size increases, and they have similar type I errors at each sample size. However, nb, pen-log and svm all lead to type I errors larger than $\alpha$.
By enlarging the second row of Figure \ref{fig::Ex1_risks}, one would observe the differences in type II errors among \text{PN}$^2$, \text{PSN}$^2$, \text{NN}$^2$\text{ } and \text{NSN}$^2$. In the case of $d=10$, when all features are signals, \text{PN}$^2$\text{ } performs the best across all sample sizes since it assumes the correct model without the unnecessary screening substep. When the sample size is small, \text{PSN}$^2$\text{ } outperforms \text{NN}$^2$, but \text{NN}$^2$\text{ } gradually catches up at larger sample sizes. In the case of $d = 100$, screening helps \text{PSN}$^2$\text{ } take the lead at small sample sizes; the advantage of screening fades as the sample size increases. In the case of $d=1000$, \text{PSN}$^2$\text{ } dominates the other three classifiers throughout the sample size range we investigate.
Overall, the advantages of \text{PSN}$^2$\text{ } over \text{NSN}$^2$\text{ } and of \text{PN}$^2$\text{ } over \text{NN}$^2$\text{ } are uniform across all dimensions and sample sizes.
This is consistent with the intuition that when the data come from a two-class Gaussian model, parametric methods lead to more efficient estimators than their nonparametric counterparts.
\subsubsection{Example 2: normal vs. mixture normal}
Normality assumption is violated in the second example. Assume $p \sim 0.5\mathcal{N}( a,\Sigma) + 0.5 \mathcal{N} (- a, \Sigma)$ and $q \sim \mathcal{N}( 0_d,I_d)$,
where $ a = (\frac{3}{\sqrt{10}} 1^\prime_{10}, 0^\prime_{d-10})^\prime$,
$\Sigma = \left( \begin{array}{cc}10^{-1} I_{10} & 0\\ 0 & I_{d-10} \end{array}\right)$. At significance level $\alpha = 0.05,$
the oracle type I/II risks are
$R_0(\phi^*_{\alpha}) = 0.05$ and $R_1(\phi^*_{\alpha}) = 0.027$ respectively.
The performance of the screening substep of \text{PSN}$^2$\text{ } and \text{NSN}$^2$\text{ } is shown in Table \ref{tb::scr2}.
While both screening methods keep the false positive rates at around $1-Q$, the parametric screening method (in \text{PSN}$^2$) based on the $t$-statistic misses almost all signals. This is not surprising, since $t$-statistics rank features by differences in means, and here the two classes have exactly the same marginal means and variances across all dimensions.
\begin{table}[h]
\caption{
Average screening performance summarized over 1000 independent replications at sample sizes $m=n=400$ and $Q = 0.95$ with standard errors in parentheses. \label{tb::scr2}
}
\centering
\scriptsize{
\begin{tabular}{lrr|rr|rr}
&\multicolumn{2}{c|}{\# of selected features} & \multicolumn{2}{c|}{\# of missed signals}& \multicolumn{2}{c}{\# of false positives} \\
\hline
$d$
& \multicolumn{1}{c}{$t$-stat } & \multicolumn{1}{c|}{$D$-stat}
& \multicolumn{1}{c}{$t$-stat } & \multicolumn{1}{c|}{$D$-stat}
& \multicolumn{1}{c}{$t$-stat } & \multicolumn{1}{c}{$D$-stat}
\\\hline
10
&1.76 (1.53)&8.13 (1.83)
& 8.24 (1.53)&1.87 (1.83)
& 0 (0) &0 (0) \\
100
&5.93 (3.44)&11.96 (3.57)
& 9.38 (0.80)&2.34 (1.59)
& 5.31 (3.17)& 4.29 (2.68) \\
1000
&50.69 (9.60)&58.78 (9.87)
& 9.50 (0.69)&1.26 (1.04)
&50.19 (9.51)&50.04 (9.62) \\
\end{tabular}}
\end{table}
\begin{figure}[!h]
\caption{Average error rates of $\hat{\phi}$'s over 1000 independent replications for each combination of $(d,m,n)$.
Error rates are computed by averaging over 1000 independent testing data points from each class in each replication, and then averaging over replications. \label{fig::Ex2_risks}}
\centering
\includegraphics[width=\textwidth]{figure2_modified.pdf}
\end{figure}
Figure \ref{fig::Ex2_risks} presents the average error rates.
The same reason that causes the above fiasco of $t$-statistic screening
reduces \text{PSN}$^2$\text{ } and \text{PN}$^2$\text{ } to nothing more than, if not less than, two unfair random coins with probability $0.05$ of landing on $1$, while the behaviors of nb and pen-log bear more resemblance to those of fair random coins.
This fundamental difference arises because the classical framework aims to minimize the overall risk, and therefore tends to distribute errors evenly when the sample sizes of the two classes are about the same.
\text{NSN}$^2$\text{ } and \text{NN}$^2$, which are based on nonparametric assumptions, on the other hand perform very well on the non-normal data.
Their differences in type II error performance are similar to those in Example 1.
From the two simulation examples, it is clear that the
screening-based \text{NSN}$^2$\text{ } and \text{PSN}$^2$\text{ } exhibit advantages over their non-screening counterparts under
high-dimensional settings. When the normality assumption is violated, and the sample sizes are reasonably large for efficient kernel estimates,
\text{NSN}$^2$\text{ } prevails over \text{PSN}$^2$.
As a rule of thumb, for high-dimensional classification problems that emphasize type I error control, we recommend \text{NSN}$^2$\text{ } if the sample size is relatively large and \text{PSN}$^2$\text{ } otherwise.
\subsection{Real data analysis}
In addition to the neuroblastoma dataset analyzed in the introduction, we now demonstrate the performance of \text{PSN}$^2$\text{ } and \text{NSN}$^2$\text{ } for targeted asymmetric error control on two additional real datasets.
\subsubsection{p53 mutants dataset}
The p53 mutants dataset \citep{danziger2006functional} contains $d=5407$ attributes extracted from biophysical experiments for 16772 mutant p53 proteins, among which 143 are determined as ``active'' and the rest as ``inactive'' via in vivo assays.
All 143 active samples and the first $1500$ inactive samples are included in our analysis.
We treat the active class as class 0 and aim to control the error of missing an active protein under $\alpha = 0.05$.
This dataset is split into a training set with 100 observations from the active class and 1000 observations from the inactive class, and a testing set with the remaining observations. \text{PSN}$^2$\text{ } is used as the representative of our proposed methods, as the class 0 sample size is small for nonparametric methods.
The average type I and type II errors over 1000 random splits are shown in Table \ref{tb::realData}. Compared with pen-log, nb and svm,
\text{PSN}$^2$\text{ } performs much better in controlling the type I error.
\begin{table}[h]
\caption{Average errors over 1000 random splits with standard errors in parentheses. $\alpha = 0.05$, $\delta_1 = 0.05$, $Q=0.95$, and $\delta_3 = 0.1$.
\label{tb::realData}}
\vskip 6pt
\centering
{\renewcommand{\arraystretch}{1.5}
\begin{tabular}{lrrrr}
&\multicolumn{1}{c}{\text{PSN}$^2$}
&\multicolumn{1}{c}{pen-log}
&\multicolumn{1}{c}{nb}
&\multicolumn{1}{c}{svm}\\\hline
type I &\underline{.019 (.028)} & .162 (.060) &.056 (.034) &.484 (.222)\\
type II &.461 (.291)&.010 (.004)&.458 (.033) &.004 (.003)
\end{tabular}
}
\end{table}
\subsubsection{Email spam dataset}
Now, we consider an e-mail spam dataset available at \url{https://archive.ics.uci.edu/ml/datasets/Spambase}, which contains 4601 observations with 57 features, among which 2788 are class 0 (non-spam) and 1813 are class 1 (spam).
We first standardize each feature and add 5000 synthetic features consisting of independent $\mathcal{N}(0,1)$ variables to make the problem more challenging.
The augmented data has $n=4601$ observations with $d=5057$ features.
This augmented dataset is split into a training set with 1000 observations from each class and a testing set with the remaining observations. We use \text{NSN}$^2$\text{ } since the sample size is relatively large.
The average type I and type II errors over 1000 random splits are shown in Table \ref{tb::spam}.
To evaluate the flexibility of \text{NSN}$^2$\text{ } in terms of prioritized error control, we also report the performance when the priority is switched to control the type II error below $\alpha=0.05$. The results in Table \ref{tb::spam} demonstrate that \text{NSN}$^2$\text{ } is able to control either type I or type II error depending on the specific need of the practitioner.
\begin{table}[h]
\caption{Average errors over 1000 random splits with standard errors in parentheses.
$\alpha = 0.05$, $\delta_1 = 0.05$, $Q=0.95$, and $\delta_3=0.05$. The suffix after \text{NSN}$^2$\text{ } indicates the type of error it aims to control under $\alpha$.
\label{tb::spam}}
\vskip 4pt
\centering
{\renewcommand{\arraystretch}{1.5}
\begin{tabular}{lrrrrr}
&\multicolumn{1}{c}{\text{NSN}$^2$-$R_0$}
&\multicolumn{1}{c}{\text{NSN}$^2$-$R_1$}
&\multicolumn{1}{c}{pen-log}
&\multicolumn{1}{c}{nb}
&\multicolumn{1}{c}{svm}\\\hline
type I &\underline{.019 (.007)}&.488 (.078)&.064 (.007)&.444 (.018)&.203 (.013)\\
type II &.439 (.057)&\underline{.020 (.009)}&.133 (.015)&.054 (.008)&.235 (.017)
\end{tabular}
}
\end{table}
\section{Discussion\label{sec::discussion}}
The Neyman-Pearson classification framework is an important and interesting paradigm to explore beyond the Naive Bayes models considered in this work. For example, we can relax the independence assumption in \text{PSN}$^2$\text{ } and consider a general covariance matrix. Also, we can consider NP-type classifiers with decision boundaries involving feature interactions. It is also worthwhile to study non-probabilistic approaches under the high-dimensional NP paradigm. Methods of potential interest include $k$-nearest-neighbor \citep{weiss2010text} and centroid-based classifiers \citep{Tibshirani.Hastie.ea.2002, hall2010optimal}. However, the NP oracle inequalities would likely have to be replaced by a new theoretical formulation for these methods.
A benefit of the present approach is that, for any given estimator $\hat{r}$, we have a uniform method to determine the proper threshold level in the plug-in classifiers.
However, it would be interesting to develop new ways to estimate the threshold level $C_{\alpha}$ that is adaptive to the particular method used to approximate the density ratio $r$. |
1508.03207 | \section{Introduction}
The coupling of non-relativistic field theories to curved backgrounds was initially investigated with the motivation of establishing a covariant framework. In recent years, such minimally coupled theories have gained renewed importance due to their major applications in condensed matter physics, such as in descriptions of the fractional quantum Hall effect, the trapped electron gas, and various transport phenomena, to name a few \cite{Son:2005rv, Son:2013rqa, Andreev:2013qsa, Wu:2014dha, Gromov:2014vla, Geracie:2015dea}. Non-relativistic diffeomorphism invariance has certain distinct features in contrast to the usual relativistic diffeomorphism invariance which is so fundamental in understanding the metric formulation of Einstein gravity. For Einstein gravity, the vielbein formulation is related directly to the metric formulation because
the spacetime manifold is endowed with a nondegenerate metric. In contrast, in the non-relativistic context, due to the absolute nature of time, the spacetime manifold possesses two degenerate metrics: a contravariant spatial metric and a covariant temporal metric, orthogonal to one another. In this context it is useful to recall that a covariant geometrical formulation of Newtonian gravity was first constructed by Elie Cartan \cite{Cartan:1923zea} and is well known as Newton-Cartan geometry in the literature. This formulation helps in appreciating Newtonian gravity as a non-relativistic limit of general relativity. However, the current requirement is a formulation based on the vielbeins which will be an analog of the Cartan formulation of Einstein's gravity. One may think that a suitable algorithm may be obtained from relativistic theories by contraction. However, the requirement of spatial diffeomorphism in FQHE or any planar system is difficult to obtain by taking a non-relativistic limit of some appropriate relativistic theory. Moreover, such non-relativistic limits have sometimes been found to be problematic \cite{Wu:2014dha}.
A way out follows from the well known procedure for directly deriving relativistic matter theories minimally coupled to curved backgrounds, namely through the localization of spacetime symmetries in the flat spacetime field theory \cite{Utiyama:1956sy}. One starts with a matter theory invariant under global Poincare transformations which does not remain invariant when the parameters of the Poincare transformations become functions of spacetime. To modify the matter theory so that it becomes invariant under the local Poincare transformations, compensating fields are introduced in the process by defining covariant derivatives \cite{Blagojevic:2002du}. A very important aspect of this approach is the correspondence of these new fields with the vierbeins and spin-connection in Riemann-Cartan spacetime.
Following the spirit of this procedure, a general prescription to attain non-relativistic diffeomorphism invariance was proposed in our previous work \cite{Banerjee:2014pya, Banerjee:2015tga}, wherein the localization of the Galilean symmetry for field theories was carried out. The Newton-Cartan spacetime was found to be the most general Galilean invariant curved background through this procedure\cite{Banerjee:2014nja}. The present work seeks to extend this formalism to further include scale transformations.
We will now briefly consider the main approaches which have been used in the derivation of minimal coupling to, and the geometry of, curved non-relativistic backgrounds. This will serve to place our work in a clear context with respect to these approaches. The minimal gravitational coupling of the Newtonian theory had been initially considered in \cite{Duval:1976ht,Duval:1983pb}, whose results have been cast in a more unified approach recently in \cite{Geracie:2015dea}. It gained renewed attention in \cite{Son:2005rv} where the minimal coupling of non-relativistic particles (electrons) to the external gauge field and the metric were determined by using principles of effective field theory. The invariance of the derived action under time dependent diffeomorphisms had been restored by demanding non-canonical transformations of the spatial external gauge fields, which leads to problems when considering the flat space limit \cite{Banerjee:2014pya,Banerjee:2015rca}. In this limit, the flat space Galilean transformations are restored through a specific assumption, involving a particular relation between the gauge parameter and the boost parameter. In contrast, the flat limit can easily be obtained in our field theoretic approach \cite{Banerjee:2014pya}.
Other approaches have been put forward to determine the nature of curved non-relativistic backgrounds directly from the consideration of non-relativistic symmetries. One of these involves the derivation of the background geometry with appropriate metric and curvature tensors, by gauging the centrally extended Galilean algebra (Bargmann algebra) \cite{Andringa:2010it}. The conformal extension of this procedure has been carried out in \cite{Bergshoeff:2014uea}. However, it should be stressed that this is a strictly algebraic approach without reference to any dynamical content of the underlying theory. In addition, the approach necessarily requires the imposition of curvature constraints in order to derive the connection, which formally results in a torsionless theory. Torsion is eventually accounted for in \cite{Bergshoeff:2014uea} by defining it as the antisymmetric piece of a metric compatible and boost invariant connection, with the further definition of the dilatation gauge field in terms of temporal one-form and its generalized inverse. This torsion tensor has important applications in the field of non-relativistic holography \cite{Christensen:2013lma,Hartong:2014oma}. Yet another approach, which is very closely related to the gauging approach mentioned above is the coset construction \cite{Brauner:2014jaa, Jensen:2014aia}. Given a particular symmetry group, and through a prudent choice of a subgroup within it, a coset can be defined which determines the background geometry invariant under the symmetry group. The main feature of this approach is that different choices of the subgroup can lead to several possible realizations of non-relativistic curved backgrounds \cite{Brauner:2014jaa}. The general spacetime connection follows directly from the construction of the Maurer-Cartan form within the coset formalism.
Central to the success of these approaches, as well as our own, is the presence of vielbeins. The coset construction, when applied to any spacetime symmetry group, necessarily involves the use of vielbeins. The same holds true for gauging the algebra directly as in \cite{Bergshoeff:2014uea}, and for the localization of symmetries in our previous work \cite{Banerjee:2014pya,Banerjee:2014nja}.
Through their involvement, the end result is guaranteed to be manifestly covariant and independent of any specific choice of coordinates. In the context of the coset construction, this statement corresponds to a gauge fixing choice of the parameters \cite{Brauner:2014jaa}. Thus, in dealing with the generators of Galilean and scale transformations in accordance with our procedure, the involvement of vielbeins in the resultant action defined on the curved background is manifestly covariant, and invariant under local Galilean and scale transformations. Localizing the symmetries via the procedure in this paper has two specific advantages. The first is that it determines the minimal coupling of non-relativistic field theories to the corresponding curved background by its direct involvement from the onset. The second feature is that it reveals that the vielbeins and the relations between them, as well as the form of the connection, are as much a result of the generators being considered as they are of the dependence on the coordinates used at the time of localization. In particular, bearing the non-relativistic nature of absolute time, the parameters of temporal transformations depend only on time, and not space. This in turn affects which vielbeins do result from the procedure, and serves to elucidate the relation between the vielbeins and the localization of the parameters one begins with.
The main motivation of this paper is to include the anisotropic scale transformation in the localization procedure.
Prior to this work, certain non-relativistic conformal backgrounds have been considered and constructed in \cite{Jensen:2014aia, Duval:2009vt, Bergshoeff:2014uea}. However, the special case of scale invariant fields minimally coupled to curved backgrounds appears to have attracted limited attention despite its known significance in flat space effective field theory descriptions of the fractional quantum Hall effect \cite{Fradkin:1991nr} and the Aharonov-Bohm effect \cite{Bergman:1993kq}. In general, Hall fluids are incompressible in nature \cite{Laughlin:1983fy, Zhang:1992eu} and a well known fact is that incompressible non-relativistic fluids are invariant under scale transformations only, and not under the special conformal transformation \cite{Fouxon:2008ik}. Thus, while scale invariance is a symmetry of all conformal fluids, the full conformal group need not be. Scaling also plays an important role in determining the temperature dependence of transport coefficients in the hydrodynamic description of condensed matter systems with ordinary critical points \cite{Hohenberg:1977ym}. Motivated by these observations, the present work is undertaken with the goal of investigating scale invariant non-relativistic field theories on curved backgrounds.
In the relativistic context, the localization of the scale and Poincare transformations results in the identification of the Weyl-Cartan geometry \cite{Blagojevic:2002du}. This geometry is characterized by a rescaled Riemann-Cartan metric, and a connection which includes the scale factors. While the Riemann tensor is not invariant under scale transformations of the metric, a scale invariant Weyl tensor can be constructed out of the Riemann and Ricci tensors. In light of these known relativistic results, the inclusion of scale symmetry in the localization procedure for non-relativistic field theories should give the Weyl extension of Newton-Cartan geometry. The Weyl rescaled Newton-Cartan geometry is thus expected to apply to both metrics individually, while respecting the properties of the Newton-Cartan structure described earlier. In this paper, this has been explicitly demonstrated to be the case.
We also have in mind the application of our formalism in the description of non-relativistic conformal fluids on curved backgrounds. In this context, we note that the fluid gravity correspondence, for non-relativistic fluids, has been considered in \cite{Rangamani:2008gi} using the well-known light cone reduction formalism. Non-relativistic fluids on the Newton-Cartan background have been discussed in \cite{Duval:1976ht, Geracie:2015xfa,Geracie:2014nka}. However, our interest in scale invariance regarding fluids stems from its known significance in the relativistic case. In the case of relativistic conformal fluids, the scale factor acquires an expression in terms of expansion and acceleration. The covariant expression for the entropy current can be derived for these fluids on curved backgrounds \cite{Loganayagam:2008is}. The thermodynamic description of these fluids, in effect, is related to the scale factor on curved backgrounds. Inspired by these results and the aforementioned relevance of scale in the hydrodynamic descriptions of systems near criticality, we expect that understanding scale invariance in non-relativistic conformal fluids on curved backgrounds might serve a broader context.
We will also consider the implications of scale invariance of the incompressible Hall fluid. For the incompressible fractional quantum Hall (FQH) states of two-dimensional electron
fluids in external magnetic fields, the quantized Hall conductivity is the most fundamental transverse response. The charge current flows perpendicular to the direction of an external electric field, and the transport is dissipationless \cite{Tsui:1982yy}. While the Hall conductivity is a key topological property, independent of the microscopic details of the system, it does not specify the Hall fluid completely.
A full characterization of the FQH fluid also requires the intrinsic orbital spin and the corresponding Hall viscosity. These quantities arise from the coupling of the Hall fluid to a curved background. The Hall viscosity is the response of the Hall fluid to an external shear deformation of the background surface, under which the Hall fluid develops a momentum density perpendicular
to it. As a result, the net energy for the deformation vanishes resulting in a non-dissipative viscosity. At the level of the effective hydrodynamic theory, this coupling of the hydrodynamic gauge fields of the fluid to the spin connection is represented by the Wen-Zee term whose coefficient involves the orbital spin \cite{Wen:1992ej}, while the usual contribution to the Hall viscosity arises from the Berry phase term, which contains the density and the temporal piece of the spin connection \cite{Cho:2014vfl}.
In considering scale transformations, we find the scale analogs of the Wen-Zee and Berry phase terms, which involve the gauge field for scale transformations in place of the spin connection. While the Berry phase and Wen-Zee terms provide a Hall viscosity due to the antisymmetric nature of the spin connection, the gauge field for scale transformations does not possess this feature. As such, the corrections correspond to a different response function altogether, whose aspects are discussed further in the paper.
This paper is organized as follows. In sections 2 and 3, the basic properties of the non-relativistic symmetries and the Newton-Cartan geometry are stated. In section 4, the localization of the Galilean and scale symmetry is described. In the following section, as an application of this localization procedure, non-relativistic spatial diffeomorphism invariance on a general curved background is achieved. In section 6, Weyl rescaled Newton-Cartan geometry (with or without torsion) is constructed from the geometrical identification of the fields introduced during localization. In section 7, we develop first order non-relativistic conformal incompressible fluid dynamics in our approach, beginning with a manifestly Weyl covariant formalism. Finally, in section 8, we discuss the contributions of scale symmetry to the quantum Hall fluid. Some useful mathematical calculations are given in the appendices.
\section{Non-relativistic gravitational background : Newton-Cartan geometry}
The Newton-Cartan background is Cartan's spacetime formulation of the classical Newtonian theory of gravity. It is a non-relativistic manifold which contains a degenerate inverse spatial metric and a degenerate temporal 1-form satisfying the following relations,
\begin{align}
\nabla_{\mu}h^{\alpha \beta} = 0 \qquad & \qquad \nabla_{\mu}\tau_{\nu} = 0 \notag \\
h^{\mu \nu} \tau_{\mu} &= 0
\label{ncmet}
\end{align}
Given that $h^{\mu \nu}$ and $\tau_{\mu}$ are degenerate, their inverses do not exist. Formally, we can define a generalized inverse for the temporal 1-form, $\tau^{\mu}$, such that
\begin{equation}
\tau^{\mu}\tau_{\mu} = 1
\end{equation}
There exists a class of $\tau^{\mu}$ which satisfy the above relation, with respect to which we can further define a spatial metric, $h_{\mu \nu}$, that satisfies the following relations
\begin{align}
h_{\mu \nu}\tau^{\mu} &= 0 \notag \\
\delta^{\mu}_{\nu} &= h^{\mu \lambda}h_{\lambda \nu} + \tau^{\mu}\tau_{\nu}
\label{ncproj}
\end{align}
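As a quick consistency check of these relations, note that the combination $h^{\mu\lambda}h_{\lambda\nu}$ squares to itself and annihilates $\tau^{\nu}$,
\begin{equation*}
(h^{\mu\lambda}h_{\lambda\rho})(h^{\rho\gamma}h_{\gamma\nu})=(\delta^{\mu}_{\rho}-\tau^{\mu}\tau_{\rho})(\delta^{\rho}_{\nu}-\tau^{\rho}\tau_{\nu})=\delta^{\mu}_{\nu}-\tau^{\mu}\tau_{\nu},\qquad (h^{\mu\lambda}h_{\lambda\nu})\tau^{\nu}=0
\end{equation*}
where we used $\tau^{\mu}\tau_{\mu}=1$ and $h_{\mu\nu}\tau^{\mu}=0$.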
$h^{\mu \lambda}h_{\lambda \nu} = P^{\mu}_{\nu}$ is the projection operator of the Newton-Cartan background. There exists a covariant derivative which is metric compatible with both metrics. Since the metrics are degenerate, however, the compatible connection is not uniquely determined by them alone. This allows the Newton-Cartan background to geometrically capture the presence of external forces \cite{D}. With all these considerations, a linear symmetric connection which satisfies the metricity conditions given in Eq.~(\ref{ncmet}) has the general form
\begin{align}
{\Gamma^\rho}_{\nu\mu} &= \tau^{\rho}\partial_{(\mu}\tau_{\nu)} +
\frac{1}{2}h^{\rho\sigma} \Bigl(\partial_{\mu}h_{\sigma\nu}+\partial_{\nu}h_{\sigma\mu} - \partial_{\sigma}h_{\mu\nu}\Bigr)+ h^{\rho\lambda}K_{\lambda(\mu}\tau_{\nu)} \notag \\
&= {\Gamma}'^{\rho}_{\nu\mu} + h^{\rho\lambda}K_{\lambda(\mu}\tau_{\nu)}
\label{nccon}
\end{align}
${\Gamma}'^{\rho}_{\nu\mu}$ in Eq.~(\ref{nccon}) represents the inertial part of the connection, while the full connection ${\Gamma^\rho}_{\nu\mu}$ contains additional non-inertial forces through the term $K_{\lambda \mu}$ \cite{Duval:1983pb}.
Given this connection, one can construct the Riemann tensor in the usual way
\begin{equation}
[\nabla_{\mu}, \nabla_{\nu}]V^{\lambda}=R^{\lambda}_{\sigma\mu\nu}V^{\sigma}\label{R}
\end{equation}
For a symmetric Newton-Cartan connection, the following relations hold
\begin{equation}
\tau_{\rho}R^{\rho}_{\sigma\mu\nu}=0,~~R^{\lambda}_{\sigma(\mu\nu)}=0,~~R^{\lambda}_{[\sigma\mu\nu]}=0,
~~R^{(\lambda\sigma)}{}_{\mu\nu}=0
\label{ncRSsymm}
\end{equation}
The theory considered thus far is completely general. If, in addition, the Galilean connection is to possess the correct Newtonian limit of the connection of a Riemannian manifold, the following additional condition, known as \emph{Trautman's condition}, is required
\begin{equation}
R^{\lambda}{}_{\sigma}{}^{\mu}{}_{\nu}=R^{\mu}{}_{\nu}{}^{\lambda}{}_{\sigma}
\label{trautman}
\end{equation}
This condition is equivalent to $dK=0$, which by the Poincar\'e lemma implies that, locally,
\begin{equation}
K_{\lambda \mu} = 2 \partial_{[\lambda} A_{\mu]}\label{kexp}
\end{equation}
where $A_{\mu}$ is an arbitrary 1-form.
Using (\ref{ncRSsymm}) and (\ref{kexp}), the Ricci tensor satisfies the following equation,
\begin{equation}
R_{\mu\nu}=4\pi\rho\tau_{\mu}\tau_{\nu}\label{ncRicci}
\end{equation}
which is, of course, the correct Newtonian limit of Einstein's equations. Here `$\rho$' is the mass density which occurs in Poisson's equation.
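Schematically, one can see how Poisson's equation is recovered: in adapted coordinates where $\tau_{\mu}=\delta^{0}_{\mu}$ and the only non-inertial piece of the connection is $\Gamma^{i}_{00}=\partial_{i}\Phi$, with $\Phi$ the Newtonian potential (a standard gauge choice, introduced here only for illustration), the only non-trivial component of (\ref{ncRicci}) reads
\begin{equation*}
R_{00}=\partial_{i}\Gamma^{i}_{00}=\partial_{i}\partial_{i}\Phi=4\pi\rho
\end{equation*}
which is precisely Poisson's equation.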
From (\ref{ncproj}) we can obtain the variation of $h_{\mu\nu}$ as,
\begin{equation}
\delta h_{\mu\nu}=-2h_{\rho(\mu}\tau_{\nu)}\delta \tau^{\rho}
\end{equation}
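Explicitly, varying the completeness relation in (\ref{ncproj}) at fixed $h^{\mu\nu}$ and $\tau_{\nu}$ gives $h^{\mu\lambda}\delta h_{\lambda\nu}=-\delta\tau^{\mu}\,\tau_{\nu}$, while varying $h_{\lambda\nu}\tau^{\lambda}=0$ gives $\tau^{\lambda}\delta h_{\lambda\nu}=-h_{\lambda\nu}\delta\tau^{\lambda}$. Contracting the first of these with $h_{\sigma\mu}$ and using the second then yields
\begin{equation*}
\delta h_{\sigma\nu}=-h_{\sigma\lambda}\delta\tau^{\lambda}\,\tau_{\nu}-\tau_{\sigma}h_{\nu\lambda}\delta\tau^{\lambda}=-2h_{\lambda(\sigma}\tau_{\nu)}\delta\tau^{\lambda}
\end{equation*}
as stated above.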
In a similar manner the covariant derivative on $h_{\mu\nu}$ will act in the following way,
\begin{equation}
\nabla_{\gamma} h_{\mu\nu}=-2h_{\rho(\mu}\tau_{\nu)}\nabla_{\gamma} \tau^{\rho}
\end{equation}
\section{Localization of the Galilean and the scale symmetry}
One important motivation of this paper is to couple non-relativistic field theories, which are invariant under Galilean and anisotropic scale transformations, to curved backgrounds. This can be achieved through the localization of the non-relativistic symmetries. The localization of the Galilean symmetry for a non-relativistic field theoretic model was discussed in detail in \cite{Banerjee:2014pya, Banerjee:2015rca}. This procedure will be extended in this section to include scale transformations as well.
The procedure applies to any action invariant under the global Galilean transformations (\ref{globalgalilean}) and scale transformations (\ref{gst}) in flat space,
\begin{equation}
S = \int dt d^3x {\cal{L}}\left(\phi, \partial_t \phi, \partial_k \phi\right)\label{genaction}
\end{equation}
where `$t$' and `$k = 1,2,3$' denote the time and spatial coordinates respectively. In covariant notation these can be represented collectively by $\mu$.
In non-relativistic systems the Galilean symmetry implies invariance under the Galilean transformations. Due to the absolute nature of Newtonian time, space and time have to be treated separately. The Galilean transformation on time is a translation, while on the spatial coordinates it is a composition of a spatial translation, rotation and boost. Explicitly,
\begin{equation}
t\rightarrow t-\epsilon,~~~~~~x^i\rightarrow x^{i}+\epsilon^{i}+ \omega^{i}{}_{j}x^{j}-v^{i}t =x^i+\eta^i-v^i t\label{globalgalilean}
\end{equation}
where $\eta^i=\epsilon^{i}+ \omega^{i}{}_{j}x^{j}$, and $\omega^{ij}$ is antisymmetric. The parameters $\epsilon$, $\epsilon^{i}$, $\omega^{ij}$ and $v^{i}$ correspond to time translation, space translation, spatial rotation and boost respectively. For global transformations, these parameters are constant.
In non-relativistic systems, time rescales with the power `$z$' of the spatial scale factor \cite{Hagen:1972pd}, where `$z$' is called the dynamical critical exponent. This is well known as `Lifshitz scaling' \footnote{Another non-relativistic conformal extension known in the literature is that of the Galilean Conformal Algebra (GCA) \cite{Bagchi:2009my}. The generator for non-relativistic scaling in GCA is,
\begin{equation}
D =-(x^i\partial_i+t\partial_t)\notag\\
\end{equation}
In GCA, space and time scale uniformly as in the relativistic case, and the number of generators is the same as for the relativistic conformal group.} This scaling plays an important role in strongly coupled systems, which have been investigated holographically and found to be relevant in the description of strange metals \cite{Hartnoll:2009ns}. The scale transformations of the time and space coordinates are given by,
\begin{equation}
t'=e^{zs} t,~~~x^{i'}=e^s x^i\label{s}
\end{equation}
where `s' is the parameter of the scale transformation. The infinitesimal transformation takes the following form,
\begin{equation}
x^i\rightarrow x^i+sx^i,~~~t\rightarrow t+zst
\label{gst}
\end{equation}
Under any general coordinate transformation $x^{\mu} \to x^\mu + \xi^\mu$ the action (\ref{genaction}) changes by
\begin{equation}
\Delta S = \int dt d^3x ~\Delta {{\cal{L}}} =\int dt d^3x ~[\delta_0{{\cal{L}}} + \xi^{\mu}\partial_{\mu}{{\cal{L}}}+ \partial_{\mu}\xi^{\mu}{{\cal{L}}}]\label{formvariation}
\end{equation}
Here $\delta_0$ is the form variation given by $\delta_0 \phi = \phi^{\prime}\left({\bf{r}}, t\right) - \phi\left({\bf{r}}, t\right)$. For invariance we require that $\Delta{\cal{L}}$ either vanishes or is a total derivative. When $\Delta \cal{L}$ vanishes, $\xi^{\mu}$ corresponds to a symmetry generator of the Lagrangian.
To elaborate on the steps of this procedure, we will consider the complex Schr\"odinger field theory as a specific example, whose action is given by
\begin{equation}
S = \int dt \int d^3x \left[ \frac{i}{2}\left( \phi^{*}\partial_{t}\phi-\phi\partial_t\phi^{*}\right) -\frac{1}{2m}\partial_k\phi^{*}\partial_k\phi\right]
\label{globalaction}
\end{equation}
This theory is invariant under the Galilean transformations, and under Eq.~(\ref{gst}) when $z=2$. Under the scale transformation in $(3+1)$d, $\phi$ must vary as
\begin{equation*}
\phi'(x')=e^{-\frac{3}{2}s}\phi(x)
\end{equation*}
to retain the scale invariance of the theory.
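This weight can be read off from a simple counting argument: under (\ref{s}) with $z=2$, the measure scales as $dt\, d^3x \to e^{5s}\, dt\, d^3x$, while $\partial_t \to e^{-2s}\partial_t$ and $\partial_k \to e^{-s}\partial_k$. Invariance of the term $\phi^{*}\partial_t\phi$ therefore requires
\begin{equation*}
e^{5s}\,e^{-2s}\,|\phi'|^{2}=|\phi|^{2}\quad\Longrightarrow\quad\phi'=e^{-\frac{3}{2}s}\phi
\end{equation*}
and the gradient term, which also scales as $e^{5s}e^{-2s}|\phi'|^{2}$, fixes the same weight.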
The first step of the procedure involves determining the conditions under which the action (\ref{globalaction}) is invariant under the Galilean and scale transformations. From Eqs.~(\ref{globalgalilean}) and (\ref{gst}), the infinitesimal coordinate transformation can be written as,
\begin{equation}
t\rightarrow t-\epsilon+2st,~~~x^i\rightarrow x^i+\eta^i-v^i t+sx^i\label{galscale}
\end{equation}
Eq. (\ref{galscale}) can be formally written as,
\begin{equation}
x^\mu \to x^\mu + \xi^\mu \label{gendiff}
\end{equation}
where
$\xi^0=-\epsilon+2st,~~\xi^i=\eta^i-v^i t+sx^i$. Note that $\xi^{\mu}$ cannot be treated as independent diffeomorphisms at this stage. Equation (\ref{gendiff}) is just a shorthand for the transformations we will be interested in. In particular, the choice of transformation parameters in (\ref{galscale}) corresponds to the generators of Galilean and scale transformations. Throughout this section we will consider these generators specifically, which is necessary for the localization procedure.
While $\partial_{\mu}\xi^{\mu}=0$ for the global Galilean transformations, it is not so for global scale transformations.
From Eq.~(\ref{galscale}), we see that $\partial_{\mu}\xi^{\mu}=5s$: indeed $\partial_t\xi^0=2s$ and $\partial_i\xi^i=3s$, since $\partial_i\eta^i=\omega^{i}{}_{i}=0$ by the antisymmetry of $\omega^{ij}$. Thus, $$\Delta{\cal{L}}=\delta_0{{\cal{L}}} + \xi^{\mu}\partial_{\mu}{{\cal{L}}}+ \partial_{\mu}\xi^{\mu}{{\cal{L}}}=0$$ provided the field and its derivatives vary in the following manner,
\begin{align}
\delta_0\phi &= -\xi^0\partial_t\phi-\xi^i\partial_i\phi- imv^{i}x_i \phi-\frac{3}{2}s\phi\notag\\
\delta_0\partial_k\phi&=-\xi^0\partial_{t}(\partial_{k}\phi)-\xi^i \partial_{i}(\partial_{k}\phi)-imv^i\partial_{k}(x_i\phi)+\omega_k{}^{m}\partial_{m}\phi-\frac{3}{2}s\partial_k\phi-s\partial_k\phi
\notag\\
\delta_0 \partial_t\phi&=-\xi^0\partial_{t}(\partial_{t}\phi)-\xi^i \partial_{i}(\partial_{t}\phi)-imv^i x_i\partial_{t}\phi+v^{i}\partial_{i}\phi-\frac{3}{2}s\partial_t\phi-2s\partial_t\phi
\label{delphi}
\end{align}
The `$imv^{i}x_i$' term in Eq.~(\ref{delphi}) is important for retaining the invariance of the Schr\"odinger action under spatial boosts \cite{Banerjee:2014pya}. In this paper, the generators of Galilean transformations are taken to be those of the centrally extended Galilean group, and mass is treated as a parameter throughout.
At this stage, the action (\ref{globalaction}) is invariant under global transformations. When we localize Eq.~(\ref{galscale}), the transformation parameters `$\epsilon$, $\epsilon^{i}$, $\omega^{ij}$, $v^{i}$' and `$s$' become functions of space and time. Keeping in mind the nature of non-relativistic spacetime, i.e. absolute time and relative space, the time transformations may be generalised to functions of time only, whereas the spatial transformations are functions of both time and space. Hence, the most general local transformations are given by,
\begin{equation}
t\rightarrow t-\epsilon(t)+\lambda(t)t,~~~~~~x^{i}\rightarrow x^i+\epsilon^{i}(x,t)+ \omega^{i}{}_{j}(x,t)x^{j}-v^{i}(x,t)t+s(x,t)x^i
\end{equation}
Note that under a local scale transformation, the magnitude of the time rescaling parameter always has to be twice the magnitude of the space rescaling parameter in order to keep the Schr\"odinger action invariant.
The action which was invariant under the global transformations is clearly no longer invariant under the local ones. This follows not only from $\partial_{\mu}\xi^{\mu}\neq0$, but also from the fact that the derivatives of $\phi$ no longer vary covariantly as in (\ref{delphi}). To retain the invariance, the next step involves the introduction of additional fields, defined through `gauge covariant derivatives' \cite{Banerjee:2014pya}. We take the covariant derivatives to be,
\begin{eqnarray}
D_k\phi=\partial_k\phi+iB_k\phi+iC_k\phi\nonumber\\
D_t\phi=\partial_t\phi+iB_t\phi +iC_t \phi\label{firstcov}
\end{eqnarray}
The `B' fields were introduced in earlier work \cite{Banerjee:2014pya} to incorporate the change due to the localization of the Galilean transformations. Similarly, the `C' fields are introduced in (\ref{firstcov}) to account for the scale transformations. The explicit structures of the gauge fields `$B$' and `$C$' are as follows,
\begin{align}
B_k &= B_k^{ab}\lambda_{ab} + B_k^{a0}\lambda_{a}\notag\\
B_t &= B_t^{ab}\lambda_{ab} + B_t^{a0}\lambda_{a}\notag\\C_{\mu}&=Db_{\mu}
\label{gaugefields}
\end{align}
where $\lambda_{ab}$ and $\lambda_{a}$ are respectively the generators of rotations and Galilean boosts, and `$D$' is the dilatation generator. The generator of the Galilean boost is given by $\lambda_a = mx_a$. Since the fields being dealt with are scalars, $B_k^{ab}$ and $B_t^{ab}$ can safely be ignored in what follows. An important exception, however, occurs in two space dimensions, where the rotation generator is a scalar. This is crucial for the coupling of a scalar field to the spin connection in the study of the FQHE, as we will see.
These definitions alone are insufficient, however, as $D_k\phi$ and $D_t\phi$ do not vary as in (\ref{delphi}). To remedy this, we proceed in the following way. First, local spatial coordinates `$x^a$' ($a = 1, 2, 3$) are introduced, which will also help provide a geometrical framework for the local Galilean and scale transformations. We can then define the covariant derivatives with respect to these local coordinates in the following way
\begin{align}
\tilde{D}_{0}\phi &={\Sigma_0}^{0}D_t \phi+{\Sigma_0}^{k}D_k \phi\notag\\
\tilde{D}_a\phi &={\Sigma_a}^{k} D_k\phi
\label{finalcov}
\end{align}
where the $\Sigma$'s are additional fields introduced in order to define the new covariant derivatives. Note that the local time will be the same as the global one due to the absolute nature of Newtonian time. It can now be demonstrated that the derivatives defined in Eq.~(\ref{finalcov}) do transform covariantly as in (\ref{delphi}). For $\tilde{D}_a\phi$ we have the following variation,
\begin{align}
\delta_0 \tilde{D}_a\phi&=-\xi^0\partial_t(\tilde{D}_a \phi)-\xi^i\partial_i (\tilde{D}_a \phi)-imv^i x_i (\tilde{D}_a \phi)-\omega^b{}_a \tilde{D}_b \phi-imv_a\phi\notag\\&-\left( \frac{3s+\lambda}{2}\right) \tilde{D}_a \phi
\end{align}
provided the fields $B_k, C_k$ and ${\Sigma_a}^{k}$ vary according to
\begin{align}
{\delta}_0 B_{k}&=(\epsilon-\lambda t) \dot{B}_k - \left(\eta^i-tv^i+sx^i\right) {\partial}_i B_k-{\partial}_k \left(\eta^i - tv^i+sx^i\right) B_i+ m{\partial}_kv^ix_i + m\left(v_k-{\Lambda_k}^a v_a\right)\notag\\&=-\xi^{\mu}\partial_{\mu}B_k-\partial_k\xi^{\mu} B_{\mu}+ m{\partial}_kv^ix_i + m(v_k-{\Lambda_k}^a v_a)\notag\\
\delta_0 C_k &= (\epsilon-\lambda t)\partial_t C_k-(\eta^i-v^i t+sx^i)\partial_i C_k-\partial_k(\eta^i-v^i t+sx^i)C_i+\frac{1}{2}\partial_k s\notag\\&=-\xi^{\mu}\partial_{\mu}C_k-\partial_{k}\xi^{\mu} C_{\mu}+\frac{1}{2}\partial_k s\notag\\
\delta_0\Sigma_{a}{}^{k}&=(\epsilon-\lambda t)\partial_t\Sigma_{a}{}^{k}-(\eta^i-v^i t+sx^i)\partial_i\Sigma_{a}{}^{k}+\partial_i(\eta^k-v^kt)\Sigma_{a}{}^{i}+\partial_i sx^k\Sigma_{a}{}^{i}-\omega^b{}_a \Sigma_{b}{}^{k}\notag\\&=-\xi^{\mu}\partial_{\mu}\Sigma_{a}{}^{k}+\partial_i\xi^k\Sigma_{a}{}^{i}-s\Sigma_{a}{}^{k}-\omega^b{}_a \Sigma_{b}{}^{k}
\label{delBk}
\end{align}
Likewise, the variation of the temporal covariant derivative, $\tilde{D}_0\phi$, satisfies
\begin{align}
\delta_0 \tilde{D}_0\phi&=-\xi^0\partial_t(\tilde{D}_0\phi)
-\xi^i\partial_{i}(\tilde{D}_0\phi)-imv^i x_i(\tilde{D}_0\phi)
+v^{b}\tilde{D}_{b}\phi-\left( \frac{s+3\lambda}{2}\right)\tilde{D}_0\phi\label{cdv}
\end{align}
provided the variations of $B_t, C_t, \Sigma_{0}{}^{0}$ and $\Sigma_{0}{}^{k}$ satisfy,
\begin{align}
\delta_0 B_t &=(\epsilon-\lambda t) \dot{B}_t-({\eta}^i-v^it+sx^i)\partial_i B_t-\partial_t(-\epsilon+\lambda t) B_t-\partial_t(\eta^i-v^it+sx^i)B_i\notag\\& +m\Psi^k{\Lambda_k}^a v_a+m{\dot{v}}^i x_i\notag\\&=-\xi^{\mu}\partial_{\mu}B_t-\partial_t\xi^{\mu} B_{\mu}+m\Psi^k{\Lambda_k}^a v_a+m{\dot{v}}^i x_i\notag\\
\delta_0 C_t &= (\epsilon-\lambda t)\partial_t C_t-(\eta^i-v^i t+sx^i)\partial_i C_t+\partial_t(\epsilon-\lambda t) C_t-\partial_t(\eta^i-v^i t+sx^i)C_i+\frac{1}{2}\partial_t(s+\lambda)\notag\\&=-\xi^{\mu}\partial_{\mu}C_t-\partial_{t}\xi^{\mu} C_{\mu}+\frac{1}{2}\partial_t(s+\lambda)\notag\\
\delta_0 \Sigma_{0}{}^{0}&=(\epsilon-\lambda t)\partial_t\Sigma_{0}{}^{0}-(\eta^i-v^it+sx^i)\partial_i\Sigma_{0}{}^{0}-\partial_t \epsilon\Sigma_{0}{}^{0}+t\partial_t \lambda \Sigma_{0}{}^{0}\notag\\&=-\xi^{\mu}\partial_{\mu} \Sigma_{0}{}^{0}+\partial_t\xi^0 \Sigma_{0}{}^{0}\notag\\
\delta_0 \Sigma_{0}{}^{k} &=(\epsilon-\lambda t)\partial_t\Sigma_{0}{}^{k}-(\eta^i-v^i t+sx^i)\partial_i\Sigma_{0}{}^{k}+\partial_t(\eta^k-v^kt+ sx^k)\Sigma_{0}{}^{0}+\partial_i(\eta^k-v^k t)\Sigma_{0}{}^{i}\notag\\&+\partial_i sx^k\Sigma_{0}{}^{i}+(s-\lambda)\Sigma_{0}{}^{k}+v^b \Sigma_b{}^k\notag\\&=-\xi^{\mu}\partial_{\mu}\Sigma_{0}{}^{k}+\partial_{\mu}\xi^k\Sigma_{0}{}^{\mu}-\lambda\Sigma_{0}{}^{k}+v^b \Sigma_b{}^k
\label{delBt}
\end{align}
The inverses of the `$\Sigma$' fields, the `$\Lambda$' fields, are defined through,
\begin{equation}
{\Sigma_\alpha}^{\mu}{\Lambda_\nu}^{\alpha}=\delta^\mu_\nu,~~~~{\Sigma_\alpha}^{\mu}{\Lambda_\mu}^{\beta}=\delta^\beta_\alpha\label{sl}
\end{equation}
We can now replace the partial derivatives in the action (\ref{genaction}) with these local covariant derivatives to give,
$$
{{\cal{L}}\left(\phi, \partial_t\phi, \partial_k\phi\right)} \to
{{\cal{L}^{\prime}}\left(\phi, \tilde{D}_0\phi, \tilde{D}_a\phi\right)}
$$
However, this `${\cal{L'}}$' is not invariant under the local Galilean and scale transformations as it does not obey
\begin{equation}
\Delta {{\cal{L'}}} =\delta_0{{\cal{L'}}} + \xi^{\mu}\partial_{\mu}{{\cal{L'}}}+ \partial_{\mu}\xi^{\mu}{{\cal{L'}}}= 0.
\label{l}
\end{equation}
Recall that the factor $\partial_{\mu}\xi^{\mu}$ arises from the Jacobian of the coordinate transformations.
The invariance of the action thus requires that the change in the measure also be accounted for. The Lagrangian therefore needs to be modified to
\begin{equation}
{\cal{L}}={\Lambda} {\cal{L'}}
\end{equation}
To retain the invariance of the action,
`$\Lambda$' has to satisfy,
\begin{equation}
\delta_0 \Lambda+\xi^{\mu}\partial_{\mu}\Lambda=0
\end{equation}
The appropriate form of `$\Lambda$' is found to be,
\begin{equation}
\Lambda=\frac{1}{\Sigma_0{}^0}\det {\Lambda_k}^a=\det {\Lambda_\mu}^{\alpha}
\label{mes}
\end{equation}
which is the Jacobian for the Galilean and scale transformations.
The localization procedure is thus completed by replacing the partial derivatives with the local covariant derivatives, and in addition considering the change in the measure for the action (\ref{globalaction}). The whole procedure has led to the derivation of the following action, which is invariant under local Galilean and scale transformations
\begin{equation}
S = \int dt d^3x \Lambda{\cal{L}}\left(\phi, \tilde{D}_0\phi, \tilde{D}_a\phi\right)
\label{localactionold}
\end{equation}
For the Schr\"odinger action (\ref{globalaction}), the localized form is given by
\begin{equation}
S = \int dt \int d^3x\left( \frac{1}{\Sigma_0{}^0}\det {\Lambda_k}^a\right) \left[ \frac{i}{2}\left( \phi^{*}\tilde{D}_0\phi-\phi \tilde{D}_0\phi^{*}\right) -\frac{1}{2m}\tilde{D}_a\phi^{*}\tilde{D}_a\phi\right].
\label{localscaleschrodinger}
\end{equation}
This action (\ref{localscaleschrodinger}) is invariant under both the Galilean and scale transformations. We also note that, unlike in relativistic theories, the \emph{mass} here is not the coefficient of a term in the potential; it enters as a passive parameter in the kinetic term, since non-relativistic theories hold in the regime where the energies being dealt with are far less than the (rest) mass. As such, massive scale invariant non-relativistic theories can, and do, exist. The significance of the localization procedure lies in its ability to describe the most general geometrical framework consistent with the symmetries. This aspect is the focus of the next section.
\section{Non-relativistic spatial diffeomorphism invariance}
Localizing the Galilean symmetry provides non-relativistic spatial diffeomorphism invariance (NRDI). At this stage it is worth checking whether the additional localization of the scale transformation for a non-relativistic field theoretic model preserves NRDI. To demonstrate that (\ref{localscaleschrodinger}) corresponds to a matter field coupled to a curved background, we need the spatial metric to be manifest in the action. It is instructive to recall a property of differential manifolds equipped with a metric, namely that the determinant of the metric tensor is the square of the Jacobian. This property is reflected in the invariant measure in the relativistic context, which is given by $\sqrt{\lvert g \rvert} d^n x$, where $\lvert g \rvert$ is the (positive) determinant of the metric.
In (\ref{localscaleschrodinger}) the invariant measure is given by (\ref{mes}), which suggests that the `$\Sigma$' and `$\Lambda$' fields are related with the metric being sought, if (\ref{localscaleschrodinger}) is to be a description of the field theory on a curved background.
In the following subsections, it will be demonstrated that this is indeed the case. The additional fields introduced during the localization procedure can be geometrically realized as observables of the `Weyl rescaled Newton-Cartan' geometry (with or without torsion), specific combinations of which lend themselves to the description of geometric objects satisfying all the transformation properties of a metric. In \cite{Banerjee:2014nja}, the background was seen to be that of the Newton-Cartan geometry, thereby establishing that this is the most general curved background consistent with spatial non-relativistic symmetries, excluding scale transformations. The following subsections will systematically establish that the resultant background for (\ref{localscaleschrodinger}) is the `Weyl rescaled Newton-Cartan' geometry which, apart from having the degenerate metrics of the Newton-Cartan background, is invariant under Weyl transformations.
Non-relativistic spatial diffeomorphism invariance can be derived from (\ref{localscaleschrodinger}) by requiring that the change in the time coordinate vanishes. This can be achieved either by setting $\epsilon$ and $\lambda$ to zero, or by letting $\epsilon=\lambda t$. We implement the former, as the latter implies an undesirable connection between the time translation and scaling parameters. With this assumption, the local Galilean plus scale transformation reduces to,
\begin{equation}
x^i\longrightarrow x^i+\eta^i(x,t)-v^i(x,t)t+s(x,t)x^i
\label{3ddiff}
\end{equation}
From the variation of $\Sigma_0{}^0$ in (\ref{delBt}) it is evident that when $\epsilon$ and $\lambda$ vanish, $\Sigma_0{}^0$ is constant. For simplicity, we will set $\Sigma_0{}^0= 1$. Following the same prescription as in \cite{Banerjee:2014pya}, we now introduce the following definition of the spatial `metric tensor'
\begin{equation}
h_{ij}=\delta_{cd}{\Lambda^c}_i {\Lambda^d}_j
\label{metric1}
\end{equation}
The variation of `${\Lambda^a}_k$' can be calculated using (\ref{delBk}) and (\ref{sl}).
\begin{align}
\delta_0 {\Lambda^a}_{k}=&(\epsilon-\lambda t)\partial_t{\Lambda^a}_{k}-(\eta^{i}- v^{i}t+sx^i)\partial_{i}{\Lambda^a}_{k} -{\Lambda^a}_{l}\partial_{k}(\eta^{l}-t v^{l})-\partial_{k}sx^i {\Lambda^a}_{i}+\omega^a{}_c{\Lambda^c}_{k}\notag\\=&-\xi^{\mu}\partial_{\mu}{\Lambda^a}_{k}-{\Lambda^a}_{l}\partial_k\xi^l+s{\Lambda^a}_{k}+\omega^a{}_c{\Lambda^c}_{k}
\label{delLamb}
\end{align}
Using the transformation relation of ${\Lambda^a}_k$ from (\ref{delLamb}), the variation of $h_{ij}$ can easily be calculated.
\begin{equation}
\delta_0 h_{ij} =-\xi^k \partial_k h_{ij}-h_{ik}\partial_j\xi^k-h_{kj}\partial_i\xi^k+2sh_{ij}
\label{diffh}
\end{equation}
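The key cancellations behind (\ref{diffh}) are easy to exhibit. Writing $\delta_0 h_{ij}=\delta_{cd}\,(\delta_0{\Lambda^c}_i)\,{\Lambda^d}_j+\delta_{cd}\,{\Lambda^c}_i\,(\delta_0{\Lambda^d}_j)$ and using (\ref{delLamb}), the rotation terms cancel pairwise by the antisymmetry of $\omega_{cd}$, while the two scale terms add,
\begin{equation*}
\delta_{cd}\left(\omega^{c}{}_{e}{\Lambda^e}_i{\Lambda^d}_j+{\Lambda^c}_i\,\omega^{d}{}_{e}{\Lambda^e}_j\right)=\left(\omega_{de}+\omega_{ed}\right){\Lambda^e}_i{\Lambda^d}_j=0,\qquad s\,h_{ij}+s\,h_{ij}=2s\,h_{ij}
\end{equation*}
while the remaining terms assemble into the Lie derivative along $\xi^{k}$.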
The transformation relation (\ref{diffh}) confirms that (\ref{metric1}) is an appropriate definition of the rescaled metric of a non-relativistic, diffeomorphism invariant, 3-dimensional curved space, and that `${\Lambda^a}_i$' should be treated as the inverse spatial vierbein. Corresponding to (\ref{metric1}), the inverse metric can be defined as,
\begin{equation}
h^{kl} = \delta^{ab}{\Sigma_a}^k{\Sigma_b}^l
\label{metricin}
\end{equation}
Its variation can be shown to satisfy
\begin{equation}
\delta_0 h^{kl}=-\xi^i\partial_i h^{kl}+h^{il}\partial_i \xi^k+h^{ki}\partial_i\xi^l-2sh^{kl}
\end{equation}
Following the definition (\ref{metric1}), it can be observed that the change of measure in the previous section can be written as $\Lambda=\sqrt{h}$, where `$h$' is the determinant of $h_{ij}$.
Using this expression in (\ref{localscaleschrodinger}), the action takes the following form
\begin{equation}
S = \int dt \int d^3x \sqrt{h}\left[ \frac{i}{2}\left( \phi^{*}\tilde{D}_{t}\phi-\phi\tilde{D}_t\phi^{*}\right) -\frac{1}{2m}\tilde{D}_a\phi^{*}\tilde{D}_a\phi\right]
\label{localschrodinger}
\end{equation}
This can be further simplified by using (\ref{finalcov}), to give
\begin{equation}
S = \int dt \int d^3x \sqrt{h} \left[ \frac{i}{2}\left( \phi^{*}\tilde{D}_{t}\phi-\phi \tilde{D}_t\phi^{*}\right) -\frac{1}{2m}h^{kl}D_k\phi^{*}D_l\phi\right]
\label{diffschrodinger}
\end{equation}
In the next section, the action (\ref{diffschrodinger}) will be expressed in fully covariant notation (in global coordinates). (\ref{diffschrodinger}) can be interpreted as the action of a massive complex Schr\"odinger scalar field with both the Galilean and scale symmetries, coupled to a non-relativistic curved background. In the relativistic context, scale invariant theories are in general massless. However, it is evident from Eq.~(\ref{diffschrodinger}) that non-relativistic scale invariant theories can contain {\textit{mass}} as a {\textit{passive parameter}}, which has {\textit{no scaling dimension}} \cite{Bergman}.
The variations of the spatial metric and its inverse, given in (\ref{diffh}) and below (\ref{metricin}), contain an additional term that accounts for the Weyl rescaling of the metric, which is a manifestation of the scale transformation of the coordinates. The description of the curved background is as yet incomplete, in that the explicit form of the connection involved in the covariant derivatives has not been derived. The next section provides this specification, thereby completing the description of the Weyl-rescaled Newton-Cartan geometry.
\section{Geometrical interpretation : Weyl-rescaled Newton-Cartan geometry}\label{wrncg}
One major application of the localization procedure in \cite{Banerjee:2014pya} was the construction of Newton-Cartan geometry through a specific identification of the fields, which was discussed in detail in \cite{Banerjee:2014nja}. A four-dimensional manifold was defined with two coordinate systems, local and global, such that at every global coordinate point there is a local coordinate system. The previously introduced field, $\Sigma_{\alpha}{}^{\mu}$, can be interpreted as the vierbein mapping between the global and local frames. It was demonstrated in \cite{Banerjee:2014nja} that the 4-d manifold endowed with $\Sigma_{\alpha}{}^{\mu}$ and its inverse $\Lambda_{\mu}{}^{\alpha}$ has the features of the Newton-Cartan geometry.
As we have seen, the additional inclusion of scale invariance leads to a different result upon localization. First, the transformation properties of the fields that were introduced during the localization of the Galilean symmetry get modified. Second, the localization procedure brings in additional gauge fields, required in order to render the action invariant. These gauge fields reduce to those found in the localization of the Galilean symmetry when the scale parameters $s, \lambda \to 0$. We thus expect the different fields introduced in the localization procedure, each with its own scaling dimension, to lead to a different geometry upon identifying the vierbeins of the manifold. This geometric structure should, however, reduce to the Newton-Cartan geometry in the limit of vanishing scale parameters.
Identifying ${\Sigma_a}^{\mu}$ as the vierbein fields, the inverse spatial metric can be defined as,
\begin{equation}
h^{\mu\nu}={\Sigma_a}^{\mu}{\Sigma_b}^{\nu}\delta^{ab}
\label{spm}
\end{equation}
the spatial components of which ($h^{ij}$) were already defined in the previous section. The temporal one-form can likewise be defined in terms of the inverse vierbein field ${\Lambda_\mu}^{0}$.
\begin{equation}
\tau_{\mu}={\Lambda_\mu}^{0}~~~({\Lambda_k}^{0}=0, {\Lambda_0}^{0}\neq 0)
\label{tem}
\end{equation}
With these definitions, (\ref{delBk}), (\ref{delBt}) and (\ref{sl}) in addition lead to the following variations of $h^{\mu \nu}$ and $\tau_{\mu}$
\begin{align}
\delta_0 h^{\mu\nu} &= -\xi^{\rho} \partial_{\rho} h^{\mu\nu}+h^{\rho\nu}\partial_{\rho}\xi^{\mu}
+h^{\mu \rho}\partial_{\rho}\xi^{\nu}-2sh^{\mu\nu} \notag\\
\delta_0 \tau_{\mu} &= -\tau_{\mu}\partial_0 \xi^0-\xi^0\partial_0 \tau_{\mu}+\lambda\tau_{\mu}
\label{diff}
\end{align}
To obtain the full geometric structure, the connection should be introduced next. As in any gauge theory, the connection can be incorporated by making use of the vierbein postulate, which will also help to explore the metricity condition for this geometry. The vierbein postulate for the vierbein ${\Lambda_\nu}^{\alpha}$ is given by
\begin{align}
\tilde{\nabla}_\mu{\Lambda_\nu}^{0} &= \partial_{\mu}{\Lambda_\nu}^{0} - {\tilde{\Gamma}}_{\nu\mu}^{\rho}{\Lambda_\rho}^{0}
+B^{0}{}_{\mu\beta}{\Lambda_\nu}^{\beta}+2b_{\mu}{\Lambda_\nu}^{0} =0\notag\\
{\tilde{\nabla}}_\mu{\Lambda_\nu}^{a} &= \partial_{\mu}{\Lambda_\nu}^{a} - {\tilde{\Gamma}}_{\nu\mu}^{\rho}{\Lambda_\rho}^{a}
+B^{a}{}_{\mu\beta}{\Lambda_\nu}^{\beta}+b_{\mu}{\Lambda_\nu}^{a} =0\
\label{P}
\end{align}
For the temporal part we find,
\begin{equation}
\partial_{\mu}\Lambda_{\nu}{}^0 - \tilde{\Gamma}_{\nu\mu}^{\rho}\Lambda_{\rho}{}^0=-2b_{\mu}\Lambda_{\nu}^{\phantom{\nu} 0}\label{P1}
\end{equation}
using the fact that $B^{0}{}_{\mu\beta}$ vanishes for Galilean transformations. Thus, (\ref{P1}) directly leads to the expression for the action of the covariant derivative on $\tau_{\nu}$,
\begin{equation}
\tilde{\nabla}_{\mu} \tau_{\nu} = - 2b_{\mu}\tau_{\nu}
\label{tnmetricity}
\end{equation}
From the spatial part of (\ref{P}), the action of the covariant derivative on the inverse spatial metric is found to be,
\begin{align}
\tilde{\nabla}_\mu h^{\rho\sigma}=\partial_{\mu}h^{\rho\sigma}+
\tilde{\Gamma}_{\nu\mu}^{\rho}h^{\nu\sigma}+\tilde{\Gamma}_{\nu\mu}^{\sigma}h^{\nu\rho}= 2b_{\mu}h^{\rho\sigma}.
\label{nmetricity}
\end{align}
The explicit calculation leading to (\ref{nmetricity}) is provided in Appendix \ref{MC}.
Taking the limit $b_{\mu}\rightarrow0$ in (\ref{tnmetricity}) and (\ref{nmetricity}) results in the relations,
\begin{align}
\nabla_\mu h^{\rho\sigma}&=
0\notag\\\nabla_\mu \tau_{\nu}&=0
\label{metricity}
\end{align}
which are the well-known metricity conditions of Newton-Cartan geometry. For the scale extension of the Newton-Cartan geometry, however, the metricity conditions do not hold, as is evident from Eqs.~(\ref{tnmetricity}) and (\ref{nmetricity}).
That non-metricity results from the localization of the Weyl symmetry, and is a general feature of Weyl gravity, is a well-known result in relativistic theories.
It is instructive to note some other relations that follow from the definitions given in (\ref{spm}) and (\ref{tem}). The respective inverses are given by,
\begin{equation}
h_{\nu\rho}=\Lambda_{\nu}{}^{a} \Lambda_{\rho}{}^{a}
\label{spm2}
\end{equation}
and
\begin{equation}
\tau^{\rho}={\Sigma_0}^{\rho}.
\label{tm2}
\end{equation}
In addition, these definitions (\ref{spm}, \ref{tem}, \ref{spm2}, \ref{tm2}) satisfy the orthogonality relations between the spatial and temporal metrics, since
\begin{align}
h^{\mu\nu}\tau_\nu &={\Sigma_a}^{\mu}{\Sigma_a}^{\nu} \Lambda_{\nu}{}^{0}={\Sigma_a}^{\mu}\delta^0_a=0\notag\\
h_{\mu\nu}\tau^\nu &= {\Lambda_{\mu}}^a {\Lambda_\nu}^a {\Sigma_0}^\nu={\Lambda_{\mu}}^a\delta_0^a=0
\label{ortho}
\end{align}
In a similar manner, the following projection relations,
\begin{equation}
h^{\mu\lambda}h_{\lambda\nu} = \delta^\mu_\nu - \tau^\mu\tau_\nu,~~~\tau^{\mu}\tau_{\mu}=1.
\label{project}
\end{equation}
are also satisfied. Given these relations, it is clear that although the Weyl rescaled Newton-Cartan geometry generates non-metricity, the familiar relations of Newton-Cartan geometry continue to hold.
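Indeed, the first projection relation follows in one line from (\ref{sl}) together with the definitions (\ref{spm}), (\ref{spm2}) and (\ref{tm2}),
\begin{equation*}
h^{\mu\lambda}h_{\lambda\nu}={\Sigma_a}^{\mu}{\Sigma_a}^{\lambda}\Lambda_{\lambda}{}^{b}\Lambda_{\nu}{}^{b}={\Sigma_a}^{\mu}\Lambda_{\nu}{}^{a}={\Sigma_\alpha}^{\mu}\Lambda_{\nu}{}^{\alpha}-{\Sigma_0}^{\mu}\Lambda_{\nu}{}^{0}=\delta^{\mu}_{\nu}-\tau^{\mu}\tau_{\nu}
\end{equation*}
where $\Sigma_{a}{}^{\lambda}\Lambda_{\lambda}{}^{b}=\delta_{a}^{b}$ was used in the second step.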
We can now use the definitions introduced in this section to rewrite Eq. (\ref{diffschrodinger}) in the following fully covariant notation.
\begin{equation}
S = \int dt \int d^3x \sqrt{h} \left[ \frac{i}{2}\left( \phi^{*}\tau^{\mu}{D}_{\mu}\phi-\phi \tau^{\mu}{D}_{\mu}\phi^{*}\right) -\frac{1}{2m}h^{\mu\nu}D_{\mu}\phi^{*}D_{\nu}\phi\right]
\label{diffschrodinger2}
\end{equation}
Note that for a scalar field, `$\tilde{\nabla}_{\mu}$' can be identified with the covariant derivative `$D_{\mu}$' defined in the previous section \cite{Banerjee:2014nja}. The curved background in
action (\ref{diffschrodinger2}) can now be interpreted as the Weyl-rescaled Newton-Cartan geometry.
In the context of the covariant derivative, the explicit form of the connection can be determined. This follows from the vierbein postulate by contracting (\ref{P}) with $\Sigma_{\alpha}^{\phantom{\alpha} \sigma}$, which gives the following general expression for the connection
\begin{equation}
\tilde\Gamma_{\nu\mu}^{\rho} = \partial_{\mu}{\Lambda_{\nu}}^\alpha {\Sigma_\alpha}^{\rho}
+B^{\alpha}{}_{\mu\beta}{\Lambda_{\nu}}^\beta{\Sigma_\alpha}^{\rho}+2b_{\mu}{\Lambda_{\nu}}^{0} {\Sigma_0}^{\rho}+b_{\mu}{\Lambda_{\nu}}^{a} {\Sigma_a}^{\rho}\label{vpcon}
\end{equation}
Both the Newton-Cartan and the Weyl rescaled geometries considered here can accommodate torsion. The expression for the torsion will be provided after the derivation of the symmetric connection.
\textit{Symmetric connection:}
Using (\ref{vpcon}), we find the following expression for the symmetric connection of the Weyl rescaled Newton-Cartan background
\begin{align}
{\tilde{\Gamma}^\rho}_{\nu\mu} & = \tau^{\rho}\partial_{(\mu}\tau_{\nu)} +
\frac{1}{2}h^{\rho\sigma} \Bigl(\partial_{\mu}h_{\sigma\nu}+\partial_{\nu}h_{\sigma\mu} - \partial_{\sigma}h_{\mu\nu}\Bigr)+(b_{\mu}\delta_{\nu}^{\rho}
+b_{\nu}\delta_{\mu}^{\rho}-b_{\sigma}h^{\rho\sigma}h_{\nu\mu})\notag\\&+ h^{\rho\lambda}K_{\lambda(\mu}\tau_{\nu)}
\label{wrcon}
\end{align}
The detailed calculation leading to (\ref{wrcon}) is given in Appendix \ref{MC}. In (\ref{wrcon}), the two-form $K$ is defined in a similar way to the one given in \cite{Banerjee:2014nja}
\begin{align}
h^{\rho\lambda}K_{\lambda(\mu}\tau_{\nu)} &=\frac{1}{2}h^{\rho\lambda}[K_{\lambda\mu}\tau_{\nu}+K_{\lambda\nu}\tau_{\mu}]\notag\\
&=\frac{1}{2}h^{\rho\lambda}[{\Lambda_{\lambda}}^aB^{a}{}_{0\mu}\tau_{\nu}+{\Lambda_{\lambda}}^{a}B^{a}{}_{0\nu}\tau_{\mu}]
\label{k}
\end{align}
Defining `$K$' in this way makes the connection unique. Note that $\delta^{\mu}_{\nu}=h^{\mu\rho}h_{\nu\rho}+\tau^{\mu}\tau_{\nu}$.
It is evident from (\ref{wrcon}) that in the limit of vanishing `b', the expression reduces to that of the Newton-Cartan connection.\vspace{1em}\\
\textit{Connection with torsion :} In general, the connection can also have an antisymmetric part. From (\ref{P}) we can then write,
\begin{align}
\partial_{[\mu}\Lambda_{\nu]}{}^{\alpha}-\tilde{\Gamma}^{\rho}_{[\nu\mu]}\Lambda_{\rho}{}^{\alpha}+B^{\alpha}{}_{[\mu\vert\beta}\Lambda_{\nu]}{}^{\beta}+2b_{[\mu}\Lambda_{\nu]}{}^0\delta_{0}^{\alpha}+b_{[\mu}\Lambda_{\nu]}{}^b\delta_{b}^{\alpha}=0
\end{align}
Multiplying both sides by $\Sigma_{\alpha}{}^{\sigma}$, we get,
\begin{align}
\tau^{\sigma}\partial_{[\mu}\tau_{\nu]}+\Sigma_{a}{}^{\sigma}(\partial_{[\mu}\Lambda_{\nu]}{}^a+B^{a}{}_{[\mu\vert\beta}\Lambda_{\nu]}{}^{\beta})+b_{[\mu}\tau_{\nu]}\tau^{\sigma}+b_{[\mu}\delta_{\nu]}^{\sigma}=
\frac{T^{\sigma}_{\nu\mu}}{2}\label{antvp}
\end{align}
where ${T^\rho}_{\nu\mu} = 2\tilde{\Gamma}^\rho_{[\nu\mu]}$ is known as the torsion tensor.
Manipulating the terms in the parentheses using the vierbein postulate, one gets,
\begin{align}
\tau^{\sigma}\partial_{[\mu}\tau_{\nu]}+2b_{[\mu}\tau_{\nu]}\tau^{\sigma}=
\frac{T^{\rho}_{\nu\mu}}{2}\tau^{\sigma}\tau_{\rho}\label{wtor}
\end{align}
We can infer two important facts from the relation (\ref{wtor}). First, due to the presence of the scale term in (\ref{wrcon}), one can have non-vanishing torsion even when $\tau_{\mu}$ is hypersurface orthogonal, i.e. satisfies the Frobenius condition. This distinguishes our result from that in the Newton-Cartan literature, where the requirement $\tau_{[\mu} \nabla_{\nu} \tau_{\lambda]} = 0$ is at odds with the torsion constraint. The second implication has been pointed out in the literature \cite{Bergshoeff:2014uea}, namely that when the right hand side of Eq.~(\ref{wtor}) vanishes, it leads to the non-trivial condition $$\partial_{[\mu}\tau_{\nu]} =- 2b_{[\mu}\tau_{\nu]}$$
This specifically restricts the spatial hypersurfaces of the Weyl rescaled Newton-Cartan background.
In the presence of torsion, the connection (\ref{wrcon}) is modified to,
\begin{align}
{\tilde{\Gamma}^\rho}_{\nu\mu} & = \tau^{\rho}\partial_{(\mu}\tau_{\nu)} +
\frac{1}{2}h^{\rho\sigma} \Bigl(\partial_{\mu}h_{\sigma\nu}+\partial_{\nu}h_{\sigma\mu} - \partial_{\sigma}h_{\mu\nu}\Bigr)+(b_{\mu}\delta_{\nu}^{\rho}
+b_{\nu}\delta_{\mu}^{\rho}-b_{\sigma}h^{\rho\sigma}h_{\nu\mu})\notag\\&+ h^{\rho\lambda}K_{\lambda(\mu}\tau_{\nu)}+\frac{1}{2}h^{\rho\sigma}\left[ -T_{\mu\nu\sigma}-T_{\nu\mu\sigma}+T_{\sigma\nu\mu}\right]
\label{Vtor}
\end{align}
The expression in Eq.~(\ref{wtor}) is just one part of the torsion tensor. The full expression can be obtained from (\ref{wtor}) using (\ref{project}) and (\ref{antvp}),
\begin{align}
\frac{T^{\sigma}_{\nu\mu}}{2}=\tau^{\sigma}\partial_{[\mu}\tau_{\nu]}+2b_{[\mu}\tau_{\nu]}\tau^{\sigma}+\left(\partial_{[\mu}\Lambda_{\nu]}{}^a+B^a_{[\mu\vert b}\Lambda_{\nu]}{}^b+b_{[\mu}\Lambda_{\nu]}{}^a\right)h^{\sigma\gamma}\Lambda_{\gamma}{}^a+K_{\gamma[\mu}\tau_{\nu]}h^{\sigma\gamma}
\label{torfull}
\end{align}
Eq.~(\ref{torfull}) is the general expression for the torsion tensor, including a spatial contribution. This expression has not been considered in the literature thus far. The spatial contribution, namely everything other than the first two terms on the right-hand side of Eq.~(\ref{torfull}), could have interesting consequences. For instance, it is possible that $$K_{\gamma[\mu}\tau_{\nu]} = b_{[\nu} h_{\mu]\gamma},$$ which is a non-trivial constraint between the external forces and the scale gauge field $b_{\mu}$. Following Eq.~(\ref{kexp}), this condition would relate the scale gauge field `$b_{\mu}$' to the U(1) gauge field `$A_{\mu}$' of the Newton-Cartan background, and would lead to a spatial contribution to the torsion tensor which can be ignored under standard arguments.
\vspace{0.5em}
\textit{Weyl tensor} : Finally, just as in the case of Weyl invariant relativistic backgrounds, we can construct the Newton-Cartan analogue of the Weyl tensor in General Relativity. For the purposes of deriving a simple expression for the Weyl tensor, we will for the moment assume that the Riemann tensor of the Newton-Cartan background satisfies the conditions in (\ref{ncRSsymm}) and (\ref{trautman}), and that the connection is symmetric. The Weyl tensor can be constructed as the trace free part of the Riemann tensor. To this end we seek a tensor
\begin{equation}
C_{\lambda\sigma\mu\nu}=R_{\lambda\sigma\mu\nu} +2(h_{\lambda[\mu}S_{\nu]\sigma}+\tau_{\lambda}\tau_{[\mu}S_{\nu]\sigma})-2(h_{\sigma[\mu}S_{\nu]\lambda}
+\tau_{\sigma}\tau_{[\mu}S_{\nu]\lambda})
\label{ncweyl}
\end{equation}
where the above construction has been made for the Newton-Cartan Riemann tensor. Due to the first equality in Eq.~(\ref{ncRSsymm}), we have $R_{\lambda\sigma\mu\nu} = h_{\lambda \rho} R^{\rho}_{\phantom{\rho}\sigma\mu\nu}$. The key property of $C_{\lambda\sigma\mu\nu}$ is that it vanishes when any pair of indices is contracted with either $h^{\mu \nu}$ or $\tau^{\mu} \tau^{\nu}$.
But the Riemann tensor provides a non-vanishing result when contracted with both $h^{\mu \nu}$ and $\tau^{\mu} \tau^{\nu}$, depending on the indices being contracted {\footnote{In Appendix \ref{Schouten}, these non-vanishing contractions of the Riemann tensor, and the derivation of the expression of $S_{\nu \sigma}$ are discussed.}}. Therefore both $h_{\sigma\mu}$ and $\tau_{\sigma} \tau_{\mu}$ appear in (\ref{ncweyl}).
By requiring that $C_{\lambda\sigma\mu\nu}$ be trace free, the following expression for $S_{\nu \sigma}$ can be derived,
\begin{equation}
S_{\nu \sigma} = \frac{R_{\sigma\nu}}{n-2} - \frac{R (h_{\sigma \nu} + \tau_{\sigma}\tau_{\nu})}{2(n-2)(n-1)}
\label{ncSch}
\end{equation}
The tensor $S_{\nu \sigma}$ has a form which is similar to that of the Schouten tensor in General Relativity.
We now turn our attention to the Weyl rescaled Newton-Cartan Riemann tensor. To relate it with the Newton-Cartan Riemann tensor Eq.~(\ref{wrcon}) can be written in the following way
\begin{equation}
\tilde{\Gamma}^{\rho}_{\nu\mu} = \Gamma^{\rho}_{\nu\mu}+(b_{\mu}\delta_{\nu}^{\rho}
+b_{\nu}\delta_{\mu}^{\rho}-b_{\sigma}h^{\rho\sigma}h_{\nu\mu})
\label{wrcon2}
\end{equation}
where $\Gamma^{\rho}_{\nu\mu}$ represents the usual Newton-Cartan connection.
The Riemann tensor for the connection in Eq.~(\ref{wrcon2}) is defined in the usual way
\begin{equation}
[\tilde{\nabla}_{\mu}, \tilde{\nabla}_{\nu}]V^{\lambda}=\tilde{R}^{\lambda}_{\phantom{\lambda}\sigma\mu\nu}V^{\sigma}\label{WeylR}
\end{equation}
Upon expansion, we find the following result
\begin{align}
\tilde{R}^{\lambda}_{\phantom{\lambda}\sigma\mu\nu}&=R^{\lambda}_{\phantom{\lambda}\sigma\mu\nu}+2\nabla_{[\mu}(b_{\nu]}\delta^{\lambda}_{\sigma}+\delta^{\lambda}_{\nu]}b_{\sigma}-h_{\nu]\sigma}b_{\delta}h^{\delta\lambda})+2\delta^{\lambda}_{[\mu}(b_{\nu]}b_{\sigma}-h_{\nu]\sigma}b_{\rho}b_{\gamma}h^{\rho\gamma})\notag\\&+2b_{\rho}h^{\rho\lambda}b_{[\mu}h_{\nu]\sigma}-
2b_{\rho}\tau^{\rho}\tau_{[\mu}h_{\nu]\sigma}b_{\gamma}h^{\gamma\lambda}
\label{WRiem}
\end{align}
While $R^{\lambda}_{\phantom{\lambda}\sigma\mu\nu}$ satisfies the properties given in Eqs.~(\ref{ncRSsymm}) and (\ref{trautman}), $\tilde{R}^{\lambda}_{\phantom{\lambda}\sigma\mu\nu}$ in general does \emph{not}. These differences have important consequences for the way the Riemann tensor in Eq.~(\ref{WRiem}) is contracted. Requiring that $\delta^{\mu}_{\lambda}\tilde{R}^{\lambda}_{\phantom{\lambda}\sigma\mu\nu} = \tilde{R}_{\sigma \nu}$ implies that one can lower with the combination $h_{\mu \nu} + \tau_{\mu}\tau_{\nu}$ and raise with the combination $h^{\mu \nu} + \tau^{\mu} \tau^{\nu}$. In other words, we are using the identity
\begin{equation}
\delta^{\mu}_{\lambda} = (h^{\mu \alpha} + \tau^{\mu} \tau^{\alpha})(h_{\alpha \lambda} + \tau_{\alpha}\tau_{\lambda})
\end{equation}
to infer how to lower and raise indices. This contraction also agrees with the fact that the field $b_{\mu}$ has both spatial and temporal components.
In the case of the Newton-Cartan background, $\tau_{\lambda}R^{\lambda}_{\phantom{\lambda} \sigma \mu \nu} = 0$ led to $R_{\lambda\sigma\mu\nu} = h_{\lambda \rho} R^{\rho}_{\phantom{\rho}\sigma\mu\nu}$. But for the Weyl rescaled Newton-Cartan background, we have
\begin{align}
(h_{\lambda\epsilon}+\tau_{\lambda}\tau_{\epsilon})\tilde{R}^{\lambda}_{\sigma\mu\nu}&=\tilde{R}
_{\epsilon\sigma\mu\nu}=R_{\epsilon\sigma\mu\nu}+2(h_{\epsilon\sigma}+\tau_{\epsilon}\tau_{\sigma})
\nabla_{[\mu}b_{\nu]}+2(h_{\epsilon[\nu}\nabla_{\mu] }b_{\sigma}+\tau_{\epsilon}\tau_{[\nu}\nabla_{\mu]}
b_{\sigma})\notag\\&-2\nabla_{[\mu}(h_{\nu]\sigma}b_{\epsilon})+2\tau^{\delta}\tau_{\epsilon}\nabla_
{[\mu}(h_{\nu]\sigma}b_{\delta})+2h_{\epsilon[\mu}b_{\nu]}b_{\sigma}+2\tau_{\epsilon}\tau_{[\mu}b_{\nu]}
b_{\sigma}\notag\\&-2h_{\epsilon[\mu}h_{\nu]\sigma}h^{\gamma\rho}b_{\gamma}b_{\rho}-2\tau_{\epsilon}
\tau_{[\mu}h_{\nu]\sigma}h^{\gamma\rho}b_{\gamma}b_{\rho}+2b_{\epsilon}b_{[\mu}h_{\nu]\sigma}-2b_{\rho}
\tau^{\rho}\tau_{\epsilon}b_{[\mu}h_{\nu]\sigma}\notag\\&-2b_{\rho}\tau^{\rho}\tau_{[\mu}h_{\nu]\sigma}b_{
\epsilon}+2\tau^{\gamma}\tau^{\rho}b_{\gamma}b_{\rho}\tau_{\epsilon}\tau_{[\mu}h_{\nu]\sigma}\label{tilderiemann}
\end{align}
By contracting (\ref{tilderiemann}) with $h^{\epsilon \mu} + \tau^{\epsilon} \tau^{\mu}$ we get $\tilde{R}_{\sigma \nu}$. The result is,
\begin{align}
\tilde{R}_{\sigma\nu}&=R_{\sigma\nu}+2\nabla_{[\sigma}b_{\nu]}-\nabla_{\mu}(h_{\nu\sigma}b_{\epsilon}h^{\epsilon\mu})+(n-2)[b_{\nu}b_{\sigma}-\nabla_{\nu}b_{\sigma}-h_{\nu\sigma}h^{\gamma\rho}b_{\gamma}b_{\rho}]
\notag\\&-\tau_{\sigma}\nabla_{\nu}(\tau^{\rho}b_{\rho})+2b_{\rho}\tau^{\rho}\tau_{(\sigma}b_{\nu)}-(b_{\rho}\tau^{\rho})(b_{\gamma}\tau^{\gamma})(\tau_{\nu}\tau_{\sigma})
\label{tildeRicc}
\end{align}
This is of course the same result one would get from Eq.~(\ref{WRiem}) by contracting $\lambda$ with $\mu$. To get the Ricci scalar from Eq.~(\ref{tildeRicc}) we again contract with $h^{\sigma \nu} + \tau^{\sigma} \tau^{\nu}$. The result of this contraction is
\begin{align}
\tilde{R}&=R-h^{\mu\nu}\nabla_{\mu}b_{\nu}(2n-3)-(\tau^{\mu}\nabla_{\mu}(b_\rho\tau^{\rho})-
\tau^{\gamma}\tau^{\rho}b_{\gamma}b_{\rho})(n-1)\notag\\&+(n-2)b_{\sigma}\tau^{\rho}\nabla_{\rho}\tau^{\sigma}-
(n-2)^2h^{\gamma\rho}b_{\gamma}b_{\rho}
\label{tildeRiccs}
\end{align}
As we have seen, certain symmetries of the Newton-Cartan Riemann tensor are not satisfied by the Weyl rescaled counterpart.
For instance, Eq.~(\ref{tildeRicc}) reveals that the Ricci tensor is not symmetric. At this stage we could require the symmetries $\tilde{R}_{[\sigma \nu]} = 0 = \tilde{R}^{\lambda}_{\phantom{\lambda} \lambda \mu\nu}$ to hold for the Riemann tensor of the Weyl rescaled Newton-Cartan background. This in turn determines conditions on the `b' fields through which these symmetries are satisfied. Proceeding in this way will lead to the Newton-Cartan Weyl tensor defined in Eq.~(\ref{ncweyl}) being invariant under non-relativistic scale transformations. That is
\begin{equation}
C^{\lambda}_{\phantom{\lambda}\sigma \mu \nu} = \tilde{C}^{\lambda}_{\phantom{\lambda}\sigma \mu \nu}
\end{equation}
In General Relativity this simply leads to the condition that $b_{\sigma} = \partial_{\sigma} \alpha$, for some scalar field $\alpha$. Here, apart from this constraint, the additional requirement of $b_{[\mu} \tau_{\nu]} = 0$ needs to be satisfied. This constraint will be satisfied whenever the spatial hypersurfaces satisfy Frobenius' theorem \cite{Bergshoeff:2014uea}. However, it may be useful to consider the symmetries of the Weyl rescaled Riemann tensor, without imposing additional conditions. For instance, this is useful in the treatment of conformal fluids on curved backgrounds \cite{Loganayagam:2008is}, whose non-relativistic construction will be provided in the next section. In following this course of action, no terms are dropped in the expressions of the Riemann and Ricci tensors given in Eqs.~(\ref{WRiem}),(\ref{tildeRicc}) and (\ref{tildeRiccs}), and the Weyl rescaled Newton-Cartan Riemann tensor can be used to define the \emph{general} Newton-Cartan Weyl tensor, which is of a considerably more complicated form than the one provided here. In other words, the general Weyl tensor will still have the form given in Eq.~(\ref{ncweyl}), but the Schouten tensor is not as simple as the form given in Eq.~(\ref{ncSch}), and contains additional terms.
The equations provided in this section complete the description of the Weyl-rescaled Newton-Cartan geometry. This resultant geometry does not spoil the basic premise of non-relativistic curved backgrounds, namely that they are comprised of degenerate spatial and temporal metrics. Rather, the scale symmetry manifests itself in the (anisotropic) Weyl rescaling of the two metrics separately, altering the description of both the symmetric connection and the torsion. Additionally, as we have just seen, we can construct a Weyl tensor which is invariant under anisotropic scale transformations.
\section{Construction of non-relativistic conformal fluid dynamics}
The aim of this section is to elaborate on an important application of the construction thus far, namely, in the description of non-relativistic conformal fluids. We will first develop a Weyl-covariant formalism
which simplifies the study of conformal non-relativistic hydrodynamics, analogous to the relativistic case. This leads to a proposal for the entropy current of a non-relativistic conformal fluid coupled to a curved background.
A preliminary study of non-relativistic fluids on the usual Newton-Cartan background was performed in \cite{Duval:1976ht}. We will base our subsequent calculations on this work and extend them to the conformal case.
\subsection{Fluid dynamics on the Newton-Cartan background} \label{varis}
In this section we review the properties of ideal non-relativistic fluids on the Newton-Cartan background following \cite{Duval:1976ht, Geracie:2015xfa,Geracie:2014nka}.
The conservation equations for the non-relativistic ideal fluid are,
\begin{align}
\partial_t{\rho}+\partial_i(\rho v^i)&=0~~~ \text{(Continuity equation)}\notag\\
\partial_t(\rho v^i)+\partial_{i}T^{ij}&=0~~~\text{(Momentum conservation equation)}\notag\\
\partial_t\left(\epsilon+\frac{1}{2}\rho {\bf{v}}^2\right)+\partial_i j^i&=0~~~\text{(Energy conservation equation)}
\end{align}
where $\rho, v^{i}, T^{ij}, \epsilon$ and $j^i$ are the density, velocity, stress tensor, energy density and energy current of the fluid, respectively.
The description of non-relativistic fluids requires a choice of fluid velocity. For this purpose, let us consider the fluid velocity $u^{\mu}$ \footnote{There exist many choices of fluid velocity one could adopt, but for simplicity, in the remainder of this paper we will assume that the fluid is co-moving, i.e.\ $u^{\alpha}$ is in the direction of $\tau^{\alpha}$.} such that
\begin{equation}
u^{\mu}\tau_{\mu}=1 \qquad \qquad u^{\mu}h_{\mu \nu} = 0
\label{vel}
\end{equation}
The relations satisfied by this velocity vector follow from some basic considerations of the Newton-Cartan background, first discussed in \cite{Duval:1983pb}, which we now review. Before delving into these relations, let us briefly recall that the Newton-Cartan covariant derivative decomposes into two parts, one corresponding to the inertial piece and the other to an external force. Thus, the action of the Newton-Cartan covariant derivative on a vector field $V^{\lambda}$ is given by
\begin{equation}
\nabla_{\mu}V^{\lambda}= \nabla'_{\mu} V^{\lambda} + h^{\lambda \rho}K_{\rho ( \mu} \tau_{\nu)}V^{\nu} = \partial_{\mu}V^{\lambda}+\Gamma'^{\lambda}_{\mu\nu}V^{\nu} + h^{\lambda \rho}K_{\rho ( \mu} \tau_{\nu)}V^{\nu}
\end{equation}
where $\Gamma'^{\rho}_{\nu\mu}$ is the inertial piece of the connection given by
\begin{equation}
\Gamma'^{\rho}_{\nu\mu} = \tau^{\rho}\partial_{(\mu}\tau_{\nu)} +
\frac{1}{2}h^{\rho\sigma}(\partial_{\mu}h_{\sigma\nu}+\partial_{\nu}h_{\sigma\mu} - \partial_{\sigma}h_{\mu\nu})
\end{equation}
A sensible requirement is that the fluid has no acceleration and is irrotational when considered with respect to this inertial frame, i.e.
\begin{equation}
a'^{\mu}=u^{\rho}\nabla'_{\rho}u^{\mu}=0,~~~\omega'^{\mu\nu}=h^{\gamma[\mu}\nabla'_{\gamma}u^{\nu]}=0
\label{eq.inertial}
\end{equation}
The fluid velocity $u^{\nu}$ also satisfies,
\begin{equation}
\nabla_{\mu} u^{\nu} = \nabla'_{\mu} u^{\nu} - \frac{1}{2}h^{\nu \lambda} K_{\mu \lambda} - \frac{1}{2}h^{\nu \lambda} K_{\rho \lambda} \tau_{\mu} u^{\rho}=\nabla'_{\mu} u^{\nu}+h^{\nu\lambda}K_{\lambda(\rho}\tau_{\mu)}u^{\rho}
\label{eq.Kun}
\end{equation}
Using (\ref{ncproj}), (\ref{eq.inertial}) and (\ref{eq.Kun}) one can obtain,
\begin{equation}
K_{\lambda\mu}=2h_{\nu[\lambda}\nabla_{\mu]}u^{\nu},~~~\delta K_{\lambda\mu}=-2\nabla_{[\lambda}h_{\mu]\nu}\delta u^{\nu}\label{Kexpr2}
\end{equation}
From (\ref{eq.inertial}) and (\ref{eq.Kun}), it then follows that the fluid variables for the expansion, acceleration, shear and vorticity for a general Newton-Cartan frame are given by
\begin{align}
\theta &= \nabla_{\mu} u^{\mu} = \nabla'_{\mu} u^{\mu} = \theta' \notag\\
a^{\nu} &= u^{\mu}\nabla_{\mu} u^{\nu} = h^{\nu \lambda} K_{\lambda \rho} u^{\rho}\notag\\ \sigma^{\mu\nu}&= [h^{ \lambda(\mu}\nabla_{\lambda}u^{\nu)}]-\frac{\theta}{n-1}h^{\mu\nu} = [h^{ \lambda(\mu}\nabla'_{\lambda}u^{\nu)}]-\frac{\theta}{n-1}h^{\mu\nu} = \sigma'^{\mu \nu}\notag \\
\omega^{\mu\nu} &= [h^{ \lambda[\mu} \nabla_{\lambda}u^{\nu]}]= \omega'^{\mu \nu}=0
\end{align}
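As a quick check of the first of these relations, note that the $K$-dependent part of the connection drops out of the trace,
\begin{equation*}
\theta=\nabla_{\mu}u^{\mu}=\nabla'_{\mu}u^{\mu}+h^{\mu\lambda}K_{\lambda(\rho}\tau_{\mu)}u^{\rho}=\theta'+\frac{1}{2}\left(h^{\mu\lambda}\tau_{\mu}K_{\lambda\rho}+h^{\mu\lambda}K_{\lambda\mu}\tau_{\rho}\right)u^{\rho}=\theta'
\end{equation*}
since $h^{\mu\lambda}\tau_{\mu}=0$ and $h^{\mu\lambda}K_{\lambda\mu}=0$ by the antisymmetry of $K_{\lambda\mu}$.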
Thus apart from the acceleration, all other basic variables used to describe the fluid are invariant in going from an inertial to a non-inertial Newton-Cartan frame.
In addition to these basic fluid variables, the description of a fluid requires definitions of the stress tensor and the other matter currents of the theory. Since the Newton-Cartan background contains two degenerate metrics ($h^{\mu\nu}, \tau_{\mu}$) and additional gauge fields ($h_{\mu\nu}, \tau^{\mu}, A_{\mu}$), these definitions should follow from a careful variation of the action.
The most general variation of the action, which leaves the connection invariant, is given by
\begin{align}
0=\delta S=\int\sqrt{h}d^4x [-\frac{1}{2}P_{\mu\nu}\delta h^{\mu\nu}+Q^{\mu}\delta \tau_{\mu}+J^{\mu}\delta A_{\mu}+R_{\mu}\delta u^{\mu}]\label{genvar}
\end{align}
where $P_{\mu \nu}, Q^{\mu}, J^{\mu}$ and $R_{\mu}$ at this stage are merely coefficients to the quantities being varied. Two of these variations correspond to non-gauge variables, i.e. $\delta h^{\mu \nu}$ and $\delta \tau_{\mu}$, which are the variations of the given inverse spatial metric and temporal 1-form. Setting these variations to vanish provides the contributions from the pure gauge variables $A_{\mu}$ and $u^{\mu}$. (\ref{genvar}) then reduces to,
\begin{equation}
\delta S=\int\sqrt{h}d^4x [J^{\mu}\delta A_{\mu}+R_{\mu}\delta u^{\mu}]\label{gac2}
\end{equation}
We can simplify (\ref{gac2}) further by using the properties of $K_{\lambda\mu}$. Following (\ref{kexp}) and (\ref{Kexpr2}) we get,
\begin{equation}
\delta A_{\mu}=-h_{\mu\rho}\delta u^{\rho}+ \partial_{\mu}\chi\label{A}
\end{equation}
where $\chi$ is an arbitrary scalar; the exact piece $\partial_{\mu}\chi$ leaves $K_{\lambda\mu}$ unchanged.
Using the expression for $\delta A_{\mu}$ from (\ref{A}), the variation (\ref{gac2}) simplifies to,
\begin{equation}
\delta S=\int\sqrt{h}d^4x [(-J^{\mu}h_{\mu\rho}+R_{\rho})\delta u^{\rho}-(\nabla_{\rho}J^{\rho})\chi]
\label{gauge}
\end{equation}
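The step leading to (\ref{gauge}) is the usual integration by parts: after substituting (\ref{A}) into (\ref{gac2}), one discards a boundary term, schematically
\begin{equation*}
\int\sqrt{h}\,d^4x\,J^{\mu}\partial_{\mu}\chi=\int d^4x\,\partial_{\mu}\!\left(\sqrt{h}\,J^{\mu}\chi\right)-\int\sqrt{h}\,d^4x\,(\nabla_{\mu}J^{\mu})\,\chi
\end{equation*}
where the compatibility of the measure $\sqrt{h}$ with the covariant divergence, $\sqrt{h}\,\nabla_{\mu}J^{\mu}=\partial_{\mu}(\sqrt{h}\,J^{\mu})$, has been assumed.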
For arbitrary $\chi$ with $\delta u^{\rho}=0$, (\ref{gauge}) gives,
\begin{equation}
\nabla_{\rho}J^{\rho}=0
\label{nccur}
\end{equation}
This is the equation for the conserved (matter) current in the theory.
For arbitrary $\delta u^{\rho}$ and $\chi=0$, we have from (\ref{gauge}),
\begin{equation}
R_{\mu}=J^{\rho}h_{\mu\rho}
\end{equation}
Considering the variation of the action under diffeomorphisms, one gets,
\begin{align}
0=\delta S=\int\sqrt{h}d^4x [-\frac{1}{2}P_{\mu\nu}{\pounds}_{\xi} h^{\mu\nu}+Q^{\mu} {\pounds}_{\xi}\tau_{\mu}+J^{\mu}{\pounds}_{\xi} A_{\mu}+R_{\mu}{\pounds}_{\xi} u^{\mu}]\label{genlievar}
\end{align}
where $\pounds_{\xi}$ is the Lie derivative (of the object) along the field $\xi^{\mu}$. After a bit more calculation (\ref{genlievar}) gives,
\begin{align}
0=\delta S=\int \sqrt{h}d^4x~ \xi^{\nu}[\nabla_{\mu}(-T^{\mu}{}_{\nu})+2J^{\mu}\nabla_{[\nu}A_{\mu]}+R_{\mu}\nabla_{\nu}u^{\mu}]
\end{align}
where
\begin{align}
T^{\mu}{}_{\nu}=P_{\nu\rho}h^{\mu\rho}+Q^{\mu}\tau_{\nu}-R_{\nu}u^{\mu}\label{nctmix}
\end{align}
and
\begin{equation}
\nabla_{\mu}(T^{\mu}{}_{\nu})=2J^{\mu}\nabla_{[\nu}A_{\mu]}+R_{\mu}\nabla_{\nu}u^{\mu}=
J^{\mu}K_{\nu\mu}+R_{\mu}\nabla_{\nu}u^{\mu}
\label{nabT}
\end{equation}
Using the expressions for $R_{\mu}$ and $K_{\nu\mu}$, (\ref{nabT}) takes the following form,
\begin{equation}
\nabla_{\mu}T^{\mu}{}_{\nu}=\rho h_{\nu\gamma}a^{\gamma}
\label{cons}
\end{equation}
This relation differs from the usual one in relativistic fluid systems, since the Newton-Cartan background accounts for additional external forces. $a^{\gamma}$ has been defined previously, and was shown to be the only basic fluid variable which differs in going from inertial to general frames.
The expression for $T^{\mu}{}_{\nu}$, in general, is given by Eq. (\ref{nctmix}). By setting $P_{\nu\rho}=-Ph_{\nu\rho}$ and $Q^{\mu}=2 \epsilon u^{\mu}$ Eq.~(\ref{nctmix}) simplifies to,
\begin{align}
T^{\mu}{}_{\nu}=(P+ 2\epsilon)u^{\mu}\tau_{\nu}-P\delta^{\mu}_{\nu}
\label{tupp}
\end{align}
Equation (\ref{tupp}) is the constitutive relation expressing the stress tensor in terms of the energy density $\epsilon$, pressure $P$ and velocity $u^{\mu}$, and is the closest analogue on the Newton-Cartan background of the usual expression for the relativistic stress-energy tensor of an ideal fluid. The factor of $2$ in the definition of $Q^{\mu}$ is required in order to make the trace-free condition match the well-known non-relativistic condition. By setting $T^{\mu}_{\mu}=0$ in Eq.~(\ref{tupp}) we find the condition
\begin{equation}
2\epsilon=(n-1)P
\end{equation}
as required. In addition, while there is no formal way to define $J^{\mu}$ from the action, it is natural to interpret it as some mass flow, which is proportional to the fluid velocity. We can thus write
\begin{equation}
J_i^{\mu} = \rho_i u^{\mu}
\label{curr}
\end{equation}
where $\rho_i$ represents the conserved charge density.
This follows from the zeroth-order result of the derivative expansion.
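As a quick consistency check of the trace condition above (assuming the standard Newton-Cartan normalization $\tau_{\mu}u^{\mu}=1$ for the fluid velocity, and $\delta^{\mu}_{\mu}=n$), the trace of Eq.~(\ref{tupp}) is
\begin{align*}
T^{\mu}{}_{\mu}=(P+2\epsilon)\,u^{\mu}\tau_{\mu}-P\,\delta^{\mu}_{\mu}=(P+2\epsilon)-nP=2\epsilon-(n-1)P,
\end{align*}
whose vanishing indeed reproduces $2\epsilon=(n-1)P$.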
For an ideal fluid, another conservation equation holds for the local entropy current; it follows, as a derived notion, from the second law of thermodynamics. The requirement that entropy be non-decreasing during hydrodynamic evolution can be expressed covariantly in terms of an entropy current whose divergence is non-negative.
\begin{equation}
\nabla_{\mu}J_s^{\mu}\geq 0\label{enc}
\end{equation}
In (\ref{enc}) the equality holds for ideal fluids.
The entropy current $J_s^{\mu}$ can be expressed as,
\begin{equation}
J_s^{\mu} = s u^{\mu}
\end{equation}
where `$s$' is the entropy density of the fluid.
\subsection{Fluid dynamics on Weyl-extended Newton-Cartan background} \label{fluid}
In this subsection, we first introduce a manifestly Weyl-covariant formalism suited to the study of non-relativistic conformal incompressible fluids. An important feature of incompressible fluids is that the Euler equations are invariant under the scale transformation but not under the special conformal transformation \cite{Fouxon:2008ik}. Thus conformal incompressible fluids are only scale invariant.
The various conformal observables for non-relativistic conformal fluids, namely the expansion, acceleration, shear and vorticity, are
\begin{align}
\tilde{\theta}&=\tilde{\nabla}_{\mu}u^{\mu}=\theta+(n+1)u^{\mu}b_{\mu}\notag\\
\tilde{a}^{\nu}&=u^{\mu}\tilde{\nabla}_{\mu}u^{\nu}=a^{\nu}+2u^{\mu}b_{\mu}u^{\nu}\notag\\
\tilde{\sigma}^{\mu\nu}&=\sigma^{\mu\nu}+u^{\rho}b_{\rho}h^{\mu\nu}+b_{\rho}h^{\rho(\nu}u^{\mu)}\notag\\\tilde{\omega}^{\mu\nu}&=b_{\rho}h^{\rho[\nu}u^{\mu]}
\label{scalevar}
\end{align}
where
\begin{equation}
\tilde{\nabla}_{\mu} V^{\lambda}=\nabla_{\mu}V^{\lambda}+(b_{\mu}\delta_{\nu}^{\lambda}
+b_{\nu}\delta_{\mu}^{\lambda}-b_{\sigma}h^{\lambda\sigma}h_{\nu\mu})V^{\nu}\label{nabv}
\end{equation}
We now require a conformally invariant derivative `${\cal{D}}$' such that, if a tensor transforms as
$\tilde{Q}^{\alpha...}_{\beta...}=e^{ws}Q^{\alpha...}_{\beta...}$, the derivative acts on it as,
\begin{equation}
{\mathcal{D}}\tilde{Q}^{\alpha...}_{\beta...}=e^{-ws}{\mathcal{D}}Q^{\alpha...}_{\beta...}
\end{equation}
Following this, the corresponding covariant derivative can be defined as,
\begin{equation}
{\mathcal{D}}_{\mu}=\tilde{\nabla}_{\mu}+wb_{\mu}\label{wcov}
\end{equation}
where `$w$' is the conformal weight of the quantity.
Note that the above covariant derivative is compatible with both degenerate metrics.
\begin{equation}
{\mathcal{D}}_{\lambda}h^{\mu\nu}=0,\qquad {\mathcal{D}}_{\lambda}\tau_{\mu}=0
\end{equation}
In relativistic conformal fluid dynamics, the conformal acceleration ($u^{\mu}{\cal{D}}_{\mu}u^{\alpha}$) and expansion (${\cal{D}}_{\mu}u^{\mu}$) are additionally assumed to vanish, which leads to a condition on $b_{\mu}$. We first evaluate the action of the conformally invariant derivative `$\mathcal{D}$' on the rescaled tensors before imposing any such conditions. Given that the fluid velocity satisfies Eq.~(\ref{vel}), we find the following expression for the conformal acceleration on the Weyl-rescaled Newton-Cartan background
\begin{equation}
u^{\mu}{\cal{D}}_{\mu}u^{\nu} = u^{\mu}\nabla_{\mu}u^{\nu}= a^{\nu}
\label{acc}
\end{equation}
which follows from the fact that the scaling dimension of $u^{\mu}$ is 2.
We thus see that $u^{\mu}{\cal{D}}_{\mu}u^{\nu} = 0$ when there is no acceleration. Further, the requirement of $\mathcal{D}_{\mu} u^{\mu} = 0$ directly leads to the following condition
\begin{equation}
b_{\mu} u^{\mu} = -\frac{\theta}{n-1}
\label{shear}
\end{equation}
As can be seen from (\ref{acc}) and (\ref{shear}), the conformally invariant derivative is useful in casting the variables and equations of non-relativistic incompressible fluid mechanics in a manifestly conformal language.
These derivatives also define a curvature tensor through their commutator,
\begin{equation}
[{\cal{D}}_{\mu},{\cal{D}}_{\nu}]V^{\lambda}=\tilde{R}^{\lambda}_{\mu\nu\sigma}V^{\sigma}+w F_{\mu\nu}V^{\lambda}
\label{fieldstrength}
\end{equation}
where $F_{\mu\nu}=\nabla_{\mu}b_{\nu}-\nabla_{\nu}b_{\mu}$, and $\tilde{R}^{\lambda}_{\mu\nu\sigma}$ is as given in (\ref{WRiem}). Note that if the usual symmetries of the Riemann tensor were assumed in Eq.~(\ref{WRiem}), as discussed in detail following that equation in section \ref{wrncg}, the field strength for the scale gauge field $b_{\mu}$ would necessarily vanish. This would in turn affect the derivative expansion and the dissipative terms that result from it \cite{Loganayagam:2008is}.
Let us now use these concepts to describe the conservation equations of the fluid on the Weyl rescaled Newton-Cartan background. The guiding principle will be that the action of the conformally invariant derivative on the rescaled currents of the theory is the same as that of the Newton-Cartan covariant derivative on these currents, whose conservation equations are known. The action of the covariant derivative on the stress tensor defined in Eq.~(\ref{tupp}) is given by
\begin{equation}
\tilde{\nabla}_{\mu}T^{\mu}_{\nu}={\nabla}_{\mu}T^{\mu}_{\nu}+\Gamma'^{\mu}_{\mu\rho}T^{\rho}_{\nu} - \Gamma'^{\rho}_{\mu\nu}T^{\mu}_{\rho}
\end{equation}
This explicitly leads to the result that,
\begin{equation}
{\cal{D}}_{\mu}T^{\mu}_{\nu} = \nabla_{\mu}T^{\mu}_{\nu}
\label{wten}
\end{equation}
provided $T^{\mu}_{\nu}$ has weight `$n$' and is traceless, i.e. $T^{\mu}_{\mu} = 0$.
It thus follows from (\ref{tupp}) that the conformal weights of `$P$' and `$\epsilon$' are both `$n$'. These are the same conditions as in the relativistic case.
Likewise, using the conservation law (\ref{nccur}), together with $\delta^{\mu}_{\mu}=n$ and the fact that the spatial projector $h^{\mu\sigma}h_{\nu\mu}$ annihilates $J^{\nu}\propto u^{\nu}$, the covariant derivative acting on $J^{\mu}$ gives,
\begin{equation}
\tilde{\nabla}_{\mu}J^{\mu} = (b_{\mu}\delta_{\nu}^{\mu}
+b_{\nu}\delta_{\mu}^{\mu}-b_{\sigma}h^{\mu\sigma}h_{\nu\mu})J^{\nu}= (n+1) b_{\mu}J^{\mu}
\label{cur}
\end{equation}
If the conformal weights of all conserved currents $J_i^{\mu}$ of the theory are $(n+1)$, we then have
\begin{equation}
\mathcal{D}_{\mu}J^{\mu} = 0
\label{wcur}
\end{equation}
This result differs from the relativistic case, where the weight is always required to equal the dimension of the spacetime, i.e. `$n$'. It does, however, imply, following (\ref{curr}), that the weight of the density $\rho_i$ is $(n-1)$, just as in the relativistic case, and that the weight of $u^{\mu}$ is $2$. For arbitrary anisotropic scaling `$z$', Eq.~(\ref{wcur}) will be satisfied if the conformal weight of the current $J^{\mu}_i$ is $(n+z-1)$.
Apart from the conventional mass flow, we will now consider the conformal incompressible fluid as a thermodynamic system and assume that, in strict analogy to the mass current, there exists a local `entropy current' $J^{\mu}_s$ of the fluid, which also has a conformal weight equal to $(n+1)$. In addition to these equations, we can define an inequality for the entropy current
\begin{equation}
\tilde{\nabla}_{\mu}J_s^{\mu} \ge 0
\end{equation}
which follows from the second law of thermodynamics.
Similarly, the first law of thermodynamics for this system can be written in terms of the conformally invariant derivative,
\begin{equation}
Tu^{\lambda}{\cal{D}}_{\lambda}s= \frac{(n-1)}{2}u^{\lambda}{\cal{D}}_{\lambda}P-\mu^i u^{\lambda}{\cal{D}}_{\lambda}{\rho}_i
\label{therm}
\end{equation}
Requiring the conformal weights on both sides of this equation to be equal leads to the result that the weights of the temperature `$T$' and chemical potential `$\mu^i$' are both 1. This general expression simplifies in the case of an ideal fluid. It follows from (\ref{curr}) and (\ref{wcur}) that $\mu^i u^{\lambda}{\cal{D}}_{\lambda}{\rho}_i = 0$. Likewise, Eq.~(\ref{cons}) shows that $u^{\nu}{\cal{D}}_{\mu}T^{\mu}_{\nu} = 0$, which establishes that $u^{\lambda}{\cal{D}}_{\lambda}P = 0$ on substituting Eq.~(\ref{tupp}). Thus, the entropy density for an ideal fluid on the Weyl-rescaled Newton-Cartan background satisfies the following relation
\begin{equation}
T u^{\alpha} {\mathcal{D}}_{\alpha} s=0
\end{equation}
In considering the case of the conformal incompressible fluid on the Weyl rescaled Newton-Cartan background, we have thus found that the conformal weights of the energy, pressure, matter and entropy densities, as well as the temperature, are the same as in the relativistic case. The difference in the conformal weights of the matter and entropy currents from those of the relativistic case was identified in this section as being due to the anisotropic scaling. It will be interesting to study how these results differ in the case of more general fluid and velocity choices.
We would like to emphasize the relevance of the construction given in this subsection. In particular, the construction of the conformal covariant derivative in Eq.~(\ref{wcov}), the general Riemann tensor (which need not possess the usual symmetries) in Eqs.~(\ref{WRiem})--(\ref{tildeRiccs}), and the field strength of the conformal covariant derivatives in Eq.~(\ref{fieldstrength}), are all necessary in order to further determine contributions from dissipative terms through a derivative expansion. In this regard, the relation in Eq.~(\ref{shear}) between the gauge field for scale transformations and the fluid variables is also essential to derive.
None of the above derived results have been considered in the Newton-Cartan literature prior to the present work. As indicated in this subsection, the treatment of non-relativistic fluid dynamics differs in many ways from the relativistic treatment, in large part due to the relations of the degenerate metrics for this background, and the spatial and temporal dependence of the gauge field for scale transformations. The detailed investigation of viscosity and dissipation, and the derivative expansion in general, promises to be considerably more interesting.
\section{Contributions of scale symmetry to the Quantum Hall Effect}
In this section, we will be interested in the consequences of non-relativistic anisotropic scale symmetry in describing quantum Hall fluids. The Hall viscosity results from the Berry phase term in the effective action \cite{Cho:2014vfl}; more specifically, it appears as the response of the corresponding term in the stress-energy tensor to spatial metric perturbations. The effective field theory consists of the Schr\"odinger field minimally coupled to a background electromagnetic field $A_{\mu}$, and a ``dynamical'' statistical field $a_{\mu}$. The inclusion of the Chern-Simons term involving the field $a_{\mu}$ follows from the need to study perturbations about a mean field of a strongly coupled, anyonic system. The statistical field term in effect fixes the statistics of the system to be either bosonic or fermionic, and enables the study of the system's responses. After the perturbation has been taken into account, one can then integrate out this field to obtain the effective field theory description of the Quantum Hall Effect. In this context, the field $\Phi$ represents either a composite boson or a composite fermion, and since we are interested in the consequences of curved backgrounds on the system, we will investigate the former. We can use the result of Eq.~(\ref{diffschrodinger2}) to express the Chern-Simons Landau-Ginzburg (CSLG) effective action of the Quantum Hall effect \cite{Zhang:1992eu} in the following way
\begin{align}
S =\int dt d^2x \sqrt{h} &\bigg[ \frac{i}{2} \tau^{\mu} \left(\Phi(x) D_\mu \Phi(x)^{*} - \Phi^{*}(x) D_\mu \Phi(x) \right)- \frac{1}{2m}h^{\mu\nu} (D_{\mu}\Phi (x))^{*}(D_{\nu}\Phi (x) ) \notag\\&+ \frac{\varepsilon^{\mu\nu\lambda}}{8\pi s} a_{\mu}\nabla_{\nu}a_{\lambda} \bigg]
\label{cbaction}
\end{align}
where $\varepsilon^{\mu \nu \lambda}$ is the Levi-Civita tensor, and the covariant derivative on the curved background `$D_{\mu}$' is,
\begin{align}
D_{\mu} &= \partial_{\mu} + ieA_{\mu}+ia_{\mu}+ isB_{\mu}+ is'C_{\mu} \notag\\
& = \partial_{\mu} + i \alpha_{\mu} + ia_{\mu} \, ,
\label{cbcov}
\end{align}
In (\ref{cbcov}), `$A_{\mu}$' is the external electromagnetic field, `$a_{\mu}$' is the statistical gauge field, `$B_{\mu}$' was introduced at the time of localization of the Galilean symmetry, and similarly `$C_{\mu}$' for the scale transformation in (\ref{firstcov}). Since we will integrate out the statistical field $a_{\mu}$ before our final result, we have written the covariant derivative as in the second equality in Eq.~(\ref{cbcov}). The hydrodynamic version of (\ref{cbaction}) is derived by expressing the complex field $\Phi$ in polar variables \cite{Cho:2014vfl, Zhang:1988wy, Stone},
\begin{equation}
\label{a}
\Phi = \sqrt{\rho} e^{i \theta}
\end{equation}
where $\rho$ is the matter density, $\rho=\Phi^{*}\Phi$.
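To see explicitly how (\ref{a}) produces the hydrodynamic form, write $v_{\mu}\equiv\partial_{\mu}\theta+\alpha_{\mu}+a_{\mu}$ for brevity; a short, standard Madelung-type computation (sketched here as a check) gives
\begin{align*}
(D_{\mu}\Phi)^{*}(D_{\nu}\Phi)=\partial_{\mu}\sqrt{\rho}\,\partial_{\nu}\sqrt{\rho}+\rho\, v_{\mu}v_{\nu}+i\sqrt{\rho}\left(\partial_{\mu}\sqrt{\rho}\, v_{\nu}-v_{\mu}\partial_{\nu}\sqrt{\rho}\right)
\end{align*}
The antisymmetric imaginary piece drops out upon contraction with the symmetric $h^{\mu\nu}$, while $\partial_{\mu}\sqrt{\rho}\,\partial_{\nu}\sqrt{\rho}=\partial_{\mu}\rho\,\partial_{\nu}\rho/(4\rho)$, which reproduces the gradient and kinetic terms of the action below.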
The transformation (\ref{a}) leads to the following action,
\begin{align}
S = \int dt d^2x \sqrt{h} [ \rho\tau^{\mu} &\left(\partial_{\mu}\theta +{\alpha}_{\mu}+a_{\mu}\right)-\frac{\rho}{2m}h^{\mu\nu}\left(\partial_{\mu}\theta + {\alpha}_{\mu}+a_{\mu}\right)\left(\partial_{\nu}\theta + {\alpha}_{\nu}+a_{\nu}\right)\notag\\& -\frac{1}{8m\rho}h^{\mu\nu}\partial_{\mu}\rho\partial_{\nu}\rho+ \frac{\varepsilon^{\mu\nu\lambda}}{8\pi s} a_{\mu}\nabla_{\nu}a_{\lambda}]
\label{calscintaction}
\end{align}
The response of the FQH state to probe fields can be considered through the following variations of the fields
\begin{align}
\rho &\rightarrow \bar{\rho}+\delta \rho\notag\\A_{\mu} &\rightarrow \bar{A}_{\mu}+\delta A_{\mu}\notag\\ a_{\mu} &\rightarrow \bar{a}_{\mu}+\delta a_{\mu}\label{pertur}
\end{align}
where the barred values represent the mean field values, and the variations correspond to probe fields. The FQH state of the electron corresponds to the superfluid state of the boson $\Phi$, where ${\bar A}_{\mu}$ is completely cancelled by ${\bar a}_{\mu}$. Further, the average density, ${\bar \rho}$, is related to the field ${\bar A}_{\mu}$ through the quantum Hall effect,
\begin{align}
{\bar \rho} = \frac{1}{4\pi s} \varepsilon^{0 i j} \nabla_{i}{\bar A}_{j} = -\frac{1}{4 \pi s} \varepsilon^{0 i j} \nabla_{i}{\bar a}_{j}
\label{QHE}
\end{align}
where the filling fraction in Eq.~(\ref{QHE}) is written in terms of the intrinsic orbital spin `$s$' through the relation $\nu = \frac{1}{2s}$. With these considerations at hand, we can study the response in the effective action, where we will retain terms that are at most quadratic in variations and derivatives.
\begin{align}
&{\mathcal L} = \sqrt{h}\bigg[ \tau^{\mu}(\partial_{\mu} \theta + \delta \alpha_{\mu}){\bar \rho} + \tau^{\mu}(\partial_{\mu} \theta + \delta \alpha_{\mu}+\delta a_{\mu})\delta \rho \nonumber\\
&\quad - \frac{{\bar \rho}h^{\mu\nu}}{2m} (\partial_{\mu} \theta + \delta \alpha_{\mu} +\delta a_{\mu}) (\partial_{\nu} \theta + \delta \alpha_{\nu}+\delta a_{\nu}) + \frac{\varepsilon^{\mu\nu\lambda}}{8\pi s} \delta a_{\mu}\nabla_{\nu} \delta a_{\lambda} \bigg]\label{perac}
\end{align}
We can now introduce a field $j^{\mu}$ through a Hubbard-Stratonovich transformation on the kinetic term of the action in Eq.~(\ref{perac}), to rewrite the action as,
\begin{align}
{\mathcal L} =& \sqrt{h}\bigg[ \tau^{\mu}(\partial_{\mu} \theta
+ \delta \alpha_{\mu}){\bar \rho} + \tau^{\mu}(\partial_{\mu} \theta + \delta \alpha_{\mu}+\delta a_{\mu})\delta \rho - (\partial_{\mu} \theta + \delta \alpha_{\mu}+\delta a_{\mu})h^{\mu\nu}j_{\nu} + \frac{m}{2{\bar \rho}} j_{\mu} h^{\mu\nu}j_{\nu} \bigg] \nonumber\\
&+ \sqrt{h} \frac{\varepsilon^{\mu\nu\lambda}}{8\pi s} \delta a_{\mu}\nabla_{\nu} \delta a_{\lambda},
\label{CB:duality}
\end{align}
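The equivalence with Eq.~(\ref{perac}) can be checked by completing the square in $j_{\mu}$; writing $v_{\mu}\equiv\partial_{\mu}\theta+\delta\alpha_{\mu}+\delta a_{\mu}$,
\begin{align*}
\frac{m}{2\bar{\rho}}\,j_{\mu}h^{\mu\nu}j_{\nu}-v_{\mu}h^{\mu\nu}j_{\nu}
=\frac{m}{2\bar{\rho}}\Big(j_{\mu}-\frac{\bar{\rho}}{m}v_{\mu}\Big)h^{\mu\nu}\Big(j_{\nu}-\frac{\bar{\rho}}{m}v_{\nu}\Big)-\frac{\bar{\rho}}{2m}\,v_{\mu}h^{\mu\nu}v_{\nu}
\end{align*}
so that integrating out the shifted $j_{\mu}$ recovers the kinetic term of Eq.~(\ref{perac}), while the saddle point $j_{\mu}=(\bar{\rho}/m)\,v_{\mu}$ identifies $j_{\mu}$ with the spatial current density.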
In the absence of vortex excitations, we can integrate out the phase variable $\theta$ in Eq.~(\ref{CB:duality}) to find the following conservation equation,
\begin{equation}
\partial_{\mu} (\sqrt{h}J^{\mu}) = \sqrt{h} \nabla_{\mu}J^{\mu} =0
\label{jcon}
\end{equation}
where we have defined $J^{\mu} = \delta \rho \tau^{\mu} - j_{\nu}h^{\nu \mu}$. Given Eq.~(\ref{jcon}) holds, we can further express it as,
\begin{equation}
J^{\mu} = \varepsilon^{\mu\nu\lambda} \frac{1}{2\pi} \nabla_{\nu}f_{\lambda} \, ,
\end{equation}
where $f_{\lambda}$ are the new hydrodynamic gauge variables. Clearly, $J^{\mu}$ remains invariant under $U(1)$ transformations of the field $f_{\lambda}$. By substituting this expression for $J^{\mu}$ back in Eq.~(\ref{CB:duality}), we find,
\begin{align}
{\mathcal L} =& \sqrt{h}\bigg[ {\bar \rho}\tau^{\mu} \delta \alpha_{\mu} + \varepsilon^{\mu\nu\lambda} \frac{1}{2\pi} \nabla_{\nu}f_{\lambda}(\delta \alpha_{\mu}+\delta a_{\mu}) +\frac{m}{2{\bar \rho}} j_{\mu} h^{\mu\nu}j_{\nu} + \frac{\varepsilon^{\mu\nu\lambda}}{8\pi s} \delta a_{\mu}\nabla_{\nu} \delta a_{\lambda} \bigg] \,
\label{CB:dual}
\end{align}
We can now integrate out $\delta a_{\mu}$ and obtain the effective action for the FQH state on the Weyl rescaled Newton-Cartan background,
\begin{align}
{\mathcal L} =& \sqrt{h} \bigg[ {\bar \rho} \tau^{\mu} \delta \alpha_{\mu} + \frac{1}{2\pi} \varepsilon^{\mu\nu\lambda} \delta \alpha_{\mu} \nabla_{\nu}f_{\lambda} -\frac{s}{2 \pi} \varepsilon^{\mu\nu\lambda} f_{\mu}\nabla_{\nu} f_{\lambda} + \frac{m}{2{\bar \rho}} j_{\mu} h^{\mu \nu} j_{\nu} \bigg]
\end{align}
Expanding this effective theory to the leading order in gauge fields, we find,
\begin{align}
{\mathcal L} = & \sqrt{h} \left[ e \tau^{\mu} \delta A_{\mu} \bar \rho + s \tau^{\mu} B_{\mu} {\bar \rho} -\frac{s}{2\pi} \varepsilon^{\mu\nu\lambda} f_{\mu}\partial_{\nu} f_{\lambda} + \frac{e}{2\pi} \varepsilon^{\mu\nu\lambda} \delta A_{\mu} \partial_{\nu}f_{\lambda} + \frac{s}{2\pi} \varepsilon^{\mu\nu\lambda} B_{\mu} \partial_{\nu}f_{\lambda} \right. \notag\\ & \left. \qquad + s'\tau^{\mu} C_{\mu} {\bar \rho} + \frac{s'}{2\pi} \varepsilon^{\mu\nu\lambda} C_{\mu} \partial_{\nu}f_{\lambda} +\cdots \right]
\label{FQHE}
\end{align}
The first line of Eq.~(\ref{FQHE}) represents the low-energy effective Lagrangian of the FQHE in the Newton-Cartan background, while the second line completes the contribution due to the Weyl rescaled Newton-Cartan background. In particular, for the first line of Eq.~(\ref{FQHE}), we draw attention to the second term, which is the Berry phase term, and the last term, which is one of the Wen-Zee terms. These terms provide a contribution to the Hall viscosity through the stress tensor upon considering variations in the metric about flat space. In what follows, we will be concerned with comparing the second term of the first line of Eq.~(\ref{FQHE}) with the first term in the second line, insofar as their contribution to the stress tensor is concerned. The vierbein postulate for the Weyl rescaled Newton-Cartan background, Eq.~(\ref{P}), leads to the following relations,
\begin{align}
B_{\mu} = \frac{1}{2}\epsilon_{a b} \Lambda_{\nu}^{a} \nabla_{\mu} \Sigma^{\nu b} \notag\\
C_{\mu} = \frac{1}{d+2} \Lambda^{a}_{\nu} \nabla_{\mu} \Sigma_{a}^{\nu}
\label{BC}
\end{align}
where the covariant derivatives in Eq.~(\ref{BC}) act only on global indices. In considering variations about flat space we have,
\begin{align}
\Lambda^{\alpha}_{\mu} = \delta^{\alpha}_{\mu} + \delta \Lambda^{\alpha}_{\mu} \notag\\
\Sigma_{\alpha}^{\mu} = \delta_{\alpha}^{\mu} + \delta \Sigma_{\alpha}^{\mu}
\label{vp2}
\end{align}
Considering the time dependent variations of the spatial metric and its inverse, which we will label as $h_{\mu \nu}(t)$ and $h^{\mu \nu}(t)$ respectively, we find from Eqs.~(\ref{spm2}) and (\ref{spm}) that
\begin{align}
\tau^{\mu} B_{\mu} = \frac{1}{8} \epsilon_{a b} \delta_{\lambda}^b \delta^{\mu a} \, h^{\nu \lambda}(t) \dot{h}_{\nu \mu}(t) \notag\\
\tau^{\mu} C_{\mu} = \frac{1}{d+2} \, h^{\nu \lambda}(t) \dot{h}_{\nu \lambda}(t)
\label{rel}
\end{align}
where the overdot indicates the time derivative. In Eq.~(\ref{rel}) we made use of the fact that $\tau^{\mu} = \delta^{\mu}_{0}$ in flat space. However, its variations need not vanish. In order to retain a Newton-Cartan structure following spatial variations, it is in fact necessary to have temporal variations as well. This follows from the projection operator and the orthogonality relations Eq.~(\ref{ortho}). This, however, will not affect the calculation of the Hall viscosity, which requires expanding the Lagrangian to quadratic order in variations of the spatial metric.
With this subtle point behind us, we can now proceed to derive the contribution to the stress tensor due to the terms in Eq.~(\ref{FQHE}). Following the discussion in section \ref{varis}, only now for the case where the Lagrangian depends on $h^{\mu \nu}$ and $h_{\mu \nu}$, we find that the relevant form of $T^{\mu}_{\nu}$ is given by,
\begin{equation}
T^{\mu}_{\nu} = 2 \frac{\partial L}{\partial{h_{\mu \rho}}} h_{\rho \nu} - 2 \frac{\partial L}{\partial{h^{\nu \rho}}} h^{\rho \mu}
\label{tress}
\end{equation}
We can now use this for the following Lagrangian,
\begin{equation}
L = \frac{1}{8} s \bar{\rho} \, \epsilon_{a b} \delta^{a \mu} \delta^{b}_{\nu} \left(h_{\mu \rho} \dot{h}^{\rho \nu} \right) + \frac{1}{4} s' \bar{\rho} h_{\mu \rho}\dot{h}^{\mu \rho} + \cdots
\label{eff}
\end{equation}
This Lagrangian comprises the second term of the first line, and the first term of the second line, of Eq.~(\ref{FQHE}), expanded to second order in variations of the spatial metrics in the presence of a constant magnetic field ($\bar{\rho} =$ const.). We will simply concern ourselves with these two terms to illustrate the form of the Hall viscosity in the Newton-Cartan background, and the relevance of the scale term which has entered in due to the Weyl rescaling of the background. Using Eq.~(\ref{tress}) with the Lagrangian in Eq.~(\ref{eff}), we find,
\begin{equation}
T^{\mu}_{\nu} = \left( \frac{s \bar{\rho}}{2}\right) \left(\frac{1}{2}\epsilon_{a b} \delta^{a \mu} \delta^{b}_{\sigma} h_{\lambda \nu} \dot{h}^{\lambda \sigma} - \frac{1}{2}\epsilon_{a b} \delta^{a \sigma} h_{\sigma \lambda} \delta^{b}_{\nu} \dot{h}^{\lambda \mu} \right) + \frac{s' \bar{\rho}}{2} \partial_t\left(h^{\mu \sigma}h_{\sigma \nu} \right)
\label{stress2}
\end{equation}
In deriving Eq.~(\ref{stress2}) we made use of the fact that $\epsilon_{0 b} = 0$. The first two terms, in the parenthesis, are contributions from the first term of Eq.~(\ref{eff}), and give the Hall viscosity for this background. The third term is the contribution due to scale transformations. These may be cast in a more familiar form by contracting Eq.~(\ref{stress2}) with $h_{\mu \rho}$. This leads to,
\begin{equation}
T_{\rho \nu} = \frac{\eta_H}{2} \left(\epsilon_{\rho \sigma} h_{\lambda \nu} + \epsilon_{\nu \sigma} h_{\lambda \rho} \right) \dot{h}^{\lambda \sigma} + \frac{\theta_H}{2} \tau_{\nu}\tau^{\mu} \dot{h}_{\mu \rho} + \cdots
\label{final}
\end{equation}
The first two terms in Eq.~(\ref{final}) are now easily recognizable as the usual Hall viscosity term, where we have defined $\frac{s \bar{\rho}}{2} = \eta_H$ in the usual way. The third term in Eq.~(\ref{final}) is a non-trivial contribution due to scale transformations. In particular, this term does not vanish insofar as a shift $\tau^{i}$ exists, where the $\tau^{\mu}$ in this equation are the corresponding variations in the temporal metric resulting from the variations of the spatial metric, about flat space. Its form is not that of a viscosity, but of an expansion in time. The Hall viscosity and scale contributions in Eq.~(\ref{final}) have vanishing trace, and require the consideration of the Wen-Zee term to determine the corrections to each which arise out of coupling with the curved background.
There are many other interesting aspects which would result due to the Weyl rescaled Newton-Cartan background, which we will not be able to consider in greater detail within this subsection, but which should be mentioned. Chief among these are the consideration of Galilean boosts in the context of response functions, and the role of torsion in the calculation of the torsional Hall viscosity.
In addition, a detailed study of variations in the temporal 1-form and its inverse is needed. As indicated above, the consideration of purely spatial metric variations would not result in a background with the properties of a Newton-Cartan spacetime. The necessary inclusion of variations in $\tau^{\mu}$ and $\tau_{\mu}$ has led to a contribution to scale transformations as indicated in Eq.~(\ref{final}). A detailed investigation of these topics in the phenomenology of the Quantum Hall effect lies beyond the scope of the present paper, and we look forward to addressing these in future work.
\section{Conclusion}
In this paper, the scale transformation was successfully included in the localization scheme for a non-relativistic field theoretic model. As a consequence, we can now consistently couple massive Galilean plus dilatation invariant fields to the Weyl rescaled Newton-Cartan background, which was derived as the general curved non-relativistic background possessing these symmetries. The connection on this background differs from that of the Newton-Cartan one by terms which involve both the spatial metric and temporal vierbein. This arises from the anisotropic scaling of the field theory under consideration. We also demonstrated that while the Riemann and Ricci tensors scale quite differently from their relativistic counterparts, the definitions of the Weyl and Schouten tensors are of the same form as in the relativistic case.
As an application, we provided the description of a conformal incompressible ideal fluid in its rest frame on this background, and derived the entropy current. The weights of the energy, entropy and matter current densities were derived, through which we were further able to demonstrate the first law of thermodynamics. An interesting feature of describing fluids on this background is the inclusion of external forces in a geometric framework, by changing the form of the field $A_{\mu}$. While we have considered the case of an ideal fluid, it is clear that understanding conformal fluids on this background requires further study of viscous and dissipative fluids as well. The entropy current in these cases is non-trivial, and a second order derivative expansion must be carried out. Further, it may be that the scaling properties we determined in the previous section get altered with more general choices of velocity. The investigation of these topics requires the relations and derivatives we introduced in section \ref{fluid}, and the expressions of the Riemann tensor provided in Eqs.~(\ref{WRiem})--(\ref{tildeRiccs}). We look forward to investigating these topics in future work.
As an additional application, we considered the consequences of scale invariance in the effective field theory of the Quantum Hall Effect. The inclusion of the gauge field for scale transformations led to a modified low energy hydrodynamic effective field theory. By looking into the response function to metric perturbations, we determined that the additional terms contribute in the form of an expansion. In addition, we discussed that in considering the Weyl rescaled Newton-Cartan background, it is inconsistent to merely introduce spatial metric variations without considering corresponding variations in the temporal metric, so as to preserve the orthogonality relations between the two. This property is what enables the scale corrections to the effective action to contribute through metric perturbations. Additional aspects of the Weyl rescaled Newton-Cartan background, particularly the form of the torsion tensor and its dependence on the temporal metric and scale gauge field, promise additional geometric terms which could be introduced in the low energy effective action. A detailed investigation of these terms and their phenomenological consequences lies beyond the scope of the present work.
Scale invariance in condensed matter systems, emphasized in the introduction, can be broken at the quantum level. One of the most striking examples of this is the presence of anomalies. The curved background which we have derived allows for the study of these anomalies. The scale anomalies most prominently studied in the context of second quantized field theories are those related to the introduction of the quartic self interaction (and other interaction) terms to the Schr\"odinger field theory. The free quantum theory is normally taken to be scale invariant. In considering the Weyl-rescaled Newton-Cartan background, this aspect need not be so. We intend to use this background to address some of these topics in future work.
As an additional application, it should be noted that the Weyl rescaled Newton-Cartan background can be considered as a non-relativistic limit of the Weyl-Cartan theory, obtained through the localization of the Poincar\'e and Weyl symmetries in relativistic systems. In the relativistic case, the field introduced at the time of localization of the scale symmetry has received some consideration as a probable dark matter constituent \cite{Barvinsky:2013mea}. For the non-relativistic case, the introduced gauge field ($b_{\mu}$) could likely provide additional insight into this topic, owing to the presently understood non-relativistic nature of dark matter.
\section{Acknowledgments}
I would like to thank Prof. Rabin Banerjee, Prof. Pradip Mukherjee and Karan Fernandes for useful discussions.
1705.07302
\section{Introduction}
One of the many unexpected results to emerge from studies of
exo\-planets this century has been the discovery of orbits
that are not even approximately coplanar with the stellar equator
(cf., e.g., \citealt{Winn15}).
The tool traditionally most commonly used to investigate the
relative orientations of orbital and stellar-rotation angular-momentum
vectors is the Rossiter--McLaughlin (\mbox{R--M}) effect
(\citealt{Holt1893, Schlesinger1910}\footnote{An example of Stigler's
law \citep{Merton57,Stigler80}.}) -- the apparent displacement of rotationally broadened stellar
line profiles arising from a body occulting part of the stellar disk.
Long established in eclipsing-binary
studies \citep[e.g.,][]{Rossiter24, McLaughlin24}, the R--M effect
took on new significance following its detection in the archetypal
transiting exo\-planetary system HD~209458 \citep{Queloz00}. The
discovery of misaligned planetary orbits in other systems followed \citep{Hebrard08,
Winn09}, and sample sizes are now large enough\footnote{$\sim$120 at
the time of writing;
e.g.,\newline \texttt{http://www.astro.keele.ac.uk/jkt/tepcat/rossiter.html}}
to suggest that stars with thick convective envelopes generally have planets
with small orbital misalignments, while a broader spread of
values is found in hotter stars
\citep{Winn10,Schlaufman10,Albrecht12,Mazeh15}.
The R--M effect is an essentially spectro\-scopic phenomenon, being studied through
radial-velocity measurements.
In principle there is a corresponding
photo\-metric signature, arising through Doppler boosting (e.g.,
\citealt{Groot12}), but the signal is too small for any reliable
detections to date. Transit photo\-metry does, however, offer
potential diagnostics of spin-orbit alignment if the
surface-brightness distribution over the occulted parts of the stellar
disk is not circularly symmetric. In particular, if the stellar
rotation is sufficiently rapid, it can introduce both an equatorial
extension and, through gravity darkening, a characteristic
latitude-dependent surface-intensity distribution; these effects are
capable of defining the relative direction of the stellar rotation
axis, and hence of diagnosing misaligned transits (e.g., \citealt{Barnes09}).
The first system to be recognized as having a misaligned orbit from
photo\-metry alone, without supporting evidence from the R--M effect,
was KOI-13 \citep{Szabo11,Barnes11}. Other systems in which asymmetry
in the transit light-curve has been interpreted as arising through
rotationally-induced gravity darkening include KOI-89 \citep{Ahlers15}
and HAT-P-7 (KOI-2; \citealt{Masuda15}), while the same approach has
been used to argue for good alignment of orbital and rotational
angular-momentum vectors for KOI-2138 \citep{Barnes15}.
In other cases,
modelling of lower-quality data has led to less compelling claims;
e.g., PTFO~8-8695 (cp.\ \citealt{Barnes13, Howarth16}) and CoRot-29
(cp.\ \citealt{Cabrera15, Palle16}).
In the present paper we re-examine \textit{Kepler} photo\-metry of transits of
KOI-13, using a more complete physical model than previous studies.
Our intention is to stress-test the model against data of remarkable
quality, and to demonstrate its power to establish \textit{absolute}
numerical values for key stellar and planetary parameters.
Following a selective review of the literature on KOI-13
($\S$\ref{sec:koi}), we summarize the model ($\S$\ref{sec:mod}) and the
data preparation ($\S$\ref{sec:dprep}). Results are presented and
discussed in $\S\S$\ref{sec:fit},~\ref{sec:syspar}. Appendix~\ref{appx:one}
demonstrates how to put the modelling on an absolute scale,
given the star's projected equatorial rotation speed.
\section{The KOI-13 system}
\label{sec:koi}
Kepler Object of Interest no.\ 13 (KOI-13; historically cata\-logued as
BD~+46$^\circ$~2629) was identified as the host of a transiting
exo\-planet by \citet{Borucki11}. \citet{Aitken04} had previously noted
BD~+46$^\circ$~2629 as a visual binary with components of comparable
brightness, separated by $\sim$1{\farcs}1 \citep{Howell11, Law14},
which \citet{Szabo11} showed share a common proper motion. The latter
authors identified the marginally brighter component as the transiting
system, a result confirmed by \citet{Santerne12}, who found
the fainter component, KOI-13B, to be itself a spectro\-scopic binary.
The basic transit light-curve was modelled
by \citet{Barnes11}, who showed that its small asymmetry
arises from stellar gravity darkening coupled to
spin--orbit misalignment. Subsequent tomo\-graphy yielded results
inconsistent with the obliquity inferred in this first analysis
\citep{Johnson14}, but by imposing the constraint afforded by the
spectroscopy,
\citet{Masuda15} was able to identify a geometry that reconciled
the spectro\-scopic and light-curve solutions.
The exquisite quality of the \textit{Kepler} data has inspired a number of
ancillary studies. In particular, the system clearly shows
out-of-transit orbital variations arising from Doppler beaming,
ellipsoidal distortion, and reflection effects (\textsc{`beer'}
effects; \citealt{Shporer11, Mislis12, Mazeh12}). A further,
25.43-hr, periodic signal has been identified in the photometry, and
has been suggested as arising either from tidally induced pulsation
\citep{Shporer11,Mazeh12} or from rotational
modulation \citep{Szabo12}.
\section{Modelling}
\label{sec:mod}
The \citet{Barnes11} and \citet{Masuda15} analyses
of the transit light-curve
were both based on
a simple oblate-spheroid stellar geometry, and utilised black-body
fluxes coupled to a global two-parameter limb-darkening `law'.
These are reasonable approximations for initial investigations, especially since KOI-13's
rotation is substantially subcritical (cf.\ Table~\ref{params_tab}), but we undertook
our work in the hope that a somewhat more physically-based model would
better constrain the system with fewer ad hoc adjustments.
The basic model is as described by \citeauthor{Howarth16}
(\citeyear{Howarth16}; \citealt{Howarth01}). Appropriate values for
model parameters, and their probability distributions, are determined
through Markov-chain Monte-Carlo (MCMC) sampling, with uniform priors
unless stated otherwise.
\subsection{Star}
The star's rotationally distorted surface is approximated as a Roche
equipotential.\footnote{Mass distributions from polytropic models give
negligibly different results \citep{Plavec58, Martin70}. By default,
surface angular velocity is assumed to be independent of latitude.}
Latitude-dependent values of surface gravity, $g$,
and \textit{local} effective temperature, \ensuremath{T^{\ell}_{\rm eff}}, are calculated
self-consistently, taking into account gravity darkening.
The stellar flux is then computed as a
numerical integration of emitted intensities over visible surface elements.
\subsubsection{Intensities}
Specific intensities (radiances), $I(\lambda, \mu, \ensuremath{T^{\ell}_{\rm eff}}, g)$,
are interpolated from a grid of line-blanketed, solar-abundance LTE
models \citep{Howarth11a}, integrated over the \textit{Kepler}
passband.
The interpolation in angle ($\mu =\cos\theta$, where $\theta$
is the angle between the surface normal and the line of sight)
is performed using an analytical
4-parameter characterization
\begin{align}
I(\mu)/I(1) = 1 - \sum_{n=1}^4{a_n (1 - \mu^{n/2})}
\label{eq:cl4}
\end{align}
\citep{Claret00}, which reproduces individual numerical values to
$\sim$0.1\%\ \citep{Howarth11a}.
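For concreteness, a minimal sketch of evaluating eqtn.~\eqref{eq:cl4} follows; the coefficients $a_n$ below are placeholders for illustration only, not values from the model grid.
\begin{verbatim}
import numpy as np

def claret4(mu, a):
    """Four-coefficient limb-darkening law:
    I(mu)/I(1) = 1 - sum_{n=1..4} a_n * (1 - mu**(n/2))."""
    mu = np.asarray(mu, dtype=float)
    return 1.0 - sum(a[n - 1] * (1.0 - mu**(n / 2.0)) for n in (1, 2, 3, 4))

# Placeholder coefficients (illustrative only)
a = [0.6, -0.2, 0.3, -0.1]
print(claret4(np.linspace(0.05, 1.0, 5), a))  # limb towards disk centre
\end{verbatim}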
\subsubsection{Modelled effective temperature, gravity}
\label{sec:teffl}
Surface distributions of temperature and gravity are needed in order
to evaluate model-atmosphere emergent intensities (and for no other
reason). These parameters are completely specified by the adopted
gravity-darkening law ($\S$\ref{sec:gd}), plus any suitable
normalizations; we use the base-10 logarithm of the polar gravity in
c.g.s. units, \ensuremath{\log{g}_{\rm p}}, and the stellar effective temperature,
\begin{align*}
\ensuremath{T_{\rm eff}} = \sqrt[4]{\frac{\int{\sigma(\ensuremath{T^{\ell}_{\rm eff}})^4\,\text{d}A}}{{\int{\sigma\,\text{d}A}}}}
\end{align*}
(where $\sigma$ is the Stefan--Boltzmann constant and the integrations
are over surface area).
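In discretized form this is simply the fourth root of the area-weighted mean of $(\ensuremath{T^{\ell}_{\rm eff}})^4$ over the surface grid; a one-function sketch (variable names are ours):
\begin{verbatim}
import numpy as np

def effective_temperature(T_local, dA):
    """Fourth root of the area-weighted mean of T^4 over surface
    elements; the Stefan-Boltzmann constant cancels between the
    numerator and denominator."""
    T_local, dA = np.asarray(T_local), np.asarray(dA)
    return (np.sum(T_local**4 * dA) / np.sum(dA)) ** 0.25
\end{verbatim}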
While the use of model-atmosphere intensities removes the need for ad
hoc limb-darkening parameters, this is at the expense of assumptions
that, first, the effective temperature and polar gravity are known
with adequate precision to give a sufficiently faithful representation of limb
darkening, and secondly, that the model-atmosphere calculations
predict the emergent intensities reliably.
Anticipating that
neither assumption need necessarily be valid (e.g., \citealt{Howarth11b}),
we draw an explicit distinction between the actual physical
quantities \ensuremath{T_{\rm eff}}, \ensuremath{\log{g}_{\rm p}}\ and their model-parameter counterparts
\ensuremath{T^{\rm L}_{\rm eff}}, \ensuremath{\log{g}^{\rm L}_{\rm p}}\ (where the superscript
is intended to indicate a `light-curve', or `limb-darkening',
determination; cf.~$\S$\ref{sec:fit}).
\subsubsection{Gravity darkening}
\label{sec:gd}
It is not immediately obvious whether gravity darkening in KOI-13
should be modelled according to a recipe appropriate for radiative or
convective envelopes. While the literature documents a surprising
large dispersion for estimates of its effective temperature
(7650--9107~K; \citealt{Shporer14}, \citealt{Szabo11},
\citealt{Brown11}, \citealt{Huber14}, with claimed precisions that are
considerably smaller than the spread of results), the more detailed
studies tend towards values at the lower end of the range. This puts
\ensuremath{T_{\rm eff}}\ not very far from the boundary between convective and radiative
regimes, around $\ensuremath{T_{\rm eff}} \simeq 7000$~K (e.g.,
\citealt{Claret98}). Because of this, we ran several sequences of
models using a generic gravity-darkening law,
\begin{align}
\ensuremath{T^{\ell}_{\rm eff}} \propto g^\beta,
\label{eq:gdark}
\end{align}
with the
gravity-darkening exponent $\beta$ as a free parameter. These models
all migrated to solutions with exponents very close to the
\citet{vonZeipel24}
value
of $\beta=0.25$, as was also found by
\citet{Masuda15}.
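A sketch of the generic law of eqtn.~\eqref{eq:gdark}, normalized at the pole, is given below; the numerical values are purely illustrative, not fitted quantities.
\begin{verbatim}
import numpy as np

def local_teff(g, g_pole, T_pole, beta=0.25):
    """Generic gravity-darkening law, T ~ g**beta, normalized at the
    pole; beta = 0.25 recovers the von Zeipel value."""
    return T_pole * (np.asarray(g, dtype=float) / g_pole) ** beta

# Purely illustrative numbers:
g_pole = 10 ** 4.2                # polar surface gravity (cgs)
T_pole = 8100.0                   # polar temperature [K]
print(local_teff(0.93 * g_pole, g_pole, T_pole))  # cooler equator
\end{verbatim}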
For most model runs, we actually used the parameter-free
gravity-darkening model proposed by \citet{Espinosa11}, which is close
to von~Zeipel gravity darkening at the subcritical rotation
appropriate to KOI-13.
This `ELR' formulation has a somewhat firmer physical foundation than
the original von~Zeipel analysis, and gives better agreement with, in
particular, optical interferometry of rapid rotators
(e.g., \citealt{DomdeSou14}).
\subsection{Transit}
Transits are modelled by assuming a completely dark occulting body of
circular cross-section, in a misaligned circular orbit;
although an orbital eccentricity $e = (6\pm 1) \times 10^{-4}$ has
been inferred from out-of-transit photo\-metry of KOI-13 by
\citet{Esteves15}, this has negligible consequences for our study.
The contamination of the transit light-curve by KOI-13B (spatially
unresolved in the \textit{Kepler} beam) is characterized by its
fractional contribution to the total signal, or `third light' ($L_3$)
in the nomenclature of traditional eclipsing-binary
studies.\footnote{Of course, the exo\-planetary `second light' is
extremely small.}
\subsection{Parameters}
Table~\ref{params_tab} lists one set of basic parameters that fully
specify the model (other combinations are possible). We stress that
the geometry of the model is fundamentally scale-free; all linear
dimensions are expressed in units of the orbital semi-major axis,
while times are implicitly in units of the orbital period. The extent
of effects arising from rotational distortion is determined by
$\Omega/\Omega_{\rm c}$, the ratio of the rotational angular velocity
to the critical value at which the effective equatorial gravity is
zero; a value for the stellar mass, often assumed in similar studies,
is not required.
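The scalings quoted in Table~\ref{params_tab} (derived radii $\propto \ensuremath{v_{\rm e}\sin{i_*}}$, mass $\propto (\ensuremath{v_{\rm e}\sin{i_*}})^3$; cf.\ Appendix~\ref{appx:one}) can be illustrated with the following minimal numerical sketch, which assumes the Roche critical rate $\Omega_{\rm c}^2 = 8GM/(27\rpole^3)$ and neglects the planetary mass; the variable names and rounded inputs are ours.
\begin{verbatim}
import numpy as np

G, R_sun, M_sun = 6.674e-11, 6.957e8, 1.989e30   # SI constants

# Scale-free fit results (model M1; rounded, illustrative inputs)
w      = 0.341                    # Omega / Omega_c
rp_a   = 0.2219                   # R_pole / a
obl    = 0.0178                   # oblateness, 1 - R_pole / R_eq
i_star = np.radians(81.137)       # stellar inclination

# Imposed quantities
P_orb = 1.76358799 * 86400.0      # orbital period [s]
vsini = 76.6e3                    # v_e sin(i*) [m/s]

# Eliminating a between Kepler's third law, G M = 4 pi^2 a^3 / P^2,
# and the Roche critical rate, Omega_c^2 = 8 G M / (27 R_pole^3),
# fixes Omega with no absolute scale required:
Omega = w * (np.pi / P_orb) * np.sqrt(32.0 / 27.0) / rp_a**1.5
print(f"P_rot = {2 * np.pi / Omega / 86400.0:.3f} d")      # ~0.99 d

# vsini then sets the absolute scale: radii ~ vsini, mass ~ vsini^3
v_eq   = vsini / np.sin(i_star)   # equatorial rotation speed
R_eq   = v_eq / Omega             # equatorial radius [m]
R_pole = R_eq * (1.0 - obl)
a      = R_pole / rp_a            # orbital semi-major axis [m]
M_tot  = 4.0 * np.pi**2 * a**3 / (G * P_orb**2)            # (1+q) M_*
print(f"R_eq = {R_eq / R_sun:.2f} R_sun, M = {M_tot / M_sun:.2f} M_sun")
\end{verbatim}
Run with the inputs above, this reproduces the derived values of Table~\ref{params_tab} to within the quoted uncertainties.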
\begin{table*}
\caption{Model parameters and illustrative fitted values. Model M1 has \ensuremath{T^{\rm L}_{\rm eff}}\ as
a free parameter (cf.~$\S$\ref{sec:teffl}), with $\ensuremath{\log{g}^{\rm L}_{\rm p}} \equiv \ensuremath{\log{g}_{\rm p}}$; model M2 additionally has
\ensuremath{\log{g}^{\rm L}_{\rm p}}\ free; model M3 has \prot\ fixed.
The errors (on the last quoted significant figure of the parameter
values) are the quadratic sum of 95-percentile ranges on solution M1
(initial $L_3 = 0.45$) and the maximum deviation of corresponding
solutions with initial $L_3$ values in the range 0.41--0.49
($\S$\ref{sec:l3}).}
\begin{tabular}{llllrlll}
\hline
\multicolumn{3}{c}{Parameter} &
\multicolumn{5}{c}{{\leavevmode\leaders\hrule height 0.7ex depth \dimexpr0.4pt-0.7ex\hfill\kern0pt}\;Best-fit value\;\leavevmode\leaders\hrule height 0.7ex depth \dimexpr0.4pt-0.7ex\hfill\kern0pt}
\\
&&\multicolumn{1}{r}{Model:}&\multicolumn{1}{c}{M1}
&\multicolumn{1}{c}{$\pm$} &$\qquad$&
\multicolumn{1}{c}{M2}&
\multicolumn{1}{c}{M3}\\
\hline
\multicolumn{3}{l}{{Stellar:}} \\
$\;$& \ensuremath{T^{\rm L}_{\rm eff}} & Effective-temperature parameter$^*$ (K)&
\ensuremath{\phantom{0}}\ 8084& 186&&\ensuremath{\phantom{0}}\ 7987&$\phantom{\equiv}$\ensuremath{\phantom{0}}\ 8046\\
$\;$& \ensuremath{\log{g}^{\rm L}_{\rm p}} & Polar-gravity parameter$^*$ (dex cgs)&
\;\;\;$\cdots$& &&\ensuremath{\phantom{0}} 4.27&$\phantom{\equiv}$\ensuremath{\phantom{0}} 4.32\\
& \ensuremath{ \Omega/\Omega_{\rm c} } &
\begin{minipage}[t]{0.7\columnwidth}
Angular rotation rate\newline (in units of the critical rate)
\end{minipage}&
\ensuremath{\phantom{0}} 0.341& 15&&\ensuremath{\phantom{0}} 0.343&$\phantom{\equiv}$\ensuremath{\phantom{0}} 0.320\\
& $\istar$ &
\begin{minipage}[t]{0.7\columnwidth}
Inclination of stellar rotation axis to line of sight \mbox{(0--90$^\circ$)}
\end{minipage}&
81.137&16&&81.135&$\phantom{\equiv}$81.134\\
&$\rpole/a$&
\begin{minipage}[t]{0.7\columnwidth}
Polar radius\newline(in units of the orbital semi-major axis)
\end{minipage}&
\ensuremath{\phantom{0}} 0.2219&4&&0.2217&$\phantom{\equiv}$\ensuremath{\phantom{0}} 0.2219\\
&$L_3$&`Third light'&
\ensuremath{\phantom{0}} 0.451&39&&\ensuremath{\phantom{0}} 0.451&$\phantom{\equiv}$\ensuremath{\phantom{0}} 0.451\\
&g.d.&
\begin{minipage}[t]{0.7\columnwidth}
Gravity darkening: ELR
\end{minipage}&
\\
\multicolumn{3}{l}{{Planetary:}} \\
&$R_{\rm P}/a$ &
\begin{minipage}[t]{0.7\columnwidth}
Planetary radius\\(in units of the orbital semi-major axis)
\end{minipage}&
\ensuremath{\phantom{0}} 0.0190&7&&\ensuremath{\phantom{0}} 0.0190&$\phantom{\equiv}$\ensuremath{\phantom{0}} 0.0190\\
\multicolumn{3}{l}{{Orbital:}} \\
&\iorb &
\begin{minipage}[t]{0.7\columnwidth}
Inclination of orbital angular-momentum vector to line
of sight \mbox{(0--180$^\circ$)}
\end{minipage}&
93.319&22&&93.316&$\phantom{\equiv}$93.316\\
&$\lambda$ &
\begin{minipage}[t]{0.7\columnwidth}
Angle between the projections onto the plane of the sky
of the orbital and stellar-rotational angular-momentum vectors,
measured counter-clockwise from the former \mbox{(0--360$^\circ$)}
\end{minipage}
&59.19&5&&59.20&$\phantom{\equiv}$59.20\\
\hline
\multicolumn{3}{l}{Imposed:} \\
&$\porb$ &
\begin{minipage}[t]{0.7\columnwidth}
Orbital period (d)
\end{minipage}&\multicolumn{5}{c}{{\leavevmode\leaders\hrule height 0.7ex depth \dimexpr0.4pt-0.7ex\hfill\kern0pt}{\;1.76358799\;}{\leavevmode\leaders\hrule height 0.7ex depth \dimexpr0.4pt-0.7ex\hfill\kern0pt}}\\
& \ensuremath{v_{\rm e}\sin{i_*}} & Projected equatorial rotation speed$^\dagger$ (\kms)&
\multicolumn{5}{c}{{\leavevmode\leaders\hrule height 0.7ex depth \dimexpr0.4pt-0.7ex\hfill\kern0pt}\;$76.6 \pm 0.2$\;{\leavevmode\leaders\hrule height 0.7ex depth \dimexpr0.4pt-0.7ex\hfill\kern0pt}}\\
&\prot&Rotation period (d)&
$\;\;\;\cdots$&&&$\;\;\;\cdots$&$\phantom{\equiv}$\ensuremath{\phantom{0}} 1.0596\\
\multicolumn{3}{l}{Derived stellar parameters:} \\
& \ensuremath{\log{g}_{\rm p}} & True polar gravity (dex cgs)&
\ensuremath{\phantom{0}} 4.209&19&&\ensuremath{\phantom{0}} 4.21&$\phantom{\equiv}$\ensuremath{\phantom{0}} 4.24\\
&\rpole/\ensuremath{\mbox{R}_{\odot}}& Polar radius&
\ensuremath{\phantom{0}} 1.49&7&&\ensuremath{\phantom{0}} 1.48&$\phantom{\equiv}$\ensuremath{\phantom{0}} 1.61\\
&\reqtr/\ensuremath{\mbox{R}_{\odot}}&Equatorial radius&
\ensuremath{\phantom{0}} 1.52&7&&\ensuremath{\phantom{0}} 1.51&$\phantom{\equiv}$\ensuremath{\phantom{0}} 1.63\\
&Oblateness&$1-\rpole/\reqtr$&
\ensuremath{\phantom{0}} 0.0178&17&&\ensuremath{\phantom{0}} 0.0181&$\phantom{\equiv}$\ensuremath{\phantom{0}} 0.0156\\
&\tpole/\ensuremath{T_{\rm eff}}&Relative polar temperature&
\ensuremath{\phantom{0}} 1.0118&11&&\ensuremath{\phantom{0}} 1.0119&$\phantom{\equiv}$\ensuremath{\phantom{0}} 1.0103\\
&\teqtr/\ensuremath{T_{\rm eff}}&Relative equatorial temperature&
\ensuremath{\phantom{0}} 0.9939&6&&\ensuremath{\phantom{0}} 0.9938&$\phantom{\equiv}$\ensuremath{\phantom{0}} 0.9947\\
&$(1+q)\ensuremath{M_*}/\ensuremath{\mbox{M}_{\odot}}$&System mass$^\ddagger$&
\ensuremath{\phantom{0}} 1.31&17&&\ensuremath{\phantom{0}} 1.29&$\phantom{\equiv}$\ensuremath{\phantom{0}} 1.64\\
&$\log(L^{\rm L}/\ensuremath{\mbox{L}_{\odot}})$&$\text{luminosity} \times(\ensuremath{T_{\rm eff}}/\ensuremath{T^{\rm L}_{\rm eff}})^4$
(dex solar)&
\ensuremath{\phantom{0}} 0.94&3&&\ensuremath{\phantom{0}} 0.92&$\phantom{\equiv}$\ensuremath{\phantom{0}} 1.00\\
&$\rho_*$&Mean density (g~cm$^{-3}$)&
\ensuremath{\phantom{0}} 0.5373&11&&\ensuremath{\phantom{0}} 0.5380&$\phantom{\equiv}$\ensuremath{\phantom{0}} 0.5397\\
&\veq&Equatorial rotation speed (\kms)&
77.3&4&&77.5&$\phantom{\equiv}$78.0\\
&\prot&Rotation period (d)&
\ensuremath{\phantom{0}} 0.994&23&&\ensuremath{\phantom{0}} 0.987&$\phantom{\equiv}\;\ensuremath{\phantom{0}} \cdots$\\
\multicolumn{3}{l}{Other derived parameters:} \\
& $R_{\rm P}/\ensuremath{\mbox{R}_{\jupiter}}$ & Planetary radius
$(\ensuremath{\mbox{R}_{\jupiter}} = ({\mathcal{R}^{\rm N}_{e{\rm
J}} \mathcal{R}^{\rm N}_{p{\rm J}}})^{1/2})$
&
\ensuremath{\phantom{0}} 1.28&5&&\ensuremath{\phantom{0}} 1.28&$\phantom{\equiv}$\ensuremath{\phantom{0}} 1.38\\
&$\psi$&
\begin{minipage}[t]{0.7\columnwidth}
Angle between orbital and stellar-rotational angular-momentum vectors (0--180$^\circ$)
\end{minipage}&
60.24&5&&60.25&$\phantom{\equiv}$60.25\\
&$b$&Impact parameter (\rpole)&
\ensuremath{\phantom{0}} 0.2609&12&&\ensuremath{\phantom{0}} 0.2609&$\phantom{\equiv}$\ensuremath{\phantom{0}} 0.2607\\
\hline
\multicolumn{8}{l}{
\begin{minipage}[t]{0.86\textwidth}
Additional model parameters include $e$,
the orbital eccentricity ($e=0$ assumed here)
and longitude of periastron (0--360$^\circ$; undefined when $e=0$).
Best-fit (minimum-$\chi^2$) parameter sets are listed; median values
of MCMC runs are extremely close to these values.
\end{minipage}}\\
\multicolumn{8}{l}
{\begin{minipage}[t]{0.86\textwidth}
$^*$Used only to evaluate model-atmosphere intensities, and
constrained in the present study only by limb darkening; cf.\
$\S$\ref{sec:teff}\\
$^\dagger$Derived radii scale linearly with \ensuremath{v_{\rm e}\sin{i_*}}, and the mass as
$(\ensuremath{v_{\rm e}\sin{i_*}})^3$; Appendix~\ref{appx:one}.\\
$^\ddagger$Mass ratio $q \equiv M_{\rm P}/\ensuremath{M_*} \simeq 4 \times
10^{-3}$ (\citealt{Shporer14};
\citealt{Esteves15};
\citealt{Faigler15}).\normalsize\end{minipage}}\\
\end{tabular}
\label{params_tab}
\end{table*}
\section{Data preparation}
\label{sec:dprep}
We used the full set of short-cadence Pre-search Data
Conditioning Simple Aperture Photometry (PDCSAP) data, which are
publicly available through the \textit{Kepler} Input Catalog
(KIC; \citealp{Brown11}). The PDCSAP results are produced by the
standard \textit{Kepler} pipeline, which removes
instrumental artifacts, and span 2009 June to 2013 May.
The sampling step of 58.9~s corresponds to $\sim4\times10^{-4}$ of the
1.7-d orbital period. The maximum difference between `instantaneous'
and exposure-integrated model fluxes in the parameter space of
interest is 6~parts per million (ppm), which is small enough to
be neglected (deviations exceed 1~ppm for a phase range of <0.001).
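Exposure smearing of this kind is conventionally handled by super-sampling the model flux over each exposure; a sketch is given below, where the callable \texttt{model\_flux} and the sampling density are our illustrative choices.
\begin{verbatim}
import numpy as np

def exposure_average(model_flux, t, exptime=58.9 / 86400.0, n_sub=5):
    """Average an 'instantaneous' model light-curve over the exposure
    time by super-sampling each timestamp (times in days); model_flux
    is any callable returning fluxes for an array of times."""
    offsets = ((np.arange(n_sub) + 0.5) / n_sub - 0.5) * exptime
    return np.mean([model_flux(t + o) for o in offsets], axis=0)
\end{verbatim}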
The system shows out-of-transit orbital variations arising
from \textsc{beer} effects ($\S$\ref{sec:koi}). Even over the limited
phase range that we model, $\pm{0.1}$\porb\ around conjunction, the
amplitude of these effects is $\sim$40~ppm, which is far from
negligible. We treated these effects as a perturbation on the basic
model, and corrected for them by using the empirical three-harmonics
model\footnote{The model defined by eqtn.~(11) and Table~5
of \citeauthor{Shporer14} has to be reversed in both $x$ and
$y$.} described by \citet{Shporer14}.
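Schematically, the correction subtracts a three-harmonic series in orbital phase; in the sketch below the amplitudes are placeholders rather than the published coefficients, and the function name is ours.
\begin{verbatim}
import numpy as np

def beer_model(phase, c, s):
    """Empirical three-harmonic BEER series at orbital phase
    (in cycles about conjunction); c[k-1], s[k-1] are the cosine and
    sine amplitudes of harmonic k (placeholder values)."""
    phase = np.asarray(phase, dtype=float)
    return sum(c[k - 1] * np.cos(2.0 * np.pi * k * phase)
               + s[k - 1] * np.sin(2.0 * np.pi * k * phase)
               for k in (1, 2, 3))

# Correct the normalized fluxes by subtracting the fitted series:
# flux_corrected = flux - beer_model(phase, c, s)
\end{verbatim}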
The 25.43-hr signal has a semi-amplitude variously reported as
12--30~ppm (\citealt{Shporer11,Mazeh12,Szabo12}); in the limited
out-of-transit phase range of our data we find a semi-amplitude of
only 6~ppm, suggesting that the amplitude may be variable. Although the period is close to a 3:5 resonance with the
orbital period \citep{Shporer11}, the ratio is not exact.
Consequently this signal is `mixed out' over the $\sim$4-year span of
the observations when phased on transits, and effectively becomes only
a minor source of additional stochastic noise.
In order to reduce the 299\,423 individual observations down to a
manageable subset for MCMC modelling, for each of 577 separate transits
the data were first phased (according to the ephemeris used in the current
MCMC cycle); corrected for \textsc{beer} effects; and rescaled to give
a median out-of-transit flux of one.\footnote{`Out of
transit' was taken as $0.045 \le |\phi| \le 0.1$, where orbital phase
$\phi$ is measured in the range $-0.5:+0.5$ about conjunction.} In
principle, any free parameters in the adopted functional form for the
ephemeris could be allowed to `float' in the fitting process; in
practice, we adopted a linear ephemeris with a fixed period
($\porb = 1.76358799$~d; \citealt{Shporer14}), but
allowed the time of conjunction to vary.
We then compressed the resulting data by taking median normalized
fluxes in phase bins of 0.0002 (about half the integration time of
individual observations), whence each bin contained $\sim$300 data
points. The maximum change in normalized flux between the central
times of bins is $1.2 \times 10^{-4}$, which is comparable to the
dispersion of the individual data points ($\sim{1.6}\times{10}^{-4}$
out of transit), but large compared to the precision of the binned
data ($\sim{1.0}\times{10}^{-5}$); consequently, we tagged the median
flux in each bin with the mean time of all observations in that bin
(invariably close to the mid-bin time) rather than its original,
individual phase.
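A minimal sketch of this reduction step (phase-fold, window, median-bin, and tag with mean times) follows; the names are ours, and the per-transit \textsc{beer} correction and renormalization are omitted for brevity.
\begin{verbatim}
import numpy as np

def fold_and_bin(t, flux, t0, P, bin_width=2e-4, window=0.1):
    """Phase-fold about the conjunction epoch t0 (period P), keep
    |phase| <= window, then take median fluxes in fixed phase bins,
    tagging each bin with the mean observation time."""
    phase = ((t - t0) / P + 0.5) % 1.0 - 0.5     # phase in (-0.5, 0.5]
    keep = np.abs(phase) <= window
    t, flux, phase = t[keep], flux[keep], phase[keep]

    edges = np.arange(-window, window + bin_width, bin_width)
    idx = np.digitize(phase, edges) - 1
    times, meds = [], []
    for b in range(len(edges) - 1):
        sel = idx == b
        if sel.any():
            times.append(t[sel].mean())
            meds.append(np.median(flux[sel]))
    return np.array(times), np.array(meds)
\end{verbatim}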
\section{Fit Results}
\label{sec:fit}
As a basis for subsequent discussion, we first present the results of
an initial `maximally constrained' model, in which only (effectively)
geometric parameters were adjusted. ELR gravity darkening
\citep{Espinosa11} and model-atmosphere
limb-darkening were used, along with fixed values for \ensuremath{T_{\rm eff}}\ (7650~K;
\citealt{Shporer14}) and $L_3$ (0.45; \citealt{Szabo11}).
The results of this `model~0' are illustrated in Fig.~\ref{fig:LC}, and
show relatively large residuals during ingress
and egress ($\sim$50 ppm).
We investigated the origin of these residuals through extensive
exploration of model parameters. Adopting eqtn.~\eqref{eq:gdark}
with $\beta$ free essentially reproduced von~Zeipel's law, which in
turn gives sensibly identical results to the ELR model
(unsurprisingly, since the latter is known to reproduce von~Zeipel at
low to moderate rotation). Moderate adjustments to $L_3$ had
similarly small consequences for the quality of the model fits. These
experiments identified errors in the limb darkening as the principal
cause of the discrepancies.
\begin{figure}
\includegraphics[width=0.47\textwidth]{Fig01.jpg}
\caption[jpeg]{Phase-folded \textit{Kepler} photo\-metry.
In the top panel, the small black dots represent individual
observations, and large red dots (which blend into a continuous band)
are the median values in phase bins of 0.002. The white line through
the medians is from model~M2 ($\S$\ref{sec:fit}); any other gravity-darkened model is virtually
indistinguishable at the scale of this plot.\newline
The lower panel shows O$-$C residuals for different models (cf.~$\S$\ref{sec:fit}).
Model~0 is for $\ensuremath{T_{\rm eff}} = 7650$~K, $L_3 = 0.45$; model~M1 is as
model~0 but with \ensuremath{T^{\rm L}_{\rm eff}}\ free;
model~M2 is as model~M1, but with \ensuremath{\log{g}^{\rm L}_{\rm p}}\ also free;
model~M3 is as model~M2, but with the rotation period fixed. Vertical
dashed lines are intended simply as a visual aid to identifying transit phases.}
\label{fig:LC}
\end{figure}
We addressed this issue in three ways. First, we replaced the near-exact
representation of the angular dependence of the model-atmosphere
intensities afforded by eqtn.~\eqref{eq:cl4} with a simple quadratic
limb-darkening law,
\begin{align}
I(\mu)/I(1) = 1 - u_1(1-\mu) -u_2(1-\mu)^2,
\end{align}
with the coefficients $u_1$, $u_2$ as free parameters. In applying
this law globally (in common with, e.g., \citealt{Masuda15}), we
abandon any latitudinal temperature dependence of the coefficients.
Secondly, in a gesture towards retaining
temperature-dependent limb-darkening while introducing only a single
additional free parameter, we investigated scaling the
linear ($a_2$) term in the
4-coefficient characterization.\footnote{There is a minor
inconsistency in
both the first and second approaches, in that the integral of intensity
over angle
will, in general, no longer \textit{exactly} match the
model-atmosphere flux, but this is unimportant for our application.}
Thirdly, recognizing that there is a temperature dependence of the
model limb darkening, we allowed the effective-temperature parameter
to float; that is, we characterize the model-atmosphere intensities
by \ensuremath{T^{\rm L}_{\rm eff}}\ rather than \ensuremath{T_{\rm eff}}\ ($\S$\ref{sec:teffl}).
\begin{figure}
\includegraphics[width=0.47\textwidth]{Fig03}
\caption[jpeg]{Upper panel: normalised model-atmosphere limb
darkening at $\ensuremath{T_{\rm eff}} = 8.0$~kK, $\ensuremath{\log{g}} = 4.2$, close to values
for our best-fit models (which take into account the latitude
dependence of these parameters). Lower panel: differences in limb
darkening for adjusted values, as indicated (in the sense reference
minus adjusted; note the 10-fold change in
$y$-axis scale).}
\label{fig:LDC}
\end{figure}
Unsurprisingly, all three approaches gave improved model fits, but it
is noteworthy that quite small adjustments to the model effective
temperature have significant consequences at the $\sim$10~ppm level of
precision, solely through the modest sensitivity of $I(\mu)/I(1)$ to
this parameter. In practice, allowing \ensuremath{T^{\rm L}_{\rm eff}}\ to float also led to
smaller residuals than the other approaches in our numerical
experiments; we adopt the corresponding results for this reason, and
to avoid introducing additional ad hoc parameters. Numerical values
for this `model~M1' are included in Table~\ref{params_tab}, and it is
confronted with the observations in Fig.~\ref{fig:LC}.
Fig.~\ref{fig:view} is a simple cartoon illustrating the implied geometry
of the system.
Model-atmosphere intensities are a function of not only temperature,
but also surface gravity (as well as abundances and microturbulence).
The true polar gravity, \ensuremath{\log{g}_{\rm p}}\ (which, with \ensuremath{ \Omega/\Omega_{\rm c} }, characterizes the
overall surface-gravity distribution) is not a free parameter in our
model ($\S$\ref{sec:syspar}). However, we can allow the value used in
obtaining
the model-atmosphere intensities, \ensuremath{\log{g}^{\rm L}_{\rm p}}, to `float' as,
effectively, an additional limb-darkening parameter. Doing this
naturally affords further, albeit slight, improvement in the model fit
(model~M2 in Table~\ref{params_tab} and Fig.~\ref{fig:LC}).
The remaining systematic residuals (peaking at
$<$10~ppm) may arise from orbital evolution over the duration
of the \textit{Kepler} observations \citep{Szabo12, Szabo14, Masuda15}, since
the time-averaged light-curve will not correspond to any single-epoch
photo\-metry. Modelling the time-dependent behaviour is beyond the
scope of the current paper, partly because of the substantial
computing resources required to model the necessarily less heavily
binned datasets (we may return to this in future work), but also because our
discussion of third light ($\S$\ref{sec:l3})
emphasizes that the uncertainties on
fundamental parameters (our main interest here) are
likely to be dominated by other factors.
\begin{figure}
\includegraphics[width=0.47\textwidth]{Fig02}
\caption[]{Cartoon view of the system. The origin of the
co-ordinates is the stellar centre of mass, and
the projected stellar-rotation axis
is
arbitrarily orientated along the $y$ axis; the
exoplanet orbit extends to $a \simeq 4.5 \rpole$. The approaching and
receding stellar hemispheres are colour-coded blue and red (in the
on-line version); note that the star is \textit{slightly} oblate.
The exoplanet is shown at orbital phase $-0.03$ (thereby indicating the
direction of orbital motion).
The model is degenerate with its mirror image about the $y$ axis.}
\label{fig:view}
\end{figure}
\begin{figure*}
\includegraphics[width=0.97\textwidth]{Johnson14fig6RV2.png}
\caption[jpeg]{Tomographic transit map, from \citeauthor{Johnson14}
(\citeyear{Johnson14}, slightly contrast enhanced), overlaid with the
prediction of the light-curve model (dashed line). To make the
comparison we assume that the \citeauthor{Johnson14}
`transit phase' runs from first to fourth contact, and adopt
their value of 76.6~\kms\ for \ensuremath{v_{\rm e}\sin{i_*}}\ (which directly determines the $x$-axis scaling). }
\label{fig:tomo}
\end{figure*}
\subsection{Effective temperature and limb darkening}
\label{sec:teff}
We recall that the effective-temperature `determination' in the
model is not a traditional, direct measurement of the actual stellar
effective temperature, \ensuremath{T_{\rm eff}}; rather, \ensuremath{T^{\rm L}_{\rm eff}}\ is simply a parameter
which optimises model-atmosphere limb darkening (over the \textit{range} of
surface temperatures) to give a best match to the transit
data.\footnote{The same caveat
applies to \ensuremath{\log{g}^{\rm L}_{\rm p}}; the actual value of \ensuremath{\log{g}_{\rm p}}\ is fixed by other model parameters
($\S$\ref{sec:syspar}).}
Only if the calculated model-atmosphere intensities are
sufficiently accurate will \ensuremath{T^{\rm L}_{\rm eff}}\ correspond to the actual effective
temperature.
However, it is noteworthy that, in practice, the optimised value
of \ensuremath{T^{\rm L}_{\rm eff}}\ falls
well within the range of direct $T_{\rm eff}$ determinations;
while adopting only a
moderately different fixed value gives relatively large residuals.
This highlights the importance of establishing the correct value
of \ensuremath{T_{\rm eff}}\ when comparing empirical and theoretical limb-darkening
coefficients (or when adopting the latter). Figure~\ref{fig:LDC}
shows the limb darkening for a model atmosphere at $\ensuremath{T_{\rm eff}} = 8.00$~kK,
$\ensuremath{\log{g}} = 4.2$, representative of the parameter space within which our
solutions fall. The maximum difference in normalized intensity,
$I(\mu)/I(1)$, between this model and one at 7.65~kK is less than
2\%, and yet this difference accounts for almost all of the residuals for
Model~0 shown in Fig.~\ref{fig:LC}.
\subsection{Third light}
\label{sec:l3}
The third light of the unresolved optical companion KOI-13B is (literally) a nuisance
parameter in our modelling. For our MCMC runs we experimented with
initial values of $L_3 = 0.41$--0.49 ($\Delta{m} \simeq 0.40$--0.04),
which bracket most observational determinations in the
literature\footnote{\citet{Howell11} report notably discordant values of
$\Delta{m} \simeq 0.8$--1.1 at $\sim$600--700~nm.
Although the literature values are for diverse
wavebands, the KOI-13A and~B components are of similar spectral
types and colours \citep{Szabo11}, so any wavelength dependence of
$L_3$ should be small in the optical regime.}
\citep{Fabricius02, Adams12, Law14, Shporer14}, at steps of 0.02.
We found that the adopted third light always clung very close to the
initial estimate in our MCMC modelling, rather than converging onto a
value representing the global minimum in $\chi^2$ hyperspace. This
contrasts with the behaviour of other parameters, whose values
freely migrated over relatively large ranges during `burn-in'.
Adjusting the proposal distribution did not alleviate this issue.
We believe that this outcome may arise because the transit light-curve
contains almost no information on the extent of third-light dilution
(cf., e.g., Fig.~8 of \citealt{Seager03}). Although we might
anticipate that this should be reflected in a \textit{wide}
distribution in acceptable $L_3$ values, rather than a narrow one, in
practice the set of other parameters essentially locks in $L_3$, which
can therefore be regarded, in a limited sense, as a `derived'
parameter, given the system geometry, rather than a free one.
The inferred numerical values for other parameters therefore depend
somewhat on $L_3$, to a degree that typically exceeds the formal errors on any
given model. For example, smaller $L_3$ means a shallower true transit
depth, and hence implies smaller $R_{\rm P}/R_*$
($\Delta(R_{\rm P}/R_*)
\simeq 0.08\Delta{L_3}$).
In recognition of this,
while we adopt solutions with input $L_3 = 0.45$
(which yield the smallest residuals),
we give errors in Table~\ref{params_tab} which are the quadratic sum of the
95\%-percentile ranges on those models and the
maximum differences with the
`best-fit' parameters from models with $L_3\text{(init.)} = 0.41$--0.49
(where the latter term dominates).
\subsection{Rotation period}
\label{sec:rotper}
Our initial solutions (e.g., models~M1 and~M2) yielded rotation
periods close to 24~hr,
only $\sim$5\%\ from the 25.43-hr period found in the
\textit{Kepler} photometry
\citep{Shporer11,Mazeh12,Szabo12}.
Although rotational modulation had not been widely anticipated for
stars hotter than the `granulation boundary' marking the transition
from radiative to convective envelopes (e.g., \citealt{Gray89}), evidence is
beginning to accumulate for starspots, of some nature, in A-type stars
\citep{Balona11, Balona17, Bohm15}, encouraging consideration of the
possibility that we are seeing a rotational signature in
KOI-13
($\ensuremath{T_{\rm eff}} \simeq 8$~kK corresponds to spectral type A5--A7), as
suggested by \citet{Szabo12}.
We can impose the constraint of fixed \prot\ on the model,
which links \ensuremath{ \Omega/\Omega_{\rm c} }\ to $\rpole/a$ in the MCMC chains
(Appendix~\ref{appx:one}, eqtn.~\ref{eq:app2}). The results of this
model~M3 are reported in Table~\ref{params_tab}; the fit quality is quite reasonable
(Fig.~\ref{fig:LC}).
Because the transit
depth essentially fixes $R_{\rm P}/R_*$, the main effect of imposing
a longer rotation period is to decrease the angular rotation rate,
which for given \ensuremath{v_{\rm e}\sin{i_*}}\ leads to a larger stellar radius, and hence,
for $\sim$fixed density, a higher stellar mass, as discussed in
Section~\ref{sec:syspar}.
\subsection{Tomo\-graphy}
There are no published Rossiter--McLaughlin investigations of KOI-13, but
\citet{Johnson14} conducted a detailed tomo\-graphic study,
providing a velocity-resolved map of the transit.
Our model allows stellar velocities (R--M effect or tomo\-graphic
counterpart) to be evaluated directly. This can be accomplished by
synthesizing the spectrum as a function of orbital phase, and
subjecting the ensemble of synthetic spectra to the same analysis as
the observations (e.g., cross-correlation, or tomo\-graphy). However, for
the present study we simply take the intensity-weighted average radial
velocity,
\begin{align*}
v(\lambda) = \frac{\int{v \times
I(\lambda, \mu, \ensuremath{T^{\rm L}_{\rm eff}}, g)\,{\rm d}A}} {\int{
I(\lambda, \mu, \ensuremath{T^{\rm L}_{\rm eff}}, g)\,{\rm d}A}},
\end{align*}
where the integration is over area, and the (weak) wavelength
dependence of the model velocity comes about because of the
wavelength dependence of the intensities (through
limb darkening and temperature). To evaluate the R--M
effect the integration is conducted over all visible elements, while
taking the velocity of all occulted elements models the tomo\-graphic
signature.
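For illustration, a minimal numerical sketch of this weighted average
follows (a hypothetical Python helper over a discretised stellar
surface; the array names are ours, not from the actual modelling
code):
\begin{verbatim}
import numpy as np

def mean_velocity(v, I, dA, occulted=None):
    # Intensity-weighted mean radial velocity over discretised
    # surface elements: v (radial velocities), I (intensities)
    # and dA (projected areas) are arrays over visible elements.
    # For the R-M effect, integrate over all visible elements;
    # passing a boolean mask of occulted elements instead models
    # the tomographic signature.
    if occulted is not None:
        v, I, dA = v[occulted], I[occulted], dA[occulted]
    w = I * dA
    return np.sum(v * w) / np.sum(w)
\end{verbatim}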
The predicted locus of velocity vs.\ phase from the light-curve
solution is compared to the \citeauthor{Johnson14} map in
Fig.~\ref{fig:tomo}. The agreement is very
satisfactory, arising from the accord between the values of projected
obliquity $\lambda$ and impact parameter $b$ obtained from the
\textit{independent} tomo\-graphic and photometric solutions
($\Delta\lambda = 0{\fdg}6 \pm 2{\fdg}0$, \mbox{$\Delta{b} = 0.01\pm 0.03$}).
\section{System parameters}
\label{sec:syspar}
Any fundamentally geometric transit model, such as that employed here, is
of necessity scale free. Consider Fig.~\ref{fig:view}; there is no
indication of whether this is a small, nearby system, or a large,
distant one.
Nevertheless, for given orbital period, a large, distant system must have
greater orbital velocities, and hence greater masses, than a smaller,
nearby system. This relationship between scale and mass is codified
in Kepler's third law, which leads directly to a constraint on
$a^3/(\ensuremath{M_*}+M_{\rm P})$, and hence, given the dimensionless radius
$\ensuremath{R_*}/a$, to the
stellar density (e.g., \citealt{Seager03}) -- but not the mass and radius separately.
\citet{Barnes11} suggested that rotational
effects, and specifically gravity darkening, can, in principle, lift
the ``density degeneracy'', through the dependence of $\Omega$ on mean
stellar radius $R_*$. However, in the Roche approximation the light-curve
depends on rotational effects only through the ratio $\ensuremath{ \Omega/\Omega_{\rm c} }$; to get to $\Omega$ requires
calculation of $\Omega_{\rm c}$, which itself has an $M/R^3$
dependence. Consequently, $\Omega$ is actually scale-free (as shown
analytically in Appendix~\ref{appx:one}), and a Roche-model analysis of
the transit light-curve alone cannot break the mass/radius
degeneracy in $M/R^3$.
Of course, if the orbital velocities can be established for both
components, these determine the absolute scale -- the standard
`double-lined eclipsing binary' approach. However, an alternative,
independent means of establishing the orbital semimajor axis (and
hence other system parameters) is available if \prot, the stellar
rotation period, \istar, the axial inclination, and \ensuremath{v_{\rm e}\sin{i_*}}, the
line-of-sight component of the equatorial rotation speed, can be
determined; these immediately yield the equatorial radius,
\begin{align*}
\reqtr = (\prot \ensuremath{v_{\rm e}\sin{i_*}})/(2 \pi \sin{i_*}).
\end{align*}
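As a quick numerical check, a minimal sketch follows (our illustration,
not the actual pipeline; it combines the 25.43-hr photometric period of
$\S$\ref{sec:rotper} with the adopted \ensuremath{v_{\rm e}\sin{i_*}}\ of this section, and returns
the scale-free product $\reqtr\sin{i_*}$, since recovering $\reqtr$
itself requires \istar\ from the light-curve fit):
\begin{verbatim}
import math

P_ROT_S = 25.43 * 3600.0      # photometric rotation period [s]
VSINI_KMS = 76.6              # adopted v_e sin(i*) [km/s]
R_SUN_KM = 6.957e5

# R_eq * sin(i*) = P_rot * (v_e sin i*) / (2 pi)
r_eq_sini = P_ROT_S * VSINI_KMS / (2.0 * math.pi)
print(r_eq_sini / R_SUN_KM)   # ~1.60 R_sun, consistent with
                              # r_pole ~ 1.55 R_sun plus slight oblateness
\end{verbatim}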
The quantities
\prot\ and \istar\ can be estimated if the circular symmetry
of the projected stellar disk is broken. A familiar example is when
starspots are present, but gravity-darkened stars have the same
potential (since \ensuremath{ \Omega/\Omega_{\rm c} }\ relates, indirectly, to \prot). Introducing
the observed projected equatorial rotation speed, \ensuremath{v_{\rm e}\sin{i_*}}, as a
constraint on the light-curve solution therefore affords usefully
tight limits on the absolute dimensions of the system. The
straightforward algebra is set out in Appendix~\ref{appx:one}.
There are two precise determinations of projected rotation speed of
KOI-13A in the literature, in good mutual agreement: $\ensuremath{v_{\rm e}\sin{i_*}} =
76.96 \pm 0.61$~\kms\ and $76.6 \pm
0.2$~\kms\ \citep{Johnson14,Santerne12}. We adopt the latter, more
precise value
in order to calculate the system dimensions
reported in Table~\ref{params_tab}.
[Our referee raised the point that the precision of these results may
not reflect their accuracy, an observation with which we fully concur
(cf., e.g., \citealt{Howarth04}). However, as shown in
Appendix~\ref{appx:one} (eqtn.~\ref{eq:app2}), the semi-major axis
scales linearly with \ensuremath{v_{\rm e}\sin{i_*}}; radii converted from normalized to
absolute values scale in the same way, while the absolute system mass
scales as $(\ensuremath{v_{\rm e}\sin{i_*}})^3$, from Kepler's third law. Hence the results,
or uncertainties, are readily reassessed if another value for the
projected equatorial rotation speed is preferred.]
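The scalings just noted are simple enough to apply directly; a minimal
sketch (ours, for convenience only) is:
\begin{verbatim}
def rescale(a, radii, mass, vsini_new, vsini_adopted=76.6):
    # Rescale tabulated results to a preferred v_e*sin(i*) [km/s]:
    # the semi-major axis and absolute radii scale linearly with
    # v_e*sin(i*); the total system mass scales as its cube
    # (Kepler's third law at fixed period and fixed R*/a).
    f = vsini_new / vsini_adopted
    return a * f, [r * f for r in radii], mass * f ** 3
\end{verbatim}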
\subsection{Distance}
The effective temperature determines the
surface brightness;
given the size of the star the
absolute magnitude follows, and hence the
distance. We find
\begin{align*}
M(V) \simeq 2.44 + 0.51\left({ 8.0 - \frac{\ensuremath{T_{\rm eff}}}{\text{kK}} }\right)
- 5\log\left({ \frac{\rpole}{1.49\ensuremath{\mbox{R}_{\odot}}}}\right)
\end{align*}
where the second term is an empirical fit to models with $7.5
< \ensuremath{T_{\rm eff}}/\text{kK} < 8.5$; model-atmosphere Johnson $V$-band fluxes are
from \citet{Howarth11a}; and we neglect the further, unimportant,
dependences of $M(V)$ on \ensuremath{ \Omega/\Omega_{\rm c} }\ and \istar.
There is a surprisingly large dispersion in the photometry of KOI-13 catalogued in
the \textit{Vizier} system of the
Centre de Donn\'ees astronomiques de Strasbourg, most of which clearly
refers to the combined light of the visual binary.
We adopt the spatially resolved Tycho-2
photometry, which transforms to $V = 10.33$ for \mbox{KOI-13A} (with
an uncertainty of $\sim$0.05; \citealt{Hog00}). Foreground reddening
is estimated as $E(B-V) \simeq 0{\fm}02$ from \citet{Green15},
whence
\begin{align*}
\log\left({ \frac{d}{\text{pc}} }\right) &=
2.566 + 0.2\left[{
(V - 10.33) - (A(V) - 0.06)
}\right]
\\
&\quad -0.102\left({ 8.0 - \frac{\ensuremath{T_{\rm eff}}}{\text{kK}} }\right)
+ \log\left({ \frac{\rpole}{1.49\ensuremath{\mbox{R}_{\odot}}}}\right);
\end{align*}
i.e., $d\simeq 370$~pc, with an uncertainty of perhaps $\sim$25~pc.
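For transparency, a minimal sketch of this arithmetic follows (our
illustration; the representative inputs $\ensuremath{T_{\rm eff}} = 8.0$~kK,
$\rpole = 1.49\,\ensuremath{\mbox{R}_{\odot}}$, $V = 10.33$ and
$A(V) = 0.06$ are taken from the text, and small differences from the
quoted $d$ reflect the exact best-fit parameters used there):
\begin{verbatim}
import math

def abs_mag_V(teff_kk, rpole_rsun):
    # M(V) fit quoted in the text (valid for 7.5 < Teff/kK < 8.5)
    return 2.44 + 0.51 * (8.0 - teff_kk) \
           - 5.0 * math.log10(rpole_rsun / 1.49)

def log_distance_pc(V, A_V, teff_kk, rpole_rsun):
    # log10(d/pc) from the distance relation in the text
    return (2.566 + 0.2 * ((V - 10.33) - (A_V - 0.06))
            - 0.102 * (8.0 - teff_kk)
            + math.log10(rpole_rsun / 1.49))

print(10 ** log_distance_pc(10.33, 0.06, 8.0, 1.49))  # ~370 pc
\end{verbatim}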
\section{Conclusions}
We have conducted a new solution of \textit{Kepler} photometry of
transits of KOI-13b, obtaining results that are substantially in
agreement with those found by \citet{Masuda15}, and in accord with the
tomo\-graphy reported by \citet{Johnson14}. The solution yields both
the projected and true angular separations of the orbital and
stellar-rotation angular-momentum vectors. We emphasize that any
photometric solution is necessarily scale-free (e.g., does not require
a stellar mass to be assumed); but demonstrate that,
by adopting a value for \ensuremath{v_{\rm e}\sin{i_*}}, the absolute system dimensions and
mass can be established. Allowing for the full range of solutions
(Table~\ref{params_tab}; third light $L_3 = 0.41$--0.49, free or fixed
stellar rotation period), we obtain a planetary radius
$R_{\rm P}/\ensuremath{\mbox{R}_{\jupiter}} = 1.33 \pm 0.05$,
stellar polar radius
$\rpole/\ensuremath{\mbox{R}_{\odot}} = 1.55 \pm 0.06$, and a
combined mass
$\ensuremath{M_*} + M_{\rm P} (\simeq{\ensuremath{M_*}}) = 1.47 \pm 0.17$~\ensuremath{\mbox{M}_{\odot}}.
All solutions place KOI-13 in an unremarkable location in the
main-sequence mass--radius plane (e.g., \citealt{Eker15}).
\bibliographystyle{mnras}

cond-mat/9907362

\section{Introduction}
Magnetic excitation spectra of colossal magnetoresistance (CMR) manganites
in the ferromagnetic metal phase attract our attention in the point whether
they can be understood by the conventional double-exchange (DE) mechanism.
For (La,Sr)MnO$_3$ and (La,Pb)MnO$_3$ where $T_{\rm c}$ is relatively high,
a cosine-band type magnon dispersion
is observed \cite{Perring96,Martin96,Moudden98}. At low temperature,
the magnon linewidth $\Gamma$ is
narrow throughout the Brillouin zone, which makes it
possible to observe well-defined magnon branches;
the linewidth broadens as the temperature is raised.
The DE model explains the cosine-band dispersion \cite{Furukawa96}
as well as the temperature dependence of the linewidth
in the form $\Gamma \propto (1-M^2)\, \omega_q$,
where $M$ is the magnetization normalized by the saturation value
and $\omega_q$ is the magnon
dispersion \cite{Furukawa98}.
The origin of the magnon broadening is the Stoner absorption,
which disappears at $T\to0$ (or $M\to 1$) due to the half-metallic nature
of the system.
For compounds with lower $T_{\rm c}$,
Vasiliu-Doloc {\em et al.}\ \cite{VasiliuDoloc98} observed
a broadening of the magnon dispersion.
They claimed that the abrupt increase of the linewidth near the zone
boundary cannot be explained by the DE mechanism alone.
One of the possible explanations is that
the broadening is caused by
the magnon-phonon interaction \cite{Furukawa99bx}.
The strong coupling between magnons and phonons
arises through the modulation of the exchange coupling
by the lattice displacement.
Anomalous broadening of magnon linewidth is also observed in the
double-layered manganite La$_{1.2}$Sr$_{1.8}$Mn$_2$O$_7$ \cite{Fujioka99}.
Intra double-layer coupling creates optical and acoustic branches
of magnons. Two-dimensional dispersion of both branches
indicates that the inter double-layer coupling is sufficiently weak.
Magnon broadening near the zone boundary is also observed in this compound.
In this paper we investigate the possibility that
this broadening is caused by the magnon-phonon interaction.
\section{Comparison between theory and experiment}
For a dispersionless optical phonon
with frequency $\Omega_0$, the magnon linewidth
due to the magnon-phonon interaction is
given by $\Gamma(q) \propto D(\omega_q - \Omega_0)$,
where $D(\omega)$ is the magnon density of states \cite{Furukawa99bx}.
In a two-dimensional system, we have step-function-like behavior
\begin{equation}
\Gamma(q) = \left\{
\begin{array}{ll}
\Gamma_0 \qquad & \omega_q > \Omega_0 \\
0 & \omega_q < \Omega_0
\end{array}
\right. .
\end{equation}
When a magnon with momentum $q$ has energy $\omega_q >\Omega_0$,
it is possible to find an elastic channel in which it decays into
a magnon-phonon pair with momenta
$q'$ and $q-q'$, respectively,
satisfying $\omega_q = \omega_{q'} + \Omega_0 $.
This is why the magnon linewidth abruptly becomes
broad as the magnon branch crosses that of the phonon.
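To make the kinematics concrete, a minimal toy sketch follows (our
parametrization, not a fit to the data: an assumed cosine-band
dispersion with zone-boundary energy $W$ and a dispersionless optical
phonon at $\Omega_0$, energies in meV):
\begin{verbatim}
import numpy as np

def magnon_linewidth(q, Omega0=20.0, W=40.0, Gamma0=1.0):
    # Toy cosine-band magnon dispersion,
    # omega_q = (W/2) * (1 - cos(pi*q)),
    # and the step-function linewidth of eq. (1): Gamma = Gamma0
    # once the magnon can emit an optical phonon (omega_q > Omega0).
    omega_q = 0.5 * W * (1.0 - np.cos(np.pi * q))
    return np.where(omega_q > Omega0, Gamma0, 0.0)
\end{verbatim}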
Let us now compare the theoretical results with experimental data.
We show inelastic neutron scattering
intensities for La$_{1.2}$Sr$_{1.8}$Mn$_2$O$_7$
in Fig.~1, where a contour map is plotted
in the $\omega$-$q$ plane. The scattering vector is taken as
$(1+q,0,5)$ in reciprocal lattice units. Details of the experiment are
given in ref.~\cite{Fujioka99}.
A well-defined acoustic magnon branch is observed near the zone center.
We also see an optical phonon branch which is nearly dispersionless at
$\omega\sim 20\,{\rm meV}$.
Above $q \sim 0.3$, where the magnon and phonon branches cross,
we see an abrupt increase of the magnon linewidth.
A weak trace of the dispersion is observed above the crossing point.
The data are consistently explained as follows.
The magnon dispersion is cosine-band-like with a zone-boundary
energy of $\sim 40\,{\rm meV}$,
and it crosses the optical phonon branch at $\Omega_0\sim 20\,{\rm meV}$.
The strong coupling between magnons and phonons creates
abrupt magnon broadening above the crossing point.
\section{Discussion}
Magnon dispersions so far observed in the ferromagnetic metal phase
of manganites are
well defined near the zone center regardless of compound and
dimensionality.
Zone-boundary broadening is, however, strongly compound dependent.
The present result suggests that the zone-boundary magnon broadening
is influenced by the strength of the magnon-phonon interactions.
Although magnon-phonon dispersion
crossing is also reported in three-dimensional
manganites \cite{Moudden98,Dai99x}, zone-boundary broadening is
observed only in low $T_{\rm c}$ compounds. This implies a
relation between $T_{\rm c}$ and spin-lattice interaction strength.
Strong damping of the zone-boundary magnons might also
explain the ``zone-boundary softening'' of magnons in low $T_{\rm c}$
manganites \cite{Hwang98}, if we assume that the flat zone-boundary
dispersion observed by inelastic neutron scattering
is in fact an optical phonon branch, while the real zone-boundary
magnon branch at higher frequency
is wiped out above the magnon-phonon crossing point.
Further detailed studies of
the relation between the magnon linewidth broadening
above the magnon-phonon crossing point and other magneto-elastic
behaviors will clarify the role of the spin-lattice interactions
in various physical properties.
N.F. thanks J. Fernandez-Baca for discussion. K.H. acknowledges H. Fujioka,
M. Kubota, H. Yoshizawa, Y. Moritomo and Y. Endoh for experimental
collaborations.
This work is partially supported by Mombusho Grant-in-Aid
for Priority Area.

cond-mat/9907191

\section{Introduction}
Proteins are heteropolymers that exhibit surprising
thermodynamic and kinetic properties.
The first aspect is that the lowest free-energy conformation of a protein
is assumed to be the unique native structure and to be thermodynamically stable
\cite{Anfinsen1961}.
A major challenge in theoretical protein folding is to understand the
second aspect, in other words, how a
protein finds its native structure in biologically reasonable
times under physiological conditions \cite{Levinthal1968}.
The lattice model is one class of models used to study
protein folding theoretically \cite{Lau1989,Sali1994a,Sali1994b},
and Monte Carlo (MC) algorithms \cite{Metropolis1953} are widely
used to study the dynamics \cite{Chan1993,Chan1994,Socci1994,Gutin1998}.
In this Letter, we show that the commonly used
MC procedure converges poorly towards thermal equilibrium.
A refinement of the procedure has recently been proposed
by Cieplak et al. \cite{Hoang1998,Cieplak1999}, but
although that procedure converges towards equilibrium,
the parameters of the Arrhenius law that they found disagree with the
value of the main potential barrier obtained independently
from a study of the phase space of the systems.
We introduce here a more rigorous treatment of the dynamics.
Our method fulfils the detailed balance condition and therefore
converges towards thermal equilibrium.
It also shows, for the first time, good efficiency in the
calculation of kinetic parameters and the determination
of the Arrhenius law.
The model used is a two-dimensional lattice polymer. The chains are
composed of $N$ connected monomers constrained to lie on a
square lattice, and the chains are self-avoiding walks.
The energy of a sequence in a given conformation $m$ is given by:
\begin{equation}
E^{(m)}=\sum_{i > j+1} (B_{ij}+B_0) \Delta^{(m)}_{ij}
\end{equation}
where the function $\Delta^{(m)}_{ij}$ equals 1 if the $i^{th}$ and $j^{th}$
monomers interact, i.e.\ if they are nearest neighbors on the lattice. The
$B_{ij}$'s are the contact energy values. They are drawn randomly from
a Gaussian distribution centered on $0$, and
$B_0$ is a negative parameter which favors the compact conformations
\cite{Shakhnovich1990a,Dinner1994}.
The set of $B_{ij}$ defines the sequence of the chain.
The sets of connections between conformations, used for the MC procedure,
are those used by Chan and Dill \cite{Chan1994}:
the corner flip and the tail moves are referred to as
move set 1 (MS1), the crankshaft move is referred to as move set 2 (MS2), and
at each MC step a move of MS1 is chosen with probability $r$
and a move of MS2 with probability $1-r$ \cite{Sali1994a}.
Now, the problem is to find a correct Metropolis algorithm \cite{Metropolis1953}
which
guarantees that the simulation converges towards the thermal equilibrium
imposed by the detailed balance condition:
\begin{equation}
P_{eq}^{(m)} W(m\rightarrow n) = P_{eq}^{(n)} W(n \rightarrow m)
\end{equation}
where $P_{eq}^{(m)} \propto \exp(-E^{(m)}/T)$ is the equilibrium probability
of the conformation $m$, $T$ is the temperature,
and $W(m\rightarrow n)$ is the probability of transition from the state $m$
to the state $n$. Let us write:
\begin{equation}
\label{wmn}
W(m\rightarrow n) =
W^{(0)}(m\rightarrow n) \, \, a(m\rightarrow n)
\end{equation}
where $W^{(0)}(m\rightarrow n)$ is the a priori transition probability.
A convenient choice for the acceptance ratio is:
\begin{equation}
a(m\rightarrow n) = \frac {1} {1 + \exp(\Delta E_n^m / T)}
\end{equation}
with $\Delta E_n^m = E^{(n)} - E^{(m)}$.
Let us denote by $N_m^{(1)}$ and $N_m^{(2)}$ the numbers of allowed
transitions from $m$ to any conformation by performing a move of MS1 or of
MS2, and let $N_{max}^{(1)}=\max_m\{N_m^{(1)}\}$ and $N_{max}^{(2)}
=\max_m\{N_m^{(2)}\}$. One can easily see that
$N_{max}^{(1)} = N+2$ and $N_{max}^{(2)} = N-7$.
In order to have symmetric a priori transition probabilities,
$W^{(0)}(m\rightarrow n) = W^{(0)}(n \rightarrow m)$, the probabilities
to attempt, during one MC step, a move from conformation $m$ to a conformation
$n$ connected by MS1 or MS2, respectively, are taken as:
\begin{equation}
\label{w01}
W^{(0)}_1 (m\rightarrow n) = \frac{r} {N_{max}^{(1)}}= \frac{r}{N+2}
\end{equation}
\begin{equation}
\label{w02}
W^{(0)}_2 (m\rightarrow n) = \frac{(1-r)} {N_{max}^{(2)}}=\frac{1-r}{N-7}
\end{equation}
\begin{figure}
\epsfxsize=3.2in
\epsfysize=0.7in
\centerline{\epsffile{fig1.ps}}
\vskip 0.5cm
\caption{A part of the connection graph of the 12 monomers chain.
The conformations (a), (b) and (c) are connected to respectively one, two and five
neighbors by MS1.
In the classical MC procedure, the a priori transition probabilities
are not symmetric: they depend on the number of neighbors.
With the proposed method, all these transitions are attempted
with the same a priori transition probability (not shown).
}
\label{neigh}
\end{figure}
The probability to attempt any move from the conformation $m$ using
MS1 is then $r N^{(1)}_m /(N+2)$ (and $(1-r) N^{(2)}_m /(N-7)$ using MS2).
A probability of null transition therefore appears:
\begin{equation}
w_m^{(0)} = 1 - \left( r \frac{N_m^{(1)}} {N+2}
+ (1-r) \frac{N_m^{(2)}} {N-7} \right)
\end{equation}
In contrast with rigid rotations, which can involve movements of many monomers,
the one- and two-monomer moves are local modifications. One then assumes that they
have the same affinity, whence, from equations \ref{w01} and \ref{w02}:
\begin{equation}
r = \frac{N+2}{2N-5}
\end{equation}
In this particular case, the previous equations simplify to:
\begin{equation}
W^{(0)}_1 (m\rightarrow n) = W^{(0)}_2 (m\rightarrow n) =
\frac{1}{2N-5}
\end{equation}
\begin{equation}
w_m^{(0)} = 1 - \frac{N_m^{(1)} + N_m^{(2)}}{2N-5}
\end{equation}
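A minimal sketch of one MC step implementing this scheme is given below
(our Python illustration; the helpers \texttt{energy},
\texttt{ms1\_moves} and \texttt{ms2\_moves} are assumed to return the
conformation energy and the lists of allowed MS1/MS2 neighbours):
\begin{verbatim}
import math, random

def mc_step(conf, T, N, energy, ms1_moves, ms2_moves):
    # With r = (N+2)/(2N-5), every one of the 2N-5 candidate slots
    # (N+2 for MS1, N-7 for MS2) has the same a priori probability
    # 1/(2N-5); slots with no allowed move give a null transition.
    slot = random.randrange(2 * N - 5)
    if slot < N + 2:                       # an MS1 slot
        moves = ms1_moves(conf)
    else:                                  # an MS2 slot
        moves = ms2_moves(conf)
        slot -= N + 2
    if slot >= len(moves):
        return conf                        # null transition (w_m^(0))
    cand = moves[slot]
    dE = energy(cand) - energy(conf)
    if random.random() < 1.0 / (1.0 + math.exp(dE / T)):
        return cand                        # accepted
    return conf                            # rejected
\end{verbatim}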
In order to check the accuracy of the proposed procedure, we applied it to
12-monomer chains. These chains can adopt 15037 different self-avoiding-walk
conformations inequivalent by symmetry. The following results are obtained
for the sequence A defined elsewhere \cite{Cieplak1998}.
Such a short chain is used to check the method
because a convergence test can be applied to it in a reasonable
computational time.
For this chain, we performed MC trajectories of 300 billion steps.
A convergence factor $C(t) = \sqrt{\langle (P_{eq}^{(m)} - {\rm occ}^{(m)}(t))^2
\rangle }$ is computed
every 100000 MC steps;
$t$ is the number of the MC step and ${\rm occ}^{(m)}(t)=N^{(m)}(t)/t$, where
$N^{(m)}(t)$ is the number of steps corresponding to occurrences
of the conformation $m$.
The brackets denote the average over all the conformations.
If a simulation satisfies detailed balance, the $C(t)$ quantities
should tend towards 0 when $t \rightarrow \infty$.
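Computing this diagnostic is straightforward; a minimal sketch (ours)
is:
\begin{verbatim}
import numpy as np

def convergence_factor(p_eq, counts, t):
    # p_eq[m]  : equilibrium probability of conformation m
    # counts[m]: N^(m)(t), occurrences of conformation m up to step t
    occ = counts / t
    return np.sqrt(np.mean((p_eq - occ) ** 2))
\end{verbatim}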
\begin{figure}
\epsfxsize=3.2in
\epsfysize=3.2in
\centerline{\epsffile{fig2.ps}}
\caption{
Log-Log plots of the convergence factor $C(t)$ versus the
number of MC step $t$ for different temperatures.
Dashed lines : for the commonly used method for which the
$W^{(0)}(m \rightarrow n)$
prefactor and $w^{(0)}_m$ parameter are omitted
in the MC procedure.
Solid lines : for the proposed method.
}
\label{converg_A}
\end{figure}
Figure~\ref{converg_A} shows clearly that the
commonly used procedure presents convergence limits that depend on the
temperature. On the other hand, the proposed method shows a power law of
the convergence factor versus the number of MC steps.
This result clearly shows that the factor $w^{(0)}_m$
cannot be omitted in a lattice simulation of
protein folding.
In what follows, we focus on the properties of the $w^{(0)}_m$ factor.
One must notice that this factor is purely topological and
therefore sequence independent.
If one now looks at the simulation from a purely topological point of
view, temporarily removing the energetic contribution (suppose
for a moment that all conformations have the same energy), one sees
that the larger the factor $w^{(0)}_m$, the longer the simulation
stays in conformation $m$ once it reaches it.
Note that when $w^{(0)}_m$ is large it is not only improbable to escape
from the conformation $m$, but also improbable to reach it.
On the contrary, conformations with small values of $w^{(0)}_m$ are often
reached, but the simulation does not stay in them.
Thus, the larger $w^{(0)}_m$, the more rigid the conformation $m$,
and the smaller $w^{(0)}_m$, the more flexible the conformation $m$.
Therefore, in what follows we call $w^{(0)}_m$ the rigidity of the
conformation $m$.
Fig.~\ref{distribution}(b) shows how the $w^{(0)}_m$
prefactors are distributed for each subset of conformations with the same number of
contacts.
No conformation has a value of $w^{(0)}_m$ equal to 1
(Fig.~\ref{distribution}).
This guarantees that no conformation is totally rigid,
so that each one is connected to at least one other. One must note, however,
that this condition is not strong enough to fulfil the ergodic hypothesis.
\begin{figure}
\vskip -2.5cm
\epsfxsize=3.2in
\epsfysize=4.8in
\centerline{\epsffile{fig3.ps}}
\vskip-4.5cm
\caption{
Distribution of the number of conformations as a function of
the rigidity $w_m^{(0)}$ and the number of intrachain contacts
$N_c$, divided by the number of conformations which have
$N_c$ contacts.
One must note that the smaller the number of contacts of the
conformations of a subset, the larger the normalisation factor.
}
\label{distribution}
\end{figure}
It appears clearly that the more compact conformations
present the larger values of the rigidity,
while the more flexible ones are the more extended.
Only one move of MS1 is allowed for the two most rigid conformations.
One can see that no conformation has a value of $w^{(0)}_m$
which tends towards 0; hence no conformation is totally flexible.
This is a consequence of the fact that no conformation presents the maximum
number of neighbors under both MS1 and MS2.
The native conformations of proteins not only have very low energies but
are also very compact. Hence they have
large Boltzmann weights, but they are also very rigid conformations.
Both effects favor the stability of the native conformations, but
the folding dynamics is slowed down by the topology of the native structures.
The trap conformations are also very compact and are conformations of
local energy minima \cite{Cieplak1998}.
Thus, to exit the trap valley the chain first has to escape from a
stable and rigid conformation.
We computed many kinetic pathways from the trap conformation to the native
structure of the sequence A. The trap conformation has been determined
by solving the master equation of the system following the approach described
by Cieplak et al.\ \cite{Cieplak1998} for the particular choice of $r$
used in the present paper.
The trap conformation found here is the same as the conformation
found by Cieplak et al., and it is chosen as the first conformation
of the MC trajectories.
The kinetic pathways all exhibit similar properties.
The native and the trap conformations are
compact and therefore very rigid ($w^{(0)}_{\rm native} = w^{(0)}_{\rm trap}= 0.894$)
and have low energies.
\begin{figure}
\epsfxsize=2.4in
\epsfysize=3.0in
\centerline{\epsffile{fig4.ps}}
\vskip 0.5cm
\caption{
Energy (top) and rigidity (bottom) versus the MC steps
of a typical trajectory of folding simulation for the sequence A
at $T=0.4$.
}
\label{reaction_way}
\end{figure}
At low temperature the system spends many
MC steps in the trap conformation. The system escapes from the trap
with difficulty,
by passing through transition states which exhibit common properties:
high energies, few intrachain contacts and hence great flexibility.
Therefore, even though the transition states are energetically unfavorable,
they are easily accessible from a topological point of view, and the
MC trajectories spend very few steps in these conformations.
A major problem in protein-folding investigations is to
calculate kinetic properties at low temperature \cite{Cieplak1996,Gutin1998},
where the rejected-move ratio of an MC procedure is
very large. The efficiency of the procedure at low temperature is increased by
using a Bortz--Kalos--Lebowitz (BKL) type algorithm \cite{Bortz1975}.
The idea is the following: let $w_m$ denote the probability of not
accepting a move from the conformation $m$ during one step:
\begin{equation}
w_m = 1 - \frac{1}{2N-5} \sum_{n \neq m} \frac{1}{1+\exp (
\Delta E_{n}^m/T)}
\end{equation}
then the probability of not accepting a move from the conformation
$m$ during exactly $k$ steps is:
\begin{equation}
P(k) = w_m^{k-1} (1-w_m)
\end{equation}
Then for each move, the number of MC steps $k$ during which the chain
stays in the current conformation, say $m$, is drawn at random from
the probability distribution $P(k)$, and a move chosen with the following
transition probability:
\begin{equation}
t(m\rightarrow n) = {{\frac{1}{1+\exp (\Delta E_{n}^m/T)}}
\over
{\sum_{n' \neq m} \frac{1}{1+\exp (\Delta E_{n'}^m/T)}}}
\end{equation}
is always performed. This procedure makes it possible to carry out
MC simulations at very low temperature.
All the values of $w_m$ and $t(m\rightarrow n)$ are computed for each
temperature before performing the MC trajectories.
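A minimal sketch of one BKL step in this setting follows (our Python
illustration; \texttt{energy} and \texttt{neighbours} are assumed
helpers returning $E^{(m)}$ and the list of conformations reachable by
one allowed move):
\begin{verbatim}
import math, random

def bkl_step(conf, T, N, energy, neighbours):
    E0 = energy(conf)
    nbrs = neighbours(conf)
    # acceptance weights a(m->n) = 1/(1+exp(dE/T))
    acc = [1.0 / (1.0 + math.exp((energy(n) - E0) / T)) for n in nbrs]
    # probability of staying put during one step
    w_m = 1.0 - sum(acc) / (2 * N - 5)
    # residence time: P(k) = w_m^(k-1) (1 - w_m), sampled by inversion
    k = 1 + int(math.log(random.random()) / math.log(w_m))
    # a move is then always performed, with probability t(m->n)
    new_conf = random.choices(nbrs, weights=acc)[0]
    return new_conf, k
\end{verbatim}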
The folding times ($t_{fold}$) have been computed using the BKL-type
algorithm at low temperature.
The folding time is the average over 500 trajectories
of the number of MC steps needed to reach the
conformation of lowest energy.
Three different simulations have been
carried out, depending on the choice of the set of first conformations:
the simulation ``T'', for which the trap conformation is
chosen as the first conformation;
the simulation ``E'', for which the first conformation is an extended
conformation chosen at random;
and the simulation ``R'', for which the first
conformation is chosen at random from the whole conformational space.
The transition state of lowest energy between the trap and the
native structure has been determined elsewhere for this
sequence \cite{Cieplak1999},
and the energy difference between the trap and the transition state was
computed to be $\Delta E = 4.53$. The Monte Carlo folding time
found by Cieplak et al.\ follows an Arrhenius law $t_{fold}(T) = A \exp (\delta E /T)$,
with $\delta E = 2.76$, which is in poor agreement with $\Delta E$.
For the three simulations, we also find Arrhenius laws
at very low temperature ($T=$ 0.24, 0.22, 0.20, 0.18).
\vbox{
\begin{table}[t]
\begin{center}
\begin{tabular}{ c c c }
& $\delta E$ & A \\ \hline
simulation "T" & 4.51 & 33.25 \\
simulation "E" & 4.40 & 8.58 \\
simulation "R" & 4.34 & 12.55 \\
\end{tabular}
\caption{Values of the parameters $\delta E$ and $A$ of the Arrhenius laws
$t_{fold}(T) = A \exp (\delta E /T)$ for the ``T'', ``E'' and ``R''
simulations (see text).
}
\end{center}
\end{table}
}
\vskip -1cm
The results for $\delta E$, shown in Table 1, are in very good agreement
(within $1 \%$ for the ``T'' simulation) with the value of
$\Delta E$, and strongly support the proposed method for
the calculation of the parameters of the Arrhenius laws.
If a first conformation is chosen at random, it can fall into the trap
valley (TV), into the native-conformation valley (NV) or into less important
valleys. At low temperature, whatever the set of first conformations, the
conformations which fall into TV govern the kinetics. Hence the dominant
term in the exponential of the Arrhenius law always tends towards
$\Delta E$. The ratio of the $A$ coefficients gives the proportion
of conformations which fall into TV:
a random conformation has a probability equal
to $12.55 / 33.25 = 0.38$ of falling into TV, and an extended
conformation has a probability equal to $8.58 / 33.25 = 0.26$ of being
attracted by TV.
These ratios thus give an insight into the attraction strength of the basin of TV.
The results presented in this Letter show clearly that the proposed MC method
is well adapted to the study of the dynamics of protein folding.
It has been shown that not only the energy difference
between conformations but also the rigidity of the conformations
has to be taken into account in MC simulations.
The method has been applied only to a short chain in order to check
its efficiency, but it is
easily applicable to longer chains on two- or three-dimensional lattices;
moreover, the BKL algorithm should make it possible to elucidate the
low-temperature properties of protein-like chains.
{\bf Acknowledgments.} We thank Aaron Dinner, Bertrand Berche, Christophe Chatelain,
Trinh Xuan Hoang and Marek Cieplak
for helpful discussions.

hep-ph/9907431

\section{Introduction}
\vspace*{-0.5pt}
The high experimental accuracy achieved in recent years
allows one to test the Standard Model at the level of quantum corrections.
Improving the accuracy of predictions within this model
is therefore of urgent need. The calculation of
mass-dependent radiative corrections is complicated but can be performed
to a large extent by using computer algebra.
In recent years many algorithms have been developed
and large program packages have been elaborated for this purpose
(for a review of existing packages see Ref.\cite{review}).
Due to the different mass scales of the
particles in the Standard Model, the method of asymptotic expansion
\cite{asymptotic} -- the expansion of Feynman diagrams w.r.t.\
the ratio of different scale parameters -- is becoming more and more
popular. The calculation of radiative corrections to low-energy processes,
in particular those with light external fermions, in many cases reduces
to the calculation of self-energy diagrams with the external momentum at
different scales. This is one of the reasons why the evaluation of
two-loop self-energy diagrams is worth special attention. From the point
of view of approximation methods two-loop self-energy diagrams can be
divided into several classes:
\begin{itemize}
\item
Only one non-zero mass enters internal lines and the
external momentum is on the same mass shell.
The calculation of diagrams of this type occurring in QED and QCD has
been implemented
\footnote{One of the first calculations of this type
for the Standard Model and QED was performed in Refs.\cite{S-M,1david}.}
as the package SHELL2 \cite{SHELL2}.
\item
There are heavy particles in internal lines and the external
momentum is on the mass-shell of a light particle \cite{small1,small2,small3}.
Diagrams of this type occurring in the Standard Model have been
collected in the package TLAMM \cite{TLAMM}.
\item
All internal particles are light or massless and the external momentum
is on a heavy mass shell (see, for example, Ref. \cite{large}).
\item
Several different heavy masses occur in internal lines and the external
momentum is on the mass shell of a heavy particle \cite{CS,AK}.
\item
Diagrams close to thresholds
(see Ref.\cite{threshold} and references therein).
\end{itemize}
A general recipe of reduction of arbitrary two-loop self-energy
diagrams to a set of master integrals has been suggested by Tarasov
\cite{Tarasov1}. The algorithm was implemented in FORM and then
in MATHEMATICA \cite{OS}. However, this method suffers from
the drawback that, in the reduction of scalar master
integrals with shifted dimension to the generic dimension of space-time,
powers of $1/\varepsilon$ may arise which require the
expansion of master integrals as series in $\varepsilon$,
which is a difficult task. This problem is avoided in our approach.
We present a FORM \cite{FORM} based package that allows
one to calculate arbitrary two-loop self-energy diagrams
with one non-zero mass and the external
momentum on the (nonvanishing) mass shell. Our algorithm concerning the V-type
diagrams is very similar to the one described in Ref.\cite{similar}.
The paper is organized as follows. In Sect.~2 the full set
of recurrence relations is presented which allows one to express the initial diagrams
with scalar products in the numerator in terms of diagrams with
positive ($\geq 1$) indices. Sect.~3 is devoted to the description of how to use
the package. In appendix A (appendix B) we give all needed recurrence
relations to reduce the scalar F-prototypes (V-prototypes) with
arbitrary positive indices to a set of master-integrals.
In appendix C the analytical results for all integrals shown in
Fig.\ref{joint} with indices 1 are collected.
Even though not all of these are considered as master integrals, they
can be used for comparison. We are working in Euclidean
space-time with dimension $N = 4 - 2 \varepsilon$.
\section {The recurrence relations.}
\begin{figure}[ht]
\centerline{\vbox{\epsfysize=155mm \epsfbox{joint.eps}}}
\caption{\label{joint} The F, V and J topologies.
Bold and thin lines correspond to the mass and
massless propagators, respectively.}
\end{figure}
The full set of two-loop self-energy diagrams with one mass and external
momentum on the same mass shell is given in Fig.\ref{joint}. We distinguish
three basic topologies which in accordance with notations in
Ref.\cite{Tarasov1} we call F, V and J prototypes with five, four and
three lines, respectively. Our notation is given in Fig.\ref{notation}.
The diagrams implemented in the package SHELL2 (F01101, F00110, V1110,
V0011, V1000 in our notation)
and those considered in detail in Refs.\cite{alvladim}-\cite{program}
(F00000, V0000, J001, J000) are not discussed here
\footnote{The procedures for the calculation of all diagrams of the topologies
shown in Fig.\ref{joint} are implemented in our package.}.
\begin{figure}[ht]
\centerline{\vbox{\epsfysize=30mm \epsfbox{notations.eps}}}
\caption{\label{notation} Notations used in present paper.}
\end{figure}
The general prototype involves arbitrary integer powers of the scalar
denominators $c_L = k_L^2 + m_L^2$
\footnote{Their explicit expressions are
$c_1 = k_1^2 + m_1^2,~~~c_2 = k_2^2 + m_2^2,
~~~c_3 = (k_1-p)^2+m_3^2,~~~c_4 = (k_2-p)^2+m_4^2,
~~~c_5 = (k_1-k_2)^2 + m_5^2$}.
Their powers
$j_L$ are called indices of the lines. The mass-shell condition
for the external momentum now is $p^2=-m^2$.
Any scalar products of the momenta in the numerator arising from
projection or expansion are reduced to powers of the scalar propagators
(in case of V and J topologies the corresponding
lines are added). Thus, the indices may sometimes become negative.
Recurrence relations are derived via the integration-by-parts
method \cite{rec} and applied to the massive case as in
Ref.\cite{kotikov}. They allow one to reduce all lines with negative indices
to zero and the positive indices to one or zero. Further we use the
shorthand notation $\{123\}$ of Ref.~\cite{avdeev} to denote
the relation for the triangle formed of lines $1$, $2$, and $3$:
\newcommand\eqnum[1] {\eqno{#1}}
$$ \int \frac{d^N k }{c_1^{j_1} c_2^{j_2} c_3^{j_3}}
\Big( N -2 j_1 -j_2 -j_3 +j_1 \frac{2 m_1^2}{c_1}
+j_2\frac{m_1^2+m_2^2-m_{12}^2+c_{12}-c_1}{c_2}
$$
$$
+j_3\frac{m_1^2+m_3^2-m_{13}^2+c_{13}-c_1}{c_3} \Big) = 0 ,
\eqnum{\{123\}} $$
\noindent where a double index like $\{ 12 \}$ refers to a line that starts at
the point where lines $1$ and $2$ meet (see Fig.\ref{triangle}).
For an external line on the mass shell, the value of $c_L$ is equal
to zero.
\begin{figure}[ht]
\centerline{\vbox{\epsfysize=60mm \epsfbox{triangle.eps}}}
\caption{\label{triangle} ``Triangle'' rule. }
\end{figure}
\subsection{F-topology}
To eliminate the numerator (for example, $c_1^{|j_1|}$ in the
numerator when $j_1 <0$) of F-type diagrams, we use the following set of
recurrence relations:
\begin{enumerate}
\item $j_5 \neq 1$
\begin{eqnarray}
\{245\} &&
\frac{j_5}{c_5} c_1 =
- \frac{j_2}{c_2} 2 m_2^2
- \frac{j_4}{c_4} \left( m_4^2 +m_2^2 -m^2 - c_2 \right)
\nonumber \\
&&
- \frac{j_5}{c_5} \left( m_5^2 +m_2^2 -m_1^2 - c_2 \right)
- N+ 2 j_2+j_4+j_5 ,
\nonumber
\end{eqnarray}
\item $j_2 \neq 1$
\begin{eqnarray}
\{524\} &&
\frac{j_2}{c_2} c_1 =
- \frac{j_5}{c_5} 2 m_5^2
- \frac{j_4}{c_4} \left( m_5^2 +m_4^2 -m_3^2 + c_3 - c_5 \right)
\nonumber \\
&&
- \frac{j_2}{c_2} \left( m_5^2 +m_2^2 -m_1^2 - c_5 \right)
- N+ 2 j_5+j_4+j_2 ,
\nonumber
\end{eqnarray}
\item $j_3 \neq 1$
\begin{eqnarray}
\{245\} + \{135\} &&
\frac{j_3}{c_3} c_1 =
\frac{j_3}{c_3} \left( m_3^2 +m_1^2 -m^2 \right)
\nonumber \\
&&
+ \frac{j_4}{c_4} \left( m_2^2 +m_4^2 -m^2 \right)
\nonumber \\
&&
- \frac{j_4}{c_4} c_2
+ \frac{j_1}{c_1} 2 m_1^2
+ \frac{j_2}{c_2} 2 m_2^2
+ \frac{j_5}{c_5} 2 m_5^2
\nonumber \\
&&
+ 2N -2j_1 -2j_2 -j_3 -j_4 -2j_5 ,
\nonumber
\end{eqnarray}
\item $j_2 = j_3 = j_5 = 1$
\begin{eqnarray}
\{315\} &&
N -2j_3 -j_1 -j_5 =
- \frac{j_1}{c_1} \left( m_1^2 +m_3^2 -m^2 - c_3 \right)
\nonumber \\
&&
- \frac{j_5}{c_5} \left( m_3^2 +m_5^2 -m_4^2 + c_4 - c_3 \right)
- \frac{j_3}{c_3} 2 m_3^2 ,
\nonumber
\end{eqnarray}
\end{enumerate}
\noindent
where both sides of these relations are understood to be multiplied by
\newline
$
\int \frac{d^N k_1 d^N k_2}{c_1^{j_1} c_2^{j_2} c_3^{j_3} c_4^{j_4} c_5^{j_5}}.
$
The relations for $(j_2, j_3, j_4)<0$ are obtained from symmetry
properties of the integral under consideration:
$$
(j_1,m_1) \leftrightarrow (j_3,m_3),
~~~(j_2,m_2) \leftrightarrow (j_4,m_4),
$$
and
$$
(j_1,m_1) \leftrightarrow (j_2,m_2),
~~~(j_3,m_3) \leftrightarrow (j_4,m_4).
$$
\noindent
$c_5$ from the numerator can be eliminated by a general
projection-operator method \cite{rec}. Using the decomposition
$$
k_1 k_2 = A(k_1, k_2, p) + \frac{(k_1 p) (k_2 p)}{p^2},
$$
\noindent
where
$ A(k_1, k_2, p) = k_1^\mu
\left(\delta_{\mu \nu} - \frac{p_\mu p_\nu}{p^2} \right) k_2^\nu, $
and the property that odd powers of $A(k_1, k_2, p)$ drop out after
integration, while for even powers we have
\begin{eqnarray}
&\displaystyle \int {\rm d}^N k_1 ~ {\rm d}^N k_2 ~
f_1[k_1,p]~ f_2[k_2,p]~ A^{2n}(k_1,k_2,p) \,=&
\nonumber\\*
&\displaystyle
\FR { \Gamma(n+\fr 1 2)\, \Gamma \big[ \fr 1 2 (N-1) \big] }
{ \Gamma(\fr 1 2)\, \Gamma \big[ n +\fr 1 2 (N-1) \big] }
\prod_{j=1}^2 \int {\rm d}^N k_j~ f_j[k_j,p]~ A^n(k_j,k_j,p)
\, ,& \label{int}
\nonumber
\end{eqnarray}
\noindent
it is possible to reduce the initial integral to a product of one-loop
integrals.
Using the above relations, F-type integrals with arbitrary
indices are reduced to F-type integrals with only positive
indices or V-type integrals with arbitrary indices.
For the former case a proper arrangement of recurrence
relations in general reduces the sum of all indices by 1.
These relations are given in appendix A. Only eight diagrams
({\bf F11111, F00111, F10101, F10110, F01100, F00101, F10100, F00001})
with all indices equal to 1 form the basis for F-type diagrams.
\subsection{V-topology}
Consider now the V-type diagrams. The recurrence relations
we use are $\{ 425\}$, $\{423 \}$ and the following set:
\begin{eqnarray}
\{530\} && N- 2j_5-j_3
+ \frac{j_5}{c_5} 2 m_5^2
+ \frac{j_3}{c_3} \left( m_3^2 +m_5^2 -m_4^2 + c_4 - c_5 \right)
= 0,
\nonumber \\
\{350\} && N - 2j_3-j_5
+ \frac{j_3}{c_3} 2 m_3^2
+ \frac{j_5}{c_5} \left( m_5^2 +m_3^2 -m_4^2 +c_4 - c_3 \right)
= 0,
\nonumber \\
\{B\} &&
\frac{j_5}{c_5}
\frac{
\left( m_4^2 -m_2^2 +m^2 +c_2-c_4 \right)
\left( m_5^2 -m_3^2 -m_4^2 +c_3+c_4-c_5 \right)}{2 c_6}
\nonumber \\
&&
+ \frac{j_2}{c_2} 2 m^2
- \left( \frac{j_2}{c_2} + \frac{j_4}{c_4} + \frac{j_5}{c_5}
\right)
\left( m_4^2 -m_2^2 +m^2 +c_2-c_4 \right)
= 0,
\nonumber
\end{eqnarray}
\noindent
where $c_6 = c_4 - m_4^2$ (see Ref.\cite{similar}). If $m_4^2 \neq 0$,
the expression $\frac{1}{c_4 c_6}$ can be simplified later by partial
fraction decomposition.
For all cases $(j_2,j_3,j_5)<0$ (the case $j_4 < 0$ is treated separately below),
the initial diagram can be reduced to two-loop tadpole-like integrals
by means of \cite{small1}:
$$
\int d^N k_2 \left( k_2p \right)^{2j} f(k_2,k_1) =
\frac{(2j)!}{(j)! }
\left( \frac{p^2}{4} \right)^j
\frac{\Gamma \left(\frac{N}{2} \right)}{ \Gamma \left( \frac{N}{2}+j \right)}
\int d^N k_2 (k_2^2)^j f(k_2,k_1). $$
\noindent
For $j_4 <0$ we write $c_4 = \overline{c}_4 + m_4^2$ and redefine
$\overline{c}_4 = c_4 $, which allows one to
consider only the massless case.
Then the following recurrence relations are needed:
\begin{enumerate}
\item
$j_5 \neq 1$
\begin{eqnarray}
\{350\} &&
\frac{j_5}{c_5} c_4 =
\frac{j_5}{c_5} \left( c_3 - m_3^2 -m_5^2 \right)
- 2 \frac{j_3}{c_3} m_3^2 - N +2 j_3 +j_5,
\nonumber
\end{eqnarray}
\item
$j_3 \neq 1$
\begin{eqnarray}
\{530\} &&
\frac{j_3}{c_3} c_4 =
\frac{j_3}{c_3} \left(c_5 -m_3^2 -m_5^2 \right)
- 2 \frac{j_5}{c_5} m_5^2 - N +2 j_5 +j_3,
\nonumber
\end{eqnarray}
\item
$j_2 \neq 1$
\begin{eqnarray}
\{425\} + \{350\} &&
\frac{j_2}{c_2} c_4 =
\frac{j_3}{c_3} 2 m_3^2
+ \frac{j_4}{c_4} 2 m_4^2
+ \frac{j_5}{c_5} 2 m_5^2
\nonumber \\
&&
+ \frac{j_2}{c_2} \left( m_2^2 -m^2 \right)
+ 2N -j_2 -2j_3 -2j_4 -2j_5 .
\nonumber
\end{eqnarray}
\item
$j_2 = j_3 = j_5 = 1$
\begin{eqnarray}
2 B + 2 \{425\} + \{350\} &&
3 N - 4 j_2 -2 j_3 - 2 j_4 - 2 j_5 =
-2 m_3^2 \frac{j_3}{c_3} - 4 m_2^2 \frac{j_2}{c_2}
\nonumber \\
&&
+ \frac{j_5}{c_5} \frac{c_2}{c_4} \left( c_4 - c_3 +m_3^2-m_5^2\right)
+ \frac{j_5}{c_5} \frac{c_3}{c_4} \left(m_2^2-m^2 \right)
\nonumber \\
&&
+ \left(j_5 + 2j_4 \right) \frac{c_2 + m^2 - m_2^2}{c_4}
+ \frac{j_5}{c_5} \left(m^2 - m_2^2 - 2 m_5^2 \right)
\nonumber \\
&&
+ \frac{j_5}{c_5} \frac{ \left(m_3^2-m_5^2 \right) \left(m^2-m_2^2 \right)}{c_4}.
\nonumber
\end{eqnarray}
\end{enumerate}
\noindent
The result of application of the above recurrence relations are V-type
diagrams with only positive indices or J-type integrals with arbitrary
indices. The full set of recurrence relations for the former case is
given in appendix B. The complete set of basic integrals is just given
by {\bf V1111} and {\bf V1001} with indices equal to 1.
\subsection{J-topology}
The integrals of this type are discussed in detail in
Refs.\cite{threshold,Tarasov1}. We only mention here that, to reduce
the numerator, the following recurrence relation, suggested
by Tarasov \cite{Tarasov2}, is needed:
\begin{eqnarray}
(N+\nu_1+\nu_2\ -2) v(\nu_1,\nu_2)=p^2 \{ (\nu_1-1)k_1^2{\bf 1^-}
\!+\nu_2(k_1k_2){\bf 2^-}\} {\bf 1^-} v(\nu_1,\nu_2),
\nonumber
\end{eqnarray}
\noindent
where
$
v(\nu_1,\nu_2) =
\int d^Nk_1 d^Nk_2 f(k_1,k_2) (k_1p)^{\nu_1}(k_2p)^{\nu_2}$
and $f(k_1,k_2)$ is an arbitrary scalar function; $\nu_1, \nu_2 >0$
and ${\bf 1^{\pm}}v(\nu_1,\nu_2)\equiv v(\nu_1 \pm 1,\nu_2)$, etc.
The master integrals are the following: one prototype {\bf J111} with all indices
equal to 1, and two integrals of {\bf J011} type, with indices 111 and 112,
respectively.
\subsection{Master-integrals}
To obtain the finite part of two-loop physical results one needs
to know the finite part of F-type integrals, V- and J-type
integrals up to order $\varepsilon$, and one-loop integrals up to
order $\varepsilon^2$. A detailed discussion of the calculation of
master-integrals is given in \cite{our}. Here we mention only
that the calculation of the $\varepsilon$ ($\varepsilon^2$) parts
has been performed by the differential equation method
\cite{kotikov}. The results are collected in Appendix C.
\section{Use of the package}
The package consists of a set of procedures for the calculation of all
two-loop integrals, presented in Fig.\ref{joint}
(f11111.prc, \dots, on3.prc, on2.prc, etc.),
two-loop tadpoles (vl111.prc, vl011.prc, vl001.prc)
and one-loop integrals (vl1.prc, on1.prc, ons11.prc),
where ``1''(``0'') in the name of the procedure stands for massive (massless)
lines, respectively. on3 and on2 are two-loop integrals from the SHELL2
package. vl1 is the one-loop massive bubble. ons11 and on1 denote the
one-loop self-energy on-shell integrals with two and one massive lines,
respectively. The integration momenta in the package are
denoted by K1 and K2 for two-loop integrals and by K1 for one-loop
integrals, P is the external momentum. All scalar products in the initial
diagram must be rewritten in terms of propagators:
\begin{eqnarray}
k_1 p & = & \frac{c_1-c_3+m_3^2-m_1^2-m^2}{2},
\nonumber \\
k_2 p & = & \frac{c_2-c_4+m_4^2-m_2^2-m^2}{2},
\nonumber \\
k_1 k_2 & = & \frac{c_1+c_2-c_5+m_5^2-m_1^2-m_2^2}{2},
\nonumber \\
k_1^2 & = & c_1-m_1^2,
\nonumber \\
k_2^2 & = & c_2-m_2^2.
\nonumber
\end{eqnarray}
\noindent
To specify the type of two- (one-) loop diagrams, the products of
scalar propagators must be substituted by the proper functions of
F-, V-, J- and ON-type with arguments denoting the indices
and a symbol for the mass shell.
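As an elementary cross-check of the first of these reductions, it can
be verified symbolically (a minimal sketch using sympy, written by us;
\texttt{k1p} stands for the Euclidean scalar product $k_1 p$, with
$p^2=-m^2$ on shell):
\begin{verbatim}
import sympy as sp

k1sq, k1p, m, m1, m3 = sp.symbols('k1sq k1p m m1 m3')
c1 = k1sq + m1**2                          # c1 = k1^2 + m1^2
c3 = k1sq - 2*k1p - m**2 + m3**2           # (k1-p)^2 + m3^2, p^2 = -m^2
expr = (c1 - c3 + m3**2 - m1**2 - m**2) / 2
print(sp.simplify(expr - k1p))             # -> 0
\end{verbatim}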
To work with fractions of N-dimensional numbers, two functions, SS and
NN (originating from the package ``LEO'' \cite{avdeev}), are used:
$$
NN(a,j) = \left( N+a \right)^j,~~~~~j > 0,
$$
\noindent
and
$$
SS(a,j) = \frac{1}{\left( N+a \right)^j}, ~~~~~j > 0.
$$
\noindent
After application of each recurrence relation the procedure ``ration''
for the simplification of products of SS's and NN's must be called.
The procedure ``finitem'' substitutes the values of master
integrals and performs the expansion of the functions NN and SS
in $\varepsilon$.
The integration procedure starts with F-type integrals. We apply
the recurrence relations given explicitly in appendix A. After applying
them several times, the integrand is reduced to the master integrals or
to new, simpler integrals, e.g.\ of V-type. Then we apply several times
the recurrence relations of Appendix B for the V-types. One needs to call all
procedures step by step to reduce the initial diagram to the set of master
integrals. The recommended sequence for calling the procedures is the
following:
\begin{verbatim}
#call f11111{'TIMES'}
#call f01111{'TIMES'}
#call f11110{'TIMES'}
#call f00111{'TIMES'}
#call f10101{'TIMES'}
#call f10110{'TIMES'}
#call f01100{'TIMES'}
#call f00101{'TIMES'}
#call f10100{'TIMES'}
#call f00100{'TIMES'}
#call f00001{'TIMES'}
#call f00000{'TIMES'}
#call v1111{'TIMES'}
#call v0111{'TIMES'}
#call v1011{'TIMES'}
#call v1010{'TIMES'}
#call v0110{'TIMES'}
#call v1001{'TIMES'}
#call v0010{'TIMES'}
#call v0001{'TIMES'}
#call v0000{'TIMES'}
#call j011{'TIMES'}
#call on3{'TIMES'}
#call on2{'TIMES'}
#call vl111{'TIMES'}
#call vl011{'TIMES'}
#call vl001{'TIMES'}
#call on1{2*'TIMES'}
#call ons11{2*'TIMES'}
#call vl00{'TIMES'}
#call vl1{2*'TIMES'}
\end{verbatim}
All programs of the package are realized with the help of FORM
procedure facilities. To perform the integration, one needs to call
the procedures with the name of the corresponding prototypes and one
argument which determines how often the recurrence relations are to be
called. This number depends on the complexity of the calculated
diagram. In most cases it is equal to the sum of indices of the integrand.
\begin{figure}[ht]
\centerline{\vbox{\epsfysize=50mm \epsfbox{example1.eps}}}
\caption{\label{example} }
\end{figure}
Let us consider as an example, the physical diagram shown in Fig.\ref{example} in more
detail. The FORM input is created automatically by {\it DIANA} \cite{DIANA}.
For two-loop self-energy diagrams {\it DIANA} generates all necessary information,
e.g. identifying symbols for the particles of the diagram and their masses,
distribution of integration momenta, Feynman rules (linear/nonlinear gauges),
number of fermion loops, symmetry factors, etc. To calculate the transverse part
it is sufficient to make the following substitutions:
\begin{verbatim}
multiply, (d_(mu,nu)-p(mu)*p(nu)/p.p)*SS(-1,1);
.sort
id k1.p = (k1.k1-c3)/2;
id k2.p = (k2.k2-c4+mmZ-mmW)/2;
id k1.k2 = -(c5 - k1.k1 - k2.k2 - mmW)/2;
id k1.k1 = c1 - mmZ;
id k2.k2 = c2 - mmW;
id p.p^j? = (-mmW)^j;
.sort
id mmZ^j? = mmW^j;
id 1/c1^j1?/c2^j2?/c3^j3?/c4^j4?/c5^j5? = F11111(j1,j2,j3,j4,j5,mmW),
\end{verbatim}
\noindent
where we have explicitly set $m_W^2 = m_Z^2$.
Calling the above routines (not all of them are needed in this special
case) yields the result
\begin{eqnarray}
&&
{\it Example1} =
m_W^2 {\bf F11111}(1,1,1,1,1,m_W^2)
\Biggl( 33 - \frac{11}{N-1} \Biggr)
\nonumber \\
&&
+ {\bf V1111}(1,1,1,1,m_W^2) \Biggl(
-42 + \frac{2}{N-1} + \frac{27}{4(N-1)^2}
\Biggr)
\nonumber \\
&&
+ \frac{{\bf J111}(1,1,1,m_W^2)}{m_W^2} \Biggl(
\frac{4}{3} - \frac{26}{3(N-1)} + \frac{17}{4(N-1)^2}
\Biggr)
\nonumber \\
&&
+ \frac{{\bf VL111}(1,1,1,m_W^2)}{m_W^2} \Biggl(
20 - \frac{5}{2(N-1)} - \frac{9}{4(N-1)^2}
\Biggr)
\nonumber \\
&&
+ \left[ {\bf ONS11}(1,1,m_W^2) \right]^2
\Biggl( \frac{5}{(N-1)} -\frac{21}{2} \Biggr)
\nonumber \\
&&
+ {\bf ONS11}(1,1,m_W^2)
\Biggl(
-\frac{17}{3(N-4)} - \frac{1}{(N-2)} + \frac{20}{3(N-1)} - \frac{6}{(N-1)^2}
\Biggr)
\nonumber \\
&&
+
\Biggl(
\frac{64}{3(N-4)} - \frac{24}{(N-4)^2} - \frac{32}{(N-2)}
- \frac{8}{(N-2)^2} + \frac{32}{3(N-1)}
\Biggr),
\nonumber
\end{eqnarray}
\noindent
where the overall factor is $\frac{g^4 m_W^2}{(16 \pi^2)^2}$.
After calling ``finitem'' we have:
\begin{eqnarray}
{\it Example1} & = & -\frac{1417}{24 \varepsilon^2}
- \frac{1}{\varepsilon} \Biggl(
\frac{10375}{48} - \frac{665}{12} \frac{\pi}{\sqrt{3}} \Biggr)
- \frac{21187}{32} + \frac{21}{8} \zeta(2) - \frac{88}{3} \zeta(3)
\nonumber \\
&&
+ \frac{4007}{18} \frac{\pi}{\sqrt{3}}
- \frac{665}{12} \frac{\pi}{\sqrt{3}} \ln 3
+ \frac{16449}{16} S_2 + 132 \frac{\pi}{\sqrt3} S_2
\nonumber \\
&&
\approx
- \frac{59.0}{\varepsilon^2} - \frac{115.6}{\varepsilon} -69.6.
\nonumber
\end{eqnarray}
So far we have considered only the case of a single non-zero mass.
A further application of our package is the expansion of diagrams
in terms of mass differences.
In general, a `standard' expansion of the scalar propagators in terms of
the mass difference, i.e.\ in the above case in terms of $m_W^2 - m_Z^2$,
yields as expansion coefficients again integrals which can be handled
by our package. In order to demonstrate this possibility, we extend the
calculation of the diagram in Fig.\ref{example} up to the second order
in $\Delta \equiv 1-m_W^2/m_Z^2 = \sin^2 \theta_W$ with the result
\begin{eqnarray}
&&
{\it Example1} =
\frac{1}{\varepsilon^2} \Biggl(
- \frac{1417}{24} + \frac{667}{8} \Delta - \frac{95}{4} \Delta^2
\Biggr)
\nonumber \\ &&
- \frac{1}{\varepsilon} \Biggl(
\frac{10375}{48} - \frac{24197}{72} \Delta + \frac{17957}{144} \Delta^2
\Biggr)
+ \frac{1}{\varepsilon} \frac{\pi}{\sqrt{3}} \Biggl(
\frac{665}{12} - \frac{137}{3} \Delta - \frac{635}{72} \Delta^2
\Biggr)
\nonumber \\
&&
+ \frac{\pi}{\sqrt{3}} \Biggl(
\frac{4007}{18} - \frac{21881}{72} \Delta + \frac{45067}{432} \Delta^2 \Biggr)
- \frac{\pi}{\sqrt{3}} \ln 3 \Biggl(
\frac{665}{12} - \frac{137}{3} \Delta - \frac{635}{72} \Delta^2
\Biggr)
\nonumber \\
&&
+ S_2 \Biggl(
\frac{16449}{16} - \frac{7461}{8} \Delta + \frac{2195}{32} \Delta^2
\Biggr)
+ \frac{\pi}{\sqrt{3}} S_2 \Biggl(
132 - \frac{351}{4} \Delta + \frac{699}{4} \Delta^2
\Biggr)
\nonumber \\
&&
- \Biggl(
\frac{21187}{32} - \frac{216049}{216} \Delta + \frac{403637}{864} \Delta^2
\Biggr)
+ \zeta(2) \Biggl(
\frac{21}{8} - \frac{1477}{36} \Delta + \frac{23179}{432} \Delta^2
\Biggr)
\nonumber \\
&&
- \zeta(3) \Biggl(
\frac{88}{3} - \frac{39}{2} \Delta + \frac{233}{6} \Delta^2
\Biggr)
\approx
\frac{1}{\varepsilon^2} \left(
-59.0 + 83.4 \Delta - 23.8 \Delta^2
\right)
\nonumber \\ &&
+
\frac{1}{\varepsilon}
\left(
-115.6 + 253.2 \Delta - 140.7 \Delta^2
\right)
-69.6 + 211.6 \Delta - 118.4 \Delta^2.
\nonumber
\end{eqnarray}
With the numerical value $\Delta \simeq 0.23$, the convergence in
$\Delta$ of the above series is quite good in all three contributions,
and it can be expected to improve further due to `gauge cancellations'
once a complete gauge-invariant subset of diagrams is taken into account.
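As a rough numerical illustration (evaluating the coefficients quoted
above), the finite part becomes
$-69.6 + 211.6 \cdot 0.23 - 118.4 \cdot 0.23^2 \approx -69.6 + 48.7 - 6.3$,
so successive orders in $\Delta$ are suppressed by roughly a factor of seven.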
\section{Conclusion}
The presented package has been developed for the calculation of two-loop
self-energy diagrams with only one non-zero mass. All mass combinations
of the Standard Model with heavy masses are included.
In this sense it is an extension of the existing
package {\bf SHELL2}, which takes into account only diagrams occurring
in QED and QCD. We have also shown that this package can be
used for the case of different masses in the SM by expanding in terms of
mass differences. Thus we have provided a program for the evaluation
of at least a large class of on-shell two-loop self-energy diagrams in the SM.
The calculation time for one diagram is in general not very large,
although it depends of course on the order of the expansion in the mass
differences.
\vspace{1cm}
{\bf Acknowledgments}
We are grateful to A.~Davydychev, A.~V.~Kotikov, O.~V.~Tarasov,
M.~Tentyukov and O.~Veretin for useful comments. M.K.'s
research has been supported by the DFG project FL241/4-1 and in
part by RFBR $\#$98-02-16923.
\pagebreak
\begin{center}
{\Large\bf Appendix}
\end{center} |
hep-ph/9907562 | \section{Neutrinogenesis}
Let us consider in more detail how the left- and right-handed contributions
to $B$ and $L$ change in the early Universe, as illustrated in figure
\ref{bild} for a Universe with $( B \! - \! L )=0$. Initially, some process
(e.~g.\ at the GUT-scale) produces a total baryon number $B$ and lepton
number $L$ which are distributed between the left-handed (L)
and right-handed (R) sectors. Both left-handed and right-handed
particles are exposed to LR-equilibration processes (marked in
figure \ref{bild} by \textcircled{e}) which interconvert
left- and right-handed particles and conserve $B$ and $L$. Sphaleron
processes (marked in figure \ref{bild} by \textcircled{s}) affect only
left-handed particles and violate $B$ and $L$ by moving left-handed
$B_\mathrm{L}$ and $L_\mathrm{L}$ along a line of constant $( B \! - \! L )_{\textrm{L}}$.
The interplay of the LR-equilibration and the sphaleron washout
is essentially a comparison of their time scales. For all
Standard Model particles, the equilibration processes are in
equilibrium during the epoch in which the
sphalerons are active. A baryon asymmetry
is therefore completely washed out in a theory with $( B \! - \! L )=0$,
as illustrated in the insert of figure \ref{bild}. The situation is
different with a very weakly coupled right-handed neutrino. Since
LR-conversion \textcircled{e} is not in equilibrium for these particles,
not all $L_\mathrm{R}$ can be depleted; LR-equilibration occurs only after the
sphalerons are ineffective. Thus only the left-handed components
are washed out while the right-handed components are preserved,
as shown in the main diagram in figure \ref{bild}.\\
\begin{figure}[htb]
\begin{center}
\includegraphics*[angle = 0, width = 7cm]{Figure1.eps}
\caption{\small \label{bild}
Comparison of sphaleronic \textcircled{s} and LR-equilibration
\textcircled{e} processes affecting $B$ and $L$. For Standard
Model particles (insert) LR-equilibration occurs completely
before or during the sphaleron washout. Thus no baryon
asymmetry can be generated if in total $( B \! - \! L )=0$. For sufficiently
small Yukawa couplings the LR-equilibration time scale becomes
longer than the washout period; $\Delta B$ is then generated by
the sphalerons, even in a theory with $B=L=0$ initially.}
\end{center}
\end{figure}
In a more detailed, quantitative picture, we use the
chemical equilibrium of sphaleron and Higgs-mediated reactions
\begin{subequations}
\begin{eqnarray}
S &\leftrightarrow& 3 q + \ell \\
\phi &\leftrightarrow& q + \bar{u} \\
\bar{\phi} &\leftrightarrow& q + \bar{d} \\
\bar{\phi} &\leftrightarrow& \ell + \bar{e}
\end{eqnarray}
\end{subequations}
and the condition of charge or hypercharge neutrality of the plasma
to relate the chemical potentials of all the SM and $\nu_{R}$ fields
\cite{Harvey:1990qw}.
One then obtains for $( B \! - \! L )=0$
\begin{equation}
\label{BtoL}
n_{B} = n_{L} = - \frac{28}{79} n_{\nu_\mathrm{R}}
\end{equation}
in the case of three generations and one Higgs doublet. This
confirms the qualitative considerations.\\
\section{How (not) to equilibrate $\nu_\mathrm{R}$}
For the success of this scenario, it is necessary that
right-handed neutrinos not be equilibrated quickly above the
electroweak phase transition. Schematic Feynman diagrams for the processes
contributing to their equilibration are shown in figure \ref{lr}.
These processes include Higgs decay and inverse decay, s- and t-channel
scattering off SM fermions, and s- and t-channel scattering off Higgs
bosons in combination with the emission or absorption of an electroweak
gauge boson.
The rate of these processes at a temperature $T$ above the electroweak phase
transition is easily estimated on dimensional grounds to be
\begin{equation}
\Gamma \sim \lambda^{2} g^{2} T\;,
\end{equation}
where $\lambda$ is the neutrino Yukawa coupling appearing in the
$\lambda H \ell \bar{\nu}$ term of the Lagrangian and $g$ is a
gauge or top Yukawa coupling of $\mathcal{O}(1)$. This rate should be
compared to the expansion rate of the Universe
\begin{equation}
H \sim \frac{T^{2}}{M_{\mathrm{Pl}}}\;.
\end{equation}
If $\Gamma > H$ at a temperature $T$ above the electroweak phase
transition $T_{c}$, left- and right-handed species are equilibrated.
By demanding that this not occur, we obtain the condition
\begin{equation}
\lambda \lesssim \sqrt{\frac{T_{c}}{M_{\mathrm{Pl}}}}
\sim 10^{-8}\;,
\qquad
m \sim \lambda T_{c} \lesssim 1 \, \mathrm{keV}\;.
\end{equation}
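For orientation, inserting round numbers (our own estimate, not a
detailed computation): with $T_{c} \sim 10^{2} \, \mathrm{GeV}$ and
$M_{\mathrm{Pl}} \sim 10^{19} \, \mathrm{GeV}$ one finds
$\sqrt{T_{c}/M_{\mathrm{Pl}}} \sim 3 \cdot 10^{-9}$, and a coupling
$\lambda \sim 10^{-8}$ indeed corresponds to
$m \sim \lambda T_{c} \sim 10^{-6} \, \mathrm{GeV} = 1 \, \mathrm{keV}$.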
Detailed numerical computation of the corresponding collision terms
in the Boltzmann equations refines this bound to $\sim 10 \, \textrm{keV}$.
Although this condition is not fulfilled by electrons, it is easily
fulfilled by Dirac neutrinos with masses in the range necessary to
explain Super-Kamiokande, solar neutrino, and LSND data.
\begin{figure}
\begin{center}
\begin{fmfgraph*}(1,1)
\fmfleft{n1,n2} \fmfright{h}
\fmf{plain,label=$\nu_\mathrm{R}$,label.side=right}{n1,x}
\fmf{plain,label=$\ell_\mathrm{L}$,label.side=right}{x,n2}
\fmf{dashes}{x,h}
\fmfdot{x}
\end{fmfgraph*}
\\[\baselineskip]
\begin{fmfgraph*}(1,1)
\fmfleft{n1,n2} \fmfright{l,r}
\fmf{plain,label=$\nu_\mathrm{R}$}{n1,x}
\fmf{plain,label=$\ell_\mathrm{L}$}{x,n2}
\fmf{dashes}{x,y}
\fmf{plain}{l,y,r}
\fmfdot{x,y}
\end{fmfgraph*}
\qquad
\begin{fmfgraph*}(1,1)
\fmfleft{n1,n2} \fmfright{h,w}
\fmf{plain,label=$\nu_\mathrm{R}$}{n1,x}
\fmf{plain,label=$\ell_\mathrm{L}$}{x,n2}
\fmf{dashes}{x,y,h}
\fmf{photon}{y,w}
\fmfdot{x,y}
\end{fmfgraph*}
\end{center}
\caption{\small \label{lr}Processes contributing to the equilibration of $\nu_{R}$.
Dashed lines are Higgs bosons, wavy lines are gauge bosons, and
unlabeled solid lines are SM fermions.}
\end{figure}
\section{Producing $\nu_{R}$: A Toy Model}
The neutrinogenesis mechanism allows the revival of GUT-scale baryogenesis
by having heavy particles decay into right-handed neutrinos
instead of directly into particles carrying $B$.
We present here a simple toy model for $B$-generation via this scenario,
which is loosely based on an old GUT-scale scenario \cite{Fry:1980bc}. In
this model, two very heavy SU(2)-doublet scalars (which carry the
same quantum numbers as the SM Higgs, but will not get vev's)
couple to the SM fields via the Lagrangian
\begin{eqnarray}
\mathcal{L} &=&
F(\ell_\mathrm{L}\cdot \Phi)\nu_\mathrm{R}^{c}+
F'(\ell_\mathrm{L}\cdot \Phi^{c})e_\mathrm{R}^{c}\nonumber\\
&& +\, G(\ell_\mathrm{L}\cdot \Psi)\nu_\mathrm{R}^{c}+
G'(\ell_\mathrm{L}\cdot \Psi^{c})e_\mathrm{R}^{c}+\mathrm{h.c.}
\end{eqnarray}
They then decay according to
\begin{equation}
\left. \begin{array}{l} \Phi \\ \Psi \end{array} \right\}
\rightarrow
\left\{ \begin{array}{l}
\bar\ell_\mathrm{L} + \nu_\mathrm{R} \\
\ell_\mathrm{L} + \bar e_\mathrm{R}
\end{array}\right.
\end{equation}
Since the elements of $F$, $F'$, $G$ and $G'$ can have relative
phases, interference between the tree-level decay amplitude and
the one-loop corrections shown in figure \ref{gut} gives rise to a
small CP-violating effect \cite{Nanopoulos:1979gx,Liu:1993tg}:
\begin{eqnarray}\label{DeltaGamma}
\lefteqn{\epsilon_{\Phi} = \frac{
\Gamma(\Phi \rightarrow \bar{\ell} \nu) -
\Gamma(\bar{\Phi} \rightarrow \ell \bar{\nu})}{
\Gamma(\Phi \rightarrow \bar{\ell} \nu) +
\Gamma(\bar{\Phi} \rightarrow \ell \bar{\nu})}} \\
&=& \frac{\im\tr (F^* G F' G'{}^*)}{16\pi \, \tr (F^* F)} \times\nonumber
\\ && \quad
\left[
1 - \frac{M_{\Psi}^2}{M_{\Phi}^2} \ln \left( 1 +
\frac{M_{\Phi}^2}{M_{\Psi}^2} \right) -
\frac{M_{\Phi}^2}{M_{\Phi}^2-M_{\Psi}^2} \right]\;.
\end{eqnarray}
When a bath containing an equal number of $\Phi$ and $\bar{\Phi}$
particles decays, a net
right-handed neutrino number $n_{\nu} = \epsilon_{\Phi} n_{\Phi}$ is
produced, which is not equilibrated as long as the mass condition
derived in the previous section is fulfilled. An analogous result holds
for the decays of $\Psi$ particles.
\begin{figure}
\begin{center}
\begin{fmfgraph*}(1.2,1)
\fmfleft{phi} \fmfright{n,l}
\fmf{scalar,label=$\Phi$}{phi,x}
\fmf{fermion,label=$\ell_\mathrm{L}$}{l,y1}
\fmf{fermion,label=$e_\mathrm{R}$}{y1,x}
\fmf{fermion,label=$\ell_\mathrm{L}$}{x,y2}
\fmf{fermion,label=$\nu_\mathrm{R}$}{y2,n}
\fmffreeze
\fmf{scalar,label=$\Psi$,label.side=left}{y1,y2}
\fmfdot{x,y1,y2}
\end{fmfgraph*}
\qquad
\begin{fmfgraph*}(1.5,1)
\fmfleft{phi} \fmfright{n,l}
\fmf{scalar,tension=1.2,label=$\Phi$}{phi,y1}
\fmf{fermion,left,tension=0.7,label=$\ell_{L}$}{y1,y2}
\fmf{fermion,left,tension=0.7,label=$e_\mathrm{R}$}{y2,y1}
\fmf{scalar,tension=1.2,label=$\Psi$}{y2,x}
\fmf{fermion,label=$\ell_\mathrm{L}$}{l,x}
\fmf{fermion,label=$\nu_\mathrm{R}$}{x,n}
\fmfdot{x,y1,y2}
\end{fmfgraph*}
\end{center}
\caption{\small\label{gut}Production of $\nu_{R}$ via decay of GUT scalars.
The interference of these diagrams with the tree-level decay amplitude
produces the CP-violation necessary to produce a net neutrino number.}
\end{figure}
The formalism required to calculate the lepton number stored in
right-handed neutrinos from such an out-of-equilibrium decay has been
developed by the authors of \cite{Kolb:1980qa,Luty:1992un,kt}, who derive an approximate
expression for the neutrino number to entropy ratio
\begin{equation}
\label{Ynu}
Y_{\nu} = \frac{n_{\nu}}{s} \sim
\frac{\epsilon_{\Phi} + \epsilon_{\Psi}}{g_{*}}
\end{equation}
produced in out-of-equilibrium decays.
Here $g_{*} \sim \mathcal{O}(100)$
is the total number of relativistic degrees of freedom in the early
Universe.
To make sure that inverse decays do not erase the asymmetry produced in the
decays, we require
\begin{equation}
K_{\Phi}=\frac{\Gamma(\Phi)}{2H(M_{\Phi})} \sim
\frac{\lambda^{2}}{g^{1/2}_{*}}
\frac{M_\mathrm{Pl}}{M_{\Phi}}
\lesssim 1
\end{equation}
and similarly for the analogously defined $K_{\Psi}$.
In a simplified analysis, one assumes that $M_{\Phi} \sim M_{\Psi} \sim
\mathcal{O}(M)$ and that the largest Yukawa couplings of these scalars are
all $\mathcal{O}(\lambda)$. In this case
\begin{equation}
\label{simple}
\epsilon \sim \frac{\lambda^2}{16\pi}\;,
\qquad\qquad
\frac{\lambda^2}{g_{*}^{1/2}} \frac{M_\mathrm{Pl}}{M} \lesssim 1\;.
\end{equation}
Combining this estimate (\ref{simple}) with equations (\ref{Ynu}) and
(\ref{BtoL}) and the observed baryon asymmetry
$Y_B = (6\text{--}8) \cdot 10^{-11}$ \cite{Burles:1999zt}
implies $\lambda \sim 10^{-3}$ and $M \gtrsim 10^{12} \, \mathrm{GeV}$.
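(As a rough check of these numbers, under the stated assumptions:
equations (\ref{Ynu}) and (\ref{BtoL}) give
$|Y_{B}| \sim \frac{28}{79} \frac{\epsilon}{g_{*}}
\sim \frac{28}{79} \frac{\lambda^{2}}{16 \pi g_{*}}$,
so $|Y_{B}| \sim 10^{-10}$ with $g_{*} \sim 100$ requires
$\lambda^{2} \sim 10^{-6}$; the condition $K \lesssim 1$ then gives
$M \gtrsim \lambda^{2} M_{\mathrm{Pl}} / g_{*}^{1/2}
\sim 10^{-6} \cdot 10^{19} \, \mathrm{GeV} / 10
\sim 10^{12} \, \mathrm{GeV}$.)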
This simple estimate shows that, e.~g., a weakly coupled GUT could produce the
observed baryon number of the Universe via the neutrinogenesis mechanism.
The results of a more detailed quantitative analysis are shown in figure
\ref{MassPlot}, which shows the mass parameters which produce a baryon
asymmetry consistent with observations for various choices
of $K = \max \{K_{\Phi} , K_{\Psi}\}$.
Note that the neutrinogenesis scenario does not require (although it does
allow) decays that violate $B$ or $L$ at the GUT scale; only an asymmetry
between right-handed neutrinos and anti-neutrinos is necessary, since it
alone determines the final baryon asymmetry via equation (\ref{BtoL}).
\begin{figure}
\begin{center}
\includegraphics*[angle = 0, width = 7cm]{Figure4.eps}
\end{center}
\caption{\small\label{MassPlot}Allowed masses of the scalars
of our toy model from the requirements
for the $K$ values and $n_B/s=8\cdot 10^{-11}$, where we assume
$M_\Phi <M_\Psi$. The $M_\Phi=M_\Psi$ line has to be excluded since
eq.~(\ref{DeltaGamma}) cannot be applied there.}
\end{figure}
\section{Discussion and Conclusions}
We presented in this letter a ``neutrinogenesis'' mechanism by which
$( B \! + \! L )$-violating sphaleron processes produce a baryon asymmetry instead of
destroying it, provided that neutrinos have sufficiently small Dirac masses.
The key observation is that sphalerons couple only to left-handed particles
while right-handed particles ($SU(2)_\mathrm{L}$ singlets) participate
in the washout only indirectly via their Dirac Yukawa coupling to
their respective left-handed partners. The masses of ordinary
quarks and leptons imply Yukawa couplings for which left--right
equilibration occurs quickly compared to the duration of the sphaleronic
epoch in the early Universe. However, for Dirac masses below roughly 10~keV,
equilibration takes longer than the washout period.
Neutrinos with Dirac masses in the experimentally allowed range
therefore store part of the total lepton number in right-handed
neutrinos for long enough. In this case the sphaleronic washout
affects only the left-handed neutrinos, and $\Delta B$ can be generated
from the neutrino sector in a theory where initially $( B \! - \! L )=0$ or
even $B=L=0$.
In the neutrinogenesis mechanism, baryogenesis is the result of
an amusing conspiracy of GUT-scale and electroweak-scale effects.
The three Sakharov conditions \cite{Sakharov:1967dj} are realized
in the following way: CP violation occurs at some large (e.~g.\ GUT) scale
to produce a neutrino asymmetry.
Baryon and lepton number are violated only by the electroweak sphalerons
and both the heavy (e.~g.\ GUT) parents and the light neutrinos are out of
equilibrium. We presented a toy model which illustrates some details
and which shows that the right amount of baryon asymmetry can be
produced for reasonable GUT mass scales. One can easily check that
constraints coming from Big Bang Nucleosynthesis and from the
matter density in the Universe are not violated.
Neutrinogenesis should in principle also work in the presence of
Majorana mass terms for sufficiently small Dirac mass entries.
In this case one would expect GUT-scale Majorana masses for
$\nu_\mathrm{R}$, and lepton number would be broken.
It is also conceivable that neutrinogenesis can be combined in
this way with the known leptogenesis mechanism.
In this case the see-saw mass relation might be used again to
explain the smallness of neutrino masses.
The most beautiful version of neutrinogenesis is, however,
the case of pure Dirac masses with initially $B=L=0$.
This could e.~g.\ be realized in suitable GUTs.
A more detailed study of this mechanism and its phenomenology
will be presented in a longer paper.
\section*{Acknowledgments}
We thank E. Akhmedov, P. Arnold and M. Yoshimura for helpful discussions.
This work was supported by the ``Sonderforschungsbereich~375
f\"ur Astro-Teilchenphysik'' der Deutschen Forschungsgemeinschaft.
One of us (D.W.) gratefully acknowledges the hospitality of the
Aspen Center for Physics, at which a part of this work was completed.
\end{fmffile}
\bibliographystyle{revtex} |
1903.11855 | \section{Introduction}
The Cuntz-Pimsner $C^*$-algebras were first introduced by Pimsner in \cite{pimsner12class} and further studied by Katsura in \cite{katsura2004c}. The Cuntz-Pimsner algebra is constructed from a $C^*$-correspondence and comes equipped with a natural gauge action. In a recent article, Chirvasitu \cite{2018arXiv180512318C} obtained necessary and sufficient conditions for the gauge action to be free.
The \emph{(algebraic) Cuntz-Pimsner rings} were introduced by Carlsen and Ortega in \cite{carlsen2011algebraic} as algebraic analogues of the Cuntz-Pimsner algebras, and simplicity of Cuntz-Pimsner rings was studied in \cite{carlsen2012simple}. These rings are interesting to us since they generalize several well-known families of rings. Indeed, Carlsen and Ortega originally gave two important examples of rings realizable as Cuntz-Pimsner rings: \emph{Leavitt path algebras} (see \cite[Expl. 5.8]{carlsen2011algebraic} and Section \ref{sec:lpa}) and \emph{corner skew Laurent polynomial rings} (see \cite[Expl. 5.7]{carlsen2011algebraic} and Section \ref{sec:corner}). Recently, Clark, Fletcher, Hazrat and Li \cite{2018arXiv180810114O} showed that unperforated $\mathbb{Z}$-graded Steinberg algebras are also realizable as Cuntz-Pimsner rings. The Cuntz-Pimsner rings do not come with a gauge action but instead carry a natural $\mathbb{Z}$-grading. This grading is the main object of study in this article.
In the case of Leavitt path algebras, the natural $\mathbb{Z}$-grading was systematically investigated by Hazrat \cite{hazrat2013graded}. In particular, he obtained necessary and sufficient conditions for the Leavitt path algebra of a finite graph to be strongly $\mathbb{Z}$-graded (see \cite[Thm. 3.15]{hazrat2013graded}). The class of \emph{epsilon-strongly graded rings} was first introduced by Nystedt, Öinert and Pinedo in \cite{nystedt2016epsilon} as a generalization of unital strongly graded rings. This subclass of graded rings has been investigated further by the author in \cite{lannstrom2018chain, lannstrom2018induced}. Interestingly, the Leavitt path algebra of a finite graph was proved to be epsilon-strongly $\mathbb{Z}$-graded by Nystedt and Öinert (see \cite[Thm. 1.2]{nystedt2017epsilon}). Seeking to extend their result, they introduced the notion of a \emph{nearly epsilon-strongly graded ring} (see Definition \ref{def:nystedt_epsilon}) and proved that every Leavitt path algebra (even for infinite graphs) is nearly epsilon-strongly $\mathbb{Z}$-graded (see \cite[Thm. 1.3]{nystedt2017epsilon}). In other words, there are sufficient conditions in the literature for the natural $\mathbb{Z}$-grading of a Leavitt path algebra to be strong, epsilon-strong and nearly epsilon-strong respectively. These types of gradings have certain structural properties that help us understand the Leavitt path algebras. The present work began as an effort to generalize the previously mentioned results about Leavitt path algebras to a larger class of Cuntz-Pimsner rings. It turns out that we can obtain partial characterizations of nearly epsilon-strongly and epsilon-strongly graded Cuntz-Pimsner rings (see Theorem \ref{thm:1} and Theorem \ref{thm:epsilon}). For unital strongly graded Cuntz-Pimsner rings we obtain a complete characterization (see Theorem \ref{thm:2}). For that purpose, we obtain sufficient conditions for a Cuntz-Pimsner ring to be strongly graded (see Corollary \ref{cor:cuntz_strongly}). In particular, we recover Hazrat's results on Leavitt path algebras (see Corollary \ref{cor:lpa_strong}) and corner skew Laurent polynomial ring (see Corollary \ref{cor:fractional_strong}) as special cases.
\smallskip
Carlsen and Ortega \cite{carlsen2011algebraic} constructed the Cuntz-Pimsner rings using a categorical approach. Let $R$ be an associative but not necessarily unital ring. Recall (see \cite[Def. 1.1]{carlsen2011algebraic}) that an \emph{$R$-system} is a triple $(P, Q, \psi)$ where $P$ and $Q$ are $R$-bimodules and $\psi \colon P \otimes_R Q \to R$ is an $R$-bimodule homomorphism, where $P \otimes_R Q$ denotes the balanced tensor product. A technical assumption called Condition (FS) (see Definition \ref{def:cond_fs}) is generally imposed on the $R$-system $(P,Q,\psi)$. We will introduce two special types of $R$-systems called \emph{s-unital} and \emph{unital $R$-systems} (see Definition \ref{def:s-unital}). Given an $R$-system, Carlsen and Ortega considered representations of that system. This is the key definition in their construction:
\begin{definition}(\cite[Def. 1.2, Def. 3.3]{carlsen2011algebraic})
Let $R$ be a ring and let $(P,Q,\psi)$ be an $R$-system. A \emph{covariant representation} is a tuple $(S, T, \sigma, B)$ such that the following assertions hold:
\begin{enumerate}[(a)]
\begin{item}
$B$ is a ring;
\end{item}
\begin{item}
$S \colon P \to B$ and $T \colon Q \to B$ are additive maps;
\end{item}
\begin{item}
$\sigma \colon R \to B$ is a ring homomorphism;
\end{item}
\begin{item}
$S(pr)=S(p)\sigma(r), S(rp)=\sigma(r)S(p), T(qr)=T(q)\sigma(r), T(rq)=\sigma(r)T(q)$ for all $r \in R$, $q \in Q$ and $p \in P$;
\end{item}
\begin{item}
$\sigma(\psi(p \otimes q)) = S(p)T(q)$ for all $p \in P$ and $q \in Q$.
\end{item}
\end{enumerate}
The covariant representation $(S,T,\sigma, B)$ is \emph{injective} if the map $\sigma$ is injective. The covariant representation $(S,T,\sigma, B)$ is \emph{surjective} if $B$ is generated as a ring by $\sigma(R) \cup S(P) \cup T(Q)$.
A surjective covariant representation $(S,T,\sigma, B)$ is called \emph{graded} if there is a $\mathbb{Z}$-grading $\{ B_i \}_{i \in \mathbb{Z}}$ of $B$ such that $\sigma(R) \subseteq B_0$, $T(Q) \subseteq B_1$ and $S(P) \subseteq B_{-1}$.
\label{def:covariant_representation}
\end{definition}
\begin{remark}
Let $(S,T,\sigma,B)$ be a covariant representation and assume that $B$ is $\mathbb{Z}$-graded. Note that $(S,T,\sigma,B)$ is a graded covariant representation if and only if the grading of $B$ is compatible with the representation structure.
\end{remark}
Carlsen and Ortega \cite{carlsen2011algebraic} then considered the category of surjective covariant representations of $(P,Q,\psi)$ denoted by $\mathcal{C}_{(P,Q,\psi)}$. The maps between $(S, T, \sigma, B)$ and $(S', T', \sigma', B')$ are ring homomorphisms $\phi \colon B \to B'$ such that $\phi \circ S = S'$, $\phi \circ T = T'$ and $\phi \circ \sigma = \sigma'$. We write $(S,T,\sigma, B) \cong_{\text{r}} (S', T', \sigma', B')$ if the covariant representations are isomorphic as objects in $\mathcal{C}_{(P,Q,\psi)}$. In the case when $(P,Q,\psi)$ satisfies Condition (FS) (see Definition \ref{def:cond_fs}), they obtained a complete characterization of injective, graded, surjective covariant representations up to isomorphism in $\mathcal{C}_{(P,Q,\psi)}$ (see \cite[Sect. 7]{carlsen2011algebraic}).
The \emph{Cuntz-Pimsner rings} are defined as certain universal covariant representations (see Definition \ref{def:cp_ring}). Unlike in the $C^*$-setting, the Cuntz-Pimsner ring is not well-defined for all $R$-systems $(P,Q,\psi)$ (see \cite[Expl. 4.11]{carlsen2011algebraic}).
Let both $R$ and $(P,Q,\psi)$ vary. If a $\mathbb{Z}$-graded ring $B$ appears in a graded covariant representation $(S,T,\sigma, B)$ of some $R$-system $(P,Q,\psi)$, then we call $B$ a \emph{representation ring}. Following Clark, Fletcher, Hazrat and Li \cite{2018arXiv180810114O}, we then say that $B$ is \emph{realized by} the representation $(S,T,\sigma,B)$ of the $R$-system $(P,Q,\psi)$.
The key new technique of this article is to consider a special type of graded covariant representations:
\begin{definition}
Let $R$ be a ring, let $(P,Q,\psi)$ be an $R$-system and let $(S,T,\sigma,B)$ be a graded covariant representation of $(P,Q,\psi)$. For $k \geq 0$, let $I_{\psi,\sigma}^{(k)}$ be the $B_0$-ideal generated by the set $\{ \sigma(\psi_k (p \otimes q)) \mid p \in P^{\otimes k}, q \in Q^{\otimes k} \} \subseteq B_0$. We call $(S,T,\sigma, B)$ a \emph{semi-full} covariant representation if $B_{-k} B_k = I_{\psi,\sigma}^{(k)}$ for every $k \geq 0$.
\label{def:semi-full}
\end{definition}
\begin{remark}
A $C^*$-correspondence $(A,E,\phi)$ is called \emph{full} if the closed linear span of $\{ \langle x, y \rangle \mid x,y \in E \}$ equals $A$. One way to generalize this to the algebraic setting is to require that $\psi$ be surjective. Semi-fullness is a weaker condition. Indeed, if $R$ is unital and $\psi$ is surjective, then every graded covariant representation of $(P,Q,\psi)$ is semi-full.
\end{remark}
Below is an outline of the rest of this article:
\smallskip
In Section \ref{sec:prelim}, we recall the definitions of nearly epsilon-strongly graded rings and algebraic Cuntz-Pimsner rings.
In Section \ref{sec:necessary}, we prove that certain nearly epsilon-strongly $\mathbb{Z}$-graded Cuntz-Pimsner rings can be realized from semi-full covariant representations (see Corollary \ref{cor:reduction}). This is based on recent work by Clark, Fletcher, Hazrat and Li \cite{2018arXiv180810114O} and is the crucial reduction step in the characterization.
In Section \ref{sec:strongly}, we find sufficient conditions for an injective and graded covariant representation to be strongly $\mathbb{Z}$-graded (see Proposition \ref{prop:strong_suff}). Using our general theorems, we recover two results by Hazrat as special cases (see Corollary \ref{cor:lpa_strong} and Corollary \ref{cor:fractional_strong}).
In Section \ref{sec:epsilon}, we obtain sufficient conditions for an injective and semi-full covariant representation ring to be nearly epsilon-strongly $\mathbb{Z}$-graded and epsilon-strongly $\mathbb{Z}$-graded respectively (see Proposition \ref{prop:nearly_epsilon_suff} and Proposition \ref{prop:epsilon_suff}).
In Section \ref{sec:characterization}, we obtain partial characterizations of nearly epsilon-strongly and epsilon-strongly graded Cuntz-Pimsner rings (see Theorem \ref{thm:1} and Theorem \ref{thm:epsilon}). For unital strongly graded Cuntz-Pimsner rings we obtain a complete characterization (see Theorem \ref{thm:2}).
In Section \ref{sec:ex}, we collect some important examples. Notably, we give an example of a Leavitt path algebra realizable as a Cuntz-Pimsner ring in two different ways (see Example \ref{ex:1}). We also give an example of a trivial Cuntz-Pimsner ring that is not nearly epsilon-strongly $\mathbb{Z}$-graded (see Example \ref{ex:2}).
In Section \ref{sec:app}, we apply our results to characterize noetherian and artinian corner skew Laurent polynomial rings (see Corollary \ref{cor:artinian}).
\section{Preliminaries}
\label{sec:prelim}
All rings are assumed to be associative but not necessarily equipped with a multiplicative identity element. Let $R$ be a ring and let $A \subseteq R$ be a subset. The $R$-ideal generated by $A$ is denoted by $(A)$. Let $_R M$ be a left $R$-module and let $B \subseteq M$ be a subset. The \emph{$R$-linear span of $B$}, denoted by $\Span_R B$, is the $R$-submodule of $_R M$ generated by $B$. More precisely, $\Span_R B = \Big \{ \sum b_i + \sum r_j \cdot b_j \mid b_i, b_j \in B, r_j \in R \Big \},$ where the sums are finite.
\subsection{Nearly epsilon-strongly graded rings}
Recall that a ring $S$ is called \emph{$\mathbb{Z}$-graded} if there exists a family of additive subgroups $\{ S_i \}_{i \in \mathbb{Z}}$ of $S$ such that $S=\bigoplus_{i \in \mathbb{Z}}S_i$ and $S_m S_n \subseteq S_{m+n}$ for all $m, n \in \mathbb{Z}$. If the stronger condition $S_m S_n = S_{m+n}$ holds for all $m,n \in \mathbb{Z}$, then the $\mathbb{Z}$-grading $\{S_i \}_{i \in \mathbb{Z}}$ is called \emph{strong}. The subgroups $S_i$ are called the \emph{homogeneous components} of $S$. The \emph{support} of $S$ is defined to be the set $\text{Supp}(S) = \{ i \in \mathbb{Z} \mid S_i \ne \{ 0 \} \}.$ The component $S_0$ is called the \emph{principal component} of
$S$. It is straightforward to show that $S_0$ is a subring of $S$. Next, let $S=\bigoplus_{i \in \mathbb{Z}} S_i$ and $T=\bigoplus_{i \in \mathbb{Z}} T_i$ be two $\mathbb{Z}$-graded rings. A ring homomorphism $\phi \colon S \to T$ is called \emph{graded} if $\phi(S_i) \subseteq T_i$ for each $i \in \mathbb{Z}$. If $\phi \colon S \xrightarrow{\sim} T$ is a graded ring isomorphism, then we write $S \cong_{\text{gr}} T$ and say that $S$ and $T$ are \emph{graded isomorphic}.
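For instance (a standard example, included here for illustration), the Laurent polynomial ring $K[x, x^{-1}]$ over a unital ring $K$, graded by $\deg(x) = 1$, is strongly $\mathbb{Z}$-graded since $1 = x^{-1} x \in S_{-1} S_{1}$, while the polynomial ring $K[x]$ with the same grading satisfies $S_{1} S_{-1} = \{ 0 \} \neq S_0$ and hence is not strongly $\mathbb{Z}$-graded.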
Let $R$ be a ring. Recall that a left (right) $R$-module $_R M$ is called \emph{left (right) s-unital} if for every $x \in M$ there exists some $r_x \in R$ such that $r_x \cdot x = x$ ($x \cdot r_x = x$). A left (right) $R$-module $_R M$ is called \emph{left (right) unital} if there exists some $r \in R$ such that $r \cdot x = x$ ($x \cdot r = x$) for every $x \in M$. Let $R, S$ be rings. A bimodule $_R M _S$ is called \emph{s-unital} (\emph{unital}) if $_ R M$ is left s-unital (unital) and $M _S$ is right s-unital (unital). In particular, an ideal $I$ of $R$ is called \emph{s-unital} (\emph{unital}) if $_R I_R$ is s-unital (unital).
\begin{remark}
Let $R$ be a ring. It follows from \cite[Thm. 1]{tominaga1976s} that if $M$ is a left (right) s-unital $R$-module, then for any positive integer $n$ and elements $x_1, x_2, \dots, x_n \in M$ there exists some $r \in R$ such that $r \cdot x_i = x_i$ ($x_i \cdot r = x_i$) for all $i \in \{ 1, \dots, n \}$.
\label{rem:s-unital}
\end{remark}
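For example (again a standard example, included for illustration), the non-unital ring $\bigoplus_{n \in \mathbb{N}} K$ of finitely supported sequences over a unital ring $K$ is s-unital as a module over itself: given finitely many elements, the idempotent which equals $1$ in every coordinate where some of them is non-zero acts as an identity on all of them.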
If $S$ is a $\mathbb{Z}$-graded ring, then $S_i$ is an $S_0$-bimodule for every $i \in \mathbb{Z}$ (see \cite[Rmk. 1.1.2]{nastasescu2004methods}). Note that $S_i S_{-i}$ is an ideal of $S_0$ for every $i \in \mathbb{Z}$. Hence, in particular, $S_i$ is an $S_{i} S_{-i} \text{--} S_{-i} S_i$-bimodule for each $i \in \mathbb{Z}$. The following definitions were introduced by Nystedt and Öinert:
\begin{definition}(\cite[Def. 3.1, Def. 3.2, Def. 3.3]{nystedt2017epsilon})
Let $S=\bigoplus_{i \in \mathbb{Z}} S_i$ be a $\mathbb{Z}$-graded ring.
\begin{enumerate}[(a)]
\begin{item}
If $S_i$ is an s-unital $S_i S_{-i} \text{--} S_{-i} S_i$-bimodule for each $i \in \mathbb{Z}$, then $S$ is called \emph{nearly epsilon-strongly} $\mathbb{Z}$-graded.
\end{item}
\begin{item}
If $S_i$ is a unital $S_i S_{-i} \text{--} S_{-i} S_i$-bimodule for each $i \in \mathbb{Z}$, then $S$ is called \emph{epsilon-strongly} $\mathbb{Z}$-graded.
\end{item}
\begin{item}
(cf. \cite[Def. 4.5]{clark2018generalized}) If $S_i = S_i S_{-i} S_i$ for every $i \in \mathbb{Z}$, then $S$ is called \emph{symmetrically} $\mathbb{Z}$-graded.
\end{item}
\end{enumerate}
\label{def:nystedt_epsilon}
\end{definition}
\begin{remark}We make two remarks regarding Definition \ref{def:nystedt_epsilon}.
\begin{enumerate}[(a)]
\begin{item}
Nystedt and Öinert made these definitions for general group graded rings graded by an arbitrary group. However, in this article we will only consider the special case of $\mathbb{Z}$-graded rings.
\end{item}
\begin{item}
If $S$ is epsilon-strongly $\mathbb{Z}$-graded, then $S$ is a unital ring (see \cite[Prop. 3.8]{lannstrom2018induced}). In other words, only unital rings admit an epsilon-strong grading.
\end{item}
\end{enumerate}
\label{rem:unital_epsilon}
\end{remark}
We recall the following characterizations of nearly epsilon-strongly graded rings and epsilon-strongly graded rings.
\begin{proposition}
(\cite[Prop. 3.1, Prop. 3.3]{nystedt2017epsilon})
Let $S=\bigoplus_{i \in \mathbb{Z}} S_i$ be a $\mathbb{Z}$-graded ring. The following assertions hold:
\begin{enumerate}[(a)]
\begin{item}
$S$ is nearly epsilon-strongly $\mathbb{Z}$-graded if and only if $S$ is symmetrically $\mathbb{Z}$-graded and $S_i S_{-i}$ is an s-unital ideal for each $i \in \mathbb{Z}$;
\end{item}
\begin{item}
$S$ is epsilon-strongly $\mathbb{Z}$-graded if and only if $S$ is symmetrically $\mathbb{Z}$-graded and $S_i S_{-i}$ is a unital ideal for each $i \in \mathbb{Z}$.
\end{item}
\end{enumerate}
\label{prop:nearly_char}
\end{proposition}
Moreover, the following implications hold (see \cite[Rem. 3.4(a)]{lannstrom2018induced}):
\begin{equation}
\text{unital strongly graded} \Rightarrow \text{ epsilon strongly graded} \Rightarrow \text{nearly epsilon-strongly graded}.
\label{eq:implications}
\end{equation}
\subsection{The Toeplitz representation}
Let $(P,Q,\psi)$ be an $R$-system. Put $P^{\otimes 0} = Q^{\otimes 0}=R$ and $\psi_0(r_1 \otimes r_2) = r_1 r_2$. Let $\psi_1 = \psi$. For $n > 1$, recursively define $Q^{\otimes n} = Q^{\otimes {n-1}} \otimes Q$ and $P^{\otimes n} = P \otimes P^{\otimes {n-1}}$. Let $\psi_n \colon P^{\otimes n} \otimes Q^{\otimes n} \to R$ be defined by $$\psi_n((p_1 \otimes p_2) \otimes (q_2 \otimes q_1)) = \psi\big( (p_1 \cdot \psi_{n-1}(p_2 \otimes q_2)) \otimes q_1 \big),$$ for $p_1 \in P, p_2 \in P^{\otimes {n-1}}, q_1 \in Q,$ and $q_2 \in Q^{\otimes {n-1}}.$ Then, $(P^{\otimes n}, Q^{\otimes n}, \psi_n)$ is an $R$-system for each $n \geq 0$. Furthermore, by \cite[Lem. 1.5]{carlsen2011algebraic}, if $(S,T,\sigma,B)$ is a covariant representation of $(P,Q,\psi)$, then $(S^n, T^n, \sigma, B)$ is a covariant representation of $(P^{\otimes n}, Q^{\otimes n}, \psi_n)$ where $S^n \colon P^{\otimes n} \to B$ and $T^n \colon Q^{\otimes n} \to B$ are maps satisfying the equations $S^n(p_1 \otimes \dots \otimes p_n) = S(p_1)S(p_2)\dots S(p_n)$ and $T^n(q_1 \otimes \dots \otimes q_n) = T(q_1) T(q_2) \dots T(q_n)$ for $q_i \in Q$ and $p_j \in P$.
Carlsen and Ortega proved (see \cite[Thm. 1.7]{carlsen2011algebraic}) that there is an injective, surjective and graded covariant representation that satisfies a universal property. This covariant representation is called the \emph{Toeplitz representation} and is denoted by $(\iota_Q, \iota_P, \iota_R, \mathcal{T}_{(P,Q,\psi)})$. The ring $\mathcal{T}_{(P,Q,\psi)}$ is called the \emph{Toeplitz ring}. We recall (see \cite[Thm. 1.7, Prop. 3.1]{carlsen2011algebraic}) the canonical $\mathbb{Z}$-grading of the Toeplitz ring. The ring homomorphism $\iota_R \colon R \to \mathcal{T}_{(P,Q,\psi)}$ (cf. Definition \ref{def:covariant_representation}(c)), turns the ring $\mathcal{T}_{(P,Q,\psi)}$ into an $R$-algebra. For every pair $(m,n)$ of non-negative integers, consider the following additive subset of $\mathcal{T}_{(P,Q,\psi)}$,
\begin{align*}
\mathcal{T}_{(m,n)} &= \Span_R \{ \iota_{Q^{\otimes m}}(q) \iota_{P^{\otimes n}}(p) \mid q \in Q^{\otimes m}, p \in P^{\otimes n} \}.
\end{align*}
Carlsen and Ortega showed that $\mathcal{T}_{(P,Q,\psi)} = \bigoplus_{m,n \geq 0} \mathcal{T}_{(m,n)}$ is a semigroup grading of $\mathcal{T}_{(P,Q,\psi)}$ (see \cite[Def. 1.6]{carlsen2011algebraic}).
For every $i \in \mathbb{Z},$ define,
\begin{equation}
\mathcal{T}_i = \bigoplus_{\substack{m,n \geq 0 \\ m-n=i}} \mathcal{T}_{(m,n)}.
\label{eq:grading}
\end{equation}
The canonical $\mathbb{Z}$-grading of the Toeplitz ring is then given by $\mathcal{T}_{(P,Q,\psi)} = \bigoplus_{i \in \mathbb{Z}} \mathcal{T}_i$. Moreover, the Toeplitz ring satisfies the following universal property:
\begin{theorem}(\cite[Thm. 1.7, Prop. 3.2]{carlsen2011algebraic})
Let $R$ be a ring and let $(P,Q,\psi)$ be an $R$-system. Let $\mathcal{T}_{(P,Q,\psi)} = \bigoplus_{i \in \mathbb{Z}} \mathcal{T}_i$ be the Toeplitz ring associated to $(P,Q,\psi)$ and let $(S,T,\sigma, B)$ be any graded covariant representation of $(P,Q,\psi)$. Then there is a unique $\mathbb{Z}$-graded ring epimorphism $\eta \colon \mathcal{T}_{(P,Q,\psi)} \to B$ such that $\eta \circ \iota_R = \sigma, \eta \circ \iota_Q = T,$ and $\eta \circ \iota_P = S$.
\label{thm:universal}
\end{theorem}
We relate morphisms in the category of graded covariant representations to morphisms in the category of $\mathbb{Z}$-graded rings:
\begin{lemma}
Let $R$ be a ring and let $(P,Q,\psi)$ be an $R$-system. Suppose that $(S,T,\sigma,B)$ and $(S', T', \sigma', B')$ are two graded covariant representations of $(P,Q,\psi)$. If $$\phi \colon (S,T,\sigma,B) \to (S', T', \sigma', B')$$ is a morphism in the category $\mathcal{C}_{(P,Q,\psi)}$ (see the introduction), then $\phi \colon B \to B'$ is a $\mathbb{Z}$-graded ring homomorphism.
\label{lem:rep_maps}
\end{lemma}
\begin{proof}
Applying Theorem \ref{thm:universal} to $(S,T,\sigma,B)$, it follows that $B_i = \eta(\mathcal{T}_i)$ and hence, by (\ref{eq:grading}),
\begin{align*}
B_i = \Span_R \{ T(q) S(p) \mid q \in Q^{\otimes m}, p \in P^{\otimes n} \text{ where }m-n=i \},
\end{align*}
for every $i \in \mathbb{Z}$. Similarly,
$B_i' = \Span_R \{ T'(q) S'(p) \mid q \in Q^{\otimes m}, p \in P^{\otimes n} \text{ where }m-n=i \},$
for every $i \in \mathbb{Z}$. Since $\phi \circ T = T'$ and $\phi \circ S = S'$ it follows that $\phi(B_i) \subseteq B_i'$. Thus, $\phi$ is a $\mathbb{Z}$-graded ring homomorphism.
\end{proof}
The following corollary is straightforward to prove:
\begin{corollary}
Let $R$ be a ring and let $(P,Q,\psi)$ be an $R$-system. Suppose that $(S,T,\sigma,B) \cong_{\text{r}} (S', T', \sigma', B')$ are two isomorphic graded covariant representations of $(P,Q,\psi)$. Then, we have that $B \cong_{\text{gr}} B'$.
\label{cor:rep_iso}
\end{corollary}
\subsection{Adjointable operators, Condition (FS) and Cuntz-Pimsner representations}
Recall from the $C^*$-setting that finite generation of the Hilbert module $E$ is equivalent to the ring of compact operators $B(E)=K(E)$ being unital. In the algebraic setting, the ring of compact operators $K(E)$ is replaced by $\mathcal{F}_P(Q)$ and $\mathcal{F}_Q(P)$ (see \cite[Def. 2.1]{carlsen2011algebraic}). We will later see that if $P,Q$ are finitely generated, then $\mathcal{F}_P(Q)$ and $\mathcal{F}_Q(P)$ are unital (see Proposition \ref{prop:fsprime_char}). For now, we recall the definition of these rings. A right $R$-module homomorphism $T \colon Q_R \to Q_R$ is called \emph{adjointable} if there exists a left $R$-module homomorphism $S \colon _R P \to _R P$ such that $\psi(p \otimes T(q)) = \psi(S(p) \otimes q)$ for all $q \in Q$ and $p \in P$. The sets of adjointable homomorphisms of $Q_R$ and of $_R P$ are denoted by $\mathcal{L}_P(Q)$ and $\mathcal{L}_Q(P)$, respectively. Note that $\mathcal{L}_P(Q)$ and $\mathcal{L}_Q(P)$ are subrings of $\text{End}(Q_R)$ and $\text{End}(_R P)$ respectively.
Given fixed elements $q \in Q$ and $p \in P$, define $\theta_{q,p} \colon Q_R \to Q_R$ and $\theta_{p,q} \colon _R P \to {}_R P$ by $\theta_{q,p}(x) = q \cdot \psi(p \otimes x)$ and $ \theta_{p,q}(y) =\psi(y \otimes q) \cdot p$ for $x \in Q$ and $y \in P$ respectively. The $R$-linear span of the homomorphisms $\{ \theta_{q,p} \mid q \in Q, p \in P \}$ is denoted by $\mathcal{F}_P(Q)$. Similarly, the $R$-linear span of $\{ \theta_{p,q} \mid q \in Q, p \in P \}$ is denoted by $\mathcal{F}_Q(P)$. It can be proved that $\mathcal{F}_P(Q)$ and $\mathcal{F}_Q(P)$ are two-sided ideals of $\mathcal{L}_P(Q)$ and $\mathcal{L}_Q(P)$ respectively (see \cite[Lem. 2.3]{carlsen2011algebraic}).
The following technical condition was introduced by Carlsen and Ortega:
\begin{definition}
(\cite[Def. 3.4]{carlsen2011algebraic})
Let $R$ be a ring. An $R$-system $(P,Q,\psi)$ is said to satisfy \emph{Condition (FS)} if for all finite sets $\{ q_1, q_2, \dots, q_n \} \subseteq Q$ and $\{ p_1, p_2, \dots, p_m \} \subseteq P$ there exist some $\Theta \in \mathcal{F}_P(Q)$ and $\Phi \in \mathcal{F}_Q(P)$ such that $\Theta(q_i) = q_i$ and $\Phi(p_j)=p_j$ for all $1 \leq i \leq n$ and $1 \leq j \leq m$.
\label{def:cond_fs}
\end{definition}
Note that we have the following inclusions of rings:
\begin{align}
\mathcal{F}_P(Q) & \subseteq \mathcal{L}_P(Q) \subseteq \text{End}(Q_R), \nonumber \\
\mathcal{F}_Q(P) & \subseteq \mathcal{L}_Q(P) \subseteq \text{End}(_R P).
\label{eq:inc1}
\end{align}
Carlsen and Ortega (see \cite[Def. 3.10]{carlsen2011algebraic}) defined maps $\Delta \colon R \to \mathcal{L}_P(Q)$ and $\Gamma \colon R \to \mathcal{L}_Q(P)$ by $\Delta(r)(q) = r q$ and $\Gamma(r)(p) = p r$ for all $r \in R, q \in Q, p \in P$.
In the $C^*$-setting, it turns out that there are always injective morphisms $\pi_n \colon K(E^{\otimes n}) \to \mathcal{T}_E$ for each $n > 0$. In the algebraic setting, Carlsen and Ortega obtained something similar under the assumption that the system satisfies Condition (FS). Another way to put it is that if the $R$-system satisfies Condition (FS), then there are induced representations of $\mathcal{F}_P(Q)$ and $\mathcal{F}_Q(P)$. Recall that the opposite ring $R^{\text{op}}$ of a ring $R$ has the same additive structure but with a new multiplication defined by $a \star b = ba$ for all $a,b \in R$.
\begin{proposition}
(\cite[Prop. 3.11]{carlsen2011algebraic})
Let $R$ be a ring, let $(P,Q,\psi)$ be an $R$-system satisfying Condition (FS) and let $(S,T,\sigma,B)$ be a covariant representation of $(P,Q,\psi)$. Then there exist unique ring homomorphisms $\pi_{T, S} \colon \mathcal{F}_P(Q) \to B$ and $\chi_{T, S} \colon \mathcal{F}_Q(P) \to B^{\text{op}}$ such that $\pi_{T, S}(\theta_{q,p}) = T(q) S(p)$ and $\chi_{T,S}(\theta_{p,q}) = S(p) \star T(q)$ for all $q \in Q, p \in P$. The maps satisfy the following equations for all $\Theta \in \mathcal{F}_P(Q)$ and $\Phi \in \mathcal{F}_Q(P)$:
\begin{align}
\pi_{T, S}(\Delta(r) \Theta) = \sigma(r) \pi_{T, S}(\Theta), \qquad & \pi_{T, S}(\Theta \Delta(r)) = \pi_{T, S}(\Theta) \sigma(r) \nonumber \\
\chi_{T, S}(\Gamma(r)\Phi) = \sigma(r) \star \chi_{T, S}(\Phi) , \qquad & \chi_{T, S}(\Phi \Gamma(r)) = \chi_{T, S}(\Phi) \star \sigma(r) \nonumber \\
\pi_{T, S}(\Theta)T(q) = T(\Theta(q)), \qquad & \chi_{T, S}(\Phi) \star S(p) = S(\Phi(p)). \label{eq:2.9a}
\end{align}
Moreover, $\pi_{T,S}(\mathcal{F}_P(Q)) = \chi_{T, S}(\mathcal{F}_Q(P)) = \Span_R \{ T(q) S(p) \mid q \in Q, p \in P \} \subseteq B$.
If $\sigma$ is injective, then the maps $\pi_{T,S}$ and $\chi_{T,S}$ are also injective.
\label{prop:pi}
\end{proposition}
\begin{remark}We make two remarks regarding Proposition \ref{prop:pi}.
\begin{enumerate}[(a)]
\begin{item}
The equation $\chi_{T, S}(\Phi) \star S(p) = S(p) \chi_{T, S}(\Phi) = S(\Phi(p))$ is misprinted in \cite[Prop. 3.11]{carlsen2011algebraic}.
\end{item}
\begin{item}
Following Carlsen and Ortega, let $\pi$ denote the induced map $\bigcup_m \mathcal{F}_{P^{\otimes m}}(Q^{\otimes m}) \to \mathcal{T}_{(P,Q,\psi)}$.
\end{item}
\end{enumerate}
\end{remark}
We now recall the definition of the Cuntz-Pimsner invariant representations. If the $R$-system $(P,Q,\psi)$ satisfies Condition (FS), then the Cuntz-Pimsner invariant representations exhaust all injective, surjective graded covariant representations of $(P,Q,\psi)$ up to isomorphism in $\mathcal{C}_{(P,Q,\psi)}$ (see \cite[Rem. 3.30]{carlsen2011algebraic}).
\begin{definition}(\cite[Def. 3.15, Def. 3.16]{carlsen2011algebraic})
Let $R$ be a ring and let $(P,Q,\psi)$ be an $R$-system satisfying Condition (FS). Let $J$ be an ideal of $R$. If $J \subseteq \Delta^{-1}(\mathcal{F}_P(Q))$, then the ideal $J$ is called \emph{$\psi$-compatible}. If $\ker \Delta \cap J = \{0 \}$, then $J$ is called \emph{faithful}.
For a $\psi$-compatible ideal $J \subseteq R$, let $\mathcal{T}(J)$ be the ideal of $\mathcal{T}_{(P,Q,\psi)}$ generated by the set $\{ \iota_R(x) - \pi(\Delta(x)) \mid x \in J \}$.
The \emph{Cuntz-Pimsner ring relative to $J$} is defined as the quotient ring $\mathcal{O}_{(P,Q,\psi)}(J) = \mathcal{T}_{(P,Q,\psi)} / \mathcal{T}(J)$. Let $\rho \colon \mathcal{T}_{(P,Q,\psi)} \to \mathcal{O}_{(P,Q,\psi)}(J)$ be the quotient map. Let $\iota_Q^J = \rho \circ \iota_Q$, $\iota_P^J = \rho \circ \iota_P$ and $\iota_R^J = \rho \circ \iota_R$. The covariant representation $(\iota_Q^J, \iota_P^J, \iota_R^J, \mathcal{O}_{(P,Q,\psi)}(J))$ is called the \emph{Cuntz-Pimsner representation relative to $J$}.
\end{definition}
A covariant representation $(S,T,\sigma, B)$ is called \emph{invariant relative to $J$} if $\pi_{T,S}(\Delta(x)) = \sigma(x)$ holds in $B$ for each $x \in J$. The relative Cuntz-Pimsner representation $(\iota_Q^J, \iota_P^J, \iota_R^J, \mathcal{O}_{(P,Q,\psi)}(J))$ is invariant relative to $J$ and satisfies a universal property among invariant representations (see \cite[Thm. 3.18]{carlsen2011algebraic}). Finally, we recall the definition of the Cuntz-Pimsner ring:
\begin{definition}(\cite[Def. 5.1]{carlsen2011algebraic})
Let $R$ be a ring and let $(P,Q,\psi)$ be an $R$-system. Suppose that there exists a unique maximal $\psi$-compatible, faithful ideal $J$ of $R$. The \emph{Cuntz-Pimsner ring} is defined as $\mathcal{O}_{(P,Q,\psi)} = \mathcal{O}_{(P,Q,\psi)}(J)=\mathcal{T}_{(P,Q,\psi)}/\mathcal{T}(J)$ and the \emph{Cuntz-Pimsner representation} $(\iota_Q^{CP}, \iota_P^{CP}, \iota_R^{CP}, \mathcal{O}_{(P,Q,\psi)})$ is defined to be $(\iota_Q^J, \iota_P^J, \iota_R^J, \mathcal{O}_{(P,Q,\psi)}(J))$.
\label{def:cp_ring}
\end{definition}
\subsection{Leavitt path algebras}
\label{sec:lpa}
The Leavitt path algebra associated to a directed graph was introduced by Ara, Moreno and Pardo \cite{ara2007nonstable} and by Abrams and Aranda Pino \cite{abrams2005leavitt}.
For a thorough account of the theory of Leavitt path algebras, we refer the reader to the monograph by Abrams, Ara, and Siles Molina \cite{abrams2017leavitt}.
We now recall the realization of Leavitt path algebras as Cuntz-Pimsner rings given by Carlsen and Ortega (see \cite[Expl. 1.10, Expl. 5.9]{carlsen2011algebraic}). They only considered Leavitt path algebras with coefficients in a commutative unital ring, but their construction also works for non-commutative unital rings. Let $K$ be a unital ring that will serve as the coefficient ring. Let $E=(E^0, E^1, s, r)$ be a directed graph consisting of a vertex set $E^0$, an edge set $E^1$ and maps $s \colon E^1 \to E^0$ and $r \colon E^1 \to E^0$ specifying the source vertex $s(f)$ and range vertex $r(f)$ for each edge $f \in E^1$. For vertices $u, v \in E^0$, let $\delta_{u,v}=1$ if $u=v$ and $\delta_{u,v} = 0$ if $u \ne v$. Moreover, let $\{ \eta_v \mid v \in E^0 \}$ be a copy of the set $E^0$ and similarly let $\{ \eta_f \mid f \in E^1 \}$ and $\{ \eta_{f^*} \mid f \in E^1 \}$ be copies of the set $E^1$.
\begin{enumerate}[(a)]
\begin{item}
Put $R := \bigoplus_{v \in E^0} K \eta_v$. Define a multiplication on $R$ by $K$-linearly extending the rules $\eta_u \eta_v = \delta_{u,v} \eta_v$ for all $u, v \in E^0$.
\end{item}
\begin{item}
Put $Q := \bigoplus_{f \in E^1} K \eta_f$. Let $R$ act on the left of $Q$ by $K$-linearly extending the rules $\eta_v \cdot \eta_f = \delta_{v, s(f)} \eta_f$ for all $v \in E^0, f \in E^1$. Let $R$ act on the right of $Q$ by $K$-linearly extending the rules $\eta_f \cdot \eta_v = \delta_{v, r(f)} \eta_f$.
\end{item}
\begin{item}
Put $P := \bigoplus_{f \in E^1} K \eta_{f^*}$. Let $R$ act on the left of $P$ by $K$-linearly extending the rules $\eta_v \cdot \eta_{f^*} = \delta_{v, r(f)} \eta_{f^*}$ for all $v \in E^0, f \in E^1$. Let $R$ act on the right of $P$ by $K$-linearly extending the rules $\eta_{f^*} \cdot \eta_v = \delta_{v, s(f)} \eta_{f^*}$ for all $v \in E^0, f \in E^1$.
\end{item}
\begin{item}
Define an $R$-bimodule homomorphism $\psi \colon P \otimes_R Q \to R$ by $\eta_{f^*} \otimes \eta_{f'} \mapsto \delta_{f, f'} \eta_{r(f)}$ for all $f, f' \in E^1$.
\end{item}
\end{enumerate}
We will refer to the above $R$-system $(P,Q,\psi)$ as the \emph{standard Leavitt path system} associated to the directed graph $E$ (with coefficients in $K$). Carlsen and Ortega proved (see \cite[Expl. 5.8]{carlsen2011algebraic}) that $(P,Q,\psi)$ satisfies Condition (FS), that the Cuntz-Pimsner ring is well-defined and that $\mathcal{O}_{(P,Q,\psi)} \cong_{\text{gr}} L_K(E)$. The covariant representation $(\iota_{Q}^{CP}, \iota_P^{CP}, \iota_R^{CP}, \mathcal{O}_{(P,Q,\psi)})$ is called the \emph{standard Leavitt path algebra covariant representation}. Clark, Fletcher, Hazrat and Li also obtained these facts using more general methods (see \cite[Expl. 3.6]{2018arXiv180810114O}).
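To illustrate the construction with a standard example (included here for orientation), let $E$ be the graph consisting of a single vertex $v$ and a single loop $f$. Then $R = K \eta_v \cong K$, $Q = K \eta_f$ and $P = K \eta_{f^*}$ are free $K$-modules of rank one, and $\psi(\eta_{f^*} \otimes \eta_f) = \eta_v$. In this case $L_K(E) \cong K[x, x^{-1}]$, the Laurent polynomial ring, with $\eta_f$ corresponding to $x$ and $\eta_{f^*}$ to $x^{-1}$, and the canonical $\mathbb{Z}$-grading is strong.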
\subsection{Corner skew Laurent polynomial rings}
\label{sec:corner}
The general construction of fractional skew monoid rings was introduced by Ara, Gonzalez-Barroso, Goodearl and Pardo in \cite{ara2004fractional} as algebraic analogues of certain $C^*$-algebras introduced by Paschke \cite{paschke1980crossed}. Here, we consider the special case of a fractional skew monoid ring by a corner isomorphism which is also called a \emph{corner skew Laurent polynomial ring}.
Let $R$ be a unital ring and let $\alpha \colon R \to eRe$ be a corner ring isomorphism where $e$ is an idempotent of $R$. The corner skew Laurent polynomial ring $R[t_{+}, t_{-}; \alpha]$ is defined to be the universal unital ring satisfying the following conditions:
\begin{enumerate}[(a)]
\begin{item}
There is a unital ring homomorphism $i \colon R \to R[t_{+}, t_{-}; \alpha]$;
\end{item}
\begin{item}
$R[t_{+}, t_{-}; \alpha]$ is the $R$-algebra satisfying the following equations for every $r \in R$:
\begin{equation*}
t_{-}t_{+} = 1, \qquad t_{+} t_{-} = i(e), \qquad i(r)t_{-} = t_{-} i(\alpha(r)), \qquad t_{+}i(r) =i(\alpha(r)) t_{+}.
\end{equation*}
\end{item}
\end{enumerate}
Moreover, $R[t_{+}, t_{-}; \alpha]$ is $\mathbb{Z}$-graded with $A_0= R$, $A_i = R t_{+}^{-i}$ for $i < 0$ and $A_i = t_{-}^i R$ for $i > 0$. Note that $t_{-} \in A_1$ and $t_{+} \in A_{-1}$! Carlsen and Ortega \cite[Expl. 5.7]{carlsen2011algebraic} proved that the corner skew Laurent polynomial ring $R[t_{+}, t_{-}; \alpha]$ can be realized as a Cuntz-Pimsner ring.
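As a sanity check (our own observation), if $e = 1$ and $\alpha = \mathrm{id}_R$, then the defining relations give $t_{+} t_{-} = t_{-} t_{+} = 1$ and $i(r) t_{-} = t_{-} i(r)$ for all $r \in R$, so $R[t_{+}, t_{-}; \mathrm{id}] \cong R[t, t^{-1}]$ is the ordinary Laurent polynomial ring, which is strongly $\mathbb{Z}$-graded. For a proper corner isomorphism, $A_{-1} A_{1} = R t_{+} t_{-} R = R e R$ may be a proper ideal of $A_0 = R$, so the grading need not be strong.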
\section{Nearly epsilon-strongly $\mathbb{Z}$-graded rings as Cuntz-Pimsner rings}
\label{sec:necessary}
In this section, we will see that a recent result by Clark, Fletcher, Hazrat and Li \cite{2018arXiv180810114O} will allow us to derive necessary conditions for certain Cuntz-Pimsner rings to be nearly epsilon-strongly $\mathbb{Z}$-graded. Inspired by Exel we make the following definition:
\begin{definition}
(cf. \cite[Def. 4.9]{exel2017partial})
Let $A=\bigoplus_{i \in \mathbb{Z}} A_i$ be a $\mathbb{Z}$-graded ring. If $A_n = (A_1)^n$ and $A_{-n} = (A_{-1})^n$ for $n > 0$, then $A$ is called \emph{semi-saturated}.
\end{definition}
We show that the Toeplitz ring and the ring of any graded covariant representation are semi-saturated.
\begin{proposition}
Let $R$ be a ring and let $(P, Q, \psi)$ be an $R$-system.
\begin{enumerate}[(a)]
\begin{item}
The Toeplitz ring $\mathcal{T}_{(P,Q,\psi)}=\bigoplus_{i \in \mathbb{Z}} \mathcal{T}_i$ is semi-saturated.
\end{item}
\begin{item}
Let $(S,T,\sigma, B)$ be any graded covariant representation of $(P,Q,\psi)$. Then $B=\bigoplus_{i \in \mathbb{Z}} B_i$ is semi-saturated.
\end{item}
\end{enumerate}
\label{prop:toeplitz_exponent}
\end{proposition}
\begin{proof}
(a): Take an arbitrary integer $t > 0$. It follows from the $\mathbb{Z}$-grading that $(\mathcal{T}_1)^t \subseteq \mathcal{T}_t$. We prove the reverse inclusion. Let $\iota_{Q^{\otimes m}}(q) \iota_{P^{\otimes n}}(p) \in \mathcal{T}_t$ where $q \in Q^{\otimes m}, p \in P^{\otimes n}$ and $m-n=t$. We need to show that $\iota_{Q^{\otimes m}}(q) \iota_{P^{\otimes n}}(p) \in (\mathcal{T}_1)^t$.
Suppose $q = f_1 \otimes f_2 \otimes \dots \otimes f_{n+t}$ and $p = g_1 \otimes g_2 \otimes \dots \otimes g_{n}$.
Then,
\begin{align*}
\iota_{Q^{\otimes m}}(q) \iota_{P^{\otimes n}}(p) = \iota_Q(f_1) \iota_Q(f_2) \dots \iota_Q(f_{t-1}) \iota_{Q^{\otimes (n+1)}}(f_{t} \otimes f_{t+1} \otimes \dots \otimes f_{n+t}) \iota_{P^{\otimes n}}(p),
\end{align*}
is contained in $ (\mathcal{T}_1)^t$. Hence, $\mathcal{T}_t = (\mathcal{T}_1)^t$ for $t > 0$. A similar argument shows that $\mathcal{T}_{-t} = (\mathcal{T}_{-1})^{t}$ for $t > 0$.
(b): By Theorem \ref{thm:universal}, there is a $\mathbb{Z}$-graded ring epimorphism $\eta \colon \mathcal{T}_{(P,Q,\psi)} \to B$. Hence, $B_n = \eta(\mathcal{T}_n) = \eta((\mathcal{T}_1)^n) = \eta(\mathcal{T}_1)^n = (B_1)^n$ for any $n > 0$. Similarly, $B_{-n} = (B_{-1})^n$ for any $n > 0$.
\end{proof}
If $M$ is a left $R$-module, then the left annihilator $\Ann_{R}(M) = \{ r \in R \mid r \cdot m = 0 \enspace \forall m \in M \}$ is an ideal of $R$. If $J$ is an ideal of $R$, then $J^\bot = \{ r \in R \mid rx = xr = 0 \enspace \forall x \in J \}.$ The following result was recently obtained by Clark, Fletcher, Hazrat and Li. Their formulation of the theorem is weaker but they in fact prove the stronger statement below.
\begin{theorem}(\cite[Cor. 3.2]{2018arXiv180810114O})
Let $A=\bigoplus_{i \in \mathbb{Z}} A_i$ be a $\mathbb{Z}$-graded ring satisfying the following assertions:
\begin{enumerate}[(a)]
\begin{item}
$A$ is semi-saturated;
\end{item}
\begin{item}
For $\{a_1, a_2, \dots, a_n \} \subseteq A_1$ there is $r \in A_1 A_{-1}$ such that $r a_l = a_l$ for each $1 \leq l \leq n$, and
for $\{ b_1, b_2, \dots, b_m \} \subseteq A_{-1}$ there is $s \in A_1 A_{-1}$ such that $b_l s = b_l$ for each $1 \leq l \leq m$;
\end{item}
\begin{item}
$\Ann_{A_0}(A_1) \cap (\Ann_{A_0}(A_1))^\bot = \{ 0 \}$.
\end{item}
\end{enumerate}
Let $\psi \colon A_{-1} \otimes A_1 \to A_0$ be defined by $\psi(a' \otimes a) = a' a$. Then the $A_0$-system $(A_{-1}, A_1, \psi)$ satisfies Condition (FS). Let $i_{A_{-1}} \colon A_{-1} \to A$, $i_{A_1} \colon A_1 \to A$, $ i_{A_0} \colon A_0 \to A$ denote the inclusion maps and let $J = A_1 A_{-1}$. Then $(i_{A_{-1}}, i_{A_{1}}, i_{A_0}, A)$ is a surjective covariant representation of $(A_{-1}, A_1, \psi)$ and,
\begin{equation}
(i_{A_{-1}}, i_{A_1}, i_{A_0}, A) \cong_{\text{r}} (\iota_{A_{-1}}^J, \iota_{A_1}^J, \iota_{A_0}^J, \mathcal{O}_{(A_{-1},A_1, \psi)}(J)).
\label{eq:rep_iso}
\end{equation}
Furthermore, $J$ is the maximal faithful $\psi$-compatible ideal, hence, $$(\iota_{A_{-1}}^J, \iota_{A_1}^J, \iota_{A_0}^J, \mathcal{O}_{(A_{-1},A_1, \psi)}(J)) = (\iota_{A_{-1}}^{CP}, \iota_{A_1}^{CP}, \iota_{A_0}^{CP}, \mathcal{O}_{(A_{-1},A_1, \psi)}).$$
\noindent
In particular, we have that $A \cong_{\text{gr}} \mathcal{O}_{(A_{-1}, A_1, \psi)}$.
\label{thm:clark}
\end{theorem}
\begin{proof}
Note that $(A_{-1}, A_1, \psi)$ is an $A_0$-system. Since $A$ is semi-saturated, it follows that $A$ is generated as a ring by $A_{-1} \cup A_{1} \cup A_0$. Hence, $(i_{A_{-1}}, i_{A_1}, i_{A_0}, A)$ is a surjective covariant representation. In the proof of \cite[Thm. 3.1]{2018arXiv180810114O}, the authors show that $(A_{-1}, A_1, \psi)$ satisfies Condition (FS) and that the ideal $J=A_1 A_{-1}$ is the maximal faithful, $\psi$-compatible ideal of $A_0$. Hence, the Cuntz-Pimsner representation is well-defined and equal to $(\iota_{A_{-1}}^J, \iota_{A_1}^J, \iota_{A_0}^J, \mathcal{O}_{(A_{-1},A_1, \psi)}(J))$. Moreover, they show that the graded representation $(i_{A_{-1}}, i_{A_1}, i_{A_0}, A)$ is Cuntz-Pimsner invariant with respect to $J$. By the universal property of relative Cuntz-Pimsner rings (see \cite[Thm. 3.18]{carlsen2011algebraic}), there exists a surjective map $\eta \colon (\iota_{A_{-1}}^{CP}, \iota_{A_1}^{CP}, \iota_{A_0}^{CP}, \mathcal{O}_{(A_{-1},A_1, \psi)}) \to (i_{A_{-1}}, i_{A_1}, i_{A_0}, A)$. It follows by Lemma \ref{lem:rep_maps} that $\eta \colon \mathcal{O}_{(A_{-1}, A_1, \psi)} \to A$ is $\mathbb{Z}$-graded. By the graded uniqueness theorem for Cuntz-Pimsner rings (see \cite[Cor. 5.4]{carlsen2011algebraic}), it follows that $\eta$ is also injective. Thus, (\ref{eq:rep_iso}) holds. Note that $A \cong_{\text{gr}} \mathcal{O}_{(A_{-1},A_1, \psi)}$ follows from Corollary \ref{cor:rep_iso}.
\end{proof}
Let $R$ be a ring, let $(P,Q,\psi)$ be an $R$-system and let $(S,T,\sigma, B)$ be a graded covariant representation of $(P,Q,\psi)$. Recall (see Definition \ref{def:covariant_representation}) that for every $k \geq 0$ and $q \in Q^{\otimes k}, p \in P^{\otimes k}$ we have that $\sigma(\psi_k(p \otimes q)) = S^{\otimes k}(p) T^{\otimes k}(q)$. Since $S^{\otimes k}(p) \in B_{-k}$ and $T^{\otimes k}(q) \in B_k$, it follows that, $\sigma (\psi_k(p \otimes q)) \in B_{-k} B_k$. Moreover, since $I_{\psi,\sigma}^{(k)}$ is generated as a $B_0$-ideal by the set $\{ \sigma (\psi_k(p \otimes q)) \mid p \in P^{\otimes k}, q \in Q^{\otimes k} \}$, we have that $I_{\psi,\sigma}^{(k)} \subseteq B_{-k} B_k$.
Recall (see Definition \ref{def:semi-full}) that we call $(S,T, \sigma, B)$ semi-full if $I_{\psi,\sigma}^{(k)} = B_{-k} B_k $ for every $k \geq 0$. The following result is one of the key insights of this article:
\begin{proposition}
The covariant representation $$ (i_{A_{-1}}, i_{A_1}, i_{A_0}, A) \cong_{\text{r}} (\iota_{A_{-1}}^J, \iota_{A_1}^J, \iota_{A_0}^J, \mathcal{O}_{(A_{-1},A_1, \psi)}(J)) = (\iota_{A_{-1}}^{CP}, \iota_{A_1}^{CP}, \iota_{A_0}^{CP}, \mathcal{O}_{(A_{-1},A_1, \psi)})$$ in Theorem \ref{thm:clark} is a semi-full covariant representation of $(A_{-1}, A_1, \psi)$.
\label{rem:5}
\end{proposition}
\begin{proof}
Note that $A$ comes equipped with a $\mathbb{Z}$-grading which trivially satisfies $i_{A_{-1}}(A_{-1}) \subseteq A_{-1}$, $i_{A_{1}}(A_{1}) \subseteq A_{1}$ and $i_{A_{0}}(A_{0}) \subseteq A_{0}$. Hence, $(i_{A_{-1}}, i_{A_1}, i_{A_0}, A)$ is a graded representation of $(A_{-1}, A_1,\psi)$. Note that $I_{\psi,i_{A_0}}^{(k)} \subseteq A_{-k} A_k$. Recall that $A$ is semi-saturated by assumption (a) of Theorem \ref{thm:clark}. Thus, for $k \geq 1$ and any monomial $a' a \in A_{-k} A_k$, we have that $a' = a_1' a_2' \dots a_k'$ and $a=a_1 a_2 \dots a_k$ for some elements $a_i' \in A_{-1}$ and $a_i \in A_1$. Next, note that by definition, $$\psi_k((a_1' \otimes a_2' \otimes \dots \otimes a_k') \otimes (a_1 \otimes \dots \otimes a_k)) = a_1' a_2' \dots a_k' a_1 \dots a_k = a' a.$$ Thus, $A_{-k} A_k = I_{\psi,i_{A_0}}^{(k)}.$ For $k=0$, note that $\text{Im}(\psi_0)=A_0^2$ since $\psi_0(r \otimes r')=r r'$ for all $r,r' \in A_0$ by convention. Thus, we have that $A_0 A_0 = A_0^2 = i_{A_0}(A_0^2) = I_{\psi, i_{A_0}}^{(0)}$. Hence, it follows that $I_{\psi,i_{A_0}}^{(k)} = A_{-k} A_k$ for every integer $k \geq 0$.
\end{proof}
\begin{remark}
In particular, Proposition \ref{rem:5} implies that some of the examples Clark, Fletcher, Hazrat and Li gave in \cite{2018arXiv180810114O} are realizable from semi-full representations. More precisely, the corner skew Laurent polynomial rings (see \cite[Expl. 3.4]{2018arXiv180810114O}) and the Steinberg algebras associated to unperforated graded groupoids (see \cite[Cor. 4.6]{2018arXiv180810114O}) are realizable as the representation ring belonging to a semi-full covariant representation.
\end{remark}
We will see that, for our purposes, we only need to consider s-unital and unital $R$-systems. In the $C^*$-setting, Chirvasitu \cite{2018arXiv180512318C} only considered unital $C^*$-correspondences (i.e. the coefficient $C^*$-algebra $A$ is unital). This assumption guarantees that the Cuntz-Pimsner $C^*$-algebra is unital. We analogously introduce the following notions for $R$-systems:
\begin{definition}
Let $R$ be a ring and let $(P,Q,\psi)$ be an $R$-system. The $R$-system $(P,Q,\psi)$ is called \emph{s-unital} if $R$ is an s-unital ring and $P,Q$ are s-unital $R$-bimodules. The $R$-system $(P,Q,\psi)$ is called \emph{unital} if $R$ is a unital ring and $P,Q$ are unital $R$-bimodules.
\label{def:s-unital}
\end{definition}
\begin{remark}
At this point we make two remarks.
\begin{enumerate}[(a)]
\begin{item}
Note that we explicitly require that $R$ is an s-unital (unital) ring for the $R$-system $(P,Q,\psi)$ to be s-unital (unital). This is needed since the trivial module $\{ 0 \}$ is a unital $R$-bimodule for any ring $R$ (cf. Example \ref{ex:2}).
\end{item}
\begin{item}
Let $R$ be a unital ring, let $(P,Q,\psi)$ be a unital $R$-system and let $(S,T,\sigma,B)$ be a covariant representation of $(P,Q,\psi)$. If $1_R$ is the multiplicative identity element of $R$, then $1_B = \sigma(1_R)$ is the multiplicative identity element of $B$.
\end{item}
\end{enumerate}
\end{remark}
We now show that a certain type of semi-saturated, nearly epsilon-strongly $\mathbb{Z}$-graded rings can be realized as Cuntz-Pimsner rings coming from s-unital $R$-systems.
\begin{definition}
If $A = \bigoplus_{i \in \mathbb{Z}} A_i$ is a semi-saturated, nearly epsilon-strongly $\mathbb{Z}$-graded ring that satisfies $\Ann_{A_0}(A_1) \cap (\Ann_{A_0}(A_1))^\bot = \{ 0 \}$, then $A$ is called \emph{pre-CP}.
\label{def:pre-cp}
\end{definition}
As a special case of Theorem \ref{thm:clark}, we obtain the following:
\begin{corollary}
Let $A = \bigoplus_{i \in \mathbb{Z}} A_i$ be a pre-CP ring. Let $\psi \colon A_{-1} \otimes A_1 \to A_0$ be defined by $a \otimes b \mapsto ab$. Then $(A_{-1}, A_1, \psi)$ is an s-unital $A_0$-system that satisfies Condition (FS) and
\begin{equation}
(i_{A_{-1}}, i_{A_1}, i_{A_0}, A) \cong_{\text{r}} (\iota_{A_{-1}}^{CP}, \iota_{A_1}^{CP}, \iota_{A_0}^{CP}, \mathcal{O}_{(A_{-1}, A_1, \psi)}).
\label{eq:5}
\end{equation}
In particular, $A \cong_{\text{gr}} \mathcal{O}_{(A_{-1},A_1,\psi)}$. Furthermore, the covariant representation (\ref{eq:5}) is semi-full.
\label{cor:nearly_cuntz}
\end{corollary}
\begin{proof}
Note that conditions (a) and (c) in Theorem \ref{thm:clark} are satisfied by definition. Moreover, by the assumption that $A$ is nearly epsilon-strongly $\mathbb{Z}$-graded (see Definition \ref{def:nystedt_epsilon}), it follows that $A_1$ is an s-unital $A_1 A_{-1} \text{--} A_{-1}A_{1}$-bimodule and that $A_{-1}$ is an s-unital $A_{-1} A_{1} \text{--} A_{1}A_{-1}$-bimodule. From this, (b) follows directly. Furthermore, we see that $(A_{-1}, A_1, \psi)$ is an s-unital $A_0$-system. The conclusion now follows by applying Theorem \ref{thm:clark} and Proposition \ref{rem:5}.
\end{proof}
Next, we give two sets of sufficient conditions for a ring to be pre-CP. Recall that a ring is called \emph{semi-prime} if it has no nonzero nilpotent ideals.
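For instance, $\mathbb{Z}$ and, more generally, any ring without nonzero nilpotent elements is semi-prime, whereas $K[x]/(x^2)$, for a field $K$, is not semi-prime, since the ideal generated by the image of $x$ squares to zero.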
\begin{lemma}
Let $A = \bigoplus_{i \in \mathbb{Z}} A_i$ be a $\mathbb{Z}$-graded ring. The following assertions hold:
\begin{enumerate}[(a)]
\begin{item}
If $A_0$ is semi-prime, then $\Ann_{A_0}(A_1) \cap (\Ann_{A_0}(A_1))^\bot = \{ 0 \}$. If $A$ is semi-saturated, nearly epsilon-strongly $\mathbb{Z}$-graded and $A_0$ is semi-prime, then $A$ is pre-CP.
\end{item}
\begin{item}
If $A$ is unital strongly $\mathbb{Z}$-graded, then $A$ is pre-CP.
\end{item}
\end{enumerate}
\label{lem:semi-prime}
\end{lemma}
\begin{proof}
(a): Note that $I := \Ann_{A_0}(A_1) \cap (\Ann_{A_0}(A_1))^\bot$ satisfies $I^2 = \{ 0 \}$, since every element of $(\Ann_{A_0}(A_1))^\bot$ annihilates every element of $\Ann_{A_0}(A_1)$. Hence, $I$ is a nilpotent ideal of $A_0$, and since $A_0$ is semi-prime it follows that $I = \{ 0 \}$. The second statement now follows from Definition \ref{def:pre-cp}.
(b): Since $A$ is unital strongly $\mathbb{Z}$-graded, it follows that $A_i = (A_1)^i, A_{-i} = (A_{-1})^i$ for $i > 0$. Hence, $A$ is semi-saturated. Moreover, $\Ann_{A_0}(A_1) \subseteq \Ann_{A_0}(A_1A_{-1}) = \Ann_{A_0}(A_0) = \{ 0 \}$ since $A_0$ is unital. It follows that $\Ann_{A_0}(A_1) \cap (\Ann_{A_0}(A_1))^\bot = \{ 0 \}$. Finally, recall that unital strongly $\mathbb{Z}$-graded rings are nearly epsilon-strongly $\mathbb{Z}$-graded (see (\ref{eq:implications})). Thus, $A$ is pre-CP.
\end{proof}
\begin{proposition}
Let $K$ be a unital ring and let $E$ be any directed graph. Then the Leavitt path algebra $L_K(E)$ is pre-CP.
\label{prop:lpa-cp}
\end{proposition}
\begin{proof}
The Leavitt path algebra $L_K(E)$ is nearly epsilon-strongly $\mathbb{Z}$-graded (see \cite[Thm. 1.3]{nystedt2017epsilon}). Moreover, since $L_K(E)$ can be realized as a Cuntz-Pimsner ring (see Section \ref{sec:lpa}), it follows by Proposition \ref{prop:toeplitz_exponent}(b) that $L_K(E)$ is semi-saturated.
Next, we prove that,
\begin{equation}
\Ann_{L_K(E)_0}(L_K(E)_1) = \Span_K \{ v \in E^0 \mid v E^1 = \{ 0 \} \}.
\label{eq:71}
\end{equation}
Since $L_K(E)_1 L_K(E)_{-1}$ is s-unital by Proposition \ref{prop:nearly_char}(a) and, $$\Ann_{L_K(E)_0}(L_K(E)_1) \subseteq \Ann_{L_K(E)_0}(L_K(E)_1 L_K(E)_{-1}),$$ it follows that,
\begin{equation}
L_K(E)_1 L_K(E)_{-1} \cap \Ann_{L_K(E)_0}(L_K(E)_1) \subseteq \Ann_{L_K(E)_1 L_K(E)_{-1}}(L_K(E)_1 L_K(E)_{-1}) = \{ 0 \}.
\label{eq:81}
\end{equation}
Furthermore, recall that the natural $\mathbb{Z}$-grading of $L_K(E)$ is given by,
\begin{equation*}
L_K(E)_i = \Span_K \{ \alpha \beta^* \mid \alpha, \beta \in \text{Path}(E), \text{len}(\alpha) - \text{len}(\beta) =i \},
\end{equation*}
for all $i \in \mathbb{Z}$. By convention, the elements $v \in L_K(E)_0$ are considered to be paths of zero length. This means that $L_K(E)_0$ is generated by the sets $E^0$ and $B:=\{ \alpha \beta^* \mid \text{len}(\alpha)=\text{len}(\beta) \geq 1 \}.$ Any $\alpha \beta^* \in B$ can be written $\alpha \beta^* = f_1 \alpha' (\beta')^* (f_2)^* \in L_K(E)_1 L_K(E)_0 L_K(E)_{-1} = L_K(E)_1 L_K(E)_{-1} $ for some $f_1, f_2 \in E^1$ and $\alpha', \beta' \in \text{Path}(E)$. Thus, $B \subseteq L_K(E)_1 L_K(E)_{-1}$. By (\ref{eq:81}), it follows that $\Ann_{L_K(E)_0}(L_K(E)_1) \subseteq \Span_K \{ v \in E^0 \}$.
To establish (\ref{eq:71}), it remains to prove that for any $v \in E^0$, we have that $v L_K(E)_1 = \{ 0 \}$ if and only if $v E^1 = \{ 0 \}$. The `only if' direction is clear since $E^1 \subseteq L_K(E)_1$. On the other hand, let $v \in E^0$ be such that $v E^1 = \{0 \}$. Note that any $\alpha \beta^* \in L_K(E)_1$ satisfies $\text{len}(\alpha) - \text{len}(\beta) = 1$ which implies that $\text{len}(\alpha) \geq 1$. Hence, we can write $\alpha = f' \alpha'$ for some $f' \in E^1$ and some $\alpha' \in \text{Path}(E)$. It follows that $v \alpha \beta^* = (v f') \alpha' \beta^* =0 $. Hence, $v L_K(E)_1 = \{ 0 \}$.
A moment's thought yields that, $$(\Ann_{L_K(E)_0}(L_K(E)_1))^\bot \cap \Span_K \{ v \in E^0 \} = \Span_K \{ v \in E^0 \mid vE^1 \ne \{ 0 \} \} .$$ Hence, $\Ann_{L_K(E)_0}(L_K(E)_1) \cap (\Ann_{L_K(E)_0}(L_K(E)_1))^\bot = \{ 0 \}$ and $L_K(E)$ is pre-CP.
\end{proof}
From Corollary \ref{cor:nearly_cuntz}, we derive necessary conditions for certain Cuntz-Pimsner rings to be nearly epsilon-strongly $\mathbb{Z}$-graded.
\begin{corollary}
Let $(P,Q,\psi)$ be an $R$-system such that (i) $\mathcal{O}_{(P,Q,\psi)}=\bigoplus_{i \in \mathbb{Z}} \mathcal{O}_i$ exists and is nearly epsilon-strongly $\mathbb{Z}$-graded and (ii) $ \Ann_{\mathcal{O}_0}(\mathcal{O}_1) \cap (\Ann_{\mathcal{O}_0}(\mathcal{O}_1))^\bot = \{ 0 \}$.
\noindent
Let $\psi' \colon \mathcal{O}_{-1} \otimes \mathcal{O}_1 \to \mathcal{O}_0$ be defined by $\psi'(a \otimes a') = a a'$. Then $(\mathcal{O}_{-1}, \mathcal{O}_{1}, \psi')$ is an s-unital $\mathcal{O}_0$-system such that,
$$ (i_{\mathcal{O}_{-1}}, i_{\mathcal{O}_1}, i_{\mathcal{O}_0}, \mathcal{O}_{(P,Q,\psi)}) \cong_{\text{r}} (\iota_{\mathcal{O}_{-1}}^{CP}, \iota_{\mathcal{O}_{1}}^{CP}, \iota_{\mathcal{O}_{0}}^{CP}, \mathcal{O}_{(\mathcal{O}_{-1}, \mathcal{O}_1, \psi')}).$$ In particular, $\mathcal{O}_{(P,Q,\psi)} \cong_{\text{gr}} \mathcal{O}_{(\mathcal{O}_{-1}, \mathcal{O}_1, \psi')}$. Furthermore, the following assertions hold:
\begin{enumerate}[(a)]
\begin{item}
$(\mathcal{O}_{-1}, \mathcal{O}_{1}, \psi')$ is an s-unital $\mathcal{O}_0$-system that satisfies Condition (FS);
\end{item}
\begin{item}
$(\iota_{\mathcal{O}_{-1}}^{CP}, \iota_{\mathcal{O}_{1}}^{CP}, \iota_{\mathcal{O}_{0}}^{CP}, \mathcal{O}_{(\mathcal{O}_{-1}, \mathcal{O}_1, \psi')})$ is a semi-full covariant representation of $(\mathcal{O}_{-1}, \mathcal{O}_{1}, \psi')$;
\end{item}
\begin{item}
$I_{\psi',\iota_{\mathcal{O}_0}^{CP}}^{(k)} = \mathcal{O}_{-k} \mathcal{O}_k$ is s-unital for $k \geq 0$.
\end{item}
\end{enumerate}
\label{cor:reduction}
\end{corollary}
\begin{proof}
By Proposition \ref{prop:toeplitz_exponent}, $\mathcal{O}_{(P,Q,\psi)}$ is semi-saturated. Hence, with (i) and (ii), it follows that $\mathcal{O}_{(P,Q,\psi)}$ is pre-CP. Thus, Corollary \ref{cor:nearly_cuntz} establishes the isomorphism of covariant representations and the conclusions (a), (b). Since the covariant representation is semi-full, we have that $I_{\psi',\iota_{\mathcal{O}_0}^{CP}}^{(k)} = \mathcal{O}_{-k} \mathcal{O}_k$ for each $k \geq 0$. By (i) and Proposition \ref{prop:nearly_char}(a), we see that $\mathcal{O}_{-k} \mathcal{O}_k$ is s-unital for every $k \geq 0$. Thus, (c) is established.
\end{proof}
\begin{remark}
It is not clear to the author if the assumption (ii) in Corollary \ref{cor:reduction} is needed. No examples of nearly epsilon-strongly $\mathbb{Z}$-graded Cuntz-Pimsner rings that do not satisfy $\Ann_{\mathcal{O}_0}(\mathcal{O}_1) \cap (\Ann_{\mathcal{O}_0}(\mathcal{O}_1))^\bot = \{ 0 \}$ have been found. On the other hand, it follows from Lemma \ref{lem:semi-prime} that condition (ii) in Corollary \ref{cor:reduction} is satisfied if either $\mathcal{O}_0$ is semi-prime or $\mathcal{O}_{(P,Q,\psi)}$ is strongly $\mathbb{Z}$-graded.
\end{remark}
\section{Strongly $\mathbb{Z}$-graded Cuntz-Pimsner rings}
\label{sec:strongly}
In this section, we will provide sufficient conditions for the Toeplitz and Cuntz-Pimsner rings to be strongly $\mathbb{Z}$-graded. This is an algebraic analogue of recent work by Chirvasitu \cite{2018arXiv180512318C} where he gave necessary and sufficient conditions for the gauge action of a Cuntz-Pimsner $C^*$-algebra to be free. Unfortunately, his proofs rely on topological arguments which do not seem to generalize fully to the algebraic setting.
We begin by introducing the following new condition that is stronger than Condition (FS):
\begin{definition}
Let $R$ be a ring. An $R$-system $(P,Q,\psi)$ is said to satisfy \emph{Condition (FS')} if there exist some $\Theta \in \mathcal{F}_P(Q)$ and $\Phi \in \mathcal{F}_Q(P)$ such that $\Theta(q) = q$ and $\Phi(p)=p$ for every $q \in Q$ and $p \in P$.
\label{def:cond_fsprime}
\end{definition}
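As a sketch of a positive example, take the unital $R$-system $P = Q = R$ with $\psi(p \otimes q) = pq$ considered after Definition \ref{def:s-unital}. Recalling that $\Theta_{q,p}(q') = q \cdot \psi(p \otimes q')$, we see that $\Theta := \Theta_{1_R, 1_R}$ satisfies $\Theta(q') = \psi(1_R \otimes q') = q'$ for every $q' \in Q$, and the analogous rank-one operator on $P$ yields a suitable $\Phi$. Hence, this system satisfies Condition (FS').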
We will later give an example (see Example \ref{ex:fsprime}) which shows that Condition (FS) and Condition (FS') are in fact different.
We omit the proof of the following proposition as it is a straightforward analogue of the corresponding statement for Condition (FS).
\begin{proposition}(cf. \cite[Lem. 3.8]{carlsen2011algebraic})
Let $R$ be a ring and let $(P,Q,\psi)$ be an $R$-system. If $(P,Q,\psi)$ satisfies Condition (FS'), then $(P^{\otimes n}, Q^{\otimes n}, \psi_n)$ satisfies Condition (FS') for every integer $n \geq 1$.
\label{prop:fsprime}
\end{proposition}
Throughout the rest of this section, we assume that $R$ is a unital ring and that $(P,Q,\psi)$ is a unital $R$-system. The following result characterizes Condition (FS'):
\begin{proposition}
Let $R$ be a unital ring and let $(P,Q,\psi)$ be a unital $R$-system. The following assertions are equivalent:
\begin{enumerate}[(a)]
\begin{item}
$(P,Q,\psi)$ satisfies Condition (FS');
\end{item}
\begin{item}
$\id_Q = \Delta(1_R) \in \mathcal{F}_P(Q)$ and $\id_P = \Gamma(1_R) \in \mathcal{F}_Q(P)$. In this case, $\mathcal{L}_P(Q) = \mathcal{F}_P(Q)$ and $\mathcal{L}_Q(P) = \mathcal{F}_Q(P)$ are unital rings;
\end{item}
\begin{item}
$(P,Q,\psi)$ satisfies Condition (FS), $Q_R$ is finitely generated as a right $R$-module and $_R P$ is finitely generated as a left $R$-module.
\end{item}
\end{enumerate}
\label{prop:fsprime_char}
\end{proposition}
\begin{proof}
(a) $\Leftrightarrow$ (b): Consider the inclusions in (\ref{eq:inc1}). If $1_R$ is the multiplicative identity element of $R$, then $\id_Q = \Delta(1_R) \in \mathcal{L}_P(Q)$ is the multiplicative identity element for the ring $\mathcal{L}_P(Q)$. First assume that $(P,Q,\psi)$ satisfies Condition (FS'). Then, $\Theta \in \mathcal{F}_P(Q)$ is a multiplicative identity element of the ring $\mathcal{L}_P(Q)$. Hence, $\Theta = \Delta(1_R) = \id_Q$ which implies that $\mathcal{L}_P(Q) = \mathcal{F}_P(Q)$. Similarly, $\Phi = \Gamma(1_R) = \id_P$ which implies that $\mathcal{L}_Q(P)=\mathcal{F}_Q(P)$. The converse statement follows by noting that $\Delta(1_R)(q)=1_R \cdot q = q$ and $\Gamma(1_R)(p) = p \cdot 1_R = p$ for all $q \in Q$ and $p \in P$.
(b) $\Rightarrow$ (c): Assume that $\id_Q \in \mathcal{F}_P(Q)$ and $\id_P \in \mathcal{F}_Q(P)$. By choosing $\Theta := \id_Q$ and $\Phi := \id_P$ in Definition \ref{def:cond_fs}, we see that $(P,Q,\psi)$ satisfies Condition (FS). Furthermore, there are some $q_1, \dots, q_n \in Q$ and $p_1, \dots, p_n \in P$ such that $ \id_Q = \sum_{i=1}^n \Theta_{q_i, p_i}$. For any $q' \in Q$ we then have that, $$q' =\id_Q(q') = \sum_{i=1}^n \Theta_{q_i,p_i}(q') = \sum_{i=1}^n q_i \cdot \psi(p_i \otimes q') \in \text{Span}_R \{ q_1, \dots, q_n \}. $$ In other words, $Q$ is finitely generated as a right $R$-module by the set $\{ q_1, \dots, q_n \}$. A similar argument establishes that $P$ is finitely generated as a left $R$-module.
(c) $\Rightarrow$ (a): Assume that $(P,Q,\psi)$ satisfies Condition (FS), $Q$ is generated as a right $R$-module by the set $\{q_1, \dots, q_n \}$ and that $P$ is generated as a left $R$-module by the set $\{p_1, \dots, p_m \}$ for some non-negative integers $n,m$ and $q_i \in Q, p_i \in P$. Let $\Theta \in \mathcal{F}_P(Q)$ and $\Phi \in \mathcal{F}_Q(P)$ be such that $\Theta(q_i) = q_i$ and $\Phi(p_j) = p_j$ for all $i \in \{ 1, \dots, n\}, j \in \{1, \dots, m\}.$ Take an arbitrary $q' \in Q$ and note that there are some $r_i \in R$ such that $q' = \sum_{i=1}^n q_i \cdot r_i$. But since $\Theta$ is a right $R$-module homomorphism, it follows that $\Theta(q') = \Theta(\sum_{i=1}^n q_i \cdot r_i) = \sum_{i=1}^n \Theta(q_i) \cdot r_i = \sum_{i=1}^n q_i \cdot r_i = q'$. A similar argument shows that $\Phi(p')=p'$ for every $p' \in P$. Thus, $(P,Q,\psi)$ satisfies Condition (FS').
\end{proof}
\begin{remark}
At this point, we make two remarks regarding Proposition \ref{prop:fsprime_char}.
\begin{enumerate}[(a)]
\begin{item}
Note that Condition (FS) (cf. Definition \ref{def:cond_fs}) and Condition (FS') (cf. Definition \ref{def:cond_fsprime}) relate to each other similarly to how s-unital rings relate to unital rings. In Section \ref{sec:epsilon}, we will show that Condition (FS)/Condition (FS') implies that the ideals $\mathcal{T}_i \mathcal{T}_{-i}$ are s-unital/unital for $i \geq 0$.
\end{item}
\begin{item}
In the $C^*$-setting, finite generation of the Hilbert module $E$ is equivalent to the $C^*$-algebra of compact operators $K(E)$ being unital, in which case $B(E)=K(E)$. Proposition \ref{prop:fsprime_char} is the algebraic analogue of this statement.
\end{item}
\end{enumerate}
\end{remark}
The following system satisfies Condition (FS) but not Condition (FS'):
\begin{example}
Let $E$ consist of one vertex $v$ with countably infinitely many loops $f_1, f_2, \dots$.
This is sometimes called a rose with countably many petals.
\begin{displaymath}
\xymatrix{
\bullet_v \ar@(ur,ul)_{(\infty)}
}
\end{displaymath}
The standard Leavitt path algebra system $(P,Q,\psi)$ attached to the graph $E$ satisfies Condition (FS) (see \cite[Expl. 5.8]{carlsen2011algebraic}). Furthermore, it is straightforward to check that $(P,Q,\psi)$ is a unital $R$-system with multiplicative identity element $1_R = \eta_{v}$.
However, since $E$ contains infinitely many edges it follows that $P$ and $Q$ are not finitely generated (see Section \ref{sec:lpa} and Lemma \ref{lem:finitely_many_edges}). By Proposition \ref{prop:fsprime_char}(c), $(P,Q,\psi)$ does not satisfy Condition (FS'). In other words, $(P,Q,\psi)$ is an example of an $R$-system satisfying Condition (FS) but not Condition (FS').
\label{ex:fsprime}
\end{example}
To prove that the Toeplitz ring is strongly $\mathbb{Z}$-graded, we need the following definition.
\begin{definition}
Let $R$ be a unital ring, let $(P,Q,\psi)$ be an $R$-system satisfying Condition (FS') and let $(S,T,\sigma,B)$ be a covariant representation of $(P,Q,\psi)$. Then $(S,T,\sigma,B)$ is called \emph{faithful} if $\pi_{T,S}(\Delta(1_R)) = \sigma(1_R)$.
\label{def:faithful}
\end{definition}
To make sense of Definition \ref{def:faithful}, note that $\Delta(1_R) \in \mathcal{F}_P(Q)$ for every $R$-system satisfying Condition (FS') by Proposition \ref{prop:fsprime_char}(b). Hence, the condition $\pi_{T,S}(\Delta(1_R)) = \sigma(1_R)$ makes sense. It also follows from Proposition \ref{prop:fsprime_char}(c) that if an $R$-system $(P,Q,\psi)$ admits a faithful covariant representation, then $Q$ is finitely generated as a right $R$-module and $P$ is finitely generated as a left $R$-module.
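To illustrate Definition \ref{def:faithful} in the simplest case, consider once more the unital $R$-system $P = Q = R$ with $\psi(p \otimes q) = pq$, for which $\Delta(1_R) = \Theta_{1_R, 1_R}$. Since $\pi_{T,S}(\Theta_{q,p}) = T(q) S(p)$ (see Proposition \ref{prop:pi}), a covariant representation $(S,T,\sigma,B)$ of this system is faithful precisely when $T(1_R) S(1_R) = \sigma(1_R)$.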
\smallskip
Next, we will consider a graded covariant representation and derive sufficient conditions for it to be strongly $\mathbb{Z}$-graded.
\begin{lemma}
Let $R$ be a unital ring. Suppose that $(P,Q,\psi)$ is a unital $R$-system satisfying Condition (FS') and that $(S,T, \sigma, B)$ is a graded, injective, surjective and faithful covariant representation of $(P,Q,\psi)$. Then, $$\pi_{T^n, S^n}(\Delta^{(n)}(1_R))=\sigma(1_R) = 1_B \in B_n B_{-n}$$ for every $n > 0$.
\label{lem:faithful}
\end{lemma}
\begin{proof}
Take an arbitrary $n > 0$. By Proposition \ref{prop:fsprime}, $(P^{\otimes n}, Q^{\otimes n}, \psi_n)$ satisfies Condition (FS'). This means that $\Delta^{(n)}(1_R) \in \mathcal{F}_{P^{\otimes n}}(Q^{\otimes n})$. Furthermore, by faithfulness, $\pi_{T,S}(\Delta(1_R))= \sigma(1_R) = \sum_i T(q_i) S(p_i)$ for some $q_i \in Q, p_i \in P$. Then,
\begin{align*}
1_B &= \sigma(1_R) = \sum_i T(q_i) (1_B) S(p_i) = \sum_{i,j} T(q_i) T(q_j) S(p_j) S(p_i) \\ &\in \Span_R \{ T^2(q) S^2(p) \mid q \in Q^{\otimes 2}, p \in P^{\otimes 2} \} \subseteq B_2 B_{-2}.
\end{align*}
By an induction argument, we get that, $$1_B \in \Span_R \{ T^n(q) S^n(p) \mid q \in Q^{\otimes n}, p \in P^{\otimes n} \} \subseteq B_n B_{-n}$$ for any $n > 0$. By Proposition \ref{prop:pi} and the assumption that the covariant representation is injective, it follows that the map $\pi_{T^n, S^n} \colon \mathcal{F}_{P^{\otimes n}}(Q^{\otimes n}) \to \Span_R \{ T^n(q) S^n(p) \mid q \in Q^{\otimes n}, p \in P^{\otimes n} \}$ is a ring isomorphism. Since $\pi_{T^n, S^n}(\Delta^{(n)}(1_R))$ is then a multiplicative identity element of this span, which contains $1_B$, it follows that $\pi_{T^n, S^n}(\Delta^{(n)}(1_R)) = 1_B = \sigma(1_R) \in B_{n} B_{-n}$ for $n > 0$.
\end{proof}
\begin{lemma}
Let $R$ be a unital ring and let $(P,Q,\psi)$ be a unital $R$-system such that the map $\psi \colon P \otimes Q \to R$ is surjective. Let $(S,T,\sigma, B)$ be a surjective, graded covariant representation of $(P,Q,\psi)$. Then, $1_B \in B_{-n} B_n$ for every $n > 0$.
\label{lem:psi_surjective}
\end{lemma}
\begin{proof}
We prove that if $\psi \colon P \otimes Q \to R$ is surjective, then $\psi_n \colon P^{\otimes n} \otimes Q^{\otimes n} \to R$ is surjective for every $n > 1$. The proof goes by induction on $n$. Suppose that $\psi_{n-1}$ is surjective. Then there is some $p \in P^{\otimes (n-1)}$ and $q \in Q^{\otimes (n-1)}$ such that $\psi_{n-1}(p \otimes q) = 1_R$. Then, since $1_R$ acts as the identity on $Q$, it follows that, $$\psi_n((p' \otimes p) \otimes (q \otimes q')) = \psi(p' \otimes \psi_{n-1}(p \otimes q) \cdot q') = \psi(p' \otimes 1_R \cdot q') = \psi(p' \otimes q') = 1_R,$$ if we choose $p'$ and $q'$ such that $\psi(p' \otimes q') = 1_R$. Thus, the claim follows by induction.
Take an arbitrary integer $n > 0$. We have that $1_R = \psi_n(p \otimes q)$ for some $p \in P^{\otimes n}$ and $q \in Q^{\otimes n}$. Hence, $\sigma(1_R) = \sigma(\psi_n (p \otimes q)) = S^n(p) T^n(q) \in B_{-n} B_n$ which proves that $1_B = \sigma(1_R) \in B_{-n} B_n$ for every $n > 0$.
\end{proof}
We have now found sufficient conditions for a representation ring to be strongly $\mathbb{Z}$-graded:
\begin{proposition}
Let $R$ be a unital ring and let $(P,Q,\psi)$ be a unital $R$-system that satisfies Condition (FS'). Let $(S,T,\sigma,B)$ be an injective, surjective and graded covariant representation of $(P,Q,\psi)$. Furthermore, suppose that the following assertions hold:
\begin{enumerate}[(a)]
\begin{item}
$(S, T, \sigma, B)$ is a faithful representation of $(P,Q,\psi)$;
\end{item}
\begin{item}
$\psi$ is surjective.
\end{item}
\end{enumerate}
Then $B$ is strongly $\mathbb{Z}$-graded.
\label{prop:strong_suff}
\end{proposition}
\begin{proof}
By assumption (a), it follows from Lemma \ref{lem:faithful} that $1_B \in B_n B_{-n}$ for every $n > 0$. By assumption (b) and Lemma \ref{lem:psi_surjective}, it follows that $1_B \in B_{-n} B_n$ for every $n > 0$. Furthermore, since $1_B = \sigma(1_R) \in B_0$, it follows that $B_0$ is a unital subring of $B$. Thus, $1_B \in B_i B_{-i}$ for every $i \in \mathbb{Z}$. It then follows that $B$ is strongly $\mathbb{Z}$-graded (see e.g. \cite[Prop. 1.1.1]{nastasescu2004methods}).
\end{proof}
Note that since the Toeplitz representation $(\iota_P, \iota_Q, \iota_R, \mathcal{T}_{(P,Q,\psi)})$ is injective, surjective and graded, Proposition \ref{prop:strong_suff} gives, in particular, sufficient conditions for the Toeplitz ring to be strongly $\mathbb{Z}$-graded.
\begin{corollary}
Let $R$ be a unital ring and let $(P,Q,\psi)$ be a unital $R$-system that satisfies Condition (FS'). Consider the Toeplitz ring $\mathcal{T}_{(P,Q,\psi)} = \bigoplus_{i \in \mathbb{Z}} \mathcal{T}_i$. If $\pi_{\iota_Q, \iota_P}(\Delta(1_R))=\iota_R(1_R)$ and $\psi$ is surjective, then $\mathcal{T}_{(P,Q,\psi)}$ is strongly $\mathbb{Z}$-graded.
\end{corollary}
The requirement of faithfulness is more easily formulated when considering the relative Cuntz-Pimsner representations.
\begin{corollary}
Let $R$ be a unital ring and let $(P,Q,\psi)$ be a unital $R$-system that satisfies Condition (FS'). Let $J \subseteq R$ be a $\psi$-compatible ideal. Furthermore, suppose that the following assertions hold:
\begin{enumerate}[(a)]
\begin{item}
$1_R \in J$;
\end{item}
\begin{item}
$\psi$ is surjective.
\end{item}
\end{enumerate}
Then the relative Cuntz-Pimsner ring $\mathcal{O}_{(P,Q,\psi)}(J)$ is strongly $\mathbb{Z}$-graded.
\label{cor:cuntz_strongly}
\end{corollary}
\begin{proof}
Recall that the Cuntz-Pimsner representation $(\iota_P^J, \iota_Q^J, \iota_R^J, \mathcal{O}_{(P,Q,\psi)}(J))$ is injective, surjective and graded. Furthermore, note that (a) implies that the identity $\iota_R^J(1_R) = \pi_{\iota_Q^J, \iota_P^J}(\Delta(1_R))$ holds in the Cuntz-Pimsner ring. This implies that the representation $(\iota_P^{J}, \iota_Q^{J}, \iota_R^{J}, \mathcal{O}_{(P,Q,\psi)}(J))$ is faithful. By Proposition \ref{prop:strong_suff} and (b), we have that $\mathcal{O}_{(P,Q,\psi)}(J)$ is strongly $\mathbb{Z}$-graded.
\end{proof}
For the rest of this section, we apply the above theorems to the special cases of Leavitt path algebras and corner skew Laurent polynomial rings. We begin by proving that the conditions in Corollary \ref{cor:cuntz_strongly} are satisfied for any Leavitt path algebra associated to a finite graph without sinks.
\begin{remark}
The Leavitt path algebra of a graph $E$ is the Cuntz-Pimsner ring relative to the ideal $J$ generated by the regular vertices $\text{Reg}(E)\subseteq E^0$. In other words, $L_K(E) \cong_{\text{gr}} \mathcal{O}_{(P,Q,\psi)}(J)$ where $(P,Q,\psi)$ is the standard Leavitt path algebra system associated to $E$ (see \cite[Expl. 5.8]{carlsen2011algebraic} and Section \ref{sec:lpa}). Suppose that $E$ is a finite graph without any sinks. We now prove that the conditions (a) and (b) in Corollary \ref{cor:cuntz_strongly} are satisfied.
\begin{enumerate}[(a)]
\begin{item}
Since a singular (i.e. non-regular) vertex is either an infinite emitter or a sink, and $E$ is assumed to be finite and without sinks, it follows that $\text{Reg}(E) = E^0$. This implies that $J=R$ and hence that $1_R = \sum_{v \in E^0} \eta_v \in J$.
\end{item}
\begin{item}
Since $E$ does not contain any sinks, we have that for any $v \in E^0$ there is some $f \in E^1$ such that $r(f) = v$. Thus, $\eta_v = \eta_{r(f)} = \psi(\eta_{f^*} \otimes \eta_f)$. This proves that $\psi$ is surjective.
\end{item}
\end{enumerate}
\label{rem:aa1}
\end{remark}
Compare the following lemma with Example \ref{ex:fsprime}:
\begin{lemma}
Let $K$ be a unital ring and let $E$ be a directed graph with finitely many vertices. Then the standard Leavitt path algebra system $(P,Q,\psi)$ is a unital $R$-system. Furthermore, $(P,Q,\psi)$ satisfies Condition (FS') if and only if $E$ has finitely many edges.
\label{lem:finitely_many_edges}
\end{lemma}
\begin{proof}
Recall that the standard Leavitt path algebra system (see Section \ref{sec:lpa}) is defined by $P = \bigoplus_{f \in E^1} K \eta_{f^*}$ and $Q=\bigoplus_{f \in E^1} K\eta_f$. The assumption that $E$ has finitely many vertices implies that $R$ is a unital ring and that $(P,Q,\psi)$ is a unital $R$-system. By Proposition \ref{prop:fsprime_char}(c), $(P,Q,\psi)$ satisfies Condition (FS') if and only if $(P,Q,\psi)$ satisfies Condition (FS), (i) $Q$ is finitely generated as a right $R$-module and (ii) $P$ is finitely generated as a left $R$-module. However, the $R$-system $(P,Q,\psi)$ always satisfies Condition (FS) (see \cite[Expl. 5.8]{carlsen2011algebraic}). Moreover, it follows from the definition of $P$ and $Q$ that (i) and (ii) hold if and only if $E$ has finitely many edges.
\end{proof}
We can now partially recover a result obtained by Hazrat on when a Leavitt path algebra of a finite graph is strongly $\mathbb{Z}$-graded (see \cite[Thm. 3.15]{hazrat2013graded}).
\begin{corollary}
Let $K$ be a unital ring and let $E$ be a finite graph without any sinks. Then the Leavitt path algebra $L_K(E)$ is strongly $\mathbb{Z}$-graded.
\label{cor:lpa_strong}
\end{corollary}
\begin{proof}
By Lemma \ref{lem:finitely_many_edges}, Remark \ref{rem:aa1} and Corollary \ref{cor:cuntz_strongly} it follows that $L_K(E) \cong_{\text{gr}} \mathcal{O}_{(P,Q,\psi)}(J)$ is strongly $\mathbb{Z}$-graded.
\end{proof}
We will now consider corner skew Laurent polynomial rings. Recall that we need to specify a unital ring $R$, an idempotent $e \in R$ and a corner isomorphism $\alpha \colon R \to eRe$. Moreover, recall that an idempotent $e \in R$ is called \emph{full} if $ReR = R$. Hazrat showed (see \cite[Prop. 1.6.6]{hazrat2016graded}) that $R[t_{+}, t_{-}; \alpha]$ is strongly $\mathbb{Z}$-graded if and only if $e$ is a full idempotent.
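For instance, if $e = 1_R$, then $\alpha$ is a ring automorphism of $R$, the idempotent $e$ is trivially full, and $R[t_{+}, t_{-}; \alpha]$ reduces to the classical skew Laurent polynomial ring $R[t, t^{-1}; \alpha]$; in this case, Corollary \ref{cor:fractional_strong} below recovers the well-known fact that skew Laurent polynomial rings are strongly $\mathbb{Z}$-graded.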
\begin{corollary}
Let $R$ be a unital ring and let $\alpha \colon R \to eRe$ be a ring isomorphism where $e$ is an idempotent of $R$. The corner skew Laurent polynomial ring $R[t_{+}, t_{-}; \alpha]$ is strongly $\mathbb{Z}$-graded if $e$ is a full idempotent.
\label{cor:fractional_strong}
\end{corollary}
\begin{proof}
Let $(P,Q,\psi)$ denote the $R$-system in \cite[Expl. 5.6]{carlsen2011algebraic}, i.e. let, $$P = \Big \{ \sum r_i \alpha(r_i') \mid r_i, r_i' \in R \Big \}, \quad Q = \Big \{ \sum \alpha(r_i) r_i' \mid r_i, r_i' \in R \Big \}, \quad \psi(p \otimes q) = pq,$$ where the left and right actions of $R$ on $P$ and $Q$ are defined by $r \cdot r_1 \alpha(r_2) = r r_1 \alpha(r_2)$, $r_1 \alpha(r_2) \cdot r = r_1 \alpha(r_2 r)$, $r \cdot \alpha(r_1) r_2 = \alpha(r r_1) r_2$, $\alpha(r_1) r_2 \cdot r = \alpha(r_1) r_2 r$ for all $r, r_1, r_2 \in R$. By \cite[Expl. 5.7]{carlsen2011algebraic}, the $R$-system $(P,Q,\psi)$ satisfies Condition (FS). Assume that $e$ is a full idempotent. Then, $$\text{Im}(\psi) = P Q = (ReRe)(eReR) = ReR(ee)ReR = (ReR)e(ReR) = ReR = R.$$ Hence, $\psi$ is surjective. Furthermore, note that $_R P = {_R}(ReRe) = {_R}(ReR)e = {_R}(Re)$ as left $R$-modules. It follows that $P$ is finitely generated as a left $R$-module. Similarly, $Q_R = (eReR)_R = (eR)_R$ is finitely generated as a right $R$-module. By Proposition \ref{prop:fsprime_char}(c), it follows that $(P,Q,\psi)$ satisfies Condition (FS'). Recall from \cite[Expl. 5.7]{carlsen2011algebraic} that $J= R$ is $\psi$-compatible and $R[t_{+}, t_{-}; \alpha] \cong_{\text{gr}} \mathcal{O}_{(P,Q,\psi)}(J)$. By Corollary \ref{cor:cuntz_strongly}, it follows that $\mathcal{O}_{(P,Q,\psi)}(J)$ is strongly $\mathbb{Z}$-graded. Thus, $R[t_{+}, t_{-}; \alpha]$ is strongly $\mathbb{Z}$-graded.
\end{proof}
\section{Epsilon-strongly $\mathbb{Z}$-graded Cuntz-Pimsner rings}
\label{sec:epsilon}
We will show that Condition (FS) and Condition (FS') correspond to local unit properties of the rings $\mathcal{T}_i \mathcal{T}_{-i}$ for $i > 0$. This allows us to find sufficient conditions for certain representation rings to be nearly epsilon-strongly and epsilon-strongly $\mathbb{Z}$-graded.
\begin{proposition}
Let $R$ be an s-unital ring and let $(P,Q,\psi)$ be an s-unital $R$-system that satisfies Condition (FS). Consider the Toeplitz ring $\mathcal{T}_{(P,Q,\psi)}=\bigoplus_{i \in \mathbb{Z}} \mathcal{T}_i$. The following assertions hold:
\begin{enumerate}[(a)]
\begin{item}
For $i \geq 0$, $\mathcal{T}_i$ is a left s-unital $\mathcal{T}_i \mathcal{T}_{-i}$-module;
\end{item}
\begin{item}
For $i \geq 0$, $\mathcal{T}_{-i}$ is a right s-unital $\mathcal{T}_i \mathcal{T}_{-i}$-module;
\end{item}
\begin{item}
$\mathcal{T}_i \mathcal{T}_{-i}$ is an s-unital ring for $i \geq 0$;
\end{item}
\begin{item}
$\mathcal{T}_i = \mathcal{T}_i \mathcal{T}_{-i} \mathcal{T}_i $ for every $i \in \mathbb{Z}$.
\end{item}
\end{enumerate}
\label{prop:nearly_toeplitz}
\end{proposition}
\begin{proof}
(a): Take an arbitrary integer $i \geq 0$ and an element $s \in \mathcal{T}_i$. Then, $ s = \sum_j \iota_{Q^{\otimes m_j}}(q_j) \iota_{P^{\otimes n_j}}(p_j)$ for some non-negative integers $\{ m_j \}, \{n_j\}$ and elements $q_j \in Q^{\otimes m_j}, p_j \in P^{\otimes n_j}$. Note that $m_j - n_j = i$ for all indices $j$. Furthermore, since $i$ is non-negative, we have that $0 \leq i \leq m_j$ for all $j$. We will construct an element $\epsilon(s)$ such that $\epsilon(s) s = s$.
If $i=0$, then by the assumption that $(P,Q,\psi)$ is an s-unital $R$-system and Remark \ref{rem:s-unital}, we can find some element $r \in R$ such that $r \cdot q_j = q_j$ for all $j$. Put $\epsilon(s) := \iota_R(r) \in \mathcal{T}_0$. Then,
\begin{align*}
\epsilon(s) s &= \iota_R(r) \sum_j \iota_{Q^{\otimes m_j}}(q_j) \iota_{P^{\otimes n_j}}(p_j) = \sum_j \iota_{Q^{\otimes m_j}}(r \cdot q_j) \iota_{P^{\otimes n_j}}(p_j) \\ &= \sum_j \iota_{Q^{\otimes m_j}}(q_j) \iota_{P^{\otimes n_j}}(p_j) = s.
\end{align*}
If $i > 0$, then let $q_j'$ denote the $i$th initial segment of $q_j$ for every $j$. In other words, for every $j$ we have that $q_j = q_j' \otimes q_j''$ where $q_j' \in Q^{\otimes i}$ and $q_j'' \in Q^{\otimes (m_j-i)}$. Since $(P,Q,\psi)$ satisfies Condition (FS), it follows by \cite[Lem. 3.8]{carlsen2011algebraic} that $(P^{\otimes i}, Q^{\otimes i}, \psi_i)$ satisfies Condition (FS). Therefore, there is some $\Theta \in \mathcal{F}_{P^{\otimes i}}(Q^{\otimes i})$ such that $\Theta(q_j') = q_j'$ for all $j$. Invoking Proposition \ref{prop:pi}, we put $\epsilon(s) := \pi_{\iota_{Q^{\otimes i}}, \iota_{P^{\otimes i}}}(\Theta).$ By Proposition \ref{prop:pi} and (\ref{eq:grading}), we have that, $$\pi_{\iota_{Q^{\otimes i}}, \iota_{P^{\otimes i}}}(\Theta) \in \Span_R \{ \iota_{Q^{\otimes i}}(q) \iota_{P^{\otimes i}}(p) \mid q \in Q^{\otimes i}, p \in P^{\otimes i} \} \subseteq \mathcal{T}_i \mathcal{T}_{-i}.$$ Furthermore, by using the left relation of (\ref{eq:2.9a}),
\begin{align*}
\epsilon(s) s &= \pi(\Theta) \sum_j \iota_{Q^{\otimes m_j}}(q_j) \iota_{P^{\otimes n_j}}(p_j) = \pi(\Theta) \sum_j \iota_{Q^{\otimes i}}(q_j') \iota_{Q^{\otimes (m_j-i)}}(q_j'') \iota_{P^{\otimes n_j}}(p_j) \\
&= \sum_j ( \pi(\Theta) \iota_{Q^{\otimes i}}(q_j') ) \iota_{Q^{\otimes (m_j-i)}}(q_j'') \iota_{P^{\otimes n_j}}(p_j) = \sum_j (\iota_{Q^{\otimes i}}(\Theta(q_j')) ) \iota_{Q^{\otimes (m_j-i)}}(q_j'') \iota_{P^{\otimes n_j}}(p_j) \\
&= \sum_j (\iota_{Q^{\otimes i}}(q_j') ) \iota_{Q^{\otimes (m_j-i)}}(q_j'') \iota_{P^{\otimes n_j}}(p_j) = \sum_j \iota_{Q^{\otimes m_j}}(q_j) \iota_{P^{\otimes n_j}}(p_j) = s.
\end{align*}
(b): Analogous to (a).
(c): Let $i \geq 0$ be an arbitrary non-negative integer. Any element of $\mathcal{T}_i \mathcal{T}_{-i}$ is a finite sum $s = \sum_j a_j b_j$ where $a_j \in \mathcal{T}_i$ and $b_j \in \mathcal{T}_{-i}$. Since $\mathcal{T}_i$ is a left s-unital $\mathcal{T}_i \mathcal{T}_{-i}$-module by (a), Remark \ref{rem:s-unital} implies that we can find some element $t_1 \in \mathcal{T}_i \mathcal{T}_{-i}$ such that $t_1 a_j = a_j$ for all indices $j$. Similarly, (b) and Remark \ref{rem:s-unital} imply that there is some element $t_2 \in \mathcal{T}_i \mathcal{T}_{-i}$ such that $b_j t_2 = b_j$ for all indices $j$. Hence, $t_1 s = s$ and $s t_2 = s$. This implies that $\mathcal{T}_i \mathcal{T}_{-i}$ is a left s-unital $\mathcal{T}_i \mathcal{T}_{-i}$-module and a right s-unital $\mathcal{T}_i \mathcal{T}_{-i}$-module. Thus, $\mathcal{T}_i \mathcal{T}_{-i}$ is an s-unital ring.
(d): Take an arbitrary integer $i \in \mathbb{Z}$. From the grading, it is clear that $\mathcal{T}_i \mathcal{T}_{-i} \mathcal{T}_i \subseteq \mathcal{T}_i$. It remains to show that $\mathcal{T}_i \subseteq \mathcal{T}_i \mathcal{T}_{-i} \mathcal{T}_i$. Let $s \in \mathcal{T}_i$ be an arbitrary element. First suppose that $i \geq 0$, then by (a) there is some $\epsilon(s) \in \mathcal{T}_i \mathcal{T}_{-i}$ such that $s = \epsilon(s) s \in \mathcal{T}_i \mathcal{T}_{-i} \mathcal{T}_i$. On the other hand, if $i < 0$, then by (b) there is some $\epsilon(s) \in \mathcal{T}_{-i} \mathcal{T}_{i}$ such that $s=s \epsilon(s) \in \mathcal{T}_i \mathcal{T}_{-i} \mathcal{T}_i$. Thus, $\mathcal{T}_i = \mathcal{T}_i \mathcal{T}_{-i} \mathcal{T}_i$ for every $i \in \mathbb{Z}$.
\end{proof}
Recall that for idempotents $e, f$ we define the idempotent ordering by $e \leq f \iff ef=fe=e$.
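For instance, in a direct product $R_1 \times R_2$ of unital rings, we have $(1_{R_1}, 0) \leq (1_{R_1}, 1_{R_2})$, while the nonzero idempotents $(1_{R_1}, 0)$ and $(0, 1_{R_2})$ are incomparable since their product is zero.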
\begin{remark}
Let $A$ be an epsilon-strongly $\mathbb{Z}$-graded ring. Let $\epsilon_i \in A_i A_{-i}$ denote the multiplicative identity element of $A_i A_{-i}$ for $i \in \mathbb{Z}$ (see Proposition \ref{prop:nearly_char}).
If the gradation on $A$ is semi-saturated, then
$\epsilon_0 \geq \epsilon_1 \geq \epsilon_2 \geq \epsilon_3 \geq \ldots$
and
$\epsilon_0 \geq \epsilon_{-1} \geq \epsilon_{-2} \geq \epsilon_{-3} \geq \ldots$.
\end{remark}
For the rest of this section, let $(P,Q,\psi)$ be a unital $R$-system. Suppose that $(P,Q,\psi)$ satisfies Condition (FS'). By Proposition \ref{prop:fsprime_char}(b), this implies that $\Delta(1_R) \in \mathcal{F}_P(Q)$ and $\Gamma(1_R) \in \mathcal{F}_Q(P)$. Consider the Toeplitz representation $(\iota_P, \iota_Q, \iota_R, \mathcal{T}_{(P,Q,\psi)})$. We define, $$\epsilon_0 := \iota_R(1_R), \qquad \epsilon_i := \pi_{\iota_{Q^{\otimes i}}, \iota_{P^{\otimes i}}}(\Delta^i(1_R)) = \chi_{\iota_{Q^{\otimes i}}, \iota_{P^{\otimes i}}}(\Gamma^i(1_R)) \in \mathcal{T}_{i} \mathcal{T}_{-i},$$ for $i > 0$.
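As a sketch of what these idempotents look like in the simplest case, consider once more the unital $R$-system $P = Q = R$ with $\psi(p \otimes q) = pq$. Here, $\Delta^i(1_R)$ is the identity operator $\Theta_{1_R \otimes \dots \otimes 1_R, \, 1_R \otimes \dots \otimes 1_R}$ on $Q^{\otimes i}$, and hence, $$\epsilon_i = \iota_{Q^{\otimes i}}(1_R \otimes \dots \otimes 1_R) \, \iota_{P^{\otimes i}}(1_R \otimes \dots \otimes 1_R) = \iota_Q(1_R)^i \, \iota_P(1_R)^i \in \mathcal{T}_i \mathcal{T}_{-i},$$ for $i > 0$.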
\begin{lemma}
The sequence $\{ \epsilon_i \}_{i \geq 0}$ consists of idempotents such that $\epsilon_0 \geq \epsilon_1 \geq \epsilon_2 \geq \epsilon_3 \geq \epsilon_4 \geq \dots$ holds in the idempotent ordering.
\label{lem:idem_seq}
\end{lemma}
\begin{proof}
Note that $\epsilon_0 = \iota_R(1_R)$ is an idempotent since $\iota_R$ is a ring homomorphism. Fix an arbitrary integer $i > 0$. By Proposition \ref{prop:pi}, we have that $\epsilon_i = \pi (\Delta^i(1_R)) = \sum_j \iota_{Q^{\otimes i}}(q_j) \iota_{P^{\otimes i}}(p_j)$ for some $q_j \in Q^{\otimes i}$ and $p_j \in P^{\otimes i}$. Then, by the left relation in (\ref{eq:2.9a}),
\begin{align*}
\epsilon_i^2 &= \sum_j \epsilon_i \iota_{Q^{\otimes i}}(q_j) \iota_{P^{\otimes i}}(p_j) = \sum_j (\pi (\Delta^i(1_R)) \iota_{Q^{\otimes i}}(q_j)) \iota_{P^{\otimes i}}(p_j) \\ &= \sum_j \iota_{Q^{\otimes i}}(\Delta^i(1_R)(q_j))\iota_{P^{\otimes i}}(p_j) = \sum_j \iota_{Q^{\otimes i}}(q_j) \iota_{P^{\otimes i}}(p_j) = \epsilon_i.
\end{align*} Hence, $\epsilon_i$ is an idempotent.
It is clear that $\iota_R(1_R) = \epsilon_0 \geq \epsilon_1$. Take an arbitrary integer $m > 0$. We will prove that $\epsilon_m \geq \epsilon_{m+1}$. This is equivalent to $\epsilon_{m+1} = \epsilon_{m+1} \epsilon_m = \epsilon_m \epsilon_{m+1}$.
We first prove that $\epsilon_m \epsilon_{m+1} = \epsilon_{m+1}$.
Let $\epsilon_{m+1} = \sum_{j} \iota_{Q^{\otimes m+1}}(q_j)\iota_{P^{\otimes m+1}}(p_j)$. Write $q_j = q_j' \otimes q_j''$ where $q_j' \in Q^{\otimes m}$ and $q_j'' \in Q$. Then, by the left relation in (\ref{eq:2.9a}),
\begin{align*}
\epsilon_m \epsilon_{m+1} &= \sum_j \epsilon_m \iota_{Q^{\otimes m+1}}(q_j)\iota_{P^{\otimes m+1}}(p_j) = \sum_j \epsilon_m \iota_{Q^{\otimes m}}(q_j') \iota_{Q}(q_j'')\iota_{P^{\otimes m+1}}(p_j) \\ &= \sum_j \iota_{Q^{\otimes m}}(\Delta^{m}(1_R)(q_j')) \iota_Q (q_j'') \iota_{P^{\otimes m+1}}(p_j) = \sum_j \iota_{Q^{\otimes m}}(q_j') \iota_Q (q_j'') \iota_{P^{\otimes m+1}}(p_j) \\ &= \sum_j \iota_{Q^{\otimes m+1}}(q_j) \iota_{P^{\otimes m+1}}(p_j) = \epsilon_{m+1}.
\end{align*}
Again, let $\epsilon_{m+1} = \sum_{j} \iota_{Q^{\otimes m+1}}(q_j)\iota_{P^{\otimes m+1}}(p_j)$. This time write $p_j = p_j' \otimes p_j''$ for some $p_j' \in P$ and $p_j'' \in P^{\otimes m}$. Then, by the right relation in (\ref{eq:2.9a}),
\begin{align*}
\epsilon_{m+1} \epsilon_m &= \sum_j \iota_{Q^{\otimes m+1}}(q_j)\iota_{P^{\otimes m+1}}(p_j) \epsilon_m = \sum_j \iota_{Q^{\otimes m+1}}(q_j) \iota_P(p_j') \iota_{P^{\otimes m}}(p_j'') \epsilon_m \\ &= \sum_j \iota_{Q^{\otimes m+1}}(q_j) \iota_P(p_j') \iota_{P^{\otimes m}}(p_j'') \chi(\Gamma^m(1_R)) = \sum_j \iota_{Q^{\otimes m+1}}(q_j) \iota_P(p_j') \iota_{P^{\otimes m}}(\Gamma^m(1_R)(p_j'')) \\ &= \sum_j \iota_{Q^{\otimes m+1}}(q_j) \iota_P(p_j') \iota_{P^{\otimes m}}(p_j'') = \sum_j \iota_{Q^{\otimes m+1}}(q_j) \iota_{P^{\otimes m +1 }}(p_j) = \epsilon_{m+1}.
\end{align*}
\end{proof}
\begin{proposition}
Let $R$ be a unital ring and let $(P,Q,\psi)$ be a unital $R$-system that satisfies Condition (FS'). Let $\epsilon_i$ be the idempotents defined above. The following assertions hold for every $i \geq 0$:
\begin{enumerate}[(a)]
\begin{item}
For any $s \in \mathcal{T}_i$ we have that $\epsilon_i s = s$;
\end{item}
\begin{item}
For any $t \in \mathcal{T}_{-i}$ we have that $t \epsilon_i =t$.
\end{item}
\end{enumerate}
Consequently, $\mathcal{T}_i \mathcal{T}_{-i}$ is a unital ideal with multiplicative identity element $\epsilon_i$ for every $i \geq 0$.
\label{prop:positive_ideals}
\end{proposition}
\begin{proof}
Note that $\mathcal{T}_0$ is a unital ring with multiplicative identity element $\epsilon_0 = \iota_R(1_R)$. The statements are clear for $i = 0$.
(a): Take an arbitrary positive integer $i$. Consider a monomial $\iota_{Q^{\otimes m}}(q) \iota_{P^{\otimes n}}(p)$ where $m,n$ are non-negative integers such that $m-n=i$. Then, $0 < i \leq m$. By Lemma \ref{lem:idem_seq}, $\epsilon_i \geq \epsilon_m$. Hence,
\begin{align*}
\iota_{Q^{\otimes m}}(q) \iota_{P^{\otimes n}}(p) &= \iota_{Q^{\otimes m}}(\Delta^{m}(1_R)(q)) \iota_{P^{\otimes n}}(p) = \pi(\Delta^m(1_R)) \iota_{Q^{\otimes m}}(q) \iota_{P^{\otimes n}}(p) \\ &= \epsilon_m \iota_{Q^{\otimes m}}(q) \iota_{P^{\otimes n}}(p) = \epsilon_i \epsilon_m \iota_{Q^{\otimes m}}(q) \iota_{P^{\otimes n}}(p) = \epsilon_i \iota_{Q^{\otimes m}}(q) \iota_{P^{\otimes n}}(p).
\end{align*} Any element $s \in \mathcal{T}_i$ is a finite sum of elements of the above form (see (\ref{eq:grading})). Hence, it follows that $\epsilon_i s = s$.
(b): Take an arbitrary positive integer $i$. Consider a monomial $\iota_{Q^{\otimes m}}(q) \iota_{P^{\otimes n}}(p)$ where $m,n$ are non-negative integers such that $m-n=-i$. Then $0 < i \leq n$. By Lemma \ref{lem:idem_seq}, $\epsilon_i \geq \epsilon_n$. Hence, $\iota_{Q^{\otimes m}}(q) \iota_{P^{\otimes n}}(p) = \iota_{Q^{\otimes m}}(q) \iota_{P^{\otimes n}}(\Gamma^n(1_R)(p)) = \iota_{Q^{\otimes m}}(q) \iota_{P^{\otimes n}}(p) \chi(\Gamma^n(1_R)) = \iota_{Q^{\otimes m}}(q) \iota_{P^{\otimes n}}(p) \epsilon_n = \iota_{Q^{\otimes m}}(q) \iota_{P^{\otimes n}}(p) \epsilon_n \epsilon_i = \iota_{Q^{\otimes m}}(q) \iota_{P^{\otimes n}}(p)\epsilon_i$. Since any element $t \in \mathcal{T}_{-i}$ is a finite sum of elements of the above form, it follows that $ t \epsilon_i = t$.
\end{proof}
We will see that restricting our attention to semi-full covariant representations $(S,T,\sigma,B)$ makes life easier. This special type of graded covariant representation has the property that the image of $\psi_k$ is enough to generate the ideal $B_{-k} B_k$ for $k \geq 0$ (see Definition \ref{def:semi-full}). We first prove that the property of being semi-full is invariant under isomorphism in the category of surjective covariant representations $\mathcal{C}_{(P,Q,\psi)}$.
\begin{proposition}
Let $R$ be a ring, let $(P,Q,\psi)$ be an $R$-system and suppose that $(S,T,\sigma,B) \cong_{\text{r}} (S',T',\sigma', B')$ are two isomorphic covariant representations of $(P,Q,\psi)$. If $(S,T,\sigma,B)$ is semi-full, then $(S',T',\sigma', B')$ is semi-full.
\label{prop:semi-full_iso}
\end{proposition}
\begin{proof}
Let $\phi \colon B \to B'$ be the $\mathbb{Z}$-graded isomorphism coming from Lemma \ref{lem:rep_maps}. Hence, \begin{align*}
B_{-k}' B_k' &= \phi(B_{-k}) \phi(B_{k}) = \phi(B_{-k} B_k) = \phi(I_{\psi, \sigma}^{(k)}) = I_{\psi,\sigma'}^{(k)},
\end{align*}
where the last equality holds since $\phi \circ \sigma = \sigma'$ and $\phi$ therefore maps the generating set $\{ \sigma(\psi_k(p \otimes q)) \mid p \in P^{\otimes k}, q \in Q^{\otimes k} \}$ of $I_{\psi, \sigma}^{(k)}$ onto the corresponding generating set of $I_{\psi, \sigma'}^{(k)}$. Thus, $(S',T', \sigma', B')$ is semi-full.
\end{proof}
We now establish sufficient conditions for a semi-full covariant representation to be nearly epsilon-strongly $\mathbb{Z}$-graded.
\begin{proposition}
Let $R$ be an s-unital ring and let $(P,Q,\psi)$ be an s-unital $R$-system. Suppose that $(S,T,\sigma, B)$ is a semi-full covariant representation of $(P,Q,\psi)$ and that the following assertions hold:
\begin{enumerate}[(a)]
\begin{item}
$(P,Q,\psi)$ satisfies Condition (FS),
\end{item}
\begin{item}
$I_{\psi,\sigma}^{(k)}$ is s-unital for $k \geq 0$.
\end{item}
\end{enumerate}
Then, $B$ is nearly epsilon-strongly $\mathbb{Z}$-graded.
\label{prop:nearly_epsilon_suff}
\end{proposition}
\begin{proof}
Let $\mathcal{T}_{(P,Q,\psi)} = \bigoplus_{i \in \mathbb{Z}} \mathcal{T}_i$ be the Toeplitz ring associated to the $R$-system $(P,Q,\psi)$. By Proposition \ref{prop:nearly_toeplitz}(c), $\mathcal{T}_i \mathcal{T}_{-i}$ is s-unital for every $ i \geq 0$. By Theorem \ref{thm:universal}, there is a $\mathbb{Z}$-graded ring epimorphism $\eta \colon \mathcal{T}_{(P,Q,\psi)} \to B$. Since the image of an s-unital ring under a ring homomorphism is in turn s-unital, it follows that $B_i B_{-i} = \eta(\mathcal{T}_i) \eta(\mathcal{T}_{-i})=\eta(\mathcal{T}_i \mathcal{T}_{-i})$ is s-unital for every $i \geq 0$. Furthermore, by Proposition \ref{prop:nearly_toeplitz}(d), we have that $\mathcal{T}_i = \mathcal{T}_i \mathcal{T}_{-i} \mathcal{T}_i$ for every $i \in \mathbb{Z}$. Applying $\eta$ to both sides yields, $B_i = B_i B_{-i} B_i$. Hence, $B$ is symmetrically $\mathbb{Z}$-graded.
Next, we show that $B_i B_{-i}$ is s-unital for $i < 0$. Since $(S,T,\sigma, B)$ is semi-full, we have that $B_{-k} B_k = I_{\psi,\sigma}^{(k)}$ for $k \geq 0$. Hence, (b) implies that $B_i B_{-i}$ is s-unital for $i < 0$. Thus, we have shown that $B_i B_{-i}$ is s-unital for $i \in \mathbb{Z}$ and that $B$ is symmetrically $\mathbb{Z}$-graded. By Proposition \ref{prop:nearly_char}(a), it follows that $B=\bigoplus_{i \in \mathbb{Z}} B_i$ is nearly epsilon-strongly $\mathbb{Z}$-graded.
\end{proof}
The proof of the following proposition is entirely analogous to the proof of Proposition \ref{prop:nearly_epsilon_suff}.
\begin{proposition}
Let $R$ be a unital ring and let $(P,Q,\psi)$ be a unital $R$-system. Suppose that $(S,T,\sigma, B)$ is a semi-full covariant representation of $(P,Q,\psi)$ and that the following assertions hold:
\begin{enumerate}[(a)]
\begin{item}
$(P,Q,\psi)$ satisfies Condition (FS'),
\end{item}
\begin{item}
$I_{\psi,\sigma}^{(k)}$ is unital for $k \geq 0$.
\end{item}
\end{enumerate}
Then, $B$ is epsilon-strongly $\mathbb{Z}$-graded.
\label{prop:epsilon_suff}
\end{proposition}
On the other hand, a covariant representation $(S,T,\sigma, B)$ does not need to be semi-full for the ring $B$ to be epsilon-strongly $\mathbb{Z}$-graded (see Example \ref{ex:1}).
\section{Characterization up to graded isomorphism}
\label{sec:characterization}
In this section, we finally give characterizations of unital strongly, nearly epsilon-strongly and epsilon-strongly $\mathbb{Z}$-graded Cuntz-Pimsner rings up to $\mathbb{Z}$-graded isomorphism.
\begin{theorem}
Let $\mathcal{O}_{(P,Q,\psi)}$ be a Cuntz-Pimsner ring of some system $(P,Q,\psi)$. If $\mathcal{O}_{(P,Q,\psi)}$ is nearly epsilon-strongly $\mathbb{Z}$-graded and $ \Ann_{\mathcal{O}_0}(\mathcal{O}_1) \cap (\Ann_{\mathcal{O}_0}(\mathcal{O}_1))^\bot = \{ 0 \}$, then,
$$ \mathcal{O}_{(P,Q,\psi)} \cong_{\text{gr}} \mathcal{O}_{(P', Q', \psi')},$$
where $(P',Q',\psi')$ is an $R'$-system such that $\mathcal{O}_{(P',Q',\psi')}$ is well-defined and the following assertions hold:
\begin{enumerate}[(a)]
\begin{item}
$(P',Q',\psi')$ is an s-unital $R'$-system;
\end{item}
\begin{item}
$(\iota_{P'}^{CP}, \iota_{Q'}^{CP}, \iota_{R'}^{CP}, \mathcal{O}_{(P',Q',\psi')})$ is a semi-full covariant representation of $(P',Q',\psi')$;
\end{item}
\begin{item}
$(P', Q', \psi')$ satisfies Condition (FS);
\end{item}
\begin{item}
$I_{\psi', \iota_{R'}^{CP}}^{(k)}$ is s-unital for $k \geq 0$.
\end{item}
\end{enumerate}
Conversely, if $(P',Q',\psi')$ is an $R'$-system such that $\mathcal{O}_{(P',Q',\psi')}$ is well-defined and (a)-(d) hold, then $\mathcal{O}_{(P', Q', \psi')}$ is nearly epsilon-strongly $\mathbb{Z}$-graded.
\label{thm:1}
\end{theorem}
\begin{proof}
If the Cuntz-Pimsner ring $\mathcal{O}_{(P,Q,\psi)}$ is nearly epsilon-strongly $\mathbb{Z}$-graded and the condition $ \Ann_{\mathcal{O}_0}(\mathcal{O}_1) \cap (\Ann_{\mathcal{O}_0}(\mathcal{O}_1))^\bot = \{ 0 \}$ holds, then it follows from Corollary \ref{cor:reduction} that the Cuntz-Pimsner ring is graded isomorphic to $\mathcal{O}_{(\mathcal{O}_{-1}, \mathcal{O}_1,\psi')}$ and that (a)-(d) are satisfied.
Conversely, let $(P', Q', \psi')$ be an $R'$-system such that $\mathcal{O}_{(P',Q',\psi')}$ exists and (a)-(d) are satisfied. Applying Proposition \ref{prop:nearly_epsilon_suff} to the covariant representation $(\iota_{P'}^{CP}, \iota_{Q'}^{CP}, \iota_{R'}^{CP}, \mathcal{O}_{(P',Q',\psi')})$, it follows that $\mathcal{O}_{(P',Q',\psi')}$ is nearly epsilon-strongly $\mathbb{Z}$-graded.
\end{proof}
For epsilon-strongly $\mathbb{Z}$-graded Cuntz-Pimsner rings, we obtain the following result:
\begin{theorem}
Let $\mathcal{O}_{(P,Q,\psi)}$ be a Cuntz-Pimsner ring of some system $(P,Q,\psi)$. If $\mathcal{O}_{(P,Q,\psi)}$ is epsilon-strongly $\mathbb{Z}$-graded and $ \Ann_{\mathcal{O}_0}(\mathcal{O}_1) \cap (\Ann_{\mathcal{O}_0}(\mathcal{O}_1))^\bot = \{ 0 \}$, then,
$$ \mathcal{O}_{(P,Q,\psi)} \cong_{\text{gr}} \mathcal{O}_{(P', Q', \psi')},$$
where $(P',Q',\psi')$ is an $R'$-system such that $\mathcal{O}_{(P',Q',\psi')}$ is well-defined and the following assertions hold:
\begin{enumerate}[(a)]
\begin{item}
$(P',Q',\psi')$ is a unital $R'$-system;
\end{item}
\begin{item}
$(\iota_{P'}^{CP}, \iota_{Q'}^{CP}, \iota_{R'}^{CP}, \mathcal{O}_{(P',Q',\psi')})$ is a semi-full covariant representation of $(P',Q',\psi')$;
\end{item}
\begin{item}
$(P', Q', \psi')$ satisfies Condition (FS');
\end{item}
\begin{item}
$I_{\psi',\iota_{R'}^{CP}}^{(k)}$ is unital for $k \geq 0$.
\end{item}
\end{enumerate}
\label{thm:epsilon}
Conversely, if $(P',Q',\psi')$ is an $R'$-system such that $\mathcal{O}_{(P',Q',\psi')}$ is well-defined and (a)-(d) hold, then $\mathcal{O}_{(P', Q', \psi')}$ is epsilon-strongly $\mathbb{Z}$-graded.
\end{theorem}
\begin{proof}
Assume that $(P',Q',\psi')$ is an $R'$-system such that $\mathcal{O}_{(P',Q',\psi')}$ exists and the assertions in (a)-(d) hold. Then Proposition \ref{prop:epsilon_suff} implies that $\mathcal{O}_{(P',Q',\psi')}$ is epsilon-strongly $\mathbb{Z}$-graded.
Conversely, assume that $\mathcal{O}_{(P,Q,\psi)}$ is epsilon-strongly $\mathbb{Z}$-graded and $ \Ann_{\mathcal{O}_0}(\mathcal{O}_1) \cap (\Ann_{\mathcal{O}_0}(\mathcal{O}_1))^\bot = \{ 0 \}$. Note that, in particular, $\mathcal{O}_{(P,Q,\psi)}$ is nearly epsilon-strongly $\mathbb{Z}$-graded. Hence, by Theorem \ref{thm:1}, $\mathcal{O}_{(P,Q,\psi)} \cong_{\text{gr}} \mathcal{O}_{(\mathcal{O}_{-1}, \mathcal{O}_1, \psi')}$ where $(\mathcal{O}_{-1}, \mathcal{O}_1, \psi')$ is an s-unital $\mathcal{O}_0$-system that satisfies Condition (FS) and such that (b) is satisfied. Furthermore (see Corollary \ref{cor:reduction}),
\begin{equation}
(i_{\mathcal{O}_{-1}}, i_{\mathcal{O}_1}, i_{\mathcal{O}_0}, \mathcal{O}_{(P,Q,\psi)}) \cong_{\text{r}} (\iota_{\mathcal{O}_{-1}}^{CP}, \iota_{\mathcal{O}_1}^{CP}, \iota_{\mathcal{O}_0}^{CP}, \mathcal{O}_{(\mathcal{O}_{-1}, \mathcal{O}_1, \psi')}).
\label{eq:6}
\end{equation}
First note that since the $\mathbb{Z}$-grading is assumed to be epsilon-strong it follows that $\mathcal{O}_i$ is a unital $\mathcal{O}_i \mathcal{O}_{-i} \text{--} \mathcal{O}_{-i} \mathcal{O}_i$-bimodule for each $i \in \mathbb{Z}$ (see Definition \ref{def:nystedt_epsilon}). This implies that $(\mathcal{O}_{-1}, \mathcal{O}_1, \psi')$ is a unital $\mathcal{O}_0$-system. Hence, (a) is satisfied.
Next, we prove that the $\mathcal{O}_0$-system $(\mathcal{O}_{-1}, \mathcal{O}_1, \psi')$ satisfies Condition (FS'). Since $\mathcal{O}_{(P,Q,\psi)}$ is assumed to be epsilon-strongly $\mathbb{Z}$-graded, it follows from \cite[Prop. 7(iv)]{nystedt2016epsilon} that $\mathcal{O}_i$ is a finitely generated $\mathcal{O}_0$-bimodule for every $i \in \mathbb{Z}$. In particular, $\mathcal{O}_1$ and $\mathcal{O}_{-1}$ are finitely generated $\mathcal{O}_0$-bimodules and it follows from Proposition \ref{prop:fsprime_char}(c) that $(\mathcal{O}_{-1}, \mathcal{O}_1, \psi')$ satisfies Condition (FS'). In other words, (c) holds.
Moreover, it follows from Proposition \ref{prop:nearly_char}(b) that, in particular, $\mathcal{O}_{-k} \mathcal{O}_k$ is unital for $k \geq 0$. Hence, $\mathcal{O}_{-k} \mathcal{O}_k = I_{\psi', \iota_{\mathcal{O}_0}^{CP}}^{(k)}$ is unital for $k \geq 0$. This establishes (d).
\end{proof}
For unital strongly $\mathbb{Z}$-graded Cuntz-Pimsner rings, we obtain the following complete characterization:
\begin{theorem}
Let $\mathcal{O}_{(P,Q,\psi)}$ be a Cuntz-Pimsner ring of some system $(P,Q,\psi)$. Then, $\mathcal{O}_{(P,Q,\psi)}$ is unital strongly $\mathbb{Z}$-graded if and only if
$$ \mathcal{O}_{(P,Q,\psi)} \cong_{\text{gr}} \mathcal{O}_{(P', Q', \psi')}$$
where $(P',Q',\psi')$ is an $R'$-system such that $\mathcal{O}_{(P',Q',\psi')}$ is well-defined and the following assertions hold:
\begin{enumerate}[(a)]
\begin{item}
$(P',Q',\psi')$ is a unital $R'$-system;
\end{item}
\begin{item}
$(\iota_{P'}^{CP}, \iota_{Q'}^{CP}, \iota_{R'}^{CP}, \mathcal{O}_{(P',Q',\psi')})$ is a semi-full and faithful covariant representation of $(P',Q',\psi')$;
\end{item}
\begin{item}
$\psi'$ is surjective.
\end{item}
\end{enumerate}
\label{thm:2}
\end{theorem}
\begin{proof}
By Proposition \ref{prop:strong_suff}, (a) and (c) are sufficient for the ring $\mathcal{O}_{(P',Q',\psi')}$ to be strongly $\mathbb{Z}$-graded.
Conversely, assume that $\mathcal{O}_{(P,Q,\psi)}$ is unital strongly $\mathbb{Z}$-graded. In particular, $\mathcal{O}_{(P,Q,\psi)}$ is epsilon-strongly $\mathbb{Z}$-graded. Moreover, $ \Ann_{\mathcal{O}_0}(\mathcal{O}_1) \cap (\Ann_{\mathcal{O}_0}(\mathcal{O}_1))^\bot = \{ 0 \}$ by Lemma \ref{lem:semi-prime}(b). Then, by Theorem \ref{thm:epsilon}, $\mathcal{O}_{(P,Q,\psi)} \cong_{\text{gr}} \mathcal{O}_{(\mathcal{O}_{-1}, \mathcal{O}_1, \psi')}$ where $(\mathcal{O}_{-1}, \mathcal{O}_1, \psi')$ satisfies Condition (FS'), (b) is satisfied and $I_{\psi',\iota_{\mathcal{O}_0}^{CP}}^{(k)}$ is unital for $k \geq 0$. Since $\mathcal{O}_{(\mathcal{O}_{-1}, \mathcal{O}_1, \psi')}$ is unital strongly $\mathbb{Z}$-graded, $$1_{\mathcal{O}_{(\mathcal{O}_{-1}, \mathcal{O}_1, \psi')}} = \iota_{\mathcal{O}_0}^{CP}(1_{\mathcal{O}_0}) \in \mathcal{O}_0 = \mathcal{O}_{-1} \mathcal{O}_1 = I_{\psi', \iota_{\mathcal{O}_0}^{CP}}^{(1)}.$$ Since $\iota_{\mathcal{O}_0}^{CP}$ is injective, we get that $1_{\mathcal{O}_0} \in \text{Im}(\psi')$. Hence, $\psi'$ is surjective.
Furthermore, since $\mathcal{O}_{(\mathcal{O}_{-1}, \mathcal{O}_1, \psi')}$ is an epsilon-strongly $\mathbb{Z}$-graded ring that is also strongly $\mathbb{Z}$-graded, we must have $\epsilon_1 = 1$ (see \cite[Prop. 8]{nystedt2016epsilon}) where $\epsilon_1$ is the multiplicative identity element of the ring $\mathcal{O}_1 \mathcal{O}_{-1}$. By Condition (FS') and Proposition \ref{prop:fsprime_char}(b), we have that $\Delta(1_{\mathcal{O}_0}) \in \mathcal{F}_P(Q)$. Then, by Proposition \ref{prop:pi}, $\pi_{\iota_{\mathcal{O}_{1}}^{CP}, \iota_{\mathcal{O}_{-1}}^{CP}}(\Delta(1_{\mathcal{O}_0})) \in \mathcal{O}_1 \mathcal{O}_{-1}$ is a multiplicative identity element of $\mathcal{O}_1 \mathcal{O}_{-1}$. Thus, $\pi_{\iota_{\mathcal{O}_{1}}^{CP}, \iota_{\mathcal{O}_{-1}}^{CP}}(\Delta(1_{\mathcal{O}_0}))= \epsilon_1 = 1$ and therefore $(\iota_{\mathcal{O}_{-1}}^{CP}, \iota_{\mathcal{O}_{1}}^{CP}, \iota_{\mathcal{O}_0}^{CP}, \mathcal{O}_{(\mathcal{O}_{-1}, \mathcal{O}_1, \psi')})$ is a faithful representation of $(\mathcal{O}_{-1}, \mathcal{O}_1, \psi')$.
\end{proof}
\section{Examples}
\label{sec:ex}
In this section, we collect some important examples.
\begin{example}(Non-nearly epsilon-strongly $\mathbb{Z}$-graded Cuntz-Pimsner ring)
Let $R$ be an idempotent ring that is not s-unital (see e.g. \cite[Expl. 5]{nystedt2018unital}). Put $P=Q=\{ 0 \}$ and let $\psi \equiv 0$ be the zero map. Note that $(P,Q,\psi)$ is an $R$-system that satisfies Condition (FS') trivially. It is not hard to see that the Toeplitz ring is given by $\mathcal{T}_0 = R$, and $\mathcal{T}_{i} = \{ 0 \}$ for all $i \ne 0$. Furthermore, note that $\ker \Delta = R$. Recall that an ideal $J$ of $R$ is called faithful if $J \cap \ker \Delta = \{ 0 \}$. Clearly, $J := (0)$ is the maximal faithful and $\psi$-compatible ideal of $R$. It follows that the Cuntz-Pimsner ring $\mathcal{O}_{(P,Q,\psi)}$ is well-defined and coincides with the Toeplitz ring. Since $\mathcal{T}_0 = R = R^2 = \mathcal{T}_0 \mathcal{T}_0$ is not s-unital it follows by Proposition \ref{prop:nearly_char}(a) that the Cuntz-Pimsner ring $\mathcal{O}_{(P,Q,\psi)}=\mathcal{T}_{(P,Q,\psi)}$ is not nearly epsilon-strongly $\mathbb{Z}$-graded. This shows that the assumption of $(P,Q,\psi)$ being an s-unital system in Proposition \ref{prop:nearly_epsilon_suff} cannot be removed.
\label{ex:2}
\end{example}
The following example shows that for some graphs, the standard Leavitt path algebra covariant representation is semi-full (see Section \ref{sec:lpa}).
\begin{example}
\begin{displaymath}
\xymatrix{
\bullet_{v_1} \ar[r]^{f_1} & \bullet_{v_2}
}
\end{displaymath}
Let $K$ be a unital ring and let $E$ consist of two vertices $v_1, v_2$ connected by a single edge $f_1$. Consider the associated standard Leavitt path algebra system $(P,Q,\psi)$ and the standard Leavitt path algebra covariant representation $(\iota_Q^{CP}, \iota_P^{CP}, \iota_R^{CP}, \mathcal{O}_{(P,Q,\psi)})$. To save space we write $I_k = I_{\psi, \iota_R^{CP}}^{(k)}$ for $k \geq 0$. Note that $I_0 = (\{ v_1, v_2 \})$, $ I_1 = ( v_2 ) $ and $ I_k = (0)$ for $k \geq 2$.
Furthermore, since $f_1 f_1^* = v_1$ we see that $(L_K(E))_0 = I_0$.
Moreover, note that $(L_K(E))_1 = \Span_K \{ f_1 \},$ $ (L_K(E))_{-1} = \Span_K \{ f_1^* \}$ and hence we see that $(L_K(E))_{-1} (L_K(E))_1 = (v_2) = I_1$. Thus, $(\iota_Q^{CP}, \iota_P^{CP}, \iota_R^{CP}, \mathcal{O}_{(P,Q,\psi)})$ is a semi-full covariant representation of $(P,Q,\psi)$. Furthermore, $(P,Q,\psi)$ satisfies Condition (FS') since $E$ is finite (see Lemma \ref{lem:finitely_many_edges}) and $I_k$ is unital for $k \geq 0$. Thus, $L_K(E)$ is epsilon-strongly $\mathbb{Z}$-graded by Theorem \ref{thm:epsilon}.
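Explicitly, the above computation only uses the Cuntz--Krieger relations for $E$, namely $f_1^* f_1 = v_2$ and $v_1 = f_1 f_1^*$:
\begin{displaymath}
(L_K(E))_{-1} (L_K(E))_1 = \Span_K \{ f_1^* f_1 \} = \Span_K \{ v_2 \} = (v_2) = I_1.
\end{displaymath}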
\label{ex:3}
\end{example}
In general, however, it is not true that the standard Leavitt path algebra covariant representation is semi-full as the following example shows.
\begin{example}(cf. \cite[Expl. 4.1]{nystedt2017epsilon})
Let $K$ be a unital ring and consider the following finite directed graph $E$.
\begin{displaymath}
\xymatrix{
\bullet_{v_1} & \ar[l]_{f_1} \bullet_{v_2} \ar[r]^{f_2} & \bullet_{v_3} & \ar[l]_{f_3} \bullet_{v_4} & \ar[l]_{f_4} \bullet_{v_5}
}
\end{displaymath}
\noindent
Let $(P,Q,\psi)$ be the standard Leavitt path algebra system associated to $E$ and consider the standard Leavitt path algebra covariant representation,
\begin{equation}
(\iota_Q^{CP}, \iota_P^{CP}, \iota_R^{CP}, \mathcal{O}_{(P,Q,\psi)}).
\label{eq:3}
\end{equation}
We write $S_i= (L_K(E))_i$ and $I_i = I_{\psi, \iota_R^{CP}}^{(i)}$ to save space. Note that,
\begin{align*}
&S_0 = \Span_K \{ v_1, v_2, v_3, v_4, v_5, f_1f_1^*, f_2f_3^*, f_2f_2^*, f_3f_2^* \}, \\
&S_1 = \Span_K \{ f_1, f_2, f_3, f_4, f_4f_3f_2^* \}, \quad \quad
S_{-1} = \Span_K \{ f_1^*, f_2^*, f_3^*, f_4^*, f_2f_3^*f_4^* \}, \\
&S_2 = \Span_K \{ f_4f_3 \}, \quad \quad S_{-2} = \Span_K \{ f_3^*f_4^* \}, \quad \text{and} \quad S_{n} = \{0\}, \text{ for } |n|>2.
\end{align*}
\noindent
Furthermore,
\begin{align*}
&I_0 = ( \{ v_1, v_2, v_3, v_4, v_5 \}),
&I_1 = ( \{ v_1, v_3, v_4 \}), \\
&I_2 = ( \{ v_3 \} ),
&I_k = ( 0 ), \quad k > 2.
\end{align*}
In particular, we have that $S_{-1} S_1 = (\{ v_1, v_3, v_4, f_2 f_2^* \}) \supsetneqq I_1$ because $f_2 f_2^* \not \in I_1$; indeed, $f_2 f_2^* = (f_2 f_3^* f_4^*)(f_4 f_3 f_2^*) \in S_{-1} S_1$, since $f_4^* f_4 = v_4$ and $f_3^* f_3 = v_3$. Hence, the standard Leavitt path algebra covariant representation is not semi-full. Nevertheless, we have that $\mathcal{O}_{(P,Q,\psi)} \cong_{\text{gr}} L_K(E)$ (see Section \ref{sec:lpa}). On the other hand, by Proposition \ref{prop:lpa-cp}, we have that $L_K(E)$ is pre-CP. Thus, by Corollary \ref{cor:nearly_cuntz}, $L_K(E)$ is realized by the Cuntz-Pimsner representation,
\begin{equation}
(\iota_{(L_K(E))_{-1}}^{CP}, \iota_{(L_K(E))_1}^{CP}, \iota_{(L_K(E))_0}^{CP}, \mathcal{O}_{((L_K(E))_{-1}, (L_K(E))_{1}, \psi')})
\label{eq:4}
\end{equation} of the $(L_K(E))_0$-system $((L_K(E))_{-1}, (L_K(E))_{1}, \psi')$. Moreover, the corollary implies that (\ref{eq:4}) is semi-full and $\mathcal{O}_{((L_K(E))_{-1}, (L_K(E))_{1}, \psi')} \cong_{\text{gr}} L_K(E)$. Since (\ref{eq:3}) is not semi-full and (\ref{eq:4}) is semi-full, it follows by Proposition \ref{prop:semi-full_iso} that the covariant representations (\ref{eq:3}) and (\ref{eq:4}) cannot be isomorphic. Thus, $L_K(E)$ is realizable as a Cuntz-Pimsner ring in two different ways.
\label{ex:1}
\end{example}
The following example shows that (a) is crucial in Theorem \ref{thm:epsilon}. It also gives an example of a nearly epsilon-strongly $\mathbb{Z}$-graded ring that is not epsilon-strongly $\mathbb{Z}$-graded.
\begin{example}(cf. \cite[Expl. 4.5]{lannstrom2018induced})
Let $K$ be a unital ring and consider the infinite discrete graph $E$ consisting of countably infinitely many vertices but no edges.
\begin{displaymath}
\xymatrix{
\bullet_{v_1} \qquad \bullet_{v_2} \qquad \bullet_{v_3} \qquad \bullet_{v_4} \qquad \bullet_{v_5} \qquad \bullet_{v_6} \qquad \bullet_{v_7} \qquad \bullet_{v_8} \qquad \bullet_{v_9} \qquad \bullet_{v_{10}} \qquad \dots
}
\end{displaymath}
The standard Leavitt path algebra system is given by $R=\bigoplus_{v \in E^0} \eta_{v}$, $P=Q=\{ 0 \}$ and $\psi \equiv 0$. The $R$-system $(P,Q,\psi)$ trivially satisfies Condition (FS'). However, $(P,Q,\psi)$ is not unital as $R$ does not have a multiplicative identity element. Note, however, that $(P,Q,\psi)$ is s-unital.
We show that the standard Leavitt path algebra covariant representation of $E$ is semi-full. Since $P=Q=\{0 \}$ and $\psi =0$ it follows that the grading is given by $\mathcal{O}_0 = R$ and $\mathcal{O}_i = \{ 0 \}$ for $i \ne 0$ (see Example \ref{ex:2}). Furthermore, $I_{\psi, \iota_R^{CP}}^{(k)} = ( 0 )$ for $k > 0$. Thus, the standard Leavitt path algebra covariant representation satisfies (b)-(d) in Theorem \ref{thm:epsilon} but not (a). Since $E$ contains infinitely many vertices, $L_K(E)$ is not unital (see \cite[Lem. 1.2.12]{abrams2017leavitt}). By Remark \ref{rem:unital_epsilon}, $L_K(E)$ is not epsilon-strongly $\mathbb{Z}$-graded (cf. \cite[Expl. 4.5]{lannstrom2018induced}). Thus, (a) in Theorem \ref{thm:epsilon} cannot be removed. On the other hand, it follows from Theorem \ref{thm:1} that $L_K(E)$ is nearly epsilon-strongly $\mathbb{Z}$-graded.
\end{example}
\section{Noetherian and artinian corner skew Laurent polynomial rings}
\label{sec:app}
We end this article by characterizing noetherian and artinian corner skew Laurent polynomial rings. The following proposition can be proved in a straightforward manner using direct methods, but here we obtain it as a special case of our results.
\begin{proposition}
Let $R$ be a unital ring, let $e \in R$ be an idempotent and let $\alpha \colon R \to eRe$ be a corner ring isomorphism. Then the corner skew Laurent polynomial ring $R[t_{+}, t_{-}; \alpha]$ is epsilon-strongly $\mathbb{Z}$-graded.
\label{prop:corner_skew_epsilon}
\end{proposition}
\begin{proof}
Recall that $R[t_{+}, t_{-}; \alpha] = \bigoplus_{i \in \mathbb{Z}} A_i$ is $\mathbb{Z}$-graded by putting $A_0 = R $, $A_i = R t_{+}^{-i}$ for $i < 0$ and $A_i = t_{-}^i R$ for $i > 0$. Let $\psi' \colon A_{-1} \otimes A_1 \to A_0$ be the map defined by $\psi'(a' \otimes a) = a' a$ for $a' \in A_{-1}$ and $a \in A_1$. Since $A_0=R$ is a unital ring, the $A_0$-system $(A_{-1},A_1, \psi')$ is unital. In \cite[Expl. 3.4]{2018arXiv180810114O}, it is shown that $R[t_{+}, t_{-}; \alpha]$ satisfies the conditions in Theorem \ref{thm:clark}. This implies that $(A_{-1}, A_1, \psi')$ satisfies Condition (FS) and,
\begin{equation}
(i_{A_{-1}}, i_{A_1}, i_{A_{0}}, R[t_{+}, t_{-}; \alpha]) \cong_{\text{r}} (\iota_{A_{-1}}^{CP}, \iota_{A_1}^{CP}, \iota_{A_0}^{CP}, \mathcal{O}_{(A_{-1}, A_1, \psi')}).
\label{eq:7}
\end{equation}
Note that $A_1 = t_{-} R$ is finitely generated as a right $A_0$-module and $A_{-1} = R t_{+}$ is finitely generated as a left $A_0$-module. It follows from Proposition \ref{prop:fsprime_char} that $(A_{-1}, A_1, \psi')$ satisfies Condition (FS'). Furthermore, by Proposition \ref{rem:5}, the covariant representation (\ref{eq:7}) is semi-full.
Next, we show that $I_{\psi', \iota_{R}^{CP}}^{(k)} = A_{-k} A_k$ is unital with multiplicative identity element $t_{+}^k t_{-}^k = i(e) \in A_{-k} A_k$ for each $k > 0$. Fix a positive integer $k$ and note that any element $x \in A_{-k}A_k=(R t_{+}^k)( t_{-}^k R)$ is a finite sum of elements of the form $r t_{+}^k t_{-}^k r' = t_{+}^k \alpha^{-k}(r) t_{-}^k r' = r t_{+}^k \alpha^{-k}(r') t_{-}^k$ where $r, r' \in R$. For any $r,r'\in R$, we get that, $$i(e) r t_{+}^k t_{-}^k r' = (t_{+}^k t_{-}^k) (t_{+}^k \alpha^{-k}(r) t_{-}^k r') = t_{+}^k (t_{-}^k t_{+}^k) \alpha^{-k}(r) t_{-}^k r' = t_{+}^k (1) \alpha^{-k}(r) t_{-}^k r' = r t_{+}^k t_{-}^k r'.$$ It follows that $i(e) x = x$. A similar argument shows that $x i(e) = x$. By Theorem \ref{thm:epsilon}, it now follows that $R[t_{+}, t_{-}; \alpha]$ is epsilon-strongly $\mathbb{Z}$-graded.
\end{proof}
We recall the following Hilbert basis theorem for epsilon-strongly $\mathbb{Z}$-graded rings.
\begin{theorem}(\cite[Thm. 1.1, Thm. 1.2]{lannstrom2018chain})
Let $S=\bigoplus_{i \in \mathbb{Z}} S_i$ be an epsilon-strongly $\mathbb{Z}$-graded ring. The following assertions hold:
\begin{enumerate}[(a)]
\begin{item}
If $S_0$ is left (right) noetherian, then $S$ is left (right) noetherian;
\end{item}
\begin{item}
If $S_0$ is left (right) artinian and there exists some positive integer $n$ such that $S_i = \{ 0 \}$ for all $|i| > n$, then $S$ is left (right) artinian.
\end{item}
\end{enumerate}
\label{thm:hilbert}
\end{theorem}
Applying Theorem \ref{thm:hilbert} to the special case of corner skew Laurent polynomial rings, we obtain the following result.
\begin{corollary}
Let $R$ be a unital ring and let $\alpha \colon R \to eRe$ be a ring isomorphism where $e$ is an idempotent of $R$. Consider the corner skew Laurent polynomial ring $R[t_{+}, t_{-}; \alpha]$. The following assertions hold:
\begin{enumerate}[(a)]
\begin{item}
$R[t_{+}, t_{-}; \alpha]$ is left (right) noetherian if and only if $R$ is left (right) noetherian;
\end{item}
\begin{item}
$R[t_{+}, t_{-}; \alpha]$ is neither left nor right artinian.
\end{item}
\end{enumerate}
\label{cor:artinian}
\end{corollary}
\begin{proof}
(a): Straightforward.
(b): By Proposition \ref{prop:corner_skew_epsilon}, $R[t_{+}, t_{-}; \alpha] = \bigoplus_{i \in \mathbb{Z}} A_i$ is epsilon-strongly $\mathbb{Z}$-graded where $A_0 = R $, $A_i = R t_{+}^{-i}$ for $i < 0$ and $A_i = t_{-}^i R$ for $i > 0$. By the characterization in \cite[Thm. 1.1, Thm. 1.2]{lannstrom2018chain} (cf. Theorem \ref{thm:hilbert}), $R[t_{+}, t_{-}; \alpha]$ is left (right) artinian if and only if $A_0$ is left (right) artinian and $|\text{Supp}(R[t_{+}, t_{-}; \alpha])| < \infty$. However, since $t_{+}^n \ne 0$ for every $n > 0$, it follows that $A_{-n} = R t_{+}^n \ne \{ 0 \}$ for every $n > 0$. Hence, $\text{Supp}(R[t_{+}, t_{-}; \alpha])$ is infinite and $R[t_{+}, t_{-}; \alpha]$ is neither left nor right artinian.
\end{proof}
\section*{Acknowledgements}
This research was partially supported by the Crafoord Foundation (grant no. 20170843). The author is grateful to Eduard Ortega for pointing out the characterization in Proposition \ref{prop:fsprime_char}(c). The author is also grateful to Stefan Wagner, Johan Öinert and Patrik Nystedt for giving feedback and comments that helped to improve this manuscript.
\printbibliography
\end{document}
\section{Introduction}\label{sec:intro}
Dust is an intrinsic component of the interstellar medium (ISM) and plays important roles in astrophysics. Dust grains absorb and scatter starlight, and infrared emission from heated dust grains is a powerful probe of star and planet formation. The photoelectric effect from small dust grains is essential for the heating and cooling of molecular gas, and grain surfaces are catalytic sites for molecule formation (see \citealt{Draine:2011}).
The polarization of starlight (\citealt{1949Sci...109..166H}; \citealt{1949Natur.163..283H}) and polarized thermal emission (\citealt{1989IAUS..135..275H}) due to the alignment of dust grains with ambient magnetic fields allow us to map magnetic fields in various environment conditions, from the diffuse medium to molecular clouds to circumstellar regions (see \citealt{2003ApJ...598..392L}; \citealt{2012ARA&A..50...29C}; \citealt{2015ASSL...407.59}). Moreover, polarized thermal emission from aligned grains is a major foreground contamination of cosmic microwave background (CMB) that must be separated to accurately measure the CMB B-modes (\citealt{2003NewAR..47.1107L}). It is now established that an accurate model of dust polarization spectrum is required for the precise detection of B-modes (\citealt{2016A&A...586A.141P}). Such an accurate model of dust polarization depends on dust physical properties (size, shape, and composition), grain alignment with the magnetic fields, and the gas density and magnetic field structures.
The question of how dust grains become aligned with the magnetic field is a longstanding problem in astrophysics (see \citealt{2007JQSRT.106..225L} for a review). After seven decades of research, RAdiative Torque (RAT) alignment has become the leading theory to explain grain alignment (see \citealt{2015ARA&A..53..501A} for a recent review). The idea of RATs was first introduced by \cite{1976Ap&SS..43..257D}, who quantified them based on the differential scattering and absorption of left- and right-handed photons (i.e., photon angular momentum). Numerical calculations for several realistically irregular shapes were carried out by \cite{1996ApJ...470..551D} and \cite{1997ApJ...480..633D}. An analytical model that provides physical insight into RATs and RAT alignment was later formulated by \cite{2007MNRAS.378..910L}, where RATs were quantified based on the transfer of photon momentum to a helical grain. Extended numerical calculations of RAT alignment for different environments were carried out in \cite{2008MNRAS.388..117H}, \cite{2009ApJ...695.1457H}, and \cite{2014MNRAS.438..680H}. A unified theory of grain alignment for grains with iron inclusions was introduced in \cite{2016ApJ...831..159H}. Recently, numerical calculations of RATs for a large ensemble of grain shapes were presented by \cite{2019ApJ...878...96H}.
One of the key predictions of RAT theory is that the degree of grain alignment depends on the local conditions, including the radiation field and gas properties (density and temperature). As a result, toward the center of a dense molecular cloud with low radiation intensity, only large grains can be aligned by attenuated interstellar photons (\citealt{2005ApJ...631..361C}). Consequently, the peak wavelength of starlight polarization would increase with increasing visual extinction $A_{V}$ \citep{2015MNRAS.448.1178H}. This prediction was supported by the observational data of \cite{2008ApJ...674..304W}. The angle-dependence of RAT alignment was then successfully tested by observations of starlight polarization by \cite{2011A&A...534A..19A}. Submm/FIR polarization of starless cores also reveals the existence of a polarization hole (\citealt{2014A&A...569L...1A}; \citealt{2015ApJ...149.31J}), which is expected from the RAT theory. In the other regime of strong radiation sources, the RAT theory predicts an increased alignment of grains when the radiation strength increases, and the peak wavelength is shifted to smaller values. Such a prediction is consistent with observations toward type Ia supernovae (SNe Ia; \citealt{2017ApJ...836...13H}; \citealt{Giangetal:2019}). Therefore, the polarization degree of polarized emission is expected to increase with increasing radiation intensity according to the classical picture of the RAT theory (\citealt{2007MNRAS.378..910L}).
In addition to grain alignment, the grain size distribution is required for modeling the dust emission and polarization spectrum. The grain size distribution is expected to evolve from the ISM to dense molecular clouds due to various physical effects. For instance, grains can be destroyed via grain shattering, and thermal and non-thermal sputtering in interstellar shocks (\citealt{1994ApJ...431..321T}; \citealt{1996ApJ...469..740J}). On the other hand, grains can grow in dense molecular clouds due to the accretion of gas species onto the grain surface as well as coagulation due to grain-grain collisions (see \citealt{2018ApJ...857...94Z} and references therein). In the diffuse medium, grain shattering induced by grain acceleration by magnetohydrodynamic (MHD) turbulence (\citealt{2004ApJ...616..895Y}; \citealt{2012ApJ...747...54H}) is thought to determine the upper cutoff of the grain size distribution (\citealt{2009MNRAS.394.1061H}). However, a new physical mechanism, so-called RAdiative Torque Disruption (RATD), that dominates the upper cutoff of the grain size distribution, was recently discovered by \cite{2019NatAs...3..766H}. The RATD mechanism is based on the fact that suprathermally rotating grains spun up by RATs develop a centrifugal stress that can exceed the maximum tensile strength of grain materials, resulting in the instantaneous disruption of a large grain into small fragments. Since RATs are stronger for larger grains (\citealt{2007MNRAS.378..910L}; \citealt{2008MNRAS.388..117H}), RATD is more efficient for large grains than small ones. As shown in \cite{2019ApJ...876...13H}, the RATD mechanism is much faster than grain shattering and thus determines the upper cutoff of the grain size distribution in the ISM.
According to the RATD mechanism, the upper cutoff of the grain size distribution is determined by the tensile strength, which depends on the grain internal structure (i.e., compact vs. composite structures; \citealt{2019ApJ...876...13H}). Unfortunately, the grain structure is one of the least constrained dust physical properties. In principle, one can constrain the internal structure with observational data if the variation of the polarization with the tensile strength is theoretically predicted \citep{2019ApJ...876...13H}. Therefore, the main goal of this paper is to perform detailed modeling of multi-wavelength dust polarization, from the optical/UV to the FIR/submm, for different local radiation intensities and grain tensile strengths, simultaneously taking into account the alignment and rotational disruption of grains by RATs.
Full-sky polarization data from {\it Planck} have provided invaluable information about dust properties, grain alignment, and magnetic fields. A high polarization degree observed from the diffuse and translucent clouds by {\it Planck} reveals that dust grains must be efficiently (perfectly) aligned, which is consistent with a unified alignment theory of grains with iron inclusions \citep{2016ApJ...831..159H}. However, a detailed analysis of the polarization data for various clouds by \cite{2018arXiv180706212P} shows that the polarization degree at $\lambda=850\mum$ ($P_{850}$) does not always increase with grain temperature ($T_{d}$) as expected from the classical RAT theory.
Instead, the polarization degree decreases with the grain temperature for $T_{d}\gtrsim 19\,{\rm K}$. This observed feature was thought to be a challenge to the classical picture of RAT alignment theory. However, as we will show in this paper, this unexpected trend provides a valuable constraint on the tensile strength, and hence on the internal structure, of interstellar grains.
The paper is organized as follows. In Section \ref{sec:method}, we describe the RAT alignment and RATD mechanisms and the theoretical models to be used for modeling. In Sections \ref{sec:Pabs} and \ref{sec:Pem}, we calculate the alignment size and disruption size for the different radiation fields, and present our modeling results for the polarization of starlight and polarized emission. In Section \ref{sec:discussion} we discuss the important implications of our study, focusing on the first constraints on the grain internal structure from {\it Planck} data and on the understanding of the anomalous polarization of type Ia supernovae. A summary of our main findings is presented in Section \ref{sec:summary}.
\section{Grain alignment and grain disruption by Radiative Torques}\label{sec:method}
In this section, we briefly review the theory of grain alignment and rotational disruption by RATs.
\subsection{Grain Alignment by RATs}\label{sec:Align}
\subsubsection{Critical size of aligned grains}
Let $u_{\lambda}$ be the spectral energy density of some radiation field. The total energy density is $u_{\rm rad}=\int u_{\lambda}d\lambda$. For the average interstellar radiation field (ISRF) in the solar neighborhood from \cite{1983AA...128..212}, one obtains the energy density $u_{\rm ISRF}=8.64\times 10^{-13}$erg cm$^{-3}$ and the mean wavelength $\bar{\lambda}=1.2\mum$ (\citealt{1997ApJ...480..633D}). Assuming that the radiation spectrum $u_{\lambda}$ is the same as the ISRF, one can describe the radiation energy density at a given location in the ISM by a dimensionless parameter $U=u_{\rm {rad}}/u_{\rm ISRF}$, which is referred to as {\it radiation strength}.
To account for the variation of the local radiation intensity in the ISM, we will consider a wide range of the radiation strength for both the diffuse ISM and a molecular cloud illuminated by a nearby star as depicted in Figure \ref{fig:GMC}. We assume that a line of sight close to the star probes grains exposed to an averaged radiation field of strength $U=5000$, as illustrated in Figure \ref{fig:GMC}. Other lines of sight more distant from the star probe grains irradiated by weaker radiation fields. Note that the upper value of $U$ is chosen arbitrarily, but it is perhaps typical for photodissociation regions (PDRs).
\begin{figure}
\centering
\includegraphics[scale=0.4]{results/fig1.pdf}
\caption{Schematic illustration of a molecular cloud irradiated by a central star. Different lines of sight probe different average radiation fields, characterized by radiation strengths $U$ spanning from $5000$ to $1$.}
\label{fig:GMC}
\end{figure}
Let $a$ be the effective size of the irregular grain which is defined as the radius of the equivalent sphere with the same volume as the irregular grain. According to the RAT alignment theory, grains are efficiently aligned when they can be spun-up to suprathermal rotation by an anisotropic radiation field.
The radiative torque induced by the interaction of the anisotropic radiation field with the irregular grain is defined by
\begin{equation}
\Gamma_{\rm RAT} = \pi a^2\gamma u_{\rm rad}\left(\frac{\lambda}{2\pi}\right)Q_{\Gamma},
\label{eq:RATs}
\end{equation}
where $\gamma$ is the anisotropy degree of the radiation field, and $Q_{\Gamma}$ is the RAT efficiency. We adopt $\gamma=0.1$ for the diffuse medium and $\gamma=0.7$ for MCs (\citealt{1996ApJ...470..551D}).
Following \cite{2007MNRAS.378..910L}, the RAT efficiency can be approximately described by two power laws:
\begin{eqnarray}
Q_{\Gamma}\approx 0.4\left(\frac{\lambda}{1.8a}\right)^{-\eta},
\label{eq:Qgam}
\end{eqnarray}
where $\eta=0$ for $\lambda/a<1.8$ and $\eta=3$ for $\lambda/a\gtrsim 1.8$. This scaling was obtained by approximating numerical calculations with Equation (\ref{eq:Qgam}) for different grain compositions and grain shapes (\citealt{2007MNRAS.378..910L}). A slightly shallower slope is obtained from numerical calculations for an extended ensemble of grain shapes by \cite{2019ApJ...878...96H}.
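For concreteness, the piecewise scaling of Equation (\ref{eq:Qgam}) can be evaluated with a few lines of Python (a minimal sketch; $\lambda$ and $a$ must be given in the same units):
\begin{verbatim}
# Piecewise RAT efficiency, Eq. (eq:Qgam): flat for lambda/a < 1.8,
# declining as (lambda/(1.8 a))^-3 at longer wavelengths.
def Q_gamma(lam, a):
    if lam / a < 1.8:
        return 0.4
    return 0.4 * (lam / (1.8 * a)) ** (-3.0)

print(Q_gamma(0.1, 0.1))   # 0.4 (geometric-optics regime)
print(Q_gamma(1.2, 0.1))   # ~1.4e-3 (long-wavelength regime)
\end{verbatim}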
The grain rotation is damped by collisions with gas species (atoms and molecules) followed by their evaporation from the grain surface, and by the emission of IR photons following the absorption of starlight. Let us define the ratio of the rotational gas damping to IR damping times as $\tau_{\rm gas}/\tau_{\rm em}\equiv F_{\rm IR}$. By plugging $\Gamma_{\rm RAT}$ (Eq. \ref{eq:RATs}) into Equation (\ref{eq:omega_rad}), we can obtain the maximum angular velocity spun up by RATs:
\begin{eqnarray}
\frac{\omega_{\rm RAT}}{\omega_{T}} &\simeq& 48.7\hat{\rho}a_{-5}^{3.2} U^{2.4}\left(\frac{\gamma}{0.1}\right) \left(\frac{30\,{\rm {cm}}^{-3}}{n_{\rm H}}\right)\left(\frac{\overline{\lambda}}{1.2\mum}\right)^{-1.7}
\nonumber\\
&&\times \left(\frac{100\,{\rm K}}{T_{\rm gas}}\right) \left(\frac{1}{1+F_{\rm IR}}\right),
\label{eq:wRAT}
\end{eqnarray}
where $a_{-5}\equiv a/(10^{-5}\,{\rm {cm}})$, $\hat{\rho}=\rho/(3\,{\rm g}\,{\rm {cm}}^{-3})$ with $\rho$ the grain mass density, and $F_{\rm IR}$ is the dimensionless coefficient of rotational damping by IR emission, given by (see Appendix \ref{sec:damping})
\begin{equation}
F_{\rm IR} \simeq 0.4\left(\frac{U^{2/3}}{a_{-5}}\right)
\left(\frac{30\,{\rm {cm}}^{-3}}{n_{\rm H}}\right)\left(\frac{100\,{\rm K}}{T_{\rm gas}}\right)^{1/2},
\end{equation}
and the thermal rotation rate $\omega_{\rm T}$ is given by
\begin{equation}
\begin{split}
\omega_{\rm T}&= \left(\frac{kT_{\rm {gas}}}{I}\right)^{1/2}=\left(\frac{15kT_{\rm {gas}}}{8\pi\alpha_1\rho a^5}\right)^{1/2}\nonumber\\
&\simeq 1.6\times 10^{5}\,T_{2}^{1/2}a_{-5}^{-5/2} \alpha_{1}^{-1/2}\,{\rm rad}\,{\rm s}^{-1},
\end{split}
\label{eq:wT}
\end{equation}
where $T_{2}=T_{\rm {gas}}/100\,{\rm K}$, assuming the rotational kinetic energy of a grain around one axis is equal to $kT_{\rm {gas}}/2$. For simplicity, we assume $\alpha_{1}=1$ throughout the paper. For the above analytical estimates, the RAT efficiency averaged over the radiation spectrum, $\overline{Q}_{\Gamma}\approx 2(\bar{\lambda}/a)^{-2.7}$ for $a<\bar{\lambda}/1.8$, has been used (see Eq. 68 in \citealt{2014ApJ...790....6H}).
Let $a_{\rm align}$ be the critical size above which grains can be driven to suprathermal rotation, defined by $\omega_{\rm RAT}/\omega_{T}=3$. Above this limit, the degree of grain alignment starts to rise, and eventually grains achieve perfect alignment if high-J attractors are present (\citealt{2016ApJ...831..159H}). From Equation (\ref{eq:wRAT}) and the suprathermal rotation criterion, we can calculate the critical size of aligned grains for various values of $U$. This alignment size depends on the gas density and temperature and on the intensity of the radiation field. Representative results for a few values of $U$ with two different tensile strengths are listed in Table \ref{tab:DustSize}. For the typical ISM, $U=1$, one has $a_{\rm align} \sim 0.057\mum$, and $a_{\rm align}$ becomes smaller for higher $U$.
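To illustrate how $a_{\rm align}$ is obtained, the following Python sketch solves $\omega_{\rm RAT}/\omega_{T}=3$ by bisection using the analytic scalings of Equation (\ref{eq:wRAT}) and the expression for $F_{\rm IR}$. Note that the entries of Table \ref{tab:DustSize} come from full numerical calculations over the radiation spectrum, so this analytic estimate agrees with them only approximately:
\begin{verbatim}
# Solve omega_RAT/omega_T = 3 (suprathermal criterion) for a_align,
# using the analytic scaling of Eq. (eq:wRAT); a5 = a/(0.1 micron).
def F_IR(a5, U, nH=30.0, Tgas=100.0):
    return 0.4 * (U**(2.0/3.0) / a5) * (30.0/nH) * (100.0/Tgas)**0.5

def omega_ratio(a5, U, gamma=0.1, nH=30.0, Tgas=100.0, lam=1.2, rho=1.0):
    return (48.7 * rho * a5**3.2 * U**2.4 * (gamma/0.1) * (30.0/nH)
            * (lam/1.2)**(-1.7) * (100.0/Tgas)
            / (1.0 + F_IR(a5, U, nH, Tgas)))

def a_align(U, lo=1e-3, hi=10.0):
    # omega_ratio grows monotonically with a5, so bisection suffices.
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if omega_ratio(mid, U) < 3.0 else (lo, mid)
    return 0.1 * hi   # in micron

print(a_align(1.0))   # ~0.05 micron for the standard ISRF
\end{verbatim}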
\begin{figure}
\includegraphics[scale=0.45]{results/fig2.pdf}
\caption{Alignment function $f(a)$ obtained for various radiation strengths $U$. The alignment function is broader for higher $U$ due to the decrease of the alignment size $a_{\rm align}$ with increasing $U$.}
\label{fig:Afunc}
\end{figure}
\begin{table}[]
\centering
\begin{threeparttable}
\caption{Physical parameters for the diffuse ISM and MC}
\label{tab:condition}
\begin{tabular}{l|cc}
\toprule
Parameters & Diffuse ISM & MC \\
\hline
$n_{\rm H}$ (cm$^{-3}$) & 30 & $10^{4}$ \\
$T_{\rm {gas}}$ (K) & 100 & 20 \\
$\rho$ (g cm$^{-3}$) & 3 & 3 \\
$\gamma$ & 0.1 & 0.7 \\
$u_{\rm rad}$ (erg cm$^{-3}$) & $8.64\times 10^{-13}$ & Varied \\
$\bar{\lambda}$ ($\mu$m) & 1.2 & Varied \\
\hline
\end{tabular}
\end{threeparttable}
\end{table}
\subsubsection{Grain alignment function}
Numerical simulations in \cite{2016ApJ...831..159H} show that if the RAT alignment has a high-J attractor point, then grains can be perfectly aligned once they are spun up to suprathermal rotation. For grains without iron inclusions (i.e., ordinary paramagnetic material), high-J attractors are only present for a limited range of radiation directions that depends on the grain shape. This range is increased if grains have an enhanced magnetic susceptibility due to embedded iron clusters (\citealt{2008ApJ...676L..25L}; \citealt{2016ApJ...831..159H}).
For grains smaller than $a_{\rm align}$, numerical simulations show that the alignment degree is rather small even in the presence of iron inclusions because grains rotate subthermally \citep{2016ApJ...831..159H}. Thus, to describe the size-dependence of grain alignment degree, we adopt an alignment function
\begin{equation}
f(a)=f_{\rm min} + (f_{\rm max}-f_{\rm min})\left[1-\exp\left(-\left(\frac{0.5a}{a_{\rm align}}\right)^{3}\right)\right],
\end{equation}
where we fix $f_{\rm max} = 1.0$, corresponding to perfect alignment of large grains. The lower value $f_{\rm min}$, chosen to be $10^{-3}$, describes the alignment of small grains of $a<a_{\rm align}$, so that their contribution to the total polarization is negligible.
The above alignment function approximately agrees with the results obtained from inverse modeling of polarization data from \cite{2009ApJ...696....1D} and \cite{2014ApJ...790....6H}. Therefore, it is appropriate to use this alignment function for modeling dust polarization.
Figure \ref{fig:Afunc} shows the alignment function calculated for different radiation strengths. One can see that a stronger radiation field can align smaller grains, shifting the alignment function toward smaller sizes. In other words, the range of aligned grain sizes becomes broader for higher radiation intensity.
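A minimal numerical sketch of the adopted alignment function (with $f_{\rm min}=10^{-3}$ and $f_{\rm max}=1$ as above; the $a_{\rm align}$ values are taken from Table \ref{tab:DustSize}):
\begin{verbatim}
import numpy as np

# Alignment function f(a); a and a_align in the same units (micron).
def f_align(a, a_align, f_min=1e-3, f_max=1.0):
    return f_min + (f_max - f_min) * (1.0 - np.exp(-(0.5*a/a_align)**3))

a = np.logspace(-2, 0, 5)               # 0.01 - 1 micron
print(f_align(a, a_align=0.057))        # U = 1 (diffuse ISM)
print(f_align(a, a_align=0.007))        # U = 5000: broader alignment
\end{verbatim}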
\begin{table}[]
\begin{center}
\caption{Grain alignment and disruption size for the diffuse media}
\begin{tabular}{c|c|c|c|c}
\toprule
\multirow{4}{*}{Radiation Strength} & \multicolumn{4}{c}{Diffuse ISM} \\
\cline{2-5}
& \multicolumn{2}{c|}{$S_{\max}$=10$^7$ erg cm$^{-3}$} & \multicolumn{2}{c}{$S_{\max}$=10$^9$ erg cm$^{-3}$} \\
\cline{2-5}
& \textbf{a}$\boldsymbol{_{\mathrm{align}}}$ & \textbf{a}$\boldsymbol{_{\mathrm{disr}}}$ & \textbf{a}$\boldsymbol{_{\mathrm{align}}}$ & \textbf{a}$\boldsymbol{_{\mathrm{disr}}}$ \\
(U) & ($\mu$m) & ($\mu$m) & ($\mu$m) & ($\mu$m) \\
\hline
0.1 & 0.105 & 1.0 & 0.105 & 1.0\\
1 & 0.057 & 0.31 & 0.057 & 1.0\\
10 & 0.031 & 0.15 & 0.031 & 0.4\\
100 & 0.018 & 0.10 & 0.018 & 0.25\\
1000 & 0.010 & 0.076 & 0.010 & 0.18\\
5000 & 0.007 & 0.062 & 0.007 & 0.15
\end{tabular}
\end{center}
\label{tab:DustSize}
\end{table}
\normalsize
\subsection{Grain rotational disruption by the RATD mechanism} \label{sec:Disrupt}
\subsubsection{Grain disruption size and tensile strength}
A rapidly spinning grain of angular velocity $\omega$ develops a tensile stress of $S=\rho \omega^{2}a^{2}/4$ with $\rho$ being the mass density of dust. For large grains in the strong radiation field, the angular velocity by RATs can be sufficiently large such that $S$ exceeds the maximum tensile strength $S_{\max}$ of grain material, resulting in rotational disruption (\citealt{2019NatAs...3..766H}).
The critical angular velocity at which the grain is disrupted is obtained by setting $S=S_{\max}$, which yields
\begin{equation}
\omega_{\rm disr} \simeq \frac{3.6\times 10^8}{a_{-5}}S^{1/2}_{\max,7}\hat{\rho}^{-1/2} \hspace{0.3cm} \rm{rad\,s^{-1}},
\label{eq:wdisr}
\end{equation}
where $S_{\max,7}=S_{\max}/(10^{7}\,{\rm {erg}}\,{\rm {cm}}^{-3})$.
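As a quick numerical check of the scales involved, combining Equations (\ref{eq:wdisr}) and (\ref{eq:wT}) for $a=0.1\mum$, $S_{\max}=10^{7}\,{\rm {erg}}\,{\rm {cm}}^{-3}$, $\hat{\rho}=1$, $\alpha_{1}=1$, and $T_{\rm {gas}}=100\,{\rm K}$ gives
\begin{equation}
\frac{\omega_{\rm disr}}{\omega_{T}} \simeq \frac{3.6\times 10^{8}\,{\rm rad}\,{\rm s}^{-1}}{1.6\times 10^{5}\,{\rm rad}\,{\rm s}^{-1}} \simeq 2.3\times 10^{3},
\end{equation}
so disruption requires rotation far above the suprathermal threshold $\omega_{\rm RAT}/\omega_{T}=3$ that defines $a_{\rm align}$.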
The tensile strength measures the maximum tension that the grain material can withstand before it breaks. The exact value of $S_{\rm max}$ depends on the grain internal structure and composition, which has never been directly constrained for interstellar dust. Physically, compact grains are expected to have a higher $S_{\rm max}$ than composite grains due to the difference in the bonding energy between grain constituents. Thus, a higher $S_{\max}$ implies a more compact grain, while a lower value of $S_{\max}$ implies a porous or composite grain (\citealt{2019ApJ...876...13H}). For instance, polycrystalline bulk solids can have $S_{\max} \sim 10^{9} - 10^{10} \,{\rm {erg}} \,{\rm {cm}}^{-3}$ (\citealt{Burke74}; \citealt{Draine79}), while ideal materials, e.g., diamond, have $S_{\max} \sim 10^{11} \,{\rm {erg}} \,{\rm {cm}}^{-3}$ (see \citealt{2019NatAs...3..766H}). In this paper, we assume a reasonable range of the tensile strength, $S_{\max}\sim 10^6 - 10^9$ erg cm$^{-3}$, to account for the various possible structures of interstellar grains.
By equating Equations (\ref{eq:wRAT}) and (\ref{eq:wdisr}), one can obtain the critical size $a_{\rm disr}$ above which grains are disrupted as follows:
\begin{equation}
\left( \frac{a_{\rm disr}}{0.1 \mu\rm{m}}\right)^{2.7}\simeq 5.1\gamma^{-1}_{-1}U^{-1/3}\bar{\lambda}^{1.7}_{0.5}S^{1/2}_{\max,7},
\label{eq:adisr}
\end{equation}
where $\gamma_{-1}=\gamma/0.1$; this expression is valid for strong radiation fields of $U\gg 1$ and for $a_{\max}\le \bar{\lambda}/1.8$.
The maximum size that grains are still disrupted by RATD is given by (see \citealt{2019ApJ...877...36H})
\begin{eqnarray}
a_{\rm disr,max}&\simeq& 2.8\gamma\bar{\lambda}_{0.5}\left(\frac{U}{\hat{n}\hat{T}_{\rm {gas}}^{1/2}}\right)^{1/2}\left(\frac{1}{1+F_{\rm IR}}\right)\nonumber\\
&&\times \hat{\rho}S_{\max,7}^{-1/2}~\mum.\label{eq:adisr_up}
\end{eqnarray}
Equation (\ref{eq:adisr_up}) gives $a_{\rm disr,max}\sim 2.4\mum$ for a tensile strength of $S_{\max}\approx 10^{7}\,{\rm {erg}}\,{\rm {cm}}^{-3}$ and $\gamma=0.5$. Therefore, for the diffuse ISM, all grains between $a_{\rm disr}$ and $a_{\max}=1\mum$ are disrupted. Here we disregard the possibility of having micron-sized grains in the ISM; therefore, essentially all grains larger than $a_{\rm disr}$ are destroyed by RATD.
Using numerical calculations, we can obtain the critical sizes of grain alignment ($a_{\rm align}$) and rotational disruption ($a_{\rm disr}$) by RATs for various radiation field strengths and local gas properties. Table \ref{tab:DustSize} lists the values of $a_{\rm align}$ and $a_{\rm disr}$ for dust grains in the diffuse ISM illuminated by the different radiation fields. In the typical ISM ($U=1$), dust grains of size $a\gtrsim 0.06\mum$ can be aligned by RATs, whereas grains of $a\gtrsim 0.31\mum$ are disrupted. From Table \ref{tab:DustSize}, we see that both the alignment size and the disruption size become smaller as the radiation field strength increases.
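For reference, Equation (\ref{eq:adisr}) can be evaluated in closed form (a sketch; as noted above, the analytic expression assumes $U\gg 1$ and $a\le\bar{\lambda}/1.8$, so it reproduces the numerically computed entries of Table \ref{tab:DustSize} only approximately):
\begin{verbatim}
# Closed-form disruption size from Eq. (eq:adisr), in micron.
def a_disr(U, S_max=1e7, gamma=0.1, lam_bar=1.2):
    rhs = (5.1 * (0.1/gamma) * U**(-1.0/3.0)
           * (lam_bar/0.5)**1.7 * (S_max/1e7)**0.5)
    return 0.1 * rhs**(1.0/2.7)

print(a_disr(1.0))              # ~0.32 micron (cf. 0.31 in the table)
print(a_disr(1.0, S_max=1e9))   # larger cutoff for a stronger material
\end{verbatim}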
\subsubsection{Grain size distribution in the presence of RATD}\label{sec:GrainSIZE_T}
We adopt a mixed dust model consisting of two separate populations of amorphous silicate grains and carbonaceous (graphite) grains (see \citealt{2001ApJ...548..296W}; \citealt{2007ApJ...657..810D}). The grain size distribution of component $j=sil$ or $gra$ follows a power-law distribution (\citealt{1977ApJ...217..425M}, hereafter MRN):
\begin{equation}
\frac{1}{n_{\rm H}}\frac{dn_{j}}{da} = C_{j}a^{-3.5} \quad {\rm for}~ a_{\rm min}<a<a_{\rm max},
\label{eq:MRN}
\end{equation}
where $dn_{j}$ is the number density of grains of material $j$ between $a$ and $a+da$, $n_{\rm H}$ is the number density of hydrogen, and $a_{\rm min}$=10{\AA} and $a_{\max}=a_{\rm disr}$ are assumed. We take the constants $C_j$ from \cite{2001ApJ...548..296W} for the MRN size distribution as follows: $C_{\rm sil}=10^{-25.11}$cm$^{2.5}$ and $C_{\rm gra}=10^{-25.13}$cm$^{2.5}$.
The RATD mechanism tends to reduce the abundance of large grains and increase the abundance of smaller grains because of the conservation of the total dust mass. To account for this effect, we assume that the slope of the size distribution remains constant and increase the normalization constants $C_{j}$ accordingly (see more details in \cite{2019arXiv190611498C}).
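The mass-conserving renormalization can be made explicit: for the fixed slope $-3.5$, the dust mass per H scales as $\int a^{3}a^{-3.5}\,da \propto \sqrt{a_{\max}}-\sqrt{a_{\min}}$, so lowering the upper cutoff from its original value to $a_{\rm disr}$ requires rescaling $C_{j}$. A minimal sketch of this bookkeeping (our illustration; the detailed treatment is given in \cite{2019arXiv190611498C}):
\begin{verbatim}
import math

# Rescale the MRN normalization so the total dust mass is conserved
# when RATD lowers the upper cutoff (slope fixed at -3.5):
# mass ~ integral a^3 a^-3.5 da = 2 (sqrt(a_max) - sqrt(a_min)).
def renormalized_C(C0, a_min, a_max0, a_disr):
    return (C0 * (math.sqrt(a_max0) - math.sqrt(a_min))
               / (math.sqrt(a_disr) - math.sqrt(a_min)))

C_sil = 10 ** (-25.11)          # cm^2.5, silicate (WD01)
# Example: cutoff lowered from 0.25 to 0.1 micron (a_min = 10 A):
print(renormalized_C(C_sil, 1e-7, 2.5e-5, 1e-5))
\end{verbatim}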
Previous studies (e.g., \citealt{1995ApJ...444..293K}; \citealt{2006ApJ...652.1318D}; \citealt{2009ApJ...696....1D}) show that different grain shapes and axial ratios $r$ can reproduce the observational data. Therefore, we consider two special cases: a prolate spheroidal shape with $r=1/3$ and an oblate spheroidal shape with $r=1.5$, for both silicate and carbonaceous grains.
\section{Polarization of Starlight}\label{sec:Pabs}
\subsection{Polarization Curves}
For modeling polarization, we assume that graphite grains are randomly oriented, whereas silicate grains can be aligned via RATs. The polarization of starlight arising from absorption and scattering of aligned silicate grains in a slab of thickness $dz$ is defined as
\begin{eqnarray}
dp_{\lambda}(x,z)=\frac{1}{2} \int^{a_{\max}}_{a_{\rm align}} \left(C_x-C_y\right) \frac{dn_{sil}}{da}\,da\,dz,
\label{eq:dplam_xz}
\end{eqnarray}
where $C_{x}$ and $C_{y}$ are the grain cross sections along the $x$- and $y$-axes, respectively, in the reference system in which the line of sight is directed along the $z$-axis.
Following \cite{2014ApJ...790....6H}, one has
\begin{eqnarray}
C_x - C_y = C_{\rm pol}R{\rm cos}^2\zeta,
\label{eq:cross-section}
\end{eqnarray}
where $R$ is the Rayleigh reduction factor (\citealt{1999MNRAS.305..615R}), and $\zeta$ is the angle between the magnetic field and the plane of the sky.
Let $f=R\cos^{2}\zeta$ be the effective degree of grain alignment, which depends on the grain size (\citealt{2016ApJ...831..159H}). In the following, to explore how the polarization spectrum changes with the local radiation field, we assume that the magnetic field is uniform along the line of sight and lies in the plane of the sky, so $\cos^{2}\zeta=1$. The polarization degree produced by all grains along the line of sight is given by
\begin{eqnarray}
\frac{P_{\lambda}}{N_{\rm H}}=\int^{a_{\max}}_{a_{\rm align}} \frac{1}{2}C_{\rm {pol}}^{sil}(\lambda,a)f(a)\frac{1}{n_{\rm H}}\frac{dn_{sil}}{da}da,
\label{eq:Plam}
\end{eqnarray}
where $N_{\rm H}=n_{\rm H}L$ with $L$ the length of the line of sight.
Equation (\ref{eq:Plam}) can be rewritten as
\begin{equation}
P({\lambda})=\sigma_{\rm {pol}}(\lambda)\times N_{\rm H},
\label{eq:sigpol}
\end{equation}
where $\sigma_{\rm {pol}}$ in units of $\,{\rm {cm}}^{2} \rm H^{-1}$ is the polarization cross section.
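Schematically, the integral of Equation (\ref{eq:Plam}) is evaluated by quadrature over the aligned grain sizes. In the following sketch, \verb|Cpol|, \verb|f_align|, and \verb|dnda| are placeholder callables standing in for the tabulated polarization cross sections, the alignment function of Section \ref{sec:Align}, and the size distribution of Equation (\ref{eq:MRN}):
\begin{verbatim}
import numpy as np

# Polarization cross section per H, Eq. (eq:Plam);
# P(lam) = sigma_pol(lam, ...) * N_H as in Eq. (eq:sigpol).
def sigma_pol(lam, a_align, a_max, Cpol, f_align, dnda, n=200):
    a = np.logspace(np.log10(a_align), np.log10(a_max), n)
    integrand = 0.5 * Cpol(lam, a) * f_align(a) * dnda(a)
    return np.trapz(integrand, a)   # cm^2 per H
\end{verbatim}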
\subsection{Numerical Results}
To gain insight into the dependence of the polarization spectrum on grain alignment and disruption, we consider two typical environments, the standard diffuse ISM and dense molecular clouds, with the physical parameters listed in Table \ref{tab:condition}. We calculate the polarization of starlight using the cross-sections $C_{\rm pol}$ and $C_{\rm ext}$ obtained from \cite{2018A&A...610A..16G}, which fit the average {\it Planck} full-sky emission and polarization well.
Figure \ref{fig:PabsDiff} shows the polarization curves for the diffuse ISM without RATD (left panels) and with RATD (right panel), assuming an axial ratio of grains $r=1/3$ and the tensile strength $S_{\max}=10^{7}\,{\rm {erg}}\,{\rm {cm}}^{-3}$.
\begin{figure*}
\includegraphics[scale=0.45]{results/fig3-1.pdf}
\includegraphics[scale=0.45]{results/fig3-2.pdf}
\caption{Polarization spectrum due to extinction of starlight by dust grains with axial ratio $r=1/3$ aligned by RATs in the diffuse medium, for various radiation field strengths and two cases: without RATD (left panel) and with RATD (right panel). The tensile strength $S_{\max}=10^{7}\,{\rm {erg}}\,{\rm {cm}}^{-3}$ is considered.}
\label{fig:PabsDiff}
\end{figure*}
Figure \ref{fig:PabsDiff} (left panel) shows that the polarization spectrum with $r=1/3$ in the diffuse medium peaks at $\lambda_{\max} \sim 0.48 \mu$m when RATD is not taken into account. The polarization at $U=1$ reflects the polarization spectrum for the typical interstellar radiation field. As the radiation field strength increases, $\lambda_{\max}$ moves to shorter wavelengths because of the enhanced alignment of small grains.
Figure \ref{fig:PabsDiff} (right panel) shows the results obtained when RATD is taken into account. The width of the polarization spectrum becomes narrower as the radiation field strength increases. The reason is that a stronger radiation field not only aligns smaller grains but also disrupts large grains into smaller ones. As a result, the polarization at long wavelengths (optical-NIR) decreases, and the polarization at UV wavelengths increases with increasing $U$.
\begin{figure*}
\includegraphics[scale=0.45]{results/fig4-1.pdf}
\includegraphics[scale=0.45]{results/fig4-2.pdf}
\caption{Same as Figure \ref{fig:PabsDiff} but for $S_{\max}=10^{9}\,{\rm {erg}}\,{\rm {cm}}^{-3}$}
\label{fig:Pabsdiff_Smax2}
\end{figure*}
Figure \ref{fig:Pabsdiff_Smax2} shows the same as Figure \ref{fig:PabsDiff}, but for a higher tensile strength. The results for the case without disruption are the same, but the effect of RATD (right panel) is less prominent than for the lower tensile strength.
\begin{figure*}
\includegraphics[scale=0.45]{results/fig5-1.pdf}
\includegraphics[scale=0.45]{results/fig5-2.pdf}
\caption{Same as Figure \ref{fig:PabsDiff} but for oblate grains with axial ratio $r=1.5$.}
\label{fig:PabsDiffr15}
\end{figure*}
Figure \ref{fig:PabsDiffr15} shows similar results but for an axial ratio $r=1.5$. The maximum polarization is larger than for the case $r=1/3$, but the peak wavelength is similar.
Following Equation (\ref{eq:adisr}), the grain disruption size set by RATD is determined by the tensile strength: for a higher tensile strength, only larger grains are disrupted, so a wider range of aligned grain sizes contributes to the polarization. Comparing the right panels of Figures \ref{fig:PabsDiff} and \ref{fig:Pabsdiff_Smax2} shows that the polarization spectrum for $S_{\max}$=10$^9$ $\,{\rm {erg}}\,{\rm {cm}}^{-3}$ is broader than for $S_{\max}$=10$^7$ $\,{\rm {erg}}\,{\rm {cm}}^{-3}$. On the other hand, when the RATD mechanism is not applied, the polarization curves have a similar shape regardless of the tensile strength, as shown in the left panels of Figures \ref{fig:PabsDiff} and \ref{fig:Pabsdiff_Smax2}.
Figures \ref{fig:PabsGMC} and \ref{fig:PabsGMC_Av} show the results for a molecular cloud with and without a star at its center, respectively. The polarization fraction is lower than in the diffuse medium (Figure \ref{fig:PabsDiff}) because the higher gas density results in faster rotational damping, so that the critical size of aligned grains is larger. As a result, the maximum polarization is decreased, and the peak wavelength is increased. One can see from the top panels of Figure \ref{fig:PabsGMC_Av} that the peak wavelength of the polarization curves changes little as the visual extinction increases and is almost the same at high $A_V$, where the strongly attenuated radiation disrupts very few grains.
We also consider the case where the ambient interstellar radiation field is 10 times stronger than the standard ISRF, such that grains at $A_V=0$ in a dense MC are exposed to $U=10$. The obtained results are shown in the bottom panels of Figure \ref{fig:PabsGMC_Av}. The profile of the polarization spectrum is similar to the top panels, but the polarization at longer wavelengths at $A_V=0$ becomes smaller when RATD is taken into account (see right panels of Figure \ref{fig:PabsGMC_Av}). This arises from the fact that large grains at the surface of the molecular cloud can only be disrupted into small grains by a sufficiently strong radiation field. However, deep inside the cloud, even the strong interstellar radiation cannot disrupt large grains due to dust extinction.
\begin{figure*}
\includegraphics[scale=0.45]{results/fig6-1.pdf}
\includegraphics[scale=0.45]{results/fig6-2.pdf}
\caption{Polarization spectrum due to extinction of starlight by dust grains with axial ratio $r=1/3$ and tensile strength $10^{7}\,{\rm {erg}}\,{\rm {cm}}^{-3}$, aligned by RATs in a molecular cloud, for the case without RATD (left panel) and with RATD (right panel). The polarization spectrum changes with different $U$.}
\label{fig:PabsGMC}
\end{figure*}
\begin{figure*}
\includegraphics[scale=0.45]{results/fig7-1.pdf}
\includegraphics[scale=0.45]{results/fig7-2.pdf}
\includegraphics[scale=0.45]{results/fig7-3.pdf}
\includegraphics[scale=0.45]{results/fig7-4.pdf}
\caption{Same as Figure \ref{fig:PabsGMC} but for oblate grains with axial ratio $r=1.5$. Two different strengths of the interstellar radiation field at $A_V=0$ are used: the typical ISM radiation field ($1\times u_{\rm ISRF}$) in the top panels and $10\times u_{\rm ISRF}$ in the bottom panels.}
\label{fig:PabsGMC_Av}
\end{figure*}
\section{Polarized Thermal Emission from Dust Grains}\label{sec:Pem}
\subsection{Polarization degree}
Dust grains heated by starlight re-emit thermal radiation in the infrared. In the optically thin regime, the total emission intensity and polarized intensity are respectively given by (\citealt{2009ApJ...696....1D}):
\begin{equation}
\begin{split}
\frac{I_{\rm em} (\lambda)}{N_{\rm H}} &= \sum_{j=sil,gra}\int ^{a_{\max}}_{a_{\rm min}}Q_{\rm ext} \pi a^2 \int dT B_{\lambda}(T_d)\frac{dP}{dT} \frac{1}{n_{\rm H}}\frac{dn_{j}}{da} da,\\
\frac{I_{\rm pol} (\lambda)}{N_{\rm H}} &= \int ^{a_{\max}}_{a_{\rm min}}f(a)Q_{\rm pol}\pi a^2 \int dT B_{\lambda}(T_d)\frac{dP}{dT} \frac{1}{n_{\rm H}}\frac{dn_{sil}}{da} da,
\end{split}
\label{eq:Ipol_Iem}
\end{equation}
where $dP/dT$ is the temperature distribution function, which depends on the grain size and radiation strength $U$, and $B_{\lambda}$ is the Planck function, given by
\begin{equation}
B_{\lambda}(\lambda, T) = \frac{2hc^2}{\lambda^5}\frac{1}{e^{hc/(k_B T\lambda)}-1}.
\label{eq:blackbody}
\end{equation}
Above, we disregard the minor effect of grain alignment on the thermal emission, which is considered in \cite{2009ApJ...696....1D}.
The polarization degree is then given by
\begin{equation}
P (\%)= 100\times \left(\frac{I_{\rm pol}}{I_{\rm em}}\right).
\label{eq:Pem_ratio}
\end{equation}
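To make the structure of Equations (\ref{eq:Ipol_Iem})--(\ref{eq:Pem_ratio}) concrete, the following simplified sketch replaces the temperature distribution $dP/dT$ with a single equilibrium temperature per grain size (adequate for large grains only; see Section \ref{sec:GrainT}) and treats a single silicate population. Here \verb|Qext|, \verb|Qpol|, \verb|f_align|, and \verb|dnda| are placeholder callables:
\begin{verbatim}
import numpy as np

h, c, kB = 6.626e-27, 2.998e10, 1.381e-16      # cgs units

def B_lam(lam, T):
    # Planck function, Eq. (eq:blackbody); lam in cm, T in K.
    return 2*h*c**2 / lam**5 / np.expm1(h*c / (kB*T*lam))

def T_dust(a, U):
    # Equilibrium silicate temperature (Draine 2011); a in cm.
    return 16.4 * (a/1e-5)**(-1.0/15.0) * U**(1.0/6.0)

def pol_degree(lam, U, a, Qext, Qpol, f_align, dnda):
    w = np.pi * a**2 * B_lam(lam, T_dust(a, U)) * dnda(a)
    I_em  = np.trapz(Qext(lam, a) * w, a)
    I_pol = np.trapz(f_align(a) * Qpol(lam, a) * w, a)
    return 100.0 * I_pol / I_em    # per cent, Eq. (eq:Pem_ratio)
\end{verbatim}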
\subsection{Grain temperature distribution} \label{sec:GrainT}
\begin{figure*}
\includegraphics[scale=0.45]{results/fig8-1.pdf}
\includegraphics[scale=0.45]{results/fig8-2.pdf}
\caption{Temperature probability distribution $dP/d\rm{ln}T$ for silicate grains at $U=1$ with various grain sizes ({\it left panel}) and at $a\sim 0.01\mu$m with $U = 0.1 - 5000$ ({\it right panel}).}
\label{fig:Tdust}
\end{figure*}
Dust grains are heated to high temperatures by the absorption of optical/UV photons from stars, and subsequently the grains cool down by re-emitting photons at long wavelengths. Let $dP$ be the probability of finding the grain temperature in the interval $[T, T+dT]$. Large grains can achieve a steady temperature due to their high heat capacity, but small grains undergo strong temperature fluctuations due to their low heat capacity.
We compute $dP/dT$ using the DustEM code, which is publicly available at https://www.ias.u-psud.fr/DUSTEM/. Figure \ref{fig:Tdust} (left panel) shows the temperature distribution function of silicate grains for several sizes in the standard radiation field ($U=1$). The temperature distribution is very broad for small grains ($a<0.05\mum$) and becomes narrower for larger grains. The right panel of Figure \ref{fig:Tdust} shows the change in the temperature distribution for silicate grains of size $a=0.01\mum$ subject to various radiation fields. For a low radiation strength of $U<10$, the temperature distribution is broad, and the distribution becomes narrower and shifts to a higher peak temperature as $U$ increases.
\subsection{Polarization spectrum for the diffuse interstellar medium}
Figure \ref{fig:PemD} shows the polarization spectrum of thermal emission from dust grains aligned by RATs in the absence of RATD (left panel) and presence of RATD (right panel) for prolate grains of axial ratio $r=1/3$, assuming the tensile strength $S_{\max}=10^{7}\,{\rm {erg}}\,{\rm {cm}}^{-3}$.
In the absence of RATD (left panel), the maximum polarization increases with increasing radiation strength $U$ as a result of the enhanced alignment of small grains (see Figure \ref{fig:Afunc}). The peak wavelength ($\lambda_{\max}$) of the polarization spectrum moves toward shorter wavelengths as $U$ increases, but the spectral profiles remain similar. When the RATD mechanism is taken into account, the polarization degree for $U\gtrsim 1$ is substantially lower than in the case without RATD due to the removal of large grains by RATD (see Table \ref{tab:DustSize}). Moreover, the peak polarization degree decreases as the radiation strength increases from $U=0.1$ to $U=1.0$.
Figure \ref{fig:PemD_S1e9} shows the same results but for dust grains with a higher tensile strength (i.e., $S_{\max}=10^{9}\,{\rm {erg}}\,{\rm {cm}}^{-3}$). A similar trend is observed, but the peak polarization increases for $U=0.1-1$ and then decreases as the radiation strength increases from $U=1$ (green line) to $U=10$. The reason is that the disruption requires a higher radiation strength than for grains with a lower $S_{\max}$.
\begin{figure*}
\includegraphics[scale=0.45]{results/fig9-1.pdf}
\includegraphics[scale=0.45]{results/fig9-2.pdf}
\caption{Polarization spectrum of thermal emission from aligned grains by RATs with axial ratio $r=1/3$ in the diffuse medium with various radiation field strengths, assuming no grain disruption (left panel) and with disruption by RATD (right panel). The tensile strength $S_{\max}=10^{7}\,{\rm {erg}}\,{\rm {cm}}^{-3}$ is considered.}
\label{fig:PemD}
\end{figure*}
\begin{figure*}
\includegraphics[scale=0.45]{results/fig10-1.pdf}
\includegraphics[scale=0.45]{results/fig10-2.pdf}
\caption{Same as Figure \ref{fig:PemD} but for a higher tensile strength of $S_{\max}=10^{9}\,{\rm {erg}}\,{\rm {cm}}^{-3}$.}
\label{fig:PemD_S1e9}
\end{figure*}
Figure \ref{fig:PemDr15} shows similar results as Figure \ref{fig:PemD}, but for oblate grains of axial ratio $r=1.5$. Increasing the axial ratio to $r=1.5$ results in a shorter peak wavelength due to the more efficient alignment of elongated dust grains, but the shape of the polarization curves is not strongly affected, even when grain disruption is taken into account.
\begin{figure*}[h]
\includegraphics[scale=0.45]{results/fig11-1.pdf}
\includegraphics[scale=0.45]{results/fig11-2.pdf}
\caption{Polarization spectrum of thermal emission from grains with axial ratio $r=1.5$ aligned by RATs in the diffuse ISM, for various radiation strengths and two cases: without RATD (left panel) and with RATD (right panel). The tensile strength $S_{\max}=10^{7}\,{\rm {erg}}\,{\rm {cm}}^{-3}$ is considered.}
\label{fig:PemDr15}
\end{figure*}
\subsection{Polarization spectrum for molecular clouds}
Figure \ref{fig:PemG} shows the polarization spectrum obtained for aligned grains in a MC, assuming $S_{\max}=10^{7}\,{\rm {erg}}\,{\rm {cm}}^{-3}$. As seen, the polarization degree first increases from $U=0.1$ to $U=10$ and then falls between $U=10$ and $U=100$ due to the disruption of large grains via the RATD mechanism. Note that for MCs of higher gas density (i.e., faster rotational damping), rotational disruption sets in only at $U\sim 10$ because the required radiation strength must be higher than for the diffuse ISM, assuming the same tensile strength.
\begin{figure*}
\includegraphics[scale=0.45]{results/fig12-1.pdf}
\includegraphics[scale=0.45]{results/fig12-2.pdf}
\caption{Polarization spectrum of thermal dust emission for grains with axial ratio $1/3$ and tensile strength $10^9\,{\rm {erg}}\,{\rm {cm}}^{-3}$ in a molecular cloud with a star located at its center: using a grain size distribution with a fixed maximum size of aligned grains, i.e., without RATD (left panel), and with the maximum aligned grain size set by the disruption size, i.e., with RATD (right panel). The polarization spectrum changes with different $U$.}
\label{fig:PemG}
\end{figure*}
\begin{figure}
\includegraphics[scale=0.45]{results/fig13.pdf}
\caption{Polarization spectrum due to thermal emission of dust grains with axial ratio of $r=1/3$ and tensile strength of $10^7\,{\rm {erg}}\,{\rm {cm}}^{-3}$ in a molecular cloud. Polarization spectrum changes with different $A_{\rm V}$.}
\label{fig:PemG_Av}
\end{figure}
\subsection{Variation of submm polarization with the radiation field}
To see in more detail how the submm polarization degree changes with $U$ and grain temperature $T_{d}$, we calculate the polarization degree at $\lambda=850\mu$m ($P_{850}$) using our results from the previous section. The grain temperature is estimated from $U$ using the formula $T_{d}=16.4a_{-5}^{-1/15}U^{1/6}\,{\rm K}$ for silicate grains (see \citealt{Draine:2011}).
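For $a=0.1\mum$, this relation inverts to $U=(T_{d}/16.4\,{\rm K})^{6}$; as a quick worked example, the temperature $T_{d}\simeq 19\,{\rm K}$ discussed in Section \ref{sec:intro} corresponds to
\begin{equation}
U \simeq \left(\frac{19\,{\rm K}}{16.4\,{\rm K}}\right)^{6} \simeq 2.4.
\end{equation}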
\begin{figure*}
\includegraphics[scale=0.45]{results/fig15-1.pdf}
\includegraphics[scale=0.45]{results/fig15-3.pdf}
\caption{Polarization degree at 850$\mu$m as a function of the radiation strength ($U$) or grain temperature ($T_d$, top horizontal axis) for two cases, without RATD (solid lines) and with RATD (dashed lines), assuming different tensile strengths of grains in the diffuse ISM (left panel) and a MC (right panel). Grains with axial ratio $r=1/3$ are considered.}
\label{fig:Pem850}
\end{figure*}
In Figure \ref{fig:Pem850}, we show the variation of $P_{850}$ with the radiation strength $U$ (or grain temperature $T_d$), calculated for grains in the diffuse ISM (left panel) and molecular clouds (right panel), assuming a wide range of tensile strengths, $S_{\max}=10^{6}-10^{9}\,{\rm {erg}}\,{\rm {cm}}^{-3}$. The black line shows the results when RATD is not taken into account. In contrast to the increase of $P_{850}$ with $U$ in the absence of RATD, the polarization $P_{850}$ in the diffuse ISM does not change considerably when the radiation strength increases from $U\sim 3$ to $U\sim 100$ when RATD is accounted for. This is because the increase of the polarization due to the enhanced alignment of small grains (lower $a_{\rm align}$) is compensated by the shift of the polarization spectrum toward shorter wavelengths caused by the disruption of the largest grains. Indeed, in the case of high tensile strength ($S_{\max}\ge 10^{8}\,{\rm {erg}}\,{\rm {cm}}^{-3}$), we cannot expect an overall increase of the polarization degree with $U$, but rather a variation in the wavelength dependence of the polarization. When $U\le 1$ ($T_{d}<16.4\,{\rm K}$) for $S_{\max}=10^{9}\,{\rm {erg}}\,{\rm {cm}}^{-3}$, the polarization $P_{850}$ increases as $U$ increases. The peak of the polarization $P_{850}$ moves to a smaller radiation strength, or lower grain temperature, for a smaller $S_{\max}$.
Figure \ref{fig:Pem850} (right panel) shows similar results but for a MC. The amplitude of the polarization variation with $U$ due to RATD is larger for the MC. Within the RAT paradigm, the wider amplitude of the change for the MC is understood because, for a high gas number density $n_{\rm H}$, RATD requires a higher radiation strength to be effective. So, in the case of high tensile strength ($S_{\max}\ge 10^{8}\,{\rm {erg}}\,{\rm {cm}}^{-3}$) for a dense MC, when $U$ increases from $U=0.1$, the polarization increases until $U\sim10$ and then decreases due to RATD. The larger pumping range raises the peak of the polarization, as seen in the right panel of Figure \ref{fig:Pem850}.
Figure \ref{fig:Pem850_n100} (left panel) shows the results for a translucent cloud with a gas density between those of the diffuse ISM and a dense MC. A similar trend is observed, but the critical strength at which $P_{850}$ starts to decrease is larger than for the diffuse ISM and smaller than for the MC. We also study the variation of $P_{850}$ with $U$ for grains of axial ratio $r=1.5$ in the right panel and find a similar trend.
\begin{figure*}
\includegraphics[scale=0.45]{results/fig15-2.pdf}
\includegraphics[scale=0.45]{results/fig14-2.pdf}
\caption{Left panel is the same as Figure \ref{fig:Pem850} but for a translucent cloud with density $n_{\rm H}=100\,{\rm {cm}}^{-3}$. Right panel shows the variation of $P_{850}$ for two grain shapes with axial ratios $r=0.3$ and $r=1.5$.}
\label{fig:Pem850_n100}
\end{figure*}
\section{Discussion}\label{sec:discussion}
\subsection{Physical forward modeling of multi-wavelength polarization}\label{sec:multi-lamb}
The polarization spectrum closely depends on the grain size distribution and the alignment degree of dust grains. Both the grain size distribution and the alignment are expected to change with the local environment. Inverse modeling of observational data (e.g., \citealt{2009ApJ...696....1D}; \citealt{2018A&A...610A..16G}) is a useful technique to derive the average properties of dust grains.
In this paper, we focus on the variation of the local radiation strength $U$ and perform forward modeling of multi-wavelength dust polarization from UV-optical-NIR (starlight polarization) to far-IR (polarized emission) to predict how the polarization spectrum changes with increasing $U$ from the standard ISRF. We simultaneously treat grain alignment and disruption by RATs. The grain size distribution is modeled consistently using the RATD mechanism, which changes with the strength of the radiation field, as shown in Section \ref{sec:Align}. Our modeling results show that when the radiation strength $U$ increases, the polarization spectrum in general shifts to short wavelengths (see Figures \ref{fig:PabsDiff}-\ref{fig:PabsDiffr15} for starlight polarization and Figures \ref{fig:PemD}-\ref{fig:PemDr15} for polarized thermal emission). At the same time, the maximum polarization degree of starlight as well as thermal dust emission also increases with increasing $U$.
Thanks to the RATD effect, for the first time, we can study the dependence of the interstellar polarization spectrum on the mechanical properties of dust, characterized by the tensile strength $S_{\max}$. For a given radiation field, our results show that the polarization spectrum depends crucially on $S_{\max}$ because the RATD determines the upper cutoff of the grain size distribution.
Previously, \cite{2018A&A...610A..16G} modeled the dust polarization spectrum for local radiation strengths from $U=0.1$ to $U=10^{3}$ using the best-fit alignment function (model D) obtained from fitting the average full-sky Planck data. This model does not take into account the variation of grain alignment efficiency with $U$. As $U$ increases, the polarization spectrum shifts to short wavelengths, but the peak polarization changes only slightly.
\subsection{Towards constraining grain internal structures with observational data}\label{sec:planck}
In Figure \ref{fig:Pem850}, we have shown that in the absence of grain disruption by RATD, the polarization at $850\,\mu$m, denoted by $P_{850}$, increases monotonically with the radiation intensity (i.e., grain temperature) over the considered range of $U$. The absence of RATD is equivalent to the situation where grains are made of ideal material without impurities such that the tensile strength is as high as $S_{\max}\sim 10^{11}\,{\rm {erg}}\,{\rm {cm}}^{-3}$ (e.g., diamonds). However, when the RATD effect is taken into account for grains made of weaker material ($S_{\max}\lesssim 10^{9}\,{\rm {erg}}\,{\rm {cm}}^{-3}$), the variation of the polarization degree $P_{850}$ with $U$ depends closely on the tensile strength. The general trend is that $P_{850}$ first increases from a low value of $U$ and then decreases when $U$ becomes sufficiently large. The critical value of $U$ at the turning point is determined by the tensile strength $S_{\max}$ and the local gas density $n_{\rm H}$, which control the grain disruption size $a_{\rm disr}$ according to RATD.
\cite{2018arXiv180706212P} performed a detailed analysis of the variation of $P_{850}$ with the radiation field using {\it Planck} data. The authors discovered that $P_{850}$ first increases with increasing grain temperature from $T_{d}\sim 16-19\,{\rm K}$ and then drops as the dust temperature increases to $T_{d}\gtrsim 19\,{\rm K}$. Such an unusual $P_{850}-T_{d}$ relationship cannot be reproduced if large grains are not disrupted (i.e., RATD is not taken into account), as shown in Figures \ref{fig:Pem850} and \ref{fig:Pem850_n100}. Moreover, the observed trend is, in general, consistent with our model with RATD, provided that grains have a tensile strength of $S_{\max}\lesssim 10^{9}\,{\rm {erg}}\,{\rm {cm}}^{-3}$. This range of tensile strength favors a composite internal structure of grains over a compact one.
We also note that the polarization degree of polarized thermal emission obtained from our model is lower than that predicted by \cite{2018A&A...610A..16G}. The difference perhaps arises from the fact that we adopt a power-law size distribution instead of the size distribution best-fitted to the Planck data by \cite{2018A&A...610A..16G}. However, our focus is on the overall polarization spectrum with varying radiation strength rather than on fitting the observational data.
\subsection{Comparison to the optical polarization of SNe Ia}\label{sec:SN}
Due to extinction by aligned grains, starlight is polarized, and the degree of polarization varies with wavelength. In general, the maximum polarization occurs at a peak wavelength of $\lambda_{\max} \sim 0.55\,\mu$m and declines on both sides of the peak. Following \cite{1973IAUS...52..145S}, the polarization curve of starlight can be described by an empirical formula (namely, the Serkowski law):
\begin{equation}
P(\lambda) = P_{\max}\exp\left(-K\ln^2(\lambda/\lambda_{\max})\right)
\label{eq:serkowski}
\end{equation}
where $P_{\max}$ is the maximum polarization, $\lambda_{\max}$ is the peak wavelength at which $P_{\max}$ occurs, and $K$ is a parameter that characterizes the ``width'' of the polarization profile (see more in Section~\ref{sec:SN}).
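As a minimal illustration of Equation (\ref{eq:serkowski}), the Python sketch below evaluates the Serkowski law on a wavelength grid; the parameter values ($P_{\max}$, $\lambda_{\max}$, $K$) are assumed for illustration only:
\begin{verbatim}
import numpy as np

def serkowski(lam, P_max, lam_max, K):
    # P(lam) = P_max * exp(-K * ln^2(lam / lam_max))
    return P_max * np.exp(-K * np.log(lam / lam_max)**2)

lam = np.linspace(0.2, 2.0, 200)               # wavelength in micron
P = serkowski(lam, P_max=3.0, lam_max=0.55, K=0.92)
# P peaks at lam_max and declines on both sides of the peak.
\end{verbatim}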
\begin{figure}
\includegraphics[scale=0.5]{results/fig16.pdf}
\caption{Extinction sight-lines in the $\lambda_{\max}$-$K$ plane. The magenta and red solid lines show the calculations adopting RAT alignment alone and RAT alignment with RATD, respectively, for dust grains with $S_{\max}=10^7\,{\rm {erg}}\,{\rm {cm}}^{-3}$ and axial ratio $r=1/3$ in the diffuse ISM. The dashed black line traces the relation of \cite{1992ApJ...386..562W}. Points show the observational data: green stars are the samples of SNe Ia from \cite{2015A&A...577A..53P} and \cite{2017ApJ...836...88Z}.}
\label{fig:lamb-K}
\end{figure}
Polarimetric observations of SNe Ia provide an excellent test of our theoretical predictions for dust polarization. The Serkowski parameters $K$ and $\lambda_{\max}$ of the polarization curve are correlated: the ``width'' parameter $K$ in Equation (\ref{eq:serkowski}) is linearly related to $\lambda_{\max}$ as $K=c_1\lambda_{\max}+c_2$, where $c_1$ and $c_2$ are the average slope and intercept in the $K$-$\lambda_{\rm max}$ plane. For the standard $K$-$\lambda_{\max}$ relationship, the current best values of $c_1$ and $c_2$ are $1.66\pm 0.09$ and $0.01\pm 0.05$, respectively (see \citealt{2003dge..conf.....W}). A smaller $K$ corresponds to a broader polarization profile. The correlation between $\lambda_{\max}$ and $K$ is shown in Figure \ref{fig:lamb-K}, where the standard relationship of \cite{1992ApJ...386..562W} is plotted as the dashed black line. The left panel of Figure \ref{fig:PabsDiff} shows that the polarization curve broadens as the radiation strength increases. From each curve, we calculate $\lambda_{\max}$ and derive the $K$ value by fitting the Serkowski law to the calculated polarization curve. The resulting correlation is shown as a magenta line in Figure \ref{fig:lamb-K}. Our calculation for the model of RAT alignment is consistent with the standard relationship. In the right panel of Figure \ref{fig:PabsDiff}, on the other hand, we find that the curve becomes narrower and $\lambda_{\max}$ shorter for a higher radiation strength. This result of the RATD model is consistent with the study of \cite{2018A&A...615A..42C}.
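The derivation of $K$ and $\lambda_{\max}$ can be sketched as follows; here a synthetic Serkowski curve stands in for one of our model polarization curves, so the recovered values are purely illustrative:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def serkowski(lam, P_max, lam_max, K):
    return P_max * np.exp(-K * np.log(lam / lam_max)**2)

lam = np.linspace(0.2, 2.0, 100)              # micron
P_model = serkowski(lam, 3.0, 0.55, 0.92)     # stand-in model curve

(P_max, lam_max, K), _ = curve_fit(serkowski, lam, P_model,
                                   p0=[2.0, 0.5, 1.0])
K_std = 1.66 * lam_max + 0.01                 # standard linear relation
print(K, K_std)  # agree for a curve following the standard relation
\end{verbatim}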
Peculiar polarization data observed toward SNe Ia by \cite{2018A&A...615A..42C} show that the $K$ parameter does not follow the standard relationship. In order to see whether the RATD mechanism can explain the SNe Ia polarization data, we calculate the $K$ parameter and $\lambda_{\max}$ using the polarization curves from Section \ref{sec:Pabs}. The red line shows our models calculated with the RATD mechanism for aligned grains with axial ratio $r=1/3$ and $S_{\max}=10^7\,{\rm {erg}}\,{\rm {cm}}^{-3}$. A least-squares fit to these models gives slope and intercept $c_1= -8.5$ and $c_2= 6.5$. This relationship is significantly different from the relation for dust grains in the Taurus region (\citealt{1992ApJ...386..562W}), while it is similar to the samples of SNe Ia. The strong radiation from a hot source such as a SN Ia can disrupt large grains and form small grains. These small grains are aligned by radiation and produce a large $K$ at shorter $\lambda_{\max}$, resulting in a negative correlation between $K$ and $\lambda_{\max}$.
Finally, our calculations assumed a grain size distribution with the standard slope of $\alpha=-3.5$. In principle, the disruption of large grains by RATD enhances the abundance of small grains, so that the size distribution may be steeper than the standard value as long as RATD occurs (\citealt{Giangetal:2019}). To see how the slope affects the polarization spectrum, we repeat our calculations for $\alpha=-4$. We find that the obtained results are only slightly different from the results shown in Figure \ref{fig:PabsGMC_Av}.
\section{Summary}\label{sec:summary}
In this paper, we have performed physical modeling of multi-wavelength polarization by aligned grains for the different radiation fields. Our main results are summarized as follows:
\begin{enumerate}
\item Using the RAT alignment and RATD theory, we obtain the grain alignment function and size distribution of dust grains for the ISM with various radiation fields and model the polarization of starlight and polarized thermal emission by aligned grains.
\item For the diffuse medium, we find that the polarization spectrum of starlight is shifted to shorter wavelengths due to the enhancement of small grains when the radiation intensity increases. At the same time, the optical/NIR polarization is reduced due to the disruption of large grains into smaller ones.
\item For polarized thermal emission, we find that the peak polarization increases but the peak wavelength decreases with increasing radiation strength $U$ due to enhanced alignment of small grains. This prediction can be tested with observations such as by SOFIA/HAWC+.
\item In the absence of RATD, we find that the submm polarization degree at 850 $\mu$m ($P_{850}$) increases with increasing grain temperature ($T_{d}$) until $T_{d}\sim 50\,{\rm K}$. However, when taking into account RATD, we find that the variation of the polarization degree with the radiation strength depends on the tensile strength of grain materials.
\item Comparing our predictions of $P_{850}-T_{d}$ with the results from \cite{2018arXiv180706212P} using {\it Planck} data, we find that grain disruption must occur in order to reproduce the observed non-monotonic variation of $P_{850}$ with $T_{d}$. This suggests that interstellar grains are unlikely to have a compact structure with very high tensile strength, but perhaps a composite structure.
\item Based on our results, we suggest that an important way to test the RAT theory and RATD is to observe polarization toward star-forming regions. This is complementary to the traditional tests of RAT alignment toward starless cores.
\item Our models of starlight polarization for high radiation intensity with RATD find that the $K$-$\lambda_{\max}$ relation does not follow the standard relationship observed for the average ISRF. However, this predicted trend qualitatively agrees with observations toward SNe Ia.
\end{enumerate}
\acknowledgments
We are grateful to A. Lazarian for his warm encouragements. We thank V. Guillet for sharing with us the data of cross-section of dust grains used in their paper. This work was supported by the National Research Foundation of Korea (NRF) grants funded by the Korea government (MSIT) through the Basic Science Research Program (2017R1D1A1B03035359) and Mid-career Research Program (2019R1A2C1087045).
\bibliographystyle{apj}
\section{Introduction}
On August 17th 2017, advanced LIGO and Virgo observed the first gravitational wave signal from a binary neutron star (NS) merger \citep{gw170817}. This event, named GW170817, was followed by a
short duration gamma-ray burst, GRB170817A, and, 9 days later, by a non-thermal afterglow emission, visible across the electromagnetic spectrum \citep{grb170817, Troja17, Hallinan17}.
After an initial rising phase, $F \propto t^{0.8}$ \citep{Troja18,Mooley18,Lyman18,Margutti18, Ruan18}, the afterglow peaked at $\approx$160 d after the merger and then started a rapid decay phase, $F \propto t^{-2.2}$ \citep{Mooley18b,Lamb19,Troja19}. This behavior is markedly different from that of garden-variety GRB afterglows, which are observed to fade within a few minutes of the burst.
The low-luminosity of the gamma-ray emission and the atypical temporal evolution of the afterglow component are widely interpreted as manifestation of a highly-relativistic structured jet seen at an angle of $\approx$20-30 deg from its axis \citep{Troja17,grb170817,Lazzati18,Lyman18,Troja18,Mooley18b,Margutti18,Lamb19,Ryan19,Troja19,Hajela19}.
In this model, the energy and Lorentz factor of the relativistic ejecta vary with the angle from the jet's axis \citep[e.g.][]{ZM02}.
The initial rising slope and the peak time strongly depend on the observer's viewing angle and the jet's angular profile \citep{Ryan19}. However, the post-peak behavior is dominated by the emission from the jet's core and should resemble the post jet-break evolution of a standard GRB afterglow.
Even in this case, the post-break evolution can exhibit a rich behaviour, and is sensitive to the nature of the spreading dynamics of the decelerating relativistic plasma and to gradients in the circumburst ambient gas mass distribution.
At sufficiently late times, emission from the jet as it has decelerated to non-relativistic flow velocities will begin to dominate the total observed flux, leading to a change in slope relative to the relativistic limit \citep{Frail00}. If a counter jet was launched, this too will at some point become visible \citep{vanEerten10}. However, very few GRBs are close enough to remain continuously visible for years and, for this reason, the jet's late-time evolution is rarely probed by observations at wavelengths other than radio \citep[e.g.][]{DePasquale16,Kouveliotou04}.
Changes in the light curve evolution can also be the product of a genuinely new feature of the outflow not previously detected. Of particular interest to the case of neutron star mergers are scenarios that relate directly to the nature of the remnant (such as prolonged energy injection from a long-lived central engine, \citealt{Piro19}) and to the sub-relativistic merger ejecta, producing a low-luminosity late-peaking afterglow \citep{np11, Hotokezaka18,Kathi18}.
In the case of GW170817, evidence for a substantial amount ($\gtrsim$0.01\,$M_{\odot}$) of fast ($\gtrsim$0.1\,$c$) ejecta comes from the luminous kilonova emission AT2017gfo \citep{Arcavi2017, Evans2017, Drout2017, Kasen17, Kasliwal2017, Nicholl2017, Pian2017, Shappee2017, Smartt2017, Tanvir2017, Troja17}.
As these ejecta continue to expand, they will drive a blastwave into the local medium, begin decelerating as more mass is swept up, and emit synchrotron radiation from the blast wave's forward shock. This emission, which we refer to as the kilonova afterglow, peaks years after the initial burst and,
at the distance of GW170817, may be bright enough to be detected with current instruments.
In order to explore the late-time behavior of the relativistic jet and constrain alternative components of emission, the location of GW170817 is periodically monitored at radio and X-ray energies.
In this work, we present the results of the long-term monitoring campaign
with the {\it Chandra} X-ray observatory and the Australian Telescope
Compact Array (ATCA), and discuss the possible origins of the observed long-lived X-ray emission.
Throughout this paper, we adopt a distance of 40~Mpc and a standard $\Lambda$CDM cosmology \citep{Planck18}. Unless otherwise stated, the quoted errors are at the 68\% confidence level, and upper limits are at the 3\,$\sigma$ confidence level.
\begin{figure}
\includegraphics[width=0.99\columnwidth]{mosaic.pdf}
\vspace{-0.2cm}
\caption{X-ray image of GW170817, as observed by {\it Chandra}.
The central pane shows the stacked image of the field,
with total exposure of 783~ks. The image was adaptively smoothed
with a Gaussian kernel.
The position of GW170817 is marked. In addition, several X-ray point sources as well as extended diffuse X-ray emission
are visible.
The image stamps are centered on the location
of GW170817, showing the main phases of its evolution.
}
\label{fig:image}
\vspace{0.3cm}
\includegraphics[width=0.99\columnwidth]{HR.pdf}
\vspace{-0.3cm}
\caption{Hardness Ratio light curve for the X-ray afterglow of GW170817.
We adopted the definition $HR = (H-S)/(H+S)$, where $H$ and $S$ are the net source counts in the hard (2.0-7.0 keV) and soft (0.5-2.0 keV) energy bands, respectively. Error bars represent 1\,$\sigma$ uncertainties. The last three epochs (gray symbols) were binned into a single point in order to improve the signal-to-noise ratio.
Horizontal lines show the values expected for an absorbed power-law with
photon index $\Gamma$=2.0 (dotted line), 1.5 (dot-dashed line), and 1.25 (dashed line).
}
\label{fig:hr}
\end{figure}
\vspace{-0.2cm}
\begin{table*}[t]
\centering
\caption{Late-time X-ray observations of GW170817}
\label{tab:obs}
\begin{tabular}{lcccccc}
\hline
& T-T$_0$ & Exposure & Count Rate & Unabsorbed Flux & Flux Density & Significance \\
& & & (0.5-7.0 keV) & (0.3-10 keV) & 5 keV & \\
& [d] & [ks] & [10$^{-4}$ cts s$^{-1}$] & [10$^{-15}$ erg cm$^{-2}$ s$^{-1}$] & [10$^{-5}$ $\mu$Jy] & [$\sigma$]\\
\hline
Epoch 1
& 582 & 98.8 & 1.5 $\pm$ 0.4 & 2.6 $\pm$ 0.7 & 9 $\pm$ 2 & 7.7 \\
Epoch 2 & 742 & 98.9 & 1.1$^{+0.4}_{-0.3}$ & 1.7$^{+0.7}_{-0.5}$ & 5.8 $\pm$ 1.7 & 6.1 \\
Epoch 3\footnotemark
& 939 & 96.6 & 0.8 $\pm$ 0.3 & 1.4 $\pm$ 0.5 & 4.7 $\pm$ 1.8 & 5.2 \\
\hline
\end{tabular}
\end{table*}
\section{Observations}
\subsection{X-rays}\label{sec:Xrays}
We presented the analysis of the first year of observations in \citet{Troja19}. Since then, the target GW170817 has been monitored by the {\it Chandra} X-ray Observatory
with a cadence of approximately six months under Guest Observer programs 20500691 (PI: Troja) and 20500299 (PI: Margutti).
These three additional epochs (Table~\ref{tab:obs}) track the afterglow evolution from 1.6 to 2.6 years after the merger. The temporal evolution of the X-ray counterpart is shown in Fig.~\ref{fig:image}.
Each epoch was split into multiple observations. Each observation was reduced in a standard fashion using CIAO v4.12 and the latest calibration files (CALDB 4.9.1).
In order to correct for small positional errors between different observations, we used the tool \textit{reproject\_aspect} to determine a new aspect solution based on common bright point sources. Each observation was reprocessed using the updated astrometric information.
Data were filtered with the task \textit{deflare} to remove background flares by applying a sigma clipping threshold of 3. Observations carried out at a similar epoch were merged into a single image using the task \textit{flux\_obs}. The resulting total exposures are 98.8 ks (Epoch 1), 98.9 ks (Epoch 2), and 96.6 ks (Epoch 3).
Aperture photometry was performed in the broad 0.5-7.0 keV energy band.
Source counts were extracted from the merged images using a circular aperture containing 92\% of the encircled energy fraction, whereas the background contribution was estimated from nearby source-free regions.
X-ray emission from the position of GW170817 is visible at all epochs.
We estimated the detection significance following the Bayesian method of \citet{kbn}, and report in Table~\ref{tab:obs} the equivalent value for a normal probability distribution.
Due to the low number of counts, the source spectral properties cannot be adequately constrained. In order to check for possible spectral evolution, we computed the hardness ratio (HR; \citealt{behr}), defined as the ratio $(H-S)/(H+S)$, where $H$ and $S$ are the net source counts in the hard (2.0-7.0 keV) and soft (0.5-2.0 keV) energy bands, respectively. The HR light curve (Fig.~\ref{fig:hr}) shows a possible hardening of the spectrum at late times
($t\gtrsim$1.5~yr),
although with low significance.
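For reference, a simplified (non-Bayesian) version of this estimate, with plain Poisson error propagation in place of the BEHR machinery, can be sketched as:
\begin{verbatim}
import numpy as np

def hardness_ratio(H, S):
    # HR = (H - S)/(H + S) with first-order Poisson error propagation;
    # H, S are net counts in the 2.0-7.0 and 0.5-2.0 keV bands.
    HR = (H - S) / (H + S)
    err = 2.0 * np.sqrt(H * S * (H + S)) / (H + S)**2
    return HR, err

print(hardness_ratio(H=12.0, S=20.0))   # illustrative counts
\end{verbatim}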
X-ray fluxes were calculated assuming an absorbed power-law spectrum with column density fixed to the Galactic value 1.1$\times$10$^{21}$\,cm$^{-2}$ \citep{Willingale13} and a photon index $\Gamma$= $\beta$ +1 = 1.585, where $\beta$ is the spectral index derived from broadband afterglow modeling \citep{Troja19}.
A harder spectrum would increase our flux estimate by $\approx$13\% (for $\Gamma$=1.25), still within the statistical uncertainties of the measurement.
Our values are lower than those reported in \citet{Hajela19}, yet consistent within the large uncertainties. Our conversion into fluxes is based on the broadband (from radio to X-rays) spectral shape and does not change over time, whereas \citet{Hajela19} derive variable conversion factors based on single-epoch X-ray observations. The latter approach is subject to greater uncertainty, and does not take into account the full spectral information available from the multi-wavelength dataset.
\footnotetext
{An independent analysis of this data set reports
a similar count-rate and a 50\% higher X-ray flux \citep{Hajela20}.
We can reproduce this result only by assuming
a hard spectrum with $\Gamma$=0.57,
drastically different from the
spectral properties of the GW afterglow.}
\subsection{Radio}\label{sec:radio}
We re-observed the position of GW170817 with ATCA
(program C3240; PI: Piro) on May 3rd, 2020 (990 d since the merger) for 11 hours. The array configuration was 6A, the centre observing frequency was 2.1~GHz, and the observing bandwidth was 2~GHz. The usual primary calibrator 1934-638 was not observed; instead, the band-pass calibrator 0823-500 was used to bootstrap the absolute flux density scale, assuming a flux density of 6.38~Jy and a spectral slope of $-0.215$. The source 1245-197 was used as the phase calibrator. The data set was calibrated and imaged in \texttt{Miriad} using standard procedures. The array configuration resulted in an E-W angular resolution of 6.5 arcsec, sufficient to separate the target from its host galaxy NGC~4993.
No detection was found at the position of GW170817 in the natural-weighted restored image. A 3\,$\sigma$ upper limit of 33 $\mu$Jy was estimated from rms noise statistics in a region of the restored image away from bright radio sources.
This measurement constrains the broadband spectral index
to $\beta$\,$<$0.68.
\section{Model Fitting Methods}
Throughout this paper we continue our practice from \cite{Troja18, Troja19, Piro19, Ryan19} of performing Bayesian fits using the model and {\tt afterglowpy} software\footnote{\url{https://github.com/geoffryan/afterglowpy}} described in \cite{Ryan19}. This approach combines a decelerating spreading shell model \citep{vanEerten10} that includes a range of options for lateral and radial energy structure with the \textsc{emcee} (version 2.2.1) Python package for Markov-Chain Monte Carlo analysis \citep{Foreman-Mackey13}. For the jet model with a Gaussian distribution of lateral energy, the parameters are: fraction of post-shock internal energy in magnetic field $\varepsilon_B$, fraction of post-shock internal energy in the accelerated electron population $\varepsilon_e$, power-law slope of the electron population $-p$, homogeneous circumburst medium number density $n_0$, on-axis isotropic equivalent energy $E_0$, jet orientation $\theta_v$, jet core width $\theta_c$, and jet total width $\theta_w$.
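A light-curve evaluation with {\tt afterglowpy} for a Gaussian structured jet can be sketched as follows; the parameter values are illustrative choices of the same order as our fits, not the fit results themselves:
\begin{verbatim}
import numpy as np
import afterglowpy as grb

Z = {'jetType':   grb.jet.Gaussian,  # Gaussian structured jet
     'specType':  0,                 # basic synchrotron spectrum
     'thetaObs':  0.5,               # viewing angle theta_v (rad)
     'E0':        1.0e53,            # on-axis isotropic energy (erg)
     'thetaCore': 0.09,              # core width theta_c (rad)
     'thetaWing': 0.6,               # total width theta_w (rad)
     'n0':        2.0e-2,            # circumburst density (cm^-3)
     'p':         2.14,              # electron slope
     'epsilon_e': 1.0e-2,
     'epsilon_B': 2.0e-4,
     'xi_N':      1.0,
     'd_L':       1.23e26,           # 40 Mpc in cm
     'z':         0.0098}

t = np.geomspace(10.0, 1000.0, 100) * 86400.0   # 10-1000 d, in s
nu = np.full_like(t, 1.2e18)                    # ~5 keV in Hz
Fnu = grb.fluxDensity(t, nu, **Z)               # flux density (mJy)
\end{verbatim}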
We also perform fits that include an additional constant X-ray component, specified by a flux density $F_X$. This accounts for additional sources of emission, such as a long-lived engine or a separate source at close proximity on the sky.
We use the same prior on jet orientation as reported in earlier work \citep{Troja18}, drawn from \cite{Hubble170817} with a Hubble constant as determined by \cite{Planck18}. The additional component $F_X$ is given a flat prior and bounded by $0 < F_X < 2 \times 10^{-4}$ $\mu$Jy.
In order to explore the non-thermal emission from the sub-relativistic ejecta, we consider a quasi-spherical ``kilonova afterglow'' model. While the bulk of the kilonova material coasts at a sub-relativistic velocity, a less massive tail of material is expected to flow out at substantially higher velocities \citep{Bauswein13,Hotokezaka13}. The material is postulated to have an energy distribution which is a power-law in the four-velocity: $E_{>u}(u) = E_{\mathrm{tot}} (u/u_{\mathrm{min}})^{-k}$. We use the same MCMC routines as for the structured jet analysis and the isotropic outflow model from \cite{Troja18}, reparameterized for a kilonova-like outflow.
This model is specified by the power-law index $k$ of the ejecta velocity stratification, a total ejecta mass $M_{ej} = 2k/(k+2) u_{\mathrm{min}}^{-2} E_{\mathrm{tot}} c^{-2}$, a maximum ejecta four-velocity $u_{\max}$, a minimum velocity $\beta_{\rm min}$, as well as the environmental and synchrotron parameters $n_0$, $p$, $\varepsilon_{e}$, and $\varepsilon_{B}$. It is not a given that $\varepsilon_{e}$, $\varepsilon_{B}$, and $p$ are identical for the jet and kilonova components.
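The reparameterization can be made concrete with a short sketch; the numerical values are illustrative, not fit results:
\begin{verbatim}
# E_tot from M_ej = 2k/(k+2) u_min^{-2} E_tot c^{-2}, inverted:
#   E_tot = M_ej c^2 (k+2)/(2k) u_min^2.
c = 2.998e10                    # cm/s
Msun = 1.989e33                 # g

def E_tot(M_ej_msun, k, beta_min):
    u_min = beta_min / (1.0 - beta_min**2)**0.5   # min. four-velocity
    return M_ej_msun * Msun * c**2 * (k + 2.0) / (2.0 * k) * u_min**2

print(E_tot(0.025, k=5.0, beta_min=0.3))   # ~3e51 erg
\end{verbatim}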
The structured jet fits used a parallel tempered ensemble MCMC sampler with 20 geometrically spaced temperatures between 1 and $10^6$. Each temperature rung was occupied by 100 walkers, and the chain was run for 20,000 iterations. The kilonova afterglow fits were run using a standard ensemble sampler with 300 walkers for 64,000 iterations. Further details of the method can be found in the references listed above.
Our models were compared to the X-ray, radio, and optical afterglow light curves using the same data set described in \citet{Troja19}, supplemented with the latest data from \citet{Mooley18b,Fong19,Hajela19} and this work.
To compare different models we utilize the Widely Applicable Information Criterion (WAIC; \citealt{Watanabe10}). The WAIC is an estimate of the ``expected log predictive density'' (\emph{elpd}): a score measuring the likelihood that new data will be well described by the current model \citep{Gelman13}. The \emph{elpd} measures the predictive power of a fit: it rewards a tight match to the data while penalizing overfitting and extraneous parameters. The WAIC is proven to be asymptotically equal to the \emph{elpd} for a wide range of models and is straightforward to compute from MCMC posterior samples, whereas the \emph{elpd} itself can only be computed if the true model is known. We use the $p_{\mathrm{WAIC} 2}$ estimator for the effective number of parameters \citep{Gelman13}.
Following \citet{Vehtari17} we compute the WAIC score for each model at every data point. The total WAIC score WAIC$_{elpd}$ for a model is the sum of the scores for each data point. Each model score and score difference $\Delta$WAIC$_{elpd}$ have a standard error computed from the variance over the contributions from each data point. This standard error is likely optimistic but within a factor of 2 of the true value \citep{Bengio04}. In a two-way comparison, a model is favoured if its $\Delta$WAIC is several times larger than its standard error.
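A minimal sketch of these estimators, operating on an array of pointwise log-likelihoods over posterior samples, is given below:
\begin{verbatim}
import numpy as np

def waic_elpd(log_lik):
    # log_lik: shape (n_samples, n_data), pointwise log-likelihoods.
    lppd = np.sum(np.log(np.mean(np.exp(log_lik), axis=0)))
    p_waic2 = np.sum(np.var(log_lik, axis=0, ddof=1))  # eff. params
    return lppd - p_waic2

def waic_diff(log_lik_A, log_lik_B):
    # Score difference and its standard error from per-point variance.
    pw = lambda ll: (np.log(np.mean(np.exp(ll), axis=0))
                     - np.var(ll, axis=0, ddof=1))
    d = pw(log_lik_A) - pw(log_lik_B)
    return d.sum(), np.sqrt(len(d) * np.var(d, ddof=1))
\end{verbatim}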
\newpage
\section{Results}
Two and a half years after the merger,
{\it Chandra} continues to detect X-ray emission at the location of GW170817.
Comparably long-lived X-ray emission is rare in GRBs and has been reported only for long-duration bursts, such as GRB~130427A \citep{DePasquale16} and GRB~980425 \citep{Kouveliotou04}.
For a spectral index $\beta$=0.585, the extrapolation of the observed X-ray emission
corresponds to $F606W$\,$\approx$29.7$\pm$0.3 AB mag in the optical and $\approx$5$\pm$2~$\mu$Jy at 3~GHz. For comparison, at the GW location \textit{HST}/WFC3 can reach a 5\,$\sigma$ point-source sensitivity of $F606W$\,$\approx$28 AB mag in four orbits \citep{Lamb19}, whereas a 6~hr long VLA observation can reach a 5\,$\sigma$ sensitivity of $\approx$10-15\,$\mu$Jy in S-band\footnote{https://obs.vla.nrao.edu/ect/}.
X-ray observations therefore remain the most
powerful probe into the faintest stage of the GW counterpart.
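The extrapolation above is a single power law, $F_\nu \propto \nu^{-\beta}$; for example (with flux values of the same order as Table~\ref{tab:obs}, for illustration only):
\begin{verbatim}
beta = 0.585
nu_X, F_X = 1.2e18, 5e-5     # ~5 keV in Hz; ~5e-5 uJy (illustrative)
nu_radio = 3.0e9             # 3 GHz
F_radio = F_X * (nu_radio / nu_X)**(-beta)
print(F_radio)               # a few uJy, as quoted above
\end{verbatim}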
In the latest epochs, the measured X-ray flux is higher than model predictions based
on the earlier dataset \citep{Troja19}, suggesting a shallower temporal decay.
Contamination from an unrelated X-ray source seems unlikely.
The probability of a background AGN of comparable flux is about $ 10^{-4}$\,arcsec$^{-2}$ \citep{Georgakakis08}. The density of luminous X-ray sources within the galaxy is also relatively small,
as can be directly seen from Fig.~\ref{fig:image}.
The population of X-ray binaries in elliptical galaxies is in part associated with globular clusters; however, deep {\it HST} observations find no globular cluster at the transient position \citep{Troja17,Lamb19}.
The density of field X-ray binaries depends on the specific star formation rate (sSFR). Present systematic studies cover the range log(sSFR)$>-12.1$ \citep{Lehmer19}, while NGC~4993 has a much lower value, log(sSFR)$<-13$ \citep{Im17}.
Assuming that the relationship established at higher values of sSFR holds, $\lesssim\,10$ X-ray binaries with $L_X\gtrsim$3$\times$10$^{38}$\,erg\,s$^{-1}$ are expected in NGC~4993. Taking into account the distribution of X-ray sources as a function of their radial offset \citep{Mineo14}, we derive a chance alignment of $\approx10^{-3}$ arcsec$^{-2}$ at the position of GW170817.
Any significant departure from the jet model is likely inherent to the source, and could be caused by several factors, which we discuss below.
\begin{figure}
\includegraphics[width=1.0\columnwidth]{lcjet.pdf}
\vspace{-0.3cm}
\caption{X-ray afterglow light curve of GW~170817, including
\textit{Chandra} (filled circles) and \textit{XMM-Newton} (open circles) measurements.
The dashed line shows the best fit results from earlier work \citep{Ryan19}, based on the first year of data \citep{Troja19}.
The dark (light) blue range shows the 68\% (95\%) uncertainty region of the updated fit, including the entire dataset.
The solid line shows the best fit non-spreading jet model.
}
\label{fig:jet_only_fits}
\end{figure}
\subsection{Jet}
Figure \ref{fig:jet_only_fits} compares the X-ray dataset to the range of jet model light curve predictions. The fit results are summarized in Table \ref{tab:fit_results_jet}.
Our previous best fit \citep{Ryan19}, based on the full first year data set, is shown by the dashed curve. The discrepancy between the new data and these earlier predictions is approximately $2 \sigma$, with the previous fit notably under-predicting the new observations.
A refit to the full updated data set is also shown in Fig. \ref{fig:jet_only_fits}, with the solid bands denoting the distribution of X-ray flux estimated by the model. Even though the new best-fit curve intersects the new observations within their error bars, it is nevertheless of interest that the model still systematically underpredicts the late-time observations. Updated posterior parameter constraints are shown in Table \ref{tab:fit_results_jet}. The new constraints are consistent within the uncertainties with those from the first-year data, although both the viewing angle $\theta_v$ and circumburst density $n_0$ center on higher values than before.
Both these increases can be understood on simple grounds. The early rise of the jet fixes the ratio $\theta_v/\theta_c$ but leaves their absolute values relatively unconstrained \citep{Ryan19, np20}. As the jet is slowly approaching the Sedov regime, the brighter than expected late X-ray emission requires a wide jet to contribute more flux. Indeed, Table \ref{tab:fit_results_jet} shows our fit value for the opening angle $\theta_c$ increased from 0.07~rad to 0.09~rad when the new observations were included. Since the early afterglow fixes $\theta_v / \theta_c$, the required viewing angle increases as well. The circumburst density is increased to keep the jet break at 160~d, compensating for the increased viewing angle which would otherwise push the jet break to a later time \citep{Ryan19}.
In Table~\ref{tab:fit_results_jet}, we compare the results of our modelling to additional observing constraints, which were not input into the fit. \cite{Ghirlanda19} constrained the size of the radio centroid at $T_0+207$~d to $\delta < 2.5$ mas at 90\% confidence. All our models are safely within this limit.
A more stringent constraint comes from the apparent velocity $\beta_{\textrm{app}}$ of the center of brightness on the sky. A value of $\beta_{\textrm{app}} = 4.1 \pm 0.5$ was obtained from Very Long Baseline Interferometry (VLBI) by \cite{Mooley18c}, measured between 75~d and 230~d after the burst. The model fit to the first year of data estimates $\beta_{\textrm{app}} = 3.5^{+1.2}_{-0.8}$, consistent with the observed value. However, the updated fit significantly underpredicts the observed centroid motion, estimating only $\beta_{\textrm{app}} = 2.2^{+0.5}_{-0.4}$. This is largely due to the increased viewing angle, of which the superluminal apparent velocity is a sensitive function.
\begingroup
\renewcommand{\arraystretch}{1.5}
\begin{table*}
\caption{Fit results for the jet models. Col.~1 reports the parameter names and units.
Col.~2: a Gaussian structured, spreading jet fit to the first 360 days of observations.
Col.~3: identical jet model fit to all 940 days of data.
Col.~4: a Gaussian jet with spreading artificially stopped. This model is not physical, but serves to bracket the diversity of possible behaviours of spreading jets.
Col.~5: a spreading Gaussian jet with an additional constant X-ray flux.
}
\begin{tabular}{lrrrr}
\hline
\multirow{3}{*}{Parameter} &
\multicolumn{1}{c}{360 d} & \multicolumn{3}{c}{940 d}\\
\cmidrule(lr){2-2} \cmidrule(l){3-5}
& Spreading Jet & Spreading Jet & Non-spreading & Spreading Jet\\
& & & Jet & Plus Constant \\
\hline
$\theta_v$ (rad)
& $0.40^{+0.11}_{-0.11}$
& $0.54^{+0.09}_{-0.10}$
& $0.31^{+0.08}_{-0.08}$
& $0.44^{+0.10}_{-0.11}$ \\
$\log_{10} E_0$ (erg)
& $52.9^{+1.0}_{-0.7}$
& $53.0^{+0.90}_{-0.90}$
& $53.2^{+1.0}_{-0.8}$
& $53.2^{+1.0}_{-1.0}$ \\
$\theta_c$ (rad)
& $0.07^{+0.02}_{-0.02}$
& $0.088^{+0.014}_{-0.015}$
& $0.047^{+0.011}_{-0.011}$
& $0.071^{+0.017}_{-0.018}$ \\
$\theta_w$ (rad)
& $0.47^{+0.30}_{-0.19}$
& $0.6^{+0.3}_{-0.3}$
& $0.34^{+0.18}_{-0.14}$
& $0.5^{+0.3}_{-0.2}$ \\
$\log_{10} n_0$ (cm$^{-3}$)
& $-2.7^{+1.0}_{-1.0}$
& $-1.7^{+0.9}_{-1.0}$
& $-2.7^{+1.1}_{-1.1}$
& $-2.3^{+1.1}_{-1.1}$ \\
$p$
& $2.170^{+0.010}_{-0.010}$
& $2.139^{+0.010}_{-0.010}$
& $2.160^{+0.009}_{-0.017}$
& $2.146^{+0.012}_{-0.011}$ \\
$\log_{10}\varepsilon_{e}$
& $-1.4^{+0.7}_{-1.1}$
& $-2.0^{+0.8}_{-0.8}$
& $-1.9^{+0.8}_{-1.1}$
& $-2.1^{+0.9}_{-1.0}$ \\
$\log_{10}\varepsilon_{B}$
& $-4.0^{+1.1}_{-0.7}$
& $-3.7^{+0.9}_{-0.9}$
& $-3.8^{+1.1}_{-0.9}$
& $-3.4^{+1.0}_{-1.0}$ \\
\hline
$E_{\mathrm{tot}}$ (erg)
& $50.6^{+0.9}_{-0.7}$
& $50.9^{+0.9}_{-0.9}$
& $50.5^{+1.0}_{-0.8}$
& $50.8^{+0.9}_{-0.8}$ \\
$\beta_{\mathrm{app}}$ ($c$)
& $3.5^{+1.2}_{-0.8}$
& $2.2^{+0.5}_{-0.4}$
& $4.3^{+1.4}_{-0.9}$
& $2.7^{+1.0}_{-0.6}$ \\
$\delta_{\mathrm{rms}}$ (mas)
& $0.60^{+0.3}_{-0.14}$
& $0.61^{+0.12}_{-0.09}$
& $0.48^{+0.16}_{-0.10}$
& $0.75^{+0.3}_{-0.14}$ \\
\hline
$\tilde{\chi}^2$ (dof)
& 1.51 (94)
& 1.20 (94)
& 1.29 (94)
& 1.18 (93) \\
WAIC$_{\mathrm{elpd}}$
& --
& $694.5$
& $690.4$
& $695.7$ \\
$\Delta$ WAIC$_{\mathrm{elpd}}$
& --
& 0.0
& $-4.1 \pm 3.1$
& $1.2 \pm 1.4$ \\
\hline
\end{tabular}\\
\justify
\textbf{Notes -} Marginalized posterior values for each fit parameter, the median and 68\% confidence interval, from the MCMC runs are given in columns 2 - 5, rows 1-8.
Rows 9-11 give the marginalized posterior values for the total energy $E_{\mathrm{tot}}$, apparent velocity $\beta_{\mathrm{app}}$ measured between the VLBI observations \citep{Mooley18c}, and rms width of the centroid during the EVN observations \citep{Ghirlanda19}, respectively, also with median and 68\% confidence interval.
The last three rows give the reduced $\tilde{\chi}^2$ value of the maximum-posterior estimate (and degrees of freedom for each fit), the WAIC estimate of the expected log predictive density (elpd), and difference between the WAIC values and the spreading Gaussian Jet fit with standard error.
A higher elpd indicates a model better able to predict the data.
\label{tab:fit_results_jet}
\end{table*}
\endgroup
In our Gaussian jet model the observed motion of the radio afterglow centroid, which requires smaller viewing angles, appears therefore in slight tension with the late X-ray flux, which instead favors larger viewing angles. The tension could be alleviated if the afterglow light curve were able to flatten faster than our current modelling allows. Such an effect could originate from the dynamics of the GRB jet, changes to the emitted synchrotron spectrum, or possibly an additional emission component.
Because the spreading of GRB jets occurs during an intermediate dynamical regime between ultra-narrow highly relativistic flow and broad non-relativistic flow, the evolution of the jet during the spreading stage is more sensitive to the details of outflow geometry than either asymptotic limit of behaviour would suggest. This affects both multi-dimensional hydrodynamical simulations of jets and semi-analytical models. Our model is based on a semi-analytical model for jet spreading \citep{Ryan19, vanEerten10}, and shares this sensitivity. For that reason, we also test the extreme assumption of no spreading at all. Such a jet is non-physical, but serves to bracket the range of jet model light curve predictions.
We ran a fit to the full dataset with a non-spreading Gaussian jet. The best-fit (maximum posterior) light curve is shown in Figure~\ref{fig:jet_only_fits} (solid line) and a summary of the fit results is presented in Table~\ref{tab:fit_results_jet}. The non-spreading jet has a slower decay after the jet break and is more easily able to accommodate the late data points, while requiring an earlier and broader peak. Changing the model assumption about jet spreading mostly affects our inferred values for the angles and circumburst density (see Table~\ref{tab:fit_results_jet}). These end up smaller, consistent with the previous estimates derived from the dataset at 360~d but outside the uncertainties from the fit to the full dataset. The apparent velocity increases to $\beta_{\textrm{app}}=4.3^{+1.4}_{-0.9}$ due to the smaller viewing angle, and is consistent with the observed value.
Although this model does not describe a realistic jet configuration, this fit serves to demonstrate that the interpretation of afterglow data at these late times is highly sensitive to the dynamics of jet spreading.
Both for jets with and without lateral spreading, the full transition to the non-relativistic regime takes
$t_{\rm NR}$\,$\approx$\,$10^4$ days to complete and will not impact the light curve at the current time scale of observations for a reasonable range of model parameter values. The same holds for the appearance of the counter jet, which our models project to temporarily lead to a near-flat light curve between 3000-5000 days after the burst (at around 10$^{-16}$\,erg\,cm$^{-2}$\,s$^{-1}$ at X-ray frequencies and around 0.2\,$\mu$Jy at 3 GHz).
Rather than the divergence between model and data being due to limitations of the model, the jet dynamics might also genuinely change under changing external conditions, specifically a change in circumburst density. Analytical modeling for a homogeneous environment shows that the flux below the cooling break scales with the circumburst number density $n$ as $n^{1/2}$ and $n^{0.4}$ (for $p=2.2$) in the relativistic and non-relativistic limits, respectively (see e.g. \citealt{Leventis12}). In other words, it would merely take a factor of four increase in density at distances beyond about a parsec from the merger site (the approximate distance traveled by the jet when observed at its light curve peak around 160 days) in order for the light curve baseline to drift towards a factor of two increase, consistent with the latest observations.
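The arithmetic behind the last statement is immediate:
\begin{verbatim}
# Flux below the cooling break scales as n^0.5 (relativistic) to
# n^0.4 (non-relativistic); a factor-4 density jump then gives:
for s in (0.5, 0.4):
    print(4.0**s)   # 2.0 and ~1.74: roughly a factor-two increase
\end{verbatim}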
A change in the light curve slope
can also occur if the synchrotron cooling break frequency enters or exits the X-ray band. However, our structured jet modeling shows that both radio and X-ray light curves remain in the same spectral regime between injection break $\nu_m$ and cooling break $\nu_c$ throughout our observations, and that $\nu_c$ shifts upwards again after a closest approach to the X-ray band during the light curve peak around 160 days.
This is exactly the same evolutionary pattern for $\nu_c$ as predicted across jet breaks from ultra-high resolution numerical hydrodynamics simulations (starting from top-hat initial conditions, see Fig~4 of \citealt{vanEertenMacFadyen13})
and matches the evolution of the hardness ratio (Figure~\ref{fig:hr}),
although, given the large error bars, it is not possible to draw too strong a conclusion about this similarity.
We therefore find it unlikely that the cooling frequency $\nu_c$ affects the latest X-ray observations.
Finally, it could be the case that the synchrotron parameters themselves evolve over time. For example, the value of $p$ evolving closer to 2, as expected for non-relativistic shock speeds \citep{Blandford78, Bell78}, would indeed lead to a harder spectrum (from 0.585 for $p=2.17$ to 0.5 for $p = 2$) and shallower temporal slope (since $\alpha$ in $t^{-\alpha}$ equals $p$ for a fast spreading jet, $3p/4$ for a non-spreading jet and $(15p - 21)/10$ in the non-relativistic limit, see \citealt{ZhangMeszaros04, PK04, Frail00}, respectively).
However, this would also affect the overall flux normalization, which contains an $\varepsilon_e (p-2)$ term. Although the impacts of these effects on the light curve will be mitigated by the spread in emission arrival times from the blast wave, it would still require $\varepsilon_e$ to co-evolve such that a substantial shift in baseline flux level is to be avoided.
An updated broadband measurement of the slope of $p$ could directly answer the question whether $p$ is indeed evolving, but for now we conclude that the tentative flattening of the light curve has not been established as a generic prediction of such a scenario.
\begin{figure}
\centering
\includegraphics[width=0.98\columnwidth]{lckn.pdf}
\vspace{-0.0cm}
\caption{X-ray afterglow described with a Jet+Kilonova afterglow model (thin solid line), derived from broadband fitting. The shaded gray areas show the range of X-ray fluxes estimated by the model (light gray: 95\% c.l., dark gray: 68\% c.l.).
The dotted line shows the contribution of the jet component,
whereas the thick solid lines show the evolution of the kilonova afterglow for different velocity indices $k$.
The three kilonova models were generated for the same set of input parameters ($M_{\rm ej}$ = 0.025\,$M_{\odot}$,
$\beta_{\rm min}$ = 0.3$c$,
$p$ = 2.01,
$n$ = 8$\times$10$^{-3}$\,cm$^{-3}$,
$\varepsilon_{B}$ = 6$\times$10$^{-5}$) and three different pairs
of values ($k$=8,$\varepsilon_{e}$=0.17; top), ($k$=5,$\varepsilon_{e}$=0.089; middle),
and ($k$=3,$\varepsilon_{e}$=0.045; bottom).
}
\label{fig:KNlc}
\vspace{0.4cm}
\includegraphics[width=0.98\columnwidth]{kdist.pdf}
\vspace{-0.1cm}
\caption{Posterior distribution on the ejecta velocity index $k$, assuming the kilonova afterglow contributes to the observed X-ray flux at 2.5~yrs (orange). Radio upper limits were also included in the fit. In purple, the posterior distribution on $k$ if the X-ray flux remains above the current level up to 5 years after the merger.
}
\label{fig:KNkdist}
\end{figure}
\subsubsection{Limits On An Additional Component}\label{sec:add}
Table \ref{tab:fit_results_jet} presents the results of fitting an additive constant X-ray flux to the spreading, Gaussian structured jet afterglow. In such a scenario the viewing angle $\theta_v$ and circumburst density $n_0$ are somewhat reduced compared to their values from the jet alone, and consistent with the values
derived from the 1 year dataset.
The additional flux density at 5 keV is constrained to $F_X = (2.8 \pm 1.2)\times 10^{-5} \mu$Jy, corresponding to (8 $\pm$ 3) $\times$ 10$^{-15}$ erg\,cm$^{-2}$\,s$^{-1}$, about half the observed flux at $T_0+939$ d. The smaller viewing angle causes a larger apparent velocity, $\beta_{\mathrm{app}} = 2.7^{+1.0}_{-0.6}$, consistent with the observations of \citet{Mooley18c}.
The improvement in WAIC score between the jet plus constant and the standard jet is marginal ($1.2 \pm 1.4$), and does not warrant the addition of another parameter in the model.
\subsection{Kilonova Afterglow}
We use the latest X-ray and radio observations
to constrain the range of valid kilonova afterglow models\footnote{For simplicity, we only discuss kilonova models that do not invoke additional energy injection from a long-lasting central engine \citep[e.g.][]{gao13}}.
In lieu of running a combined fit with both structured jet and kilonova afterglows, we use the structured jet plus constant fit (Sect.~\ref{sec:add}) as a measure of the possible contribution of the kilonova afterglow to the current epoch.
We run a simple MCMC fit with the kilonova afterglow model to the X-ray flux $F_X\approx8\times10^{-15}$\,erg\,cm$^{-2}$\,s$^{-1}$, as well as the latest radio upper limits. There are no other constraints apart from the priors and the requirement that the light curve be currently rising.
We focus our study on the emission arising from the fastest ejecta, often referred to as the ``blue'' kilonova component, as it is expected to peak earlier and initially be brighter \citep{Alexander17, Kathirgamaraju19}.
Our prior on $M_{ej}$ is a normal distribution with mean $2.25 \times 10^{-2} M_{\odot}$ and width $0.75 \times 10^{-2} M_{\odot}$,
as derived from the modeling of AT2017gfo \citep[e.g.][]{Arcavi2017,Evans2017,Nicholl2017, Kasen17,Pian2017,Tanvir2017,Troja17}.
Our prior on the minimum outflow velocity $\beta_{\mathrm{min}}$ is a normal distribution with mean $0.3$ and width $0.05$, as lower values would lead to delayed and dimmer peaks below our detection limits \citep{Kathirgamaraju19}.
The velocity distribution index $k$ was given a uniform prior between $1$ and $10$.
The circumburst density $n_0$ was given a log-uniform prior between $10^{-3}$ and $10^{-1}$ cm$^{-3}$ in agreement with the constraints from the jet model.
The electron spectral index $p$ was given a uniform prior between 2 and 3, while $\varepsilon_{e}$ and $\varepsilon_{B}$ were given log-uniform priors between $10^{-5}$ and 1. We note these parameters are under no obligation to take identical values in both the structured jet and kilonova afterglow.
We find the current data set admits a broad range of kilonova models (Figure~\ref{fig:KNlc}) and is insufficient to provide strong constraints on any of the parameters, including the velocity distribution index $k$. Figure \ref{fig:KNkdist} shows the posterior probability distribution on $k$. Essentially any value is consistent with current observations. Preliminary constraints (disfavoring $k$\,$<$6) were derived by \citet{Hajela19}; our exploration of the parameter space instead finds a broader range of possible solutions.
This result is consistent with the analysis presented in
\citet{Hajela19}, in particular their Fig.~5 showing a wide range
of allowed values, but does not support the conclusion $k$\,$\geq$6.
Higher values of $k$ result in fainter initial emission and a steep rise to the peak flux. Lower values of $k$ are instead brighter at earlier times, with a slow rise to the final flux. These are easily brought into agreement with the current observations by a slight reduction of $\varepsilon_{e}$ and $\varepsilon_{B}$ (Figure~\ref{fig:KNlc}).
Continued monitoring of this target would therefore be critical to determine the rising slope of the kilonova afterglow component, and constrain the ejecta velocity profile.
Unfortunately, due to the large number of parameters and uncertainty in the physical properties of the kilonova blast wave, it is difficult to make robust conclusions about its afterglow emission at this time.
Ultimately, the large uncertainty in the synchrotron parameters $\varepsilon_{e}$ and $\varepsilon_{B}$ dominates the analysis, and will only be overcome with successful observations.
As shown in Figure~\ref{fig:predictions}, the same observing settings thus far adopted to monitor GW170817 probe the top 30\% of the estimated flux distribution and
could detect the kilonova afterglow under favorable conditions.
\subsection{Energy injection from a pulsar}
Another possibility for flattening the light curve is to invoke energy injection from a long-lived NS. This possibility was suggested by \cite{Piro19} to account for the X-ray variability around 160 days and to interpret some of the features in the kilonova AT2017gfo associated with GW170817 \citep{Yu18,Li18,Wollaeger19}. A long-lived NS central engine is allowed by the EM and GW observational data, as long as the surface dipole field strength is not very strong \citep{Ai18} and the NS equation of state is stiff enough \citep{Ai19}. For such a NS, the spindown time scale can be of the order of years, so that significant energy injection is still possible at the time of our observations. Indeed, \cite{Piro19} predicted a flattening of the lightcurve based on the model parameters they used to interpret the X-ray variability.
We consider a general energy injection law from the central engine,
$L(t) \propto t^{-q}$,
where $q<1$ is needed to give a noticeable change of blastwave dynamics \citep{Zhang01}. We consider two possibilities. The first is that the spindown luminosity is injected into the blastwave as a Poynting flux. For GW170817/GRB 170817A, the current epoch is already in the post-jet-break phase, since the light curve is in the rapid decay regime. Assuming that the blastwave is still in the relativistic regime and that sideways expansion is not important, one can derive analytical decay slopes. For a constant density medium (which is relevant for NS-NS mergers), one has \citep{Zhang18}\footnote{The relevant parameters are for the jet core and an on-axis observer. For a structured jet with a large viewing angle like the case of GRB 170817A, these scalings are relevant after the jet core enters the line of sight, i.e. during the rapid decay phase.} $\Gamma \propto t^{-(2+q)/8}$, $r \propto t^{(2-q)/4}$, $\nu_m \propto t^{-(2+q)/2}$, $\nu_c \propto t^{(q-2)/2}$. The peak flux density can be estimated as $F_{\rm \nu,max} \propto r^3 B' \Gamma [\theta_j^2 / (1/\Gamma)^2] \propto r^3 \Gamma^4 \propto t^{(2-5q)/4}$. For $\nu_m < \nu < \nu_c$, which is relevant for X-rays at such a late epoch, the flux density evolution should satisfy
\begin{equation}
F_\nu \propto t^{\frac{2-5q}{4}-\frac{(p-1)(2+q)}{4}}.
\end{equation}
This expression is consistent with the pre-jet-break energy injection theory \citep{Zhang06} if the edge effect correction factor $[\theta_j^2 / (1/\Gamma)^2]$ is removed. For $q=0$ relevant to pulsar injection in the pre-spindown phase, this gives $F_\nu \propto t^{(2-p)/2}$, which is nearly flat (for our best fit $p = 2.17$, this gives $t^{-0.085}$). This is consistent with the numerical result presented in Figure~\ref{fig:predictions}. For this first scenario, energy injection should be achromatic. The same flattening feature should appear in the radio band as well.
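As a quick numerical check of this slope (a sketch, not part of our fitting code):
\begin{verbatim}
# alpha in F_nu ~ t^alpha for injection L ~ t^-q, nu_m < nu < nu_c:
def alpha_inj(q, p):
    return (2.0 - 5.0 * q) / 4.0 - (p - 1.0) * (2.0 + q) / 4.0

print(alpha_inj(q=0.0, p=2.17))   # -0.085: nearly flat light curve
\end{verbatim}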
\begin{figure}
\includegraphics[width=0.98\columnwidth]{predictions.pdf}
\caption{X-ray and radio light curves of GW170817, showing the possible future evolution of the emission components:
the relativistic jet (solid line), the kilonova afterglow (dotted lines), and the remnant neutron star (dashed line). Emission from the counter-jet causes a flattening of the jet lightcurve at $\approx$10 years after the merger.
For the kilonova afterglow, we report the two upper bounds (95\% and 68\% confidence intervals) on the estimated flux distribution (cf. Figure~\ref{fig:KNlc}).
On the right we report the 5\,$\sigma$ sensitivity of typical observing settings
for ATCA, VLA, and \textit{Chandra} (CXO), as well as
for the next generation ngVLA \citep{CorsiWP} and the Athena X-ray observatory. }
\label{fig:predictions}
\end{figure}
The second scenario invokes an internal dissipation of the pulsar wind, which manifests as the so-called ``internal plateaus'' observed in both long \citep{Troja07,Lvzhang14} and short \citep{rowlinson10,lv15} GRBs. The temporal profile should directly follow $\propto t^{-q}$, which is also flat for $q=0$. The light curve should be chromatic, as seen in GRB afterglows \citep{Troja07,rowlinson10,Lvzhang14,lv15}, and the radio band may not show a flattening simultaneously with the X-ray band. Since all the other flattening mechanisms (discussed earlier in Sects.~3.1 and 3.2) also predict achromatic behavior, a detection of chromatic behavior between the X-ray and radio bands would provide a definite clue about a long-lived central engine.
If the flattening is indeed caused by energy injection of a long-lived pulsar, the spindown time scale should be at least this long, i.e. \citep{dai98,Zhang01}
\begin{equation}
T_{\rm sd} \sim (2\times 10^7 \ {\rm s}) \ B_{p,13}^{-2} P_{0,-3}^2 > 1,000 \ {\rm d},
\end{equation}
where $B_p = 10^{13} \, {\rm G} \ B_{p,13}$ is the surface polar magnetic field strength, and $P_0 = 1 \, {\rm ms} \ P_{0,-3}$ is the initial spin period of the pulsar. This condition is readily satisfied if $B_p$ is below a few times $10^{12}$~G, which is consistent with the constraints from other observations of this event \citep{Ai18,Ai19,Piro19}. Within the energy injection model, lightcurve flattening appears when the injected energy exceeds the original energy in the blastwave, and ceases when the total available spin energy has been injected. According to our structured jet modeling, the total kinetic energy in the jet is $\sim 10^{50}-10^{52}$ erg, with a median value of $5\times 10^{50}$ erg. This is smaller than the typical available spin energy of a new-born millisecond pulsar from an NS-NS merger (typically a few $10^{52}$ erg, but it could be smaller due to possible secular gravitational wave losses, \citealt{fan13,gao16}). As a result, such an energy injection is expected if the merger product is indeed a long-lived neutron star. The injected energy may be up to a few to a few hundred times the existing energy in the jet, so that the injection episode may last for years according to this model.
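The condition on the field strength follows directly from the spindown formula; the values below are examples only:
\begin{verbatim}
# T_sd ~ 2e7 s * B_{p,13}^{-2} * P_{0,-3}^2, expressed in days.
def T_sd_days(B_p_G, P0_ms):
    return 2.0e7 * (B_p_G / 1.0e13)**-2 * P0_ms**2 / 86400.0

print(T_sd_days(3.0e12, 1.0))   # ~2600 d > 1000 d for B_p ~ 3e12 G
\end{verbatim}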
\section{Conclusions}
Whereas optical and radio emission from GW170817 have now faded below detection threshold, its X-ray counterpart continues to be visible at 2.5 years after the NS merger. Earlier predictions of the structured jet model systematically underestimate the latest {\it Chandra} detections. A Gaussian structured jet can still reproduce the afterglow temporal evolution by increasing the viewing angle to $\approx$30$^{\circ}$, although this updated model underpredicts the centroid motion, as constrained by high-resolution radio imaging. Alternatively, the slow X-ray decline could indicate a genuine new feature
of the afterglow, originating from the dynamics of the GRB jet, changes to the emitted synchrotron spectrum, or possibly an additional emission component. The latter contribution is constrained by our modeling to $F_X$\,$\approx$8$\times$10$^{-15}$\,erg\,cm$^{-2}$\,s$^{-1}$,
corresponding to an X-ray luminosity
$L_X$\,$\approx$1.5$\times$10$^{38}$\,erg~s$^{-1}$ (0.3-10~keV).
Continued energy injection by a long-lived central engine would cause a persistent flattening of the X-ray lightcurve. Depending on the origin of this emission (internal or external), the same flattening could also be observed in the radio band. Alternatively, the observed behavior could mark the onset of a non-thermal ``kilonova afterglow'', produced by the interaction of the sub-relativistic merger ejecta with the surrounding medium.
We find that the current dataset is not sufficient to meaningfully constrain any of the parameters of this component, including the velocity distribution index $k$. Our results do not support earlier predictions of $k$\,$\geq$\,6, allowing instead a wide range of values.
Future multi-band observations of this component would be essential to determine the velocity profile of the sub-relativistic ejecta, thus complementing earlier kilonova studies, based on the thermal optical/nIR emission.
\section*{Acknowledgements}
The authors wish to thank the ATCA staff for the support in carrying out the observations during the current health emergency.
ET thanks A. Hornschemeier and A. Basu-Zych for their
helpful feedback, and is grateful to B. A. Vekstein
for her fundamental cooperation during the writing of this manuscript.
This work was partially supported by the National Aeronautics and Space Administration through Chandra Award Number G0920071A issued by the Chandra X-ray Center, which is operated by the Smithsonian Astrophysical Observatory for and on behalf of the National Aeronautics and Space Administration under contract NAS8-03060.
GR acknowledges the support from the University of Maryland through the Joint Space Science Institute Prize Postdoctoral Fellowship. Analysis was performed on the YORP cluster administered by the Center for Theory and Computation, part of the Department of Astronomy at the University of Maryland.
LP acknowledges partial support by the
European Union Horizon 2020 Programme under the AHEAD2020 project (grant
agreement number 871158).
\bibliographystyle{mnras}
\section{Introduction}
A sensor network consists of a large number of low-cost autonomous devices, called \emph{sensors}. Communication between the sensors is performed by wireless radio with very limited range, e.g., via the Bluetooth protocol. To make the network connected, a number of additional devices, called \emph{relays}, must be judiciously placed within the sensor field. Relays are typically more advanced and more expensive than sensors, and, in particular, have a larger communication range. For instance, in addition to a Bluetooth chip, each relay may be equipped with a WLAN transceiver, enabling communication between distant relays. The problem we study in this paper is that of placing a \emph{minimum number} of relays to ensure the connectivity of a sensor network.
Two models of communication have been considered in the literature \cite{%
bredin10deploying,chen00approximations,chen01approximations,%
cheng08relay,liu06optimal,lloyd07relay,srinivas06mobile,%
zhang07fault-tolerant%
}. In both models, a sensor and a relay can communicate if the distance between them is at most~1, and two relays can communicate if the distance between them is at most~$r$, where $r\ge1$ is a given number. The models differ in whether direct communication between sensors is allowed. In the \emph{one-tier} model two sensors can communicate if the distance between them is at most~1. In the \emph{two-tier} model the sensors do not communicate at all, no matter how close they are. In other words, in the two-tier model the sensors may only link to relays, but not to other sensors.
Formally, the input to the relay placement problem is a set of $n$ sensors, identified with their locations in the plane, and a number $r\ge1$, the communication range of a relay (by scaling, without loss of generality, the communication range of a sensor is~$1$). The objective in the \emph{one-tier} relay placement is to place a minimum number of relays so that between every pair of sensors there exists a path, \emph{through sensors and/or relays}, such that the consecutive vertices of the path are within distance $r$ if both vertices are relays, and within distance~1 otherwise. The objective in the \emph{two-tier} relay placement is to place a minimum number of relays so that between every pair of sensors there exists a path \emph{through relays} such that the consecutive vertices of the path are within distance $r$ if both vertices are relays, and within distance~1 if one of the vertices is a sensor and the other is a relay (going directly from a sensor to a sensor is forbidden).
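For concreteness, the connectivity requirement can be checked directly on the combined communication graph; the following is a minimal Python sketch (our illustration, not part of the problem statement), assuming at least one sensor. For the two-tier model one would additionally forbid sensor--sensor links.
\begin{verbatim}
from collections import deque
from math import dist

def one_tier_connected(sensors, relays, r):
    # node = (point, is_relay); relay-relay links use range r,
    # all links involving a sensor use range 1
    nodes = [(p, False) for p in sensors] + [(p, True) for p in relays]
    def linked(a, b):
        limit = r if a[1] and b[1] else 1.0
        return dist(a[0], b[0]) <= limit
    seen, queue = {0}, deque([0])        # BFS from the first sensor
    while queue:
        i = queue.popleft()
        for j in range(len(nodes)):
            if j not in seen and linked(nodes[i], nodes[j]):
                seen.add(j)
                queue.append(j)
    return all(j in seen for j in range(len(sensors)))
\end{verbatim}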
\subsection{Previous Work}
One-tier relay placement in the special case of $r=1$ \cite{bredin10deploying,cheng08relay} is equivalent to finding a Steiner tree with minimum number of Steiner nodes and bounded edge length -- the problem that was studied under the names STP-MSPBEL \cite{lin99steiner}, SMT-MSPBEL \cite{lloyd07relay,zhang07fault-tolerant}, MSPT \cite{mandoiu00note}, and STP-MSP \cite{chen00approximations,chen01approximations,cheng08relay,liu06optimal,srinivas06mobile}.
\citet{lin99steiner} proved that the problem is NP-hard and gave a 5-approximation algorithm. Chen et al.\ \cite{chen00approximations,chen01approximations} showed that the algorithm of Lin and Xue is actually a 4-approximation algorithm, and gave a 3-approximation algorithm; \citet{cheng08relay} gave a 3-approximation algorithm with an improved running time, and a randomised 2.5-approximation algorithm. Chen et al.\ \cite{chen00approximations,chen01approximations} presented a polynomial-time approximation scheme (PTAS) for minimising the \emph{total} number of vertices in the tree (i.e., with the objective function being the number of the original points plus the number of Steiner vertices) for a restricted version of the problem, in which in the minimum spanning tree of the set the length of the longest edge is at most constant times the length of the shortest edge.
For the general case of \emph{arbitrary} $r\ge1$, the current best approximation ratio for one-tier relay placement is due to \citet{lloyd07relay}, who presented a simple 7-approximation algorithm, based on ``Steinerising'' the minimum spanning tree of the sensors. In this paper we give an algorithm with an improved approximation ratio of 3.11.
Two-tier relay placement (under the assumptions that the sensors are uniformly distributed in a given region and that $r\ge4$) was considered by \citet{hao04fault-tolerant} and \citet{tang06relay}, who suggested constant-factor approximation algorithms for several versions of the problem. \citet[Thm.~4.1]{lloyd07relay} and \citet[Thm.~1]{srinivas06mobile} developed a general framework whereby, given an $\alpha$-approximate solution to Disk Cover (finding a minimum number of unit disks to cover a given set of points) and a $\beta$-approximate solution to STP-MSPBEL (see above), one may find an approximate solution for the two-tier relay placement. In more detail, the algorithm in \citet{lloyd07relay} works for arbitrary $r\ge1$ and has an approximation factor of $2\alpha+\beta$; the algorithm in \citet{srinivas06mobile} works for $r\ge2$ and guarantees an $(\alpha+\beta)$-approximate solution. Combined with the best known approximation factors for Disk Cover \cite{hochbaum85approximation} and STP-MSPBEL \cite{chen00approximations,chen01approximations,cheng08relay}, these lead to $5+\varepsilon$ and $4+\varepsilon$ approximations for the relay placement, respectively. In this paper we present a PTAS for two-tier relay placement; the PTAS works directly for the relay placement, without combining solutions to other problems.
A different line of research \cite{misra08constrained,carmi07covering} concentrated on a ``discrete'' version of relay placement, in which the goal is to pick a minimum subset of relays from a \emph{given} set of possible relay locations. In this paper we allow the relays to reside anywhere in the plane.
\subsection{Contributions}
We present new results on approximability of relay placement:
\begin{itemize}[itemsep=0.5ex]
\item In Section~\ref{sec_apx1tier} we give a simple $O(n\log n)$-time 6.73-approximation algorithm for the one-tier version.
\item In Section~\ref{sec_apx1tierim} we present a polynomial-time 3.11-approximation algorithm for the one-tier version.
\item In Section~\ref{sec_inapx1tier} we show that there is no PTAS for one-tier relay placement (assuming that $r$ is part of the input, and P${}\ne{}$NP).
\item In Section~\ref{sec_ptas} we give a PTAS for two-tier relay placement.
\end{itemize}
Note that the \emph{number} of relays in a solution may be exponential in the size of the input (number of bits). Our algorithms produce a succinct representation of the solution. The representation is given by a set of points and a set of line segments; the relays are placed on each point and equally-spaced along each segment.
\section{Blobs, Clouds, Stabs, Hubs, and Forests}\label{sec_prelim}
In this section we introduce the notions, central to the description of our algorithms for one-tier relay placement. We also provide lower bounds.
\subsection{Blobs and Clouds}
We write $\myd{x}{y}$ for the Euclidean distance between $x$ and~$y$. Let $V$ be a given set of sensors (points in the plane). We form a unit disk graph $\mathcal{G} = (V,E)$ and a disk graph $\mathcal{F} = (V,F)$ where
\begin{align*}
E &= \bigl\{ \{u,v\} : \myd{u}{v} \le 1 \bigr\}, \\
F &= \bigl\{ \{u,v\} : \myd{u}{v} \le 2 \bigr\};
\end{align*}
see Figure~\ref{fig:clouds}a.
\begin{figure}[t]
\centering
\scalebox{0.9}{\input{clouds2.pdf_t}}
\caption{(a)~Dots are sensors in $V$, solid lines are edges in $E$ and $F$, and dashed lines are edges in $F$ only. There are 5 blobs in $\myB$ (one of them highlighted) and 2 clouds $C_1, C_2 \in \myC$. The wide grey line is the only edge in $\mathrm{MStFN}(\myC)$, which happens to be equal to $\mathrm{MSFN}(\myC)$ here. (b)~Stabs. (c)~Hubs.}\label{fig:clouds}
\end{figure}
A \emph{blob} is defined to be the union of the unit disks centered at the sensors that belong to the same connected component of $\mathcal{G}$. We use $B$ to refer to a blob, and $\myB$ for the set of all blobs.
Analogously, a \emph{cloud} $C \in \myC$ is the union of the unit disks centered at the sensors that belong to the same connected component of the graph $\mathcal{F}$. The sensors in a blob can communicate with each other without relays, while the ones in a cloud might not, even though their disks may overlap. Each cloud $C \in \myC$ consists of one or more blobs $B \in \myB$; we use $\myB_C$ to denote the blobs that form the cloud~$C$.
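A minimal Python sketch of this grouping (our illustration): blobs are the connected components of the sensors under distance threshold $1$, clouds under threshold $2$.
\begin{verbatim}
from math import dist

def components(sensors, threshold):
    # union-find over sensor indices; joins pairs within threshold
    parent = list(range(len(sensors)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i
    for i in range(len(sensors)):
        for j in range(i + 1, len(sensors)):
            if dist(sensors[i], sensors[j]) <= threshold:
                parent[find(i)] = find(j)
    groups = {}
    for i in range(len(sensors)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

sensors = [(0.0, 0.0), (0.9, 0.0), (2.5, 0.0)]
blobs  = components(sensors, 1.0)   # [[0, 1], [2]]
clouds = components(sensors, 2.0)   # [[0, 1, 2]]
\end{verbatim}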
\subsection{Stabs and Hubs}
A \emph{stab} is a relay with an infinite communication range ($r=\infty$). A \emph{hub} is a relay without the ability to communicate with the other relays (thus hubs can enable communication within one cloud, but are of no use in communicating between clouds). As we shall see, a solution to stab or hub placement can be used as the first step towards a solution for relay placement.
If we are placing stabs, it is necessary and sufficient to have a stab in each blob to ensure communication between all sensors (to avoid trivialities we assume there is more than one blob). Thus, stab placement is a special case of the set cover problem: the universe is the blobs, and the subsets are sets of blobs that have a point in common. We use $\mathrm{Stab}(\myB')$ to denote the minimum set of stabs that stab each blob in $\myB' \subseteq \myB$. In the example in Figure~\ref{fig:clouds}b small rectangles show an optimal solution to the stab placement problem; 3 stabs are enough.
If we are placing hubs, it is necessary (assuming more than one blob in the cloud), but not sufficient, to have a hub in each blob to ensure communication between sensors within one cloud. In fact, hub placement can be interpreted as a special case of the \emph{connected} set cover problem \cite{cerdeira05requiring,shuai06connected}. In the example in Figure~\ref{fig:clouds}c small rectangles show an optimal solution to the hub placement problem for the cloud $C = C_1$; in this particular case, 2 stabs within the cloud $C$ were sufficient to ``pierce'' each blob in $\myB_C$ (see Figure~\ref{fig:clouds}b), however, an additional hub (marked red in Figure~\ref{fig:clouds}c) is required to ``stitch'' the blobs together (i.e., to establish communication between the blobs). The next lemma shows that, in general, the number of additional hubs needed is less than the number of stabs:
\begin{lemma}\label{lem_stab2hub}
Given a feasible solution\/ $S$ to stab placement on\/ $\myB_C$, we can obtain in polynomial time a feasible solution to hub placement on\/ $\myB_C$ with\/ $2 \s{S}-1$ hubs.
\end{lemma}
\begin{proof}
Let $\mathcal{H}$ be the graph whose nodes are the sensors in the cloud $C$ and the stabs in $S$, and whose edges connect two devices if either they are within distance~1 from each other or if both devices are stabs (i.e., there is an edge between \emph{every} pair of the stabs). Switch off communication between the stabs, thus turning them into hubs. Suppose that this breaks $\mathcal{H}$ into $k$ connected components. There must be a stab in each connected component. Thus, $\s{S} \ge k$.
If $k > 1$, by the definition of a cloud, there must exist a point where a unit disk covers at least two sensors from two different connected components of $\mathcal{H}$. Placing a hub at the point decreases the number of the connected components by at least~1. Thus, after putting at most $k-1$ additional hubs, all connected components will merge into one.
\end{proof}
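The proof is constructive; a minimal Python sketch of the conversion follows (our illustration, with hypothetical geometric subroutines supplying the component count and the merge points).
\begin{verbatim}
# num_components(hubs): connected components of H with relay-relay
# links switched off; merge_point(hubs): midpoint of a length-<=2
# segment joining two components (exists by the cloud definition).
def stabs_to_hubs(stabs, num_components, merge_point):
    hubs = list(stabs)               # reinterpret the stabs as hubs
    while num_components(hubs) > 1:  # at most |stabs| - 1 iterations
        hubs.append(merge_point(hubs))
    return hubs
\end{verbatim}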
\subsection{Steiner Forests and Spanning Forests with Neighbourhoods}
Let $\myP$ be a collection of planar subsets; call them \emph{neighbourhoods}. (In Section~\ref{sec_apx1tier} the neighbourhoods will be the clouds, in Section~\ref{sec_apx1tierim} they will be ``clusters'' of clouds.) For a plane graph $G$, let $\mathcal{G}_{\myP}=(\myP, E(G))$ be the graph whose vertices are the neighbourhoods and two neighbourhoods $P_1,P_2\in\myP$ are adjacent whenever $G$ has a vertex in $P_1$, a vertex in $P_2$, and a path between the vertices.
The \emph{Minimum Steiner Forest with Neighbourhoods} on $\myP$, denoted $\mathrm{MStFN}(\myP)$, is a \emph{minimum-length} plane graph $G$ such that $\mathcal{G}_{\myP}=(\myP, E(G))$ is connected. The $\mathrm{MStFN}$ is a generalisation of the Steiner tree of a set of points. Note that $\mathrm{MStFN}$ is slightly different from Steiner tree with neighbourhoods (see, e.g., \citet{yang07minimum}) in that we are only counting the part of the graph \emph{outside} $\myP$ towards its length (since it is not necessary to connect neighbourhoods beyond their boundaries).
Consider a complete weighted graph whose vertices are the neighbourhoods in $\myP$ and whose edge weights are the distances between them. A minimum spanning tree in the graph is called the \emph{Minimum Spanning Forest with Neighbourhoods} on $\myP$, denoted $\mathrm{MSFN}(\myP)$. A natural embedding of the edges of the forest is by the straight-line segments that connect the corresponding neighbourhoods; we will identify $\mathrm{MSFN}(\myP)$ with the embedding. (As with $\mathrm{MStFN}$, we count the length of $\mathrm{MSFN}$ only \emph{outside}~$\myP$.)
We denote by $\s{\mathrm{MStFN}(\myP)}$ and $\s{\mathrm{MSFN}(\myP)}$ the total length of the edges of the forests. It is known that
\[
\s{\mathrm{MSFN}({P})} \,\le\, \frac{2}{\sqrt{3}} \s{\mathrm{MStFN}({P})}
\]
for a \emph{point} set $P$, where $2/\sqrt{3}$ is the \emph{Steiner ratio} \cite{du90approach}. The following lemma generalises this to neighbourhoods.
\begin{lemma}\label{lem_StRatio}
For any\/ $\myP$, $\s{\mathrm{MSFN}(\myP)} \le (2/\sqrt{3}) \s{\mathrm{MStFN}(\myP)}$.
\end{lemma}
\begin{proof}
If $\myP$ is erased, $\mathrm{MStFN}(\myP)$ breaks into a forest, each tree of which is a minimum Steiner tree on its leaves; the length of each tree is within the Steiner ratio of the length of the minimum spanning tree on the same leaves, and summing over the trees gives the claim.
\end{proof}
\subsection{Lower Bounds on the Number of Relays}\label{ssec:lower-bounds}
Let $R^{*}$ be an optimal set of relays. Let $\mathcal{R}$ be the communication graph on the relays $R^{*}$ alone, i.e., without sensors taken into account; two relays are connected by an edge in $\mathcal{R}$ if and only if they are within distance $r$ from each other. Suppose that $\mathcal{R}$ is embedded in the plane with vertices at relays and line segments joining communicating relays. The embedding spans all clouds, for otherwise the sensors in a cloud would not be connected to the others. Thus, in $\mathcal{R}$ there exists a forest $\mathcal{R}'$, whose embedding also spans all clouds. Let $\s{\mathcal{R}'}$ denote the total length of the edges in $\mathcal{R}'$. By definition of $\mathrm{MStFN}(\myC)$, we have $\s{\mathcal{R}'} \ge \s{\mathrm{MStFN}(\myC)}$.
Let $m$, $v$, and $k$ be the number of edges, vertices, and trees of $\mathcal{R}'$. Since each edge of $\mathcal{R}'$ has length at most $r$, we have $\s{\mathcal{R}'} \le m r = (v - k) r$. Since $v \le \s{R^{*}}$, since there must be a relay in every blob and every cloud, and since the clouds are disjoint, it follows that
\begin{align}
\s{R^{*}} &\ge \s{\mathrm{MStFN}(\myC)} / r, \label{eq_lower_bound_MStTN}\\
\s{R^{*}} &\ge \s{\mathrm{Stab}(\myB)}, \label{eq_lower_bound_Stab}\\
\s{R^{*}} &\ge \s{\myC}. \label{eq_lower_bound_C}
\end{align}
\section{A 6.73-Approximation Algorithm for\texorpdfstring{\\}{ } One-Tier Relay Placement}\label{sec_apx1tier}
In this section we give a simple 6.73-approximation algorithm for relay placement. We first find an approximately optimal stab placement. Then we turn a stab placement into a hub placement within each cloud. Then a spanning tree on the clouds is found and ``Steinerised''.
Finding an optimal stab placement is a special case of the set cover problem. The maximum number of blobs pierced by a single stab is~$5$ (since this is the maximum number of unit disks that can have non-empty intersection while avoiding each other's centers). Thus, in this case the greedy heuristic for the set cover has an approximation ratio of $1+1/2+1/3+1/4+1/5=137/60$ \cite[Theorem~35.4]{cormen01introduction}.
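A minimal Python sketch of this greedy step (our illustration; the candidate stab locations and the set of blobs each one pierces are assumed to be precomputed from the arrangement of the blobs, and every blob is assumed to contain at least one candidate):
\begin{verbatim}
def greedy_stabs(blob_ids, candidates):
    # candidates: list of (point, frozenset of pierced blob ids)
    uncovered, stabs = set(blob_ids), []
    while uncovered:
        # pick the candidate piercing the most uncovered blobs
        point, pierced = max(candidates,
                             key=lambda c: len(c[1] & uncovered))
        stabs.append(point)
        uncovered -= pierced
    return stabs
\end{verbatim}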
Based on this approximation, a feasible hub placement $R_C$ within one cloud $C \in \myC$ can be obtained by applying Lemma~\ref{lem_stab2hub}; for this set of hubs it holds that
\[
\s{R_C}
\,\le\, \frac{137}{30} \s{\mathrm{Stab}(\myB_C)} - 1.
\]
We can now interpret the hubs $R_C$ as relays; if the hubs make the cloud $C$ connected, then so do the relays.
Let $R' = \bigcup_C{R_C}$ denote all relays placed this way. Since the blobs $\myB_C$ for different $C$ do not intersect, $\s{\mathrm{Stab}(\myB)} = \sum_C \s{\mathrm{Stab}(\myB_C)}$, so
\begin{equation}\label{eq_R1}
\s{R'}
\,\le\, \sum_C{\s{R_C}}
\,\le\, \sum_C \left( \frac{137}{30} \s{\mathrm{Stab}(\myB_C)} - 1 \right)
\,=\, \frac{137}{30} \s{\mathrm{Stab}(\myB)} - \s{\myC}.
\end{equation}
Next, we find $\mathrm{MSFN}(\myC)$ and place another set of relays, $R''$, along its edges. Specifically, for each edge $e$ of the forest, we place $2$ relays at the endpoints of $e$, and $\floor{\s{e}/r}$ relays every $r$ units starting from one of the endpoints. This ensures that all clouds communicate with each other; thus $R = R' \cup R''$ is a feasible solution. Since the number of edges in $\mathrm{MSFN}(\myC)$ is $\s{\myC}-1$,
\begin{equation}\label{eq_R2}
\s{R''}
\,=\, 2(\s{\myC} - 1) + \sum_e \left\lfloor \frac{\s{e}}{r} \right\rfloor
\,<\, 2\s{\myC} + \frac{\s{\mathrm{MSFN}(\myC)}}{r}.
\end{equation}
We obtain
\[
\s{R}
\,=\, \s{R'} + \s{R''}
\,\le\, \left(\frac{137}{30} + 1 + \frac{2}{\sqrt{3}}\right) \s{R^{*}}
\,<\, 6.73 \s{R^{*}}
\]
from \eqref{eq_lower_bound_MStTN}--\eqref{eq_R2} and Lemma~\ref{lem_StRatio}.
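The Steinerising step admits a direct implementation; here is a minimal Python sketch for a single forest edge $e=(p,q)$ (our illustration):
\begin{verbatim}
from math import dist, floor

def relays_on_edge(p, q, r):
    # two relays at the endpoints plus floor(|e|/r) relays spaced
    # r apart from p; consecutive relays are then within range r
    length = dist(p, q)
    relays = [p, q]
    for i in range(1, floor(length / r) + 1):
        t = i * r / length
        relays.append((p[0] + t * (q[0] - p[0]),
                       p[1] + t * (q[1] - p[1])))
    return relays
\end{verbatim}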
\subsection{Running Time}
To implement the above algorithm in $O(n\log n)$ time, we construct the blobs (this can be done in $O(n\log n)$ time since the blobs are the union of disks centered on the sensors), assign each blob a unique colour, and initialise a Union-Find data structure for the colours. Next, we build the arrangement of the blobs, and sweep the arrangement $4$ times, once for each $d=5,4,3,2$; upon encountering a $d$-coloured cell of the arrangement, we place the stab anywhere in the cell, merge the corresponding $d$ colours, and continue. Finally, to place the hubs we do one additional sweep.
As for the last step -- building $\mathrm{MSFN}(\myC)$ -- it is easy to see that just as the ``usual'' minimum spanning tree of a set of points, $\mathrm{MSFN}(\myC)$ uses only the edges of the relative neighbourhood graph of the sensors (refer, e.g., to \citet[p.~217]{berg08computational} for the definition of the graph).
Indeed, let $pq$ be an edge of $\mathrm{MSFN}(\myC)$; let $p'$ and $q'$ be the sensors that are within distance $1$ of $p$ and $q$, respectively. If there existed a sensor $s'$ closer than $\myd{p'}{q'}$ to both $p'$ and $q'$, the edge $pq$ could have been swapped for a shorter edge (Figure~\ref{fig:nlogn}).
\begin{figure}[ht]
\centering
\scalebox{1.1}{\input{nlogn.pdf_t}}
\caption{Edge $pq$ could be swapped for a shorter edge.}\label{fig:nlogn}
\end{figure}
It remains to show how to build and sweep the arrangement of blobs in $O(n\log n)$ time.
Since the blobs are unions of unit disks, their total complexity is linear (see, e.g., \citet[Theorem~13.9]{berg08computational}). Moreover, the arrangement of the blobs also has only linear complexity (see Lemma~\ref{lem:arr} below); this follows from the fact that every point can belong to only a constant number of blobs (at most $5$). Thus, we can use sweep to build the arrangement in $O(n\log n)$ time, and also, of course, sweep the arrangement within the same time bound.
\begin{lemma}\label{lem:arr}
The arrangement of the blobs has linear complexity.
\end{lemma}
\begin{proof}
The vertices of the arrangement are of two types -- the vertices of the blobs themselves and the vertices of the intersection of two blobs (we assume that no three blobs intersect in a single point). The total number of the vertices of the first type is linear, so we focus on the vertices of the second type.
Let $A$ be a tile in the infinite unit-square tiling of the plane. There is not more than a constant number $K$ of blobs that intersect $A$ (since there is not more than a constant number of points that can be placed within distance $1$ from $A$ so that the distance between any two of the points is larger than~$1$). Let $n_i$ be the number of disks from blob $i$ that intersect $A$. Every vertex of the arrangement inside $A$ is on the boundary of the union of some two blobs. Because the union of unit disks has linear complexity, the number of vertices that are due to the intersection of blobs $i$ and $j$ is $O(n_i +n_j)$. Since there are at most $K$ blobs for which $n_i \ne 0$, we have
\[
\sum_{i,j} (n_i+n_j) \le \binom{K}2 n(A),
\]
where $n(A)$ is the total number of disks intersecting $A$. Clearly, each unit disk intersects only a constant number of the unit-square tiles, and only a linear number of tiles is intersected by the blobs. Thus, summing over all tiles, we obtain that the total complexity of the arrangement is $O(K^2 n)=O(n)$.
\end{proof}
\section{A 3.11-Approximation Algorithm for\texorpdfstring{\\}{ } One-Tier Relay Placement}\label{sec_apx1tierim}
In this section we first take care of clouds whose blobs can be stabbed with few relays, and then find an approximation to the hub placement by greedily placing the hubs themselves, without placing the stabs first, for the rest of the clouds. Together with a refined analysis, this gives a polynomial-time $3.11$-approximation algorithm. We focus on nontrivial instances with more than one blob.
\subsection{Overview}
The basic steps of our algorithm are as follows:
\begin{enumerate}[noitemsep]
\item Compute optimal stabbings for the clouds that can be stabbed with few relays.
\item Connect the blobs in each of these clouds, using Lemma~\ref{lem_stab2hub}.
\item Greedily connect all blobs in each of the remaining clouds (``stitching'').
\item Greedily connect clouds into clusters, using 2 additional relays per cloud.
\item Connect the clusters by a spanning forest.
\end{enumerate}
Our algorithm constructs a set $A_r$ of ``red'' relays (for connecting blobs in a cloud, i.e., relays added in steps~1--3), a set $A_g$ of ``green'' relays (two per cloud, added in steps~4--5) and a set $A_y$ of ``yellow'' relays (outside of sensor range, added in step~5). Refer to Figures~\ref{fig:red} and~\ref{fig:greenyellow}. In the analysis, we compare an optimal solution $R^{*}$ to our approximate one by subdividing the former into a set $R^{*}_d$ of ``dark'' relays that are within reach of sensors, and into a set $R^{*}_\ell$ of ``light'' relays that are outside of sensor range. We compare $\s{R^{*}_d}$ with $\s{A_r}+\s{A_g}$, and $\s{R^{*}_\ell}$ with $\s{A_y}$, showing in both cases that the ratio is less than~$3.11$.
\begin{figure}\centering
\input{red.pdf_t}
\caption{Red relays placed by our algorithm (the sensors are the solid circles); the numbers indicate the order in which the relays are placed within each cloud. (a)~Stab the clouds that can be stabbed by placing few relays; the clouds are then stitched by placing the hubs as in Lemma~\ref{lem_stab2hub}. (b)~Greedily stitch the other clouds.}\label{fig:red}
\end{figure}
\begin{figure}\centering
\includegraphics[page=1]{figs.pdf}
\caption{(a)~Green relays connect clouds into clusters -- on average, we use at most 2 green relays per cloud. (b)~Green (inside clouds) and yellow (outside clouds) relays interconnect the cloud clusters by a spanning tree.}\label{fig:greenyellow}
\end{figure}
\subsection{Clouds with Few Stabs}
For any constant $k$, it is straightforward to check in polynomial time whether all blobs in a cloud $C\in\myC$ can be stabbed with $i<k$ stabs. (For any subset of $i$ cells of the arrangement of unit disks centered on the sensors in $C$, we can consider placing the relays in the cells and check whether this stabs all blobs.) Using Lemma~\ref{lem_stab2hub}, we can connect all blobs in such a cloud with at most $2i-1$ red relays. We denote by $\myC^{i}$ the set of clouds where the minimum number of stabs is $i$, and by $\myC^{k+}$ the set of clouds that need at least $k$ stabs.
\subsection{Stitching a Cloud from \texorpdfstring{$\myC^{k+}$}{Ck+}}
We focus on one cloud $C \in \myC^{k+}$. For a point $y$ in the plane, let
\[
\myB(y) = \{ B \in \myB_C : y \in B \}
\]
be the set of blobs that contain the point; obviously $\s{\myB(y)} \le 5$ for any $y$. For any subset of blobs $\mathcal{T} \subseteq \myB_C$, define $ \mathcal{S}(\mathcal{T},y) = \myB(y) \setminus \mathcal{T}$ to be the set of blobs \emph{not from $\mathcal{T}$} containing $y$, and define $V(\mathcal{T})$ to be the set of sensors that form the blobs in~$\mathcal{T}$.
Within $C$, we place a set of red relays $A_r^C = \{ y_j : j = 1, 2, \dotsc \}$, as follows:
\begin{enumerate}
\item Choose arbitrary $B_0 \in \myB_C$.
\item Initialise $j \gets 1$, $\mathcal{T}_j \gets \{ B_0 \}$.
\item While $\mathcal{T}_j \ne \myB_C$:
\\[1ex]\hspace*{1em}%
$\begin{aligned}
y_j &\gets \arg \textstyle\max_{y} \{ \s{\mathcal{S}(\mathcal{T}_j,y)} : \myB(y) \cap \mathcal{T}_j \ne \emptyset \}, \\
\mathcal{S}_j &\gets \mathcal{S}(\mathcal{T}_j,y_j), \\
\mathcal{T}_{j+1} &\gets \mathcal{T}_j \cup \mathcal{S}_j, \\
j &\gets j + 1.
\end{aligned}$
\end{enumerate}
That is, $y_j$ is a point contained in a maximum number of blobs \emph{not from $\mathcal{T}_j$} that intersect a blob from $\mathcal{T}_j$. In other words, we stitch the clouds greedily; the difference from the usual greedy (used in the previous section) is that we insist that {\em some} blob stabbed by $y_j$ is already in $\mathcal{T}_j$.
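A minimal Python sketch of this loop (our illustration; \texttt{blobs\_in\_cloud} is a set of blob identifiers, and \texttt{best\_location} is a hypothetical geometric subroutine returning the point $y_j$ maximising $\s{\mathcal{S}(\mathcal{T}_j,y)}$ over points $y$ with $\myB(y) \cap \mathcal{T}_j \ne \emptyset$, together with the set $\mathcal{S}_j$ of newly pierced blobs):
\begin{verbatim}
def stitch_cloud(blobs_in_cloud, best_location):
    covered = {next(iter(blobs_in_cloud))}    # arbitrary B_0
    relays = []
    while covered != blobs_in_cloud:
        y, newly_pierced = best_location(covered)
        relays.append(y)                      # place red relay y_j
        covered |= newly_pierced              # T_{j+1} = T_j union S_j
    return relays
\end{verbatim}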
By induction on $j$, after each iteration, there exists a path through sensors and/or relays between any pair of sensors in $V(\mathcal{T}_j)$. By the definition of a cloud, there is a line segment of length at most $2$ that connects $V(\mathcal{T}_j)$ to $V(\myB_C \setminus \mathcal{T}_j)$; the midpoint of the segment is a location $y$ with $\mathcal{S}(\mathcal{T}_j,y) \ne \emptyset$. Since each iteration increases the size of $\mathcal{T}_j$ by at least $1$, the algorithm terminates in at most $\s{\myB_C}-1$ iterations, and $\s{A_r^C} \le \s{\myB_C} - 1$. The sets $\mathcal{S}_j$ form a partition of $\myB_C \setminus \{ B_0 \}$.
We prove the following performance guarantee (the proof is similar to the analysis of the greedy algorithm for set cover).
\begin{lemma}\label{lem:stitching}
For each cloud\/ $C$ we have\/ $\s{A_r^C} \, \le \, 37 \s{R^{*}_d \cap C} / 12 - 1$.
\end{lemma}
\begin{proof}
For each $B \in \myB_C \setminus \{B_0\}$, define the weight $w(B) = 1 / \s{\mathcal{S}_j}$, where $\mathcal{S}_j$ is the unique set for which $B \in \mathcal{S}_j$. We also set $w(B_0) = 1$. We have
\begin{equation}\label{eq_J+1}
\sum_{B\in \myB_C} \!\! w(B) \,=\, \s{A_r^C}+1.
\end{equation}
Consider a relay $z \in R^{*}_d \cap C$, and find the smallest $\ell$ with $\mathcal{T}_\ell \cap \myB(z) \ne \emptyset$, that is, $\ell = 1$ if $B_0 \in \myB(z)$, and otherwise $y_{\ell-1}$ is the first relay that pierced a blob from $\myB(z)$. Partition the set $\myB(z)$ into $\mathcal{U}(z) = \mathcal{T}_\ell \cap \myB(z)$ and $\mathcal{V}(z) = \myB(z) \setminus \mathcal{U}(z)$. Note that $\mathcal{V}(z)$ may be empty, e.g., if $y_{\ell-1}=z$.
First, we show that
\[
{\sum_{B \in \mathcal{U}(z)} \!\!\! w(B)} \,\le\, 1 .
\]
We need to consider two cases. It may happen that $\ell = 1$, which means that $B_0 \in \myB(z)$ and $\mathcal{U}(z) = \{B_0\}$. Then the total weight assigned to the blobs in $\mathcal{U}(z)$ is, by definition, $1$. Otherwise $\ell > 1$ and $\mathcal{U}(z) \subseteq \mathcal{S}_{\ell-1}$, implying $ w(B) = 1/\s{\mathcal{S}_{\ell-1}} \le 1/{\s{\mathcal{U}(z)}}$ for each $B \in \mathcal{U}(z)$.
Second, we show that
\[
{\sum_{B \in \mathcal{V}(z)} \!\!\! w(B)} \,\le\, \frac{1}{\s{\mathcal{V}(z)}} + \frac{1}{\s{\mathcal{V}(z)} - 1} + \dotsb + \frac{1}{1} .
\]
Indeed, at iterations $j \ge \ell$, the algorithm is able to consider placing the relay $y_j$ at the location $z$. Therefore $\s{\mathcal{S}_j} \ge \s{\mathcal{S}(\mathcal{T}_j, z)}$. Furthermore,
\[
\mathcal{S}(\mathcal{T}_j, z) \setminus \mathcal{S}(\mathcal{T}_{j+1}, z)
\,=\, \myB(z)\cap \mathcal{S}_j
\,=\, \mathcal{V}(z) \cap \mathcal{S}_j .
\]
Whenever placing the relay $y_j$ makes $\s{\mathcal{S}(\mathcal{T}_j, z)}$ decrease by a number $a$, exactly $a$ blobs of $\mathcal{V}(z)$ get connected to $\mathcal{T}_j$. Each of them is assigned the weight $w(B) \le 1/\s{\mathcal{S}(\mathcal{T}_j, z)}$. Thus,
\[
\sum_{B \in \mathcal{V}(z)} w(B)
\,\le\, \frac{a_1}{a_1+a_2+\dotsb+a_n} + \frac{a_2}{a_2+a_3+\dotsb+a_n} + \dotsb + \frac{a_n}{a_n} ,
\]
where $a_1,a_2,\dotsc,a_n$ are the number of blobs from $\mathcal{V}(z)$ that are pierced at different iterations, $\sum_i a_i = \s{\mathcal{V}(z)}$. The maximum value of the sum is attained when $a_1=a_2=\dotsb=a_n=1$ (i.e., every time $\s{\mathcal{V}(z)}$ is decreased by 1, and there are $\s{\mathcal{V}(z)}$ summands).
Finally, since $\s{\myB(z)} \le 5$, and $\mathcal{U}(z) \ne \emptyset$, we have $\s{\mathcal{V}(z)}\le4$. Thus,
\begin{equation}\label{eq_Wz<}
W(z) \,= {\sum_{B\in \mathcal{U}(z)} \!\!\! w(B) \,+\, \sum_{B\in \mathcal{V}(z)} \!\!\! w(B)}
\,\le\, 1 + \frac14 + \frac13 + \frac12 + \frac11
\,=\, \frac{37}{12}.
\end{equation}
The sets $\myB(z)$, $z \in R^{*}_d \cap C$, form a cover of $\myB_C$. Therefore, from \eqref{eq_J+1} and \eqref{eq_Wz<},
\[
\frac{37}{12} \s{R^{*}_d \cap C}
\,\ge\! \sum_{z \in R^{*}_d \cap C} \!\! W(z)
\,\ge\! \sum_{B\in \myB_C} \!\! w(B) \,=\, \s{A_r^C} + 1. \qedhere
\]
\end{proof}
\subsection{Green Relays and Cloud Clusters}
At any stage of the algorithm, we say that a set of clouds is \emph{interconnected} if, with the current placement of relays, the sensors in the clouds can communicate with each other. Now, when all clouds have been stitched (so that the sensors within any one cloud can communicate), we proceed to interconnecting the clouds. First we greedily form the collection of cloud \emph{clusters} (interconnected clouds) as follows. We start by assigning each cloud to its own cluster. Whenever it is possible to interconnect two clusters by placing one relay within each of the two clusters, we do so. These two relays are coloured green. After it is no longer possible to interconnect 2 clusters by placing just 2 relays, we repeatedly place 4 green relays wherever we can use them to interconnect clouds from 3 different clusters. Finally, we repeat this for 6 green relays that interconnect 4 clusters.
On average we place 2 green relays every time the number of connected components in the communication graph on sensors plus relays decreases by~one.
\subsection{Interconnecting the Clusters}
Now, when the sensors in each cloud and the clouds in each cluster are interconnected, we interconnect the clusters by a minimum Steiner forest with neighbourhoods. The forest is slightly different from the one used in the previous section. This time we are minimising the total number of relays that need to be placed along the edges of the forest in order to interconnect the clusters; we denote this forest by $\mathrm{MSFN}'$. The forest can be found by assigning appropriate weights to the edges of the graph on the clusters -- the weight of an edge is the number of relays that are necessary to interconnect two clusters.
After $\mathrm{MSFN}'$ is found, we place relays along edges of the forest just as we did in the simple algorithm from the previous section. This time though we assign colours to the relays. Specifically, for each edge $e$ of the forest, we place $2$ green relays at the endpoints of $e$, and $\floor{\s{e}/r}$ yellow relays every $r$ units starting from one of the endpoints.
As with interconnecting clouds into the clusters, when interconnecting the clusters we use 2 green relays each time the number of connected components of the communication graph decreases by one. Thus, overall, we use at most $2\s{\myC}-2$ green relays.
\subsection{Analysis: Red and Green Relays}
Recall that for $i<k$, $\myC^i$ is the class of clouds that require precisely $i$ relays for stabbing, and $\myC^{k+}$ is the class of clouds that need at least $k$ relays for stabbing. An optimal solution $R^{*}$ therefore contains at least
\[
\s{R^{*}_d} \ge k\s{\myC^{k+}}+\sum_{i=1}^{k-1}i \s{\myC^i}
\]
dark relays (relays inside clouds, i.e., relays within reach of sensors). Furthermore, $\s{R^{*}_d \cap C} \ge 1$ for all $C$.
Our algorithm places at most $2i-1$ red relays per cloud in $\myC^i$, and not more than $37 \s{R^{*}_d\cap C} / 12 -1$ red relays per cloud in $\myC^{k+}$. Adding a total of $2\s{\myC}-2$ green relays used for clouds interconnections, we get
\begin{align*}
\s{A_r}+\s{A_g}
&\,\le \sum_{C \in \myC^{k+}} \left( \frac{37}{12} \s{R^{*}_d \cap C} - 1 \right) +
\sum_{i=1}^{k-1} (2i-1)\s{\myC^{i}} + 2 \s{\myC} - 2 \\
&\,\le\, \frac{37}{12} \biggl( \s{R^{*}_d} - \sum_{i=1}^{k-1} i\s{\myC^{i}} \biggr) + \s{\myC^{k+}} + \sum_{i=1}^{k-1} (2i+1)\s{\myC^{i}} - 2 \\
&\,\le\, \frac{37}{12} \s{R^{*}_d} + \s{\myC^{k+}}
\,<\, \left( 3.084 +\frac{1}{k} \right) \s{R^{*}_d}.
\end{align*}
\subsection{Analysis: Yellow Relays}
As in Section~\ref{ssec:lower-bounds}, let $\mathcal{R}$ be the communication graph on the optimal set $R^{*}$ of relays (without sensors). In $\mathcal{R}$ there exists a forest $\mathcal{R}'$ that makes the clusters interconnected. Let $R' \subset R^{*}$ be the relays that are vertices of $\mathcal{R}'$. We partition $R'$ into ``black'' relays $R^{*}_b = R' \cap R^{*}_d$ and ``white'' relays $R^{*}_w = R' \cap R^{*}_\ell$ -- those inside and outside the clusters, respectively.
Two black relays cannot be adjacent in $\mathcal{R}'$: if they are in the same cluster, the edge between them is redundant; if they are in different clusters, the distance between them must be larger than $r$, as otherwise our algorithm would have placed two green relays to interconnect the clusters into one. By a similar reasoning, there cannot be a white relay adjacent to 3 or more black relays in $\mathcal{R}'$, and there cannot be a pair of adjacent white relays such that each of them is adjacent to 2 black relays. Refer to Figure~\ref{fig:forbid}. Finally, the maximum degree of a white relay is~5. Using these observations, we can prove the following lemma.
\begin{figure}\centering
\scalebox{1.2}{\input{forbid.pdf_t}}
\caption{Forbidden configurations and grey relays.}\label{fig:forbid}
\end{figure}
\begin{lemma}\label{lem_yellow}
There is a spanning forest with neighbourhoods on cloud clusters that requires at most
\[
\left(\frac{4}{\sqrt{3}} + \frac{4}{5}\right) \s{R^{*}_w} < 3.11 \s{R^{*}_w}
\]
yellow relays on its edges.
\end{lemma}
\begin{proof}
Let $\myD$ be the set of cloud clusters. We partition $\mathcal{R}'$ into edge-disjoint trees induced by maximal connected subsets of white relays and their adjacent black relays. It is enough to show that for each such tree $T$ that interconnects a subset of clusters $\myD' \subseteq \myD$, there is a spanning forest on $\myD'$ such that the number of yellow relays on its edges is at most $3.11$ times the number of white relays in $T$. As no pair of black relays is adjacent in $\mathcal{R}'$, these edge-disjoint trees interconnect all clusters in $\myD$. The same holds for the spanning forests, and the lemma follows.
Trees with only one white relay (and thus exactly two black relays) are trivial: the spanning forest needs only one edge with one yellow relay (and one green in each end). Therefore assume that $T$ contains at least two white relays.
We introduce yet another colour. For each white relay with two black neighbours, arbitrarily choose one of the black relays and change it into a ``grey'' relay (Figure~\ref{fig:forbid}). Let $w$ be the number of white relays, let $b$ be the number of remaining black relays, and let $g$ be the number of grey relays in~$T$.
First, we clearly have
\begin{equation}\label{eq_yellow_bw}
b \le w.
\end{equation}
Second, there is no grey--white--white--grey path, and each white relay is adjacent to another white relay. Therefore the ratio $(b+g)/w$ is at most $9/5$. To see this, let $w_2$ be the number of white relays with a grey and a black neighbour, let $w_1$ be the number of white relays with a black neighbour but no grey neighbour, and let $w_0$ be the number of white relays without a black neighbour. By the degree bound, $w_2 \le 4 w_1 + 5 w_0 = 4 w_1 + 5 (w - w_2 - w_1)$; therefore $5w \ge 6 w_2 + w_1$. We also know that $w \ge w_2 + w_1$. Therefore
\begin{equation}\label{eq_yellow_95wbg}
\frac95 w
\,\ge\, \frac15 (6 w_2 + w_1) + \frac45 (w_2 + w_1)
\,=\, (w_2 + w_1) + w_2
\,=\, b + g.
\end{equation}
(The worst case is a star of $1+4$ white relays, $5$ black relays and $4$ grey relays.)
Now consider the subtree induced by the black and white relays. It has fewer than $b+w$ edges, and the edge length is at most $r$. By Lemma~\ref{lem_StRatio}, there is a spanning forest on the black relays with total length less than ${(2/\sqrt{3})(b+w)r}$; thus we need fewer than ${(2/\sqrt{3})(b+w)}$ yellow relays on the edges.
Now each pair of black relays in $T$ is connected. It is enough to connect each grey relay to the nearest black relay: the distance is at most $2$, and one yellow relay is enough. In summary, the total number of yellow relays is less than
\[
\begin{split}
\frac{2}{\sqrt{3}} (b+w) + g
&\,=\, \left(\frac{2}{\sqrt{3}} - 1\right) (b+w) + b+g + w \\
&\,\le\, \left(\frac{2}{\sqrt{3}} - 1\right) 2 w + \frac{9}{5} w + w
\,=\, \left(\frac{4}{\sqrt{3}} + \frac45 \right)w
\,<\, 3.11 w .
\end{split}
\]
The inequality follows from \eqref{eq_yellow_bw} and \eqref{eq_yellow_95wbg}.
\end{proof}
Thus, $\s{A_y} < 3.11 \s{R^{*}_w} \le 3.11 \s{R^{*}_\ell}$, and the overall approximation ratio of our algorithm is less than $3.11$.
\section{Inapproximability of One-Tier Relay Placement}\label{sec_inapx1tier}
We have improved the best known approximation ratio for one-tier relay placement from~7 to~3.11. A natural question to pose at this point is whether we could make the approximation ratio as close to 1 as we wish. In this section, we show that no PTAS exists, unless P${}={}$NP.
\begin{theorem}\label{thm:inapx}
It is NP-hard to approximate one-tier relay placement within factor\/ $1 + 1/687$.
\end{theorem}
The reduction is from minimum vertex cover in graphs of bounded degree. Let $\mathcal{G} = (V,E)$ be an instance of vertex cover; let $\Delta \le 5$ be the maximum degree of $\mathcal{G}$. We construct an instance $\mathcal{I}$ of the relay placement problem that has a feasible solution with\/ $k + 2\s{E} + 1$ relays if and only if $\mathcal{G}$ has a vertex cover of size~$k$.
Figure~\ref{fig:inapx} illustrates the construction. Figure~\ref{fig:inapx}a shows the \emph{vertex gadget}; we have one such gadget for each vertex $v \in V$. Figure~\ref{fig:inapx}b shows the \emph{crossover gadget}; we have one such gadget for each edge $e \in E$. Small dots are sensors in the relay placement instance; each solid edge has length at most $1$. White boxes are \emph{good locations} for relays; there is one good location in each vertex gadget, and two good locations per crossover gadget. Dashed line shows a connection for relays in good locations in a crossover.
\begin{figure}[t]
\centering
\scalebox{0.9}{\input{inapx3.pdf_t}}
\caption{(a)~Vertex gadget for $v \in V$. (b)~Crossover gadget for $\{v,u\} \in E$. (c)~Reduction for $K_5$. (d)~Normalising a solution, step~1.}\label{fig:inapx}
\end{figure}
We set $r = 16(\s{V}+1)$, and we choose $\s{E}+1$ disks of diameter $r$ such that each pair of these disks is separated by a distance larger than $\s{V} r$ but at most $\poly(\s{V})$. One of the disks is called $S(0)$ and the rest are $S(e)$ for $e \in E$. All vertex gadgets and one isolated sensor, called $p_0$, are placed within disk $S(0)$. The crossover gadget for edge $e$ is placed within disk $S(e)$. There are noncrossing paths of sensors that connect the crossover gadget $e = \{u,v\} \in E$ to the vertex gadgets $u$ and $v$; all such paths (\emph{tentacles}) are separated by a distance at least~$3$. Good relay locations and $p_0$ cannot be closer than $1$ unit to a disk boundary.
Figure~\ref{fig:inapx}c is a schematic illustration of the overall construction in the case of $\mathcal{G} = K_5$; the figure is highly condensed in $x$ direction. There are $11$ disks. Disk $S(0)$ contains one isolated sensor and $5$ vertex gadgets. Each disk $S(e)$ contains one crossover gadget. Outside these disks we have only parts of tentacles.
There are $4 \s{E} + 1$ blobs in $\mathcal{I}$. The isolated sensor $p_0$ forms one blob. For each edge there are 4 blobs: two tentacles from vertex gadgets to the crossover gadget, and two isolated sensors in the crossover gadget.
Theorem~\ref{thm:inapx} now follows from the following two lemmata.
\begin{lemma}\label{lem:inapx-a}
Let\/ $C$ be a vertex cover of\/ $\mathcal{G}$. Then there is a feasible solution to relay placement problem\/ $\mathcal{I}$ with\/ $\s{C} + 2\s{E} + 1$ relays.
\end{lemma}
\begin{proof}
For each $v \in C$, place one relay at the good location of the vertex gadget~$v$. For each $e \in E$, place two relays at the good locations of the crossover gadget~$e$. Place one relay at the isolated sensor~$p_0$.
\end{proof}
\begin{lemma}\label{lem:inapx-b}
Assume that there exists a feasible solution to relay placement problem\/ $\mathcal{I}$ with\/ $k + 2\s{E} + 1$ relays. Then\/ $\mathcal{G}$ has a vertex cover of size at most\/ $k$.
\end{lemma}
\begin{proof}
If $k \ge \s{V}$, then the claim is trivial: $C = V$ is a vertex cover of size at most $k$. We therefore focus on the case $k < \s{V}$.
Let $R$ be a solution with $k + 2\s{E} + 1$ relays. We transform the solution into a canonical form $R'$ of the same size and with the following additional constraints: there is a subset $C \subseteq V$ such that at least one relay is placed at the good relay location of each vertex gadget $v \in C$; two relays are placed at the good locations of each crossover gadget; one relay is placed at $p_0$; and there are no other relays. If $R'$ is a feasible solution, then $C$ is a vertex cover of $\mathcal{G}$ with $\s{C} \le k$.
Now we show how to construct the canonical form $R'$. We observe that there are $2\s{E}+1$ isolated sensors in $\mathcal{I}$: sensor $p_0$ and two sensors for each crossover gadget. In the feasible solution $R$, for each isolated sensor $p$, we can always identify one relay within distance $1$ from $p$ (if there are several relays, pick one arbitrarily). These relays are called \emph{bound relays}. The remaining $k < \s{V}$ relays are called \emph{free relays}.
\emph{Step~1.} Consider the communication graph formed by the sensors in $\mathcal{I}$ and the relays $R$. Since each pair of disks $S(i)$, $i \in \{0\} \cup E$, is separated by a distance larger than $\s{V} r$, we know that there is no path that extends from one disk to another and consists of at most $k$ free relays (and possibly one bound relay in each end). Therefore we can shift each connected set of relays so that it is located within one disk (see Figure~\ref{fig:inapx}d). While doing so, we do not break any relay--relay links: all relays within the same disk can communicate with each other. We can also maintain each relay--blob link intact.
\emph{Step~2.} Now we have a clique formed by a set of relays within each disk $S(i)$, there are no other relays, and the network is connected. We move the bound relay in $S(0)$ so that it is located exactly on $p_0$. For each $e \in E$, we move the bound relays in $S(e)$ so that they are located exactly on the good relay locations. Finally, any free relays in $S(0)$ can be moved to a good relay location of a suitable vertex gadget. These changes may introduce new relay--blob links but they do not break any existing relay--blob or relay--relay links.
\emph{Step~3.} What remains is that some disks $S(e)$, $e \in E$, may contain free relays. Let $x$ be one of these relays. If $x$ can be removed without breaking connectivity, we can move $x$ to the good relay location of any vertex gadget. Otherwise $x$ is adjacent to exactly one blob of sensors, and removing it breaks the network into two connected components: component~$A$, which contains $p_0$, and component~$B$. Now we simply pick a vertex $v \in V$ such that the vertex gadget $v$ contains sensors from component $B$, and we move $x$ to the good relay location of this vertex gadget; this ensures connectivity between $p_0$ and $B$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:inapx}.]
Let $\Delta, A, B, C \in \mathbb{N}$, with $\Delta \le 5$ and $C > B$. Assume that there is a factor
\[
\alpha \,=\, 1 + \frac{C-B}{B + \Delta A + 1}
\]
approximation algorithm $\mathcal{A}$ for relay placement. We show how to use $\mathcal{A}$ to solve the following \emph{gap-vertex-cover} problem for some $0 < \varepsilon < 1/2$: given a graph $\mathcal{G}$ with $A n$ nodes and maximum degree $\Delta$, decide whether the minimum vertex cover of $\mathcal{G}$ is smaller than $(B+\varepsilon)n$ or larger than $(C-\varepsilon)n$.
If $n < 2$, the claim is trivial. Otherwise we can choose a positive constant $\varepsilon$ such that
\[
\alpha - 1 \,<\, \frac{C-B-2\varepsilon}{B +\varepsilon + \Delta A+1/n}
\]
for any $n \ge 2$. Construct the relay placement instance $\mathcal{I}$ as described above.
If minimum vertex cover of $\mathcal{G}$ is smaller than $(B+\varepsilon)n$, then by Lemma~\ref{lem:inapx-a}, the algorithm $\mathcal{A}$ returns a solution with at most
$b = {\alpha ((B+\varepsilon)n + 2\s{E} + 1)}$
relays. If minimum vertex cover of $\mathcal{G}$ is larger than $(C-\varepsilon)n$, then by Lemma~\ref{lem:inapx-b}, the algorithm $\mathcal{A}$ returns a solution with at least
$c = (C-\varepsilon)n + 2\s{E} + 1$
relays. As $2\s{E} \le \Delta A n$, we have
\[
\begin{split}
c - b
&\,\ge\, (C-\varepsilon)n + 2\s{E} + 1 - \alpha \bigl((B+\varepsilon)n + 2\s{E} + 1\bigr) \\
&\,\ge\, \bigl(C - B - 2 \varepsilon - (\alpha-1)(B+ \varepsilon + \Delta A + 1/n)\bigr) n
\,>\, 0,
\end{split}
\]
which shows that we can solve the gap-vertex-cover problem in polynomial time.
For $\Delta = 4$, $A = 152$, $B = 78$, $C = 79$, and any $0 < \varepsilon < 1/2$, the gap-vertex-cover problem is NP-hard \cite[Theorem~3]{berman99some}.
\end{proof}
\begin{remark}
We remind the reader that throughout this work we assume that radius $r$ is part of the problem instance. Our proof of Theorem~\ref{thm:inapx} heavily relies on this fact; in our reduction, $r = \Theta(\s{V})$. It is an open question whether one-tier relay placement admits a PTAS for a small, e.g., constant,~$r$.
\end{remark}
\section{A PTAS for Two-Tier Relay Placement}\label{sec_ptas}
In the previous sections we studied one-tier relay placement, in which
the sensors could communicate with each other, as well as with the
relays. We gave a 3.11-approximation algorithm, and showed that the
problem admits no PTAS (for general $r$). In this section we turn to
the two-tier version, in which the sensors cannot communicate with
each other, but only with relays.
The two-tier relay placement problem asks that we determine a set $R$
of relays such that there exists a tree $T$ whose internal nodes are
the set $R$ and whose leaves are the $n$ input points (sensors) $V$,
with every edge of $T$ between two relays having length at most $r$
and every edge of $T$ between a relay and a leaf (sensor) having
length at most 1.
We give a PTAS for this version of the problem, summarized in the
following theorem.
\begin{theorem}\label{thm:ptas}
The two-tier relay placement problem has a PTAS.
\end{theorem}
We give an overview of the method here; details and proofs appear in
the Appendix. Let $m$ be a (sufficiently large) positive integer constant; we will give a $(1+O(1/m))$-approximate solution. We distinguish between two cases: the \emph{sparse} case in which $\diam(V) \ge mnr$,
and the \emph{dense} case, in which $\diam(V) < mnr$.
In the sparse case, a solution can consist of long chains of relays,
with a number of relays not bounded by a polynomial in $n$; thus, we
output a succinct representation of such chains, specifying the
endpoints (which come from a regular grid of candidate locations).
The algorithm, then, is a straightforward reduction to the Euclidean
minimum Steiner tree problem. See Appendix~\ref{sec_ptas_sparse}.
In the dense case, we compute and output an explicit solution. In
this case, the set of possible locations of relays that we need to
consider is potentially large (but polynomial); we employ an
``iterated circle arrangement'' to construct the set, $G$, of
candidate locations. Analysis of this set of candidates is done in
Appendix~\ref{sec_lem_rounding}, where we prove the structure lemma,
Lemma~\ref{lem_rounding}. Armed with a discrete candidate set, we
then employ the $m$-guillotine method \cite{mitchell99guillotine} to
give a PTAS for computing a set of relays (a subset of $G$) that is
within factor $1+O(1/m)$ of being a minimum-cardinality set.
The main idea is to optimize over the class of ``$m$-guillotine solutions'', which can be done using dynamic programming. An $m$-guillotine solution has a recursive property determined by ``guillotine cuts'' of the bounding box of the optimal solution (axis-parallel cuts of constant ($O(m)$) description complexity). We prove that an optimal solution that uses $k^*$ relays can be augmented with a set of $O(k^*/m)$ additional relays so that it has the $m$-guillotine property.
\section{Discussion}
In Section~\ref{sec_apx1tier} we presented a simple $O(n\log n)$-time 6.73-approximation algorithm for the one-tier relay placement. If one is willing to spend more time finding the approximation to the set cover, one may use the semi-local optimisation framework of \citet{duh97approximation}, which provides an approximation ratio of $1+1/2+1/3+1/4+1/5-1/2$ for the set cover with at most 5 elements per set; hence we obtain a 5.73-approximation.
One can form a bipartite graph on the blobs and candidate stab locations as follows. Pick a point within each maximal-depth cell of the arrangement of the blobs (maximal w.r.t.\ the blobs that contain the cell); call these points ``red''. Pick a point within each blob; call these points ``blue''. Connect each blue point to the red points contained in the blob represented by the blue point. It is possible to pick the points so that the bipartite graph on the points is planar. Then the stab placement is equivalent to the Planar Red/Blue Dominating Set Problem \cite{downey99parameterized} -- find the fewest red vertices that dominate all blue ones. We believe that the techniques of \citet{baker94approximation} can be used to give a PTAS for the problem. Combined with the simple algorithm in Section~\ref{sec_apx1tier}, this would result in a $4.16$-approximation for the relay placement.
A more involved geometric argument may improve the analysis of yellow relays in Section~\ref{sec_apx1tierim}, bringing the constant $3.11$ in Lemma~\ref{lem_yellow} down to~$3$, which would improve the approximation factor to $3.09$. Combining this with the possible PTAS for the Planar Red/Blue Dominating Set would yield an approximation factor of $3+\varepsilon$. We believe that a different, integrated method would be needed for getting below $3$: various steps in our estimates are tight with respect to~$3$. In particular, as the example in Figure~\ref{fig:worstcase} shows, our algorithm may find a solution with (almost) $3$ times more relays than the optimum.
\begin{figure}[h]
\centering
\input{worstcase.pdf_t}
\caption{Unit disks centered on the sensors are shown. The optimum has $1$ relay per blob (hollow circles). Our algorithm may place $1$ red relay in every blob plus $2$ green relays in (almost every) blob.}\label{fig:worstcase}
\end{figure}
\section*{Acknowledgments}
We thank Guoliang Xue for suggesting the problem to us and for fruitful discussions, and Marja Hassinen for comments and discussions. We thank the anonymous referees for their helpful suggestions.
A preliminary version of this work appeared in the \emph{Proceedings of the 16th European Symposium on Algorithms} (ESA 2008) \cite{efrat08improved}.
Parts of this research were conducted at the Dagstuhl research center. AE is supported by NSF CAREER Grant 0348000. Work by SF was conducted as part of EU project FRONTS (FP7, project 215270). JM is partially supported by grants from the National Science Foundation (CCF-0431030, CCF-0528209, CCF-0729019, CCF-1018388, CCF-1540890), NASA Ames, Metron Aviation, and Sandia National Labs. JS is supported in part by the Academy of Finland grant 116547, Helsinki Graduate School in Computer Science and Engineering (Hecse), and the Foundation of Nokia Corporation.
\bibliographystyle{plainnat}
\section{Introduction}\label{sec:intro}
Probabilistic programming
\cite{gordon2014probabilistic,goodman2008church,wood2014a,dippl,murray2018delayed}
is a programming paradigm for expressing probabilistic models. A
\emph{\gls{ppl}} includes two constructs: one for \emph{sampling} from
probability distributions, and one for \emph{conditioning} on data. We use a
construct called \texttt{weight} for the latter, which simply adds its argument
to a \emph{logarithmic\footnote{This is commonly done for numerical
stability.} weight} attached to the current execution. One motivation for using
probabilistic programming is greater \emph{expressive power} compared to
classical approaches to probabilistic modeling, such as Bayesian networks. This
increase in expressive power comes from two properties: \emph{stochastic
branching}, i.e. that control flow can depend on randomness, and
\emph{recursion}. A \gls{ppl} with these two properties is called a
\emph{universal} \gls{ppl}~\cite{goodman2008church}.
The most important component of a \gls{ppl} is its \emph{inference algorithm},
which is loosely analogous to the execution semantics of ordinary programming
languages. \emph{\Gls{smc}} methods \cite{liu1998sequential} are commonly used
as such inference algorithms \cite{dippl,wood2014a,murray2018delayed}. They
perform inference by executing a number of instances of a probabilistic program
in parallel, pausing the executions when they encounter a conditioning on data.
When all executions have been paused, the algorithm looks at the weights of the
different executions given the data, and \emph{resamples} the set of executions
proportional to these weights. That is, more probable executions are
replicated, and less probable executions are discarded. This process repeats
until the program has reached its end. There are, however, problems with this
approach. The toy program in Fig.~\ref{fig:introex} encodes a probability
distribution over booleans using a stochastic branch. The different executions
in \gls{smc} inference for the program will encounter a different number of
calls to \texttt{weight}, either three or two. Furthermore, they will not
always \emph{align} at the same \texttt{weight} statements simultaneously---it
is possible that one execution can pause at line 3, while another pauses at
line 7. In Fig.~\ref{fig:introex}, if we are only running a moderate number of
executions in total (say 10\,000), with overwhelmingly high probability, all
executions ending up at line 3 will be discarded; this is because of their low
weight ($e^{5+10}$) relative to the other weight at line 7 ($e^{5+95}$). We can
clearly see, however, that in the end both branches should be equally weighted.
\begin{figure}[tb]
\centering
\begin{tabular}{c}
\begin{lstlisting}
weight(5)
if flip() then {
weight(10)
weight(85)
false
} else {
weight(95)
true
}
\end{lstlisting}
\end{tabular} \qquad
\begin{tikzpicture}[baseline=(current bounding box.center)]
\begin{axis}[
ybar, ymin=0,
bar width=10mm,
width=5cm, height=5cm, enlarge x limits=0.5,
ylabel={Probability},
ytick={0,0.25,0.5},
xtick=data,
xticklabels={\texttt{false},\texttt{true}}
]
\addplot [black] coordinates {(0,0.5) (1,0.5)};
\end{axis}
\end{tikzpicture}
\caption{%
A toy example illustrating when resampling can be problematic. The example
is written in our own functional, higher-order, \gls{ppl} (under
development). The function \texttt{flip} represents a coin flip. The bar
plot shows the true distribution encoded by the example---that is, on
average, there should be equally many executions resulting in \texttt{true}
as in \texttt{false} (both branches accumulate the same total weight of 100).
}
\label{fig:introex}
\end{figure}
The problem illustrated in Fig.~\ref{fig:introex} is not handled optimally by a
direct implementation of \gls{smc}. Such implementations are, for instance,
available in WebPPL \cite{dippl} and Anglican \cite{wood2014a}. When performing
\gls{smc} inference on an equivalent program in WebPPL, the algorithm performs
inference without any visible errors, but only returns \texttt{true}. In
Anglican, an error is given at runtime, stating that some
\emph{observes}\footnote{Anglican uses a different construct for conditioning
on data called \texttt{observe}.} are not global. This error is given for all
programs where different executions do not have the same number of calls to
\texttt{weight}.
It is possible for users to \emph{manually} align unaligned programs, taking
care to only place calls to \texttt{weight} where they are aligned. However,
for larger programs, manual alignment can become an error-prone process, and a
nuisance for the programmer. In this paper, we propose an \emph{automatic}
solution for aligning higher-order probabilistic programs using static
analysis. The static analysis is used to find all \emph{dynamic} terms in a
program---that is, terms that may be reached from within a stochastic branch.
In Fig.~\ref{fig:introex}, all terms within both branches of the \texttt{if}
expression are dynamic, since the condition is random. In particular, the calls
to \texttt{weight} on lines 3, 4, and 7 are dynamic, and hence unaligned. The
call to \texttt{weight} on line 1 is not dynamic, however, and is therefore
aligned. By identifying all unaligned \texttt{weight} calls, we can handle
these specially when running \gls{smc}, making the \gls{smc} inference
\emph{aligned}. The contributions are:
\begin{itemize}
\item
A static analysis algorithm, based on 0-CFA \cite{shivers1988control,shivers1991control},
for discovering dynamic terms in higher-order probabilistic programs
(Section~\ref{sec:disc}).
\item
An application of the above analysis, where the resulting dynamic
terms are used to automatically align \gls{smc}
inference for higher-order probabilistic programs (Section~\ref{sec:utilize}).
\item
An evaluation of our automatic alignment approach for \gls{smc} inference,
compared to the unaligned \gls{smc} implementation, as found in
WebPPL\footnote{We compare to the \gls{smc} algorithm found in WebPPL, since
the Anglican \gls{smc} algorithm does not handle unaligned programs.}.
This evaluation is performed through a case study on a model from
phylogenetics (Section~\ref{sec:case}).
\end{itemize}
Before describing our contributions in detail, Section~\ref{sec:prelim} will
provide some necessary background.
\section{Preliminaries}\label{sec:prelim}
In this section, we give a brief introduction to a classical \gls{smc} method
for Bayesian networks. This background is needed to understand the inference
semantics of the \gls{ppl} presented in the later sections.
\paragraph{Bayesian networks.}
\begin{figure}[b]
\centering
\begin{tikzpicture}[node distance = 3mm,
baseline=(current bounding box.center)]
\node[draw,circle] (X1) {$X_1$};
\node[draw,circle,right=of X1] (X2) {$X_2$};
\node[draw,circle,right=of X2] (X3) {$X_3$};
\node[draw,circle,right=of X3] (X4) {$X_4$};
\node[fill=black!30,draw,circle,below=of X1] (Y1) {$Y_1$};
\node[fill=black!30,draw,circle,below=of X2] (Y2) {$Y_2$};
\node[fill=black!30,draw,circle,below=of X3] (Y3) {$Y_3$};
\draw[-latex] (X1) -- (X2);
\draw[-latex] (X2) -- (X3);
\draw[-latex] (X3) -- (X4);
\draw[-latex] (X1) -- (Y1);
\draw[-latex] (X2) -- (Y2);
\draw[-latex] (X3) -- (Y3);
\end{tikzpicture}
\quad
\begin{minipage}{0.55\textwidth}
\[
\begin{aligned}
& Y_1 = 2.1 \quad Y_2 = 6.3 \quad Y_3 = 10.7 \\
& p(x_1) = \mathcal{N}(0,2^2) \\
& p(x_i \mid x_{i-1}) = \mathcal{N}(x_{i-1} + 4,1^2),
\enspace i \in \{ 2, 3, 4 \} \\
& p(y_i \mid x_{i}) = \mathcal{N}(x_{i}, 1^2),
\enspace i \in \{ 1, 2, 3 \}
\end{aligned}
\]
\end{minipage}
\caption{%
A Bayesian network representation of a simple linear Gaussian state space
model. The symbol $\mathcal{N}$ denotes the normal distribution.
}
\label{fig:baynet}
\end{figure}
A Bayesian network~\cite{pearl1985bayesian} is a \emph{directed acyclic graph}
where the vertices are \emph{random variables} and the edges encode direct
dependencies between them. An example of a Bayesian network is
given in Fig.~\ref{fig:baynet}. The random variables $X_i$ are the exact
positions of some moving object at time $i$. The random variables $Y_1$,
$Y_2$, and $Y_3$ are noisy observations of the positions with values given in
the figure (shaded in the graph).
For more details on probability theory and Bayesian networks, see e.g., Bishop
\cite{bishop2006pattern}.
\paragraph{Sequential Monte Carlo.}
Consider again the example of a Bayesian network given in
Fig.~\ref{fig:baynet}. We are now interested in inferring the
\emph{marginal}\footnote{Meaning that we are only interested in some of the
unobserved random variables.} probability distribution $p(x_4 \mid
y_1,y_2,y_3)$---that is, the distribution over the next location of the moving
object given all of our observations up until this point. For this particularly
simple model, we can compute the exact solution in closed form by using
standard results from probability theory applied to the equations in
Fig.~\ref{fig:baynet}. In more complex
probabilistic models, an exact solution is most often not available. Instead,
approximate inference such as \gls{smc}~\cite{liu1998sequential} or
\gls{mcmc}~\cite{metropolis1953equation,hastings1970monte} methods must be
used. A basic Monte Carlo method is \emph{likelihood weighting}---simply
\emph{simulate} the model repeatedly, and weigh each simulation based on the
observed variables. This does not perform well for most models of interest, and
we can instead use an \gls{smc} method---the \emph{bootstrap
particle filter}~\cite{gordon2014probabilistic}. The key idea in the bootstrap
particle filter is that we run many simulations in parallel, and
\emph{resample} simulations whenever encountering an observation. Intuitively,
resampling means that less likely simulations are discarded and replaced by
more likely simulations. This is illustrated in Fig.~\ref{fig:resample} for
the model in Fig.~\ref{fig:baynet}. The resampling is especially obvious when
encountering the first observation $Y_1$---only two simulations of $X_1$ make
sense according to $Y_1$, and these simulations are the only two surviving to
the next step. In general, we can always run \gls{smc} inference on a Bayesian
network by finding a topological ordering over the random variables in the
network, and then simulating the network in that order. \Gls{smc} is, however,
not always the preferred method of inference, depending on the network
structure. \Gls{mcmc} is, for instance, sometimes a better alternative for
networks where observed nodes do not occur sequentially enough throughout the
network.
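To make the bootstrap particle filter concrete, the following OCaml sketch (OCaml also being the implementation language we use later in Section~\ref{sec:case}) runs it on the model in Fig.~\ref{fig:baynet}. It is a minimal sketch under simplifying assumptions: the model is hard-coded, and multinomial rather than systematic resampling is used for brevity.
\begin{lstlisting}[basicstyle=\scriptsize\ttfamily]
let pi = 4.0 *. atan 1.0

(* Draw from N(mu, sigma^2) using the Box-Muller transform. *)
let normal mu sigma =
  let u1 = 1.0 -. Random.float 1.0 in
  let u2 = Random.float 1.0 in
  mu +. sigma *. sqrt (-2.0 *. log u1) *. cos (2.0 *. pi *. u2)

(* Density of N(mu, sigma^2) at y. *)
let pdf mu sigma y =
  exp (-.((y -. mu) ** 2.0) /. (2.0 *. sigma *. sigma))
  /. (sigma *. sqrt (2.0 *. pi))

(* Multinomial resampling: draw each output particle with
   probability proportional to its weight. *)
let resample xs ws =
  let total = Array.fold_left (+.) 0.0 ws in
  let pick () =
    let u = ref (Random.float total) and i = ref 0 in
    while !i < Array.length ws - 1 && !u > ws.(!i) do
      u := !u -. ws.(!i); incr i
    done;
    xs.(!i)
  in
  Array.map (fun _ -> pick ()) xs

let () =
  Random.self_init ();
  let n = 10_000 in
  let ys = [| 2.1; 6.3; 10.7 |] in
  (* Initialize particles: x1 ~ N(0, 2^2). *)
  let xs = ref (Array.init n (fun _ -> normal 0.0 2.0)) in
  Array.iter
    (fun y ->
       (* Weight by the observation likelihood, resample, and
          propagate: x_{i+1} ~ N(x_i + 4, 1^2). *)
       let ws = Array.map (fun x -> pdf x 1.0 y) !xs in
       xs := resample !xs ws;
       xs := Array.map (fun x -> normal (x +. 4.0) 1.0) !xs)
    ys;
  (* !xs now holds (approximately) unweighted samples of x4. *)
  let mean = Array.fold_left (+.) 0.0 !xs /. float_of_int n in
  Printf.printf "Posterior mean of x4: %f\n" mean
\end{lstlisting}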
\begin{figure}[tb]
\centering
\begin{tikzpicture}[node distance = 1mm,minimum width = 2.4cm]
\node[draw,rounded rectangle, fill=black!0
] (S11) {$X_1 \approx -2.5$};
\node[draw,rounded rectangle,below = of S11, fill=black!46
] (S12) {$X_1 \approx 4.7$};
\node[draw,rounded rectangle,below = of S12, fill=black!54
] (S13) {$X_1 \approx 4.6$};
\node[draw,rounded rectangle,below = of S13, fill=black!0
] (S14) {$X_1 \approx -3.2$};
\node[draw,rounded rectangle,below = of S14, fill=black!0
] (S15) {$X_1 \approx -2.8$};
\node[draw,rounded rectangle,right = 1cm of S11, fill=black!10
] (S21) {$X_2 \approx 8.9$};
\node[draw,rounded rectangle,below = of S21, fill=black!46
] (S22) {$X_2 \approx 8.0$};
\node[draw,rounded rectangle,below = of S22, fill=black!4
] (S23) {$X_2 \approx 9.2$};
\node[draw,rounded rectangle,below = of S23, fill=black!44
] (S24) {$X_2 \approx 8.1$};
\node[draw,rounded rectangle,below = of S24, fill=black!0
] (S25) {$X_2 \approx 9.9$};
\node[draw,rounded rectangle,right = 1cm of S21, fill=black!30
] (S31) {$X_3 \approx 11.5$};
\node[draw,rounded rectangle,below = of S31, fill=black!11
] (S32) {$X_3 \approx 12.3$};
\node[draw,rounded rectangle,below = of S32, fill=black!21
] (S33) {$X_3 \approx 11.9$};
\node[draw,rounded rectangle,below = of S33, fill=black!15
] (S34) {$X_3 \approx 12.1$};
\node[draw,rounded rectangle,below = of S34, fill=black!23
] (S35) {$X_3 \approx 11.8$};
\node[draw,rounded rectangle,right = 1cm of S31, fill=black!20
] (S41) {$X_4 \approx 15.9$};
\node[draw,rounded rectangle,below = of S41, fill=black!20
] (S42) {$X_4 \approx 16.1$};
\node[draw,rounded rectangle,below = of S42, fill=black!20
] (S43) {$X_4 \approx 15.4$};
\node[draw,rounded rectangle,below = of S43, fill=black!20
] (S44) {$X_4 \approx 18.0$};
\node[draw,rounded rectangle,below = of S44, fill=black!20
] (S45) {$X_4 \approx 15.7$};
\node[above=of S11] {Observe $Y_1$};
\node[above=of S21] {Observe $Y_2$};
\node[above=of S31] {Observe $Y_3$};
\node[above=of S41] {Result \vphantom{$Y_1$}};
\draw[-latex] (S12.east) -- (S21.west);
\draw[-latex] (S12.east) -- (S23.west);
\draw[-latex] (S13.east) -- (S22.west);
\draw[-latex] (S13.east) -- (S24.west);
\draw[-latex] (S13.east) -- (S25.west);
\draw[-latex] (S22.east) -- (S32.west);
\draw[-latex] (S22.east) -- (S33.west);
\draw[-latex] (S22.east) -- (S31.west);
\draw[-latex] (S24.east) -- (S34.west);
\draw[-latex] (S24.east) -- (S35.west);
\draw[-latex] (S31.east) -- (S43.west);
\draw[-latex] (S32.east) -- (S44.west);
\draw[-latex] (S33.east) -- (S41.west);
\draw[-latex] (S34.east) -- (S45.west);
\draw[-latex] (S35.east) -- (S42.west);
\end{tikzpicture}
\caption{%
A resampling illustration for a bootstrap particle filter with 5
simulations. The nodes are colored according to their weight---the darker
nodes indicate more likely samples given the observation. The lines
indicate how simulations survive, and possibly replicate, to the next
step. No lines means a simulation is discarded. In the result, all
samples of $X_4$ have the same weight, because there is no $Y_4$
observation.
}
\label{fig:resample}
\end{figure}
Fig.~\ref{fig:hist} shows a histogram of the samples produced by
running the bootstrap particle filter with $10\,000$ simulations (also
known as \emph{particles}) on the model in Fig.~\ref{fig:baynet}. Note
that the exact solution $p(x_4\mid y_1,y_2,y_3)$ is shown with the
dashed line. For more details on \gls{smc}, see
e.g., Doucet et al. \cite{doucet2001introduction}.
\begin{figure}[tb]
\centering
\begin{tikzpicture}[trim axis left, trim axis right]
\begin{axis}[
area style,
yticklabel style={/pgf/number format/fixed},
xlabel=$x_4$,
ylabel={$p(x_4 \mid y_1,y_2,y_3)$},
xmin=9,xmax=20,
ymin=0,ymax=0.35,
width=0.9\textwidth,
height=4cm
]
\addplot [ybar interval,mark=no,hist={density,bins=100}]
table [y index=0] {case-study/histogram};
\addplot [samples=100,dashed,thick,domain=9:20]
{1 / sqrt(2*pi*1.6216216216216215) *
exp(-(x-14.464864864864865)^2/(2*1.6216216216216215) )};
\end{axis}
\end{tikzpicture}
\caption{%
The result of running a bootstrap particle filter with $10\,000$ simulations
for the model in Fig.~\ref{fig:baynet}. The normalized histogram shows the samples
from the particle filter, and the dashed line shows the exact solution,
which is available for this particular model.
}
\label{fig:hist}
\end{figure}
\paragraph{Probabilistic programming: an example.}
\begin{figure}[tb]
\centering
\begin{tabular}{c}
\begin{lstlisting}[basicstyle=\scriptsize\ttfamily,mathescape]
function sim(stop, lambda) {
t = sample(exponential(lambda))
if t <= stop then {
weight(2.0)
sim(stop-t, lambda+0.1)
} else t
}
lambda = sample(gamma(1.0, 1.0))
stop = sample(gamma(1.0, 1.0))
sim = sim(stop, lambda)
weight(sim+lambda)
lambda
\end{lstlisting}
\end{tabular}
\caption{A probabilistic program, written in our own functional, higher-order \gls{ppl}.}
\label{fig:probprog}
\end{figure}
We gave a small toy example of a probabilistic program in
Section~\ref{sec:intro}. Here, we give a slightly bigger example, shown in
Fig.~\ref{fig:probprog}. The language contains a construct \texttt{sample} for
sampling from probability distributions, and a \texttt{weight} construct as seen before. The \texttt{sample} construct corresponds to the unobserved random variables in a Bayesian network, and \texttt{weight} is related\footnote{%
Observing a random variable $Y$ with probability distribution $p(y)$ as in a
Bayesian network can be expressed as $\texttt{weight(}\log p(y)\texttt{)}$, where
$y$ is the concrete observation.
}
to the observed random variables in a network.
The program is a smaller version of the
phylogenetic model used for the case study in Section~\ref{sec:case}, but still
demonstrates the alignment problem because \texttt{sim} recursively calls
itself from a stochastic branch (line 5) and contains a call to \texttt{weight}
(line 4). Hence, this call to \texttt{weight} should intuitively be marked
dynamic, since it might not be properly aligned.
Besides having stochastic
branches and recursion, probabilistic programming languages also differ from
Bayesian networks by defining an explicit ordering over random variables in the
program. Such an ordering has to be provided separately for Bayesian networks
before performing inference.
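To make this correspondence concrete, the Bayesian network in Fig.~\ref{fig:baynet} could be encoded in a \gls{ppl} along the following lines. The helper \texttt{logpdf}, which returns the log-density of a distribution at a given point, is hypothetical and not part of the language presented in this paper, and \texttt{normal} is assumed here to take a mean and a standard deviation:
\begin{lstlisting}[basicstyle=\scriptsize\ttfamily]
x1 = sample(normal(0, 2))
weight(logpdf(normal(x1, 1), 2.1))
x2 = sample(normal(x1 + 4, 1))
weight(logpdf(normal(x2, 1), 6.3))
x3 = sample(normal(x2 + 4, 1))
weight(logpdf(normal(x3, 1), 10.7))
x4 = sample(normal(x3 + 4, 1))
x4
\end{lstlisting}
Note that this encoding contains no stochastic branches, so all three calls to \texttt{weight} are trivially aligned.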
\section{Discovering dynamic terms}\label{sec:disc}
As a first step, we perform a static analysis of our input program. The goal of
this analysis is to, for every term in the program, decide whether or not this
term can appear within a branch of an \texttt{if} expression with a stochastic
condition. We say that such a term is \emph{dynamic}. As we will see in
Section~\ref{sec:utilize}, the information produced by the analysis is key for
aligning the \gls{smc} inference correctly. We begin by introducing the target
language of the analysis. After this, we outline the analysis with examples and
give a formalization. Lastly, we discuss the limitations of the approach.
\subsection{The target language}
In order to simplify the presentation of the upcoming analysis, we begin by
introducing a \gls{ppl} with just enough constructs to make it universal.
Fig.~\ref{fig:lang} states the abstract syntax for such a language, based on
the untyped lambda calculus. Most importantly, the language contains \texttt{sample} and
\texttt{weight} constructs. Furthermore, the language also includes \texttt{if}
expressions, for which sampled values can be passed as the conditions. This,
together with the inherent recursion available in the untyped lambda calculus,
makes the language a minimal universal \gls{ppl}. Extending the language to a
more complete probabilistic programming language such as the language in
Fig.~\ref{fig:probprog} (which also contains various syntactic sugars) is
straightforward, and has been done for the case study in
Section~\ref{sec:case}.
\begin{figure}[tb]
\[
\begin{aligned}
\mathbf{e} \Coloneqq &\enspace \mathbf{t}^l\\
\mathbf{t} \Coloneqq &\enspace
x
\enspace | \enspace
c
\enspace | \enspace
\lambda x. \mathbf{e}
\enspace | \enspace
\mathbf{e}_1 \enspace \mathbf{e}_2
\enspace | \enspace
\texttt{fix } \mathbf{e}
\enspace | \enspace
\texttt{if } \mathbf{e}_1 \texttt{ then } \mathbf{e}_2 \texttt{ else } \mathbf{e}_3
\\
| &\enspace
\texttt{sample } \mathbf{e}
\enspace | \enspace
\texttt{weight } \mathbf{e} \\
&\hspace{-8mm}
\begin{aligned}
x \in &\enspace X &
&(\text{Variables})\\
%
c \in &\enspace
C &
&(\text{Constants})\\
%
%
%
l \in &\enspace \mathbb{N} &
&(\text{Labels}) \\
&\hspace{-8mm}\{ \mathit{false}, \mathit{true}, () \}
\cup \mathbb{R} \cup D \subseteq C
\end{aligned}
\end{aligned}
\]
\caption{A small \gls{ppl}. $D$ denotes a set of
probability distributions, $()$ is the unit element.}
\label{fig:lang}
\end{figure}
\begin{figure}[tb]
\[
\begin{gathered}
\begin{aligned}
\mathbf{v} \Coloneqq& \enspace
c \enspace | \enspace
\lambda x. \mathbf{t} \\
\mathbf{F} \Coloneqq& \enspace
\Box \enspace \mathbf{t}_2
\enspace | \enspace
\mathbf{v}_1 \enspace \Box
\enspace | \enspace
\texttt{fix } \Box
\enspace | \enspace
\texttt{if } \Box \texttt{ then } \mathbf{t}_2 \texttt{ else } \mathbf{t}_3
\enspace | \enspace
\texttt{sample } \Box
\enspace | \enspace
\texttt{weight } \Box \\[1em]
&\hspace{-10mm}\boxed{\rightarrow}
\end{aligned} \\
%
\frac{\mathbf{t} \mid w \rightarrow \mathbf{t}' \mid w'}
{\mathbf{F}[\mathbf{t}] \mid w \rightarrow \mathbf{F}[\mathbf{t}'] \mid w'}
(\textsc{Cong}) \qquad
\frac{}
{(\lambda x. \mathbf{t}_1) \enspace \mathbf{v}_1 \mid w
\rightarrow [x \mapsto \mathbf{v}_1]\mathbf{t}_1 \mid w}
(\textsc{App}) \\
\frac{}
{\texttt{fix } (\lambda x. \mathbf{t}_1) \mid w \rightarrow
[x \mapsto \texttt{fix } (\lambda x. \mathbf{t}_1)]\mathbf{t}_1 \mid w}
(\textsc{Fix}) \\
\frac{}
{\texttt{if } \mathit{true} \texttt{ then } \mathbf{t}_2 \texttt{ else } \mathbf{t}_3
\mid w \rightarrow \mathbf{t}_2 \mid w}
(\textsc{IfTrue}) \\
\frac{}
{\texttt{if } \mathit{false} \texttt{ then } \mathbf{t}_2 \texttt{ else } \mathbf{t}_3
\mid w \rightarrow \mathbf{t}_3 \mid w}
(\textsc{IfFalse}) \\
\frac{c \in D}
{\texttt{sample} \enspace c \mid w \rightarrow \mathit{sample}(c) \mid w}
(\textsc{Sample}) \qquad
\frac{c \in \mathbb{R}}
{\texttt{weight} \enspace c \mid w \rightarrow () \mid w + c}
(\textsc{Weight}) \\
\end{gathered}
\]
\caption{%
An evaluation relation $\rightarrow$ for the language given in
Fig.~\ref{fig:lang} with all labels ignored. The function $\mathit{sample}$
correctly produces a sample from the provided distribution. All congruence
rules are compactly described by \textsc{Cong}, which specifies one rule
for every case in $\mathbf{F}$. $\mathbf{F}[\mathbf{t}]$ means that we replace
the $\Box$ in one case of $\mathbf{F}$ with $\mathbf{t}$.
}
\label{fig:sem}
\end{figure}
For convenience when later defining our algorithm, we split the language into
two production rules, $\mathbf{e}$ and $\mathbf{t}$, where $\mathbf{e}$ is a \emph{labeled}
version of $\mathbf{t}$. For all programs in the language, we assume a unique
labeling of all expressions, and that all variables are bound in at most one
place (which means that all variable names are unique). Any program can be
transformed to fulfill this without any input from the programmer. The unique
labels and variables are requirements for the static analysis.
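As an illustration of this split, the abstract syntax of Fig.~\ref{fig:lang} could be represented in OCaml roughly as follows (a sketch, not necessarily the representation used in our implementation; distributions are identified by name here for simplicity):
\begin{lstlisting}[basicstyle=\scriptsize\ttfamily]
(* Constants: booleans, unit, reals, and distributions. *)
type const =
  | Bool of bool
  | Unit
  | Real of float
  | Dist of string  (* a probability distribution, by name *)

(* Terms t and labeled expressions e, mirroring the two
   production rules: every expression carries a unique label. *)
type tm =
  | Var of string
  | Const of const
  | Lam of string * e
  | App of e * e
  | Fix of e
  | If of e * e * e
  | Sample of e
  | Weight of e
and e = { label : int; term : tm }
\end{lstlisting}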
Also included in the language is a set of constants $C$. We leave this set
unspecified, with booleans, real numbers, and the unit element as exceptions.
The reason for explicitly including booleans in the set of constants is that
they are needed for \texttt{if} expressions. Additionally, real numbers are
needed as arguments for \texttt{weight}, and the unit element as the result of
a call to \texttt{weight}. We also assume that various probability
distributions $\mathit{dist} \in D$ from which to sample are included in the set
of constants, $C$. We do, however, limit these distributions to not range over
lambda abstractions, since this would complicate the analysis significantly.
Lastly, the language includes an explicit fixpoint operator \texttt{fix}. Since
we are dealing with the untyped lambda calculus, we could construct such an
operator (the $Y$ combinator) in the language itself. There is, however, an
important difference between the two: the explicit fixpoint operator cannot be
passed around as a value---it must be applied directly. As a consequence, we
can make the analysis less conservative. That is, fewer terms will be marked as
dynamic in comparison to using the $Y$ combinator.
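As a small illustration, assuming built-in arithmetic and comparison constants, a factorial function can be written as
\[
\texttt{fix } (\lambda f. \lambda n. \texttt{if } n \leq 1 \texttt{ then } 1 \texttt{ else } n \cdot (f \enspace (n-1))).
\]
The evaluation rule for \texttt{fix} (rule \textsc{Fix} in Fig.~\ref{fig:sem} below) substitutes the entire \texttt{fix} term for $f$ in the body, so each recursive call unfolds the definition once more, and the operator itself is always applied directly to a lambda abstraction.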
To give some more intuition for the language, we give a small-step operational
semantics for it in Fig.~\ref{fig:sem}. It is an ordinary call-by-value
semantics for the untyped lambda calculus, with a weight $w$ added in the
evaluation relation. This weight is updated at calls to \texttt{weight}, which
is reflected in the rule \textsc{Weight}. This semantics corresponds to
obtaining a single sample from the distribution encoded by the program in a
likelihood weighting inference algorithm. Likelihood weighting was briefly
mentioned in Section~\ref{sec:prelim}. We will see how this semantics relates
to \gls{smc} and resampling in Section~\ref{sec:utilize}.
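As a small worked example of the relation, the program $(\lambda x. \texttt{weight } 5.0) \enspace ()$ evaluates, starting from weight $0$, as
\[
(\lambda x. \texttt{weight } 5.0) \enspace () \mid 0
\;\rightarrow\; \texttt{weight } 5.0 \mid 0
\;\rightarrow\; () \mid 5.0
\]
by \textsc{App} followed by \textsc{Weight}; a likelihood-weighting run records the final result $()$ together with the accumulated weight $5.0$.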
\subsection{The analysis}
Finding dynamic terms is not straightforward, as can be seen from two simple examples. The
first example is given by the following program (labels omitted)
\begin{equation}\label{eq:ex1}
(\lambda x. \texttt{if } \texttt{sample} \enspace \mathit{dist} \texttt{ then }
(x \enspace c_1)
\texttt{ else } c_2) \enspace (\lambda y. y),
\end{equation}
where $\mathit{dist}$ is a distribution over booleans and $c_1$ and $c_2$ are constants.
The analysis result for this program should, intuitively, be
\begin{equation}\label{eq:ex1s}
(\lambda x. \texttt{if } \texttt{sample} \enspace \mathit{dist} \texttt{ then }
\underline{(\underline{x} \enspace \underline{c_1})}
\texttt{ else } \underline{c_2})
\enspace \underline{(\lambda y. \underline{y})}
\end{equation}
where the underlining shows all parts of the program which can appear within a
stochastic branch. The right-hand side of the outermost application is bound by
the left-hand side lambda abstraction, and can therefore appear in one of the
branches. By regarding the entire program as a tree structure, we see that
information has been propagated from the left-hand side of a node, to its
right-hand side.
The reverse is also possible. Consider the following program:
\begin{equation}\label{eq:ex2}
(\lambda a. (\lambda b. a \enspace b) \enspace
(\lambda c. c)) \enspace
(\lambda d. \texttt{if } \texttt{sample } \mathit{dist} \texttt{ then }
(d \enspace c_1)
\texttt{ else } c_2)
\end{equation}
The analysis result for this program is given by
\begin{equation}
(\lambda a. (\lambda b. a \enspace b) \enspace
\underline{(\lambda c. \underline{c})}) \enspace
(\lambda d. \texttt{if } \texttt{sample } \mathit{dist} \texttt{ then }
\underline{(\underline{d} \enspace \underline{c_1})}
\texttt{ else } \underline{c_2}),
\end{equation}
showing that information from the right-hand side of a node can propagate to
its left-hand side.
We propose a solution for finding dynamic terms based on \emph{0-CFA}, a control-flow analysis
algorithm for higher-order functional programming languages originally
introduced by Shivers \cite{shivers1988control,shivers1991control}. The 0 in
0-CFA stands for \emph{context insensitivity}. Many other, less conservative,
approaches to control-flow in higher-order functional languages also exist
\cite{midtgaard2012control}. We give details on the limitations of context insensitivity in Section~\ref{sec:limitations}. An example of a more accurate analysis is $k$-CFA,
where $k$ levels of context sensitivity are included in the analysis. This
causes the analysis to run in exponential time, already for $k = 1$. 0-CFA has
worst-case time complexity $O(n^3)$, where $n$ is the size of the program. This
is only a worst-case bound, however, and does not necessarily limit how large
programs can be handled in practice.
The version of 0-CFA that we present here is based on Nielson et al.
\cite{nielson1999principles}.
\paragraph{Generating the constraints.}
To give some intuition for the algorithm, we describe it with the program
\eqref{eq:ex1} as a running example.
The first step is to assign each subterm a unique label:
\begin{equation}\label{eq:exlabel}
((\lambda x. (\texttt{if } (\texttt{sample} \enspace \mathit{dist}^1)^2 \texttt{ then }
(x^3 \enspace c_1^4)^5
\texttt{ else } c_2^6)^7)^8 \enspace (\lambda y. y^9)^{10})^{11}
\end{equation}
As we will see, this labeling enables reasoning about possible flows of control
in the program. We also define
$ \mathbf{T} = \{ \enspace (\lambda x. \cdot^7)^8, (\lambda y. \cdot^9)^{10} \enspace \}$,
which is the set of all lambda terms in the program. The bodies of the lambda
terms are replaced by $\cdot$, since they are not required in the analysis.
Next, we generate a set of \emph{constraints} for the program. These
constraints capture how both data and lambdas might flow between different
locations in the program. Our goal is to find a minimal assignment to the
\emph{unknown sets} occurring in the constraints, such that the constraints are
not violated. Such a solution is guaranteed to exist, and is key to finding
all dynamic terms. The constraints generated for \eqref{eq:exlabel} are
\begin{equation}\label{eq:genex}
\begin{aligned}
\mathit{gen}(t) = \{ \enspace
&\{ \mathbf{stoch} \} \subseteq S_{2},
\{ (\lambda y. \cdot^9)^{10} \} \subseteq S_{10},
\{ (\lambda x. \cdot^7)^8 \} \subseteq S_{8}, \\
&S_{y} \subseteq S_{9},
S_{5} \subseteq S_{7},
S_{6} \subseteq S_{7},
S_{x} \subseteq S_{3}, \\
&\{ (\lambda x. \cdot^7)^8 \} \subseteq S_{8}
\Rightarrow S_{10} \subseteq S_{x},
\{ (\lambda x. \cdot^7)^8 \} \subseteq S_{8}
\Rightarrow S_{7} \subseteq S_{11},\\
&\{ (\lambda y. \cdot^9)^{10} \} \subseteq S_{8}
\Rightarrow S_{10} \subseteq S_{x},
\{ (\lambda y. \cdot^9)^{10} \} \subseteq S_{8}
\Rightarrow S_{9} \subseteq S_{11},\\
&\{ (\lambda x. \cdot^7)^8 \} \subseteq S_{3}
\Rightarrow S_{4} \subseteq S_{x},
\{ (\lambda x. \cdot^7)^8 \} \subseteq S_{3}
\Rightarrow S_{7} \subseteq S_{5},\\
&\{ (\lambda y. \cdot^9)^{10} \} \subseteq S_{3}
\Rightarrow S_{4} \subseteq S_{y},
\{ (\lambda y. \cdot^9)^{10} \} \subseteq S_{3}
\Rightarrow S_{9} \subseteq S_{5}
%
\enspace \}
\end{aligned}
\end{equation}
The variables $S_1,S_2,\ldots,S_{11}, S_x, S_y$ denote the unknown sets
associated with each label or variable in the program. There are three types
of constraints: direct, flow, and implication flow
constraints. \emph{Direct} constraints force a set $S$ to contain a single
\emph{abstract value} $\mathbf{av}$, which can either be $\mathbf{stoch}$ or a lambda
abstraction:
$\mathbf{av} \Coloneqq \enspace \mathbf{stoch} \enspace
| \enspace (\lambda x. \cdot^{l_1})^{l}$.
The first constraint in \eqref{eq:genex}, $\{ \mathbf{stoch} \}
\subseteq S_{2}$, states that the term at label 2 in the program may be stochastic.
By looking at \eqref{eq:exlabel}, this is clearly true---the term at label 2
contains a sample from a distribution. We also have two other direct
constraints, which state that lambda expressions may occur at the labels where
they syntactically originate. This is also clearly true. The flow and
implication flow constraints state how the abstract values flow between the
sets. \emph{Flow} constraints declare an immediate link between two sets. For
instance, two of the flow constraints state that $S_5$ and $S_6$ must flow to
$S_7$, because the \texttt{if} expression at label 7 can evaluate to both its branches.
\emph{Implication flow} constraints, on the other hand, state that if an abstract
value is in one set, this causes a flow between other sets. One such constraint
is $\{ (\lambda y.
\cdot^9)^{10} \} \subseteq S_{3} \Rightarrow S_{4} \subseteq S_{y}$ which
states that if the lambda with variable $y$ occurs at the term with label 3,
then the term at label 4 must flow to the variable $y$. This is a simple
consequence of how applications are evaluated. Formally, the constraints are given by
\begin{equation}
\begin{aligned}
\set \Coloneqq &\enspace S_l \enspace | \enspace S_x \\
\mathbf{cstr} \Coloneqq &\enspace \{ \mathbf{av} \} \subseteq \set &
&(\text{Direct})\\
| &\enspace \set_1 \subseteq \set_2 &
& (\text{Flow})\\
| &\enspace \{ \mathbf{av} \} \subseteq \set_1 \Rightarrow \set_2 \subseteq \set_3 &
& (\text{Implication flow})
\end{aligned}
\end{equation}
\begin{figure}[tb]
\[
\begin{aligned}
&\mathit{gen}(x^l) = \{S_{x} \subseteq S_{l}\} \\
&\mathit{gen}(c^l) = \varnothing \\
&
\begin{aligned}
\mathit{gen}((\lambda x. \mathbf{t}^{l_1})^{l})
= \enspace
&\{\{(\lambda x. \mathbf{t}^{l_1})^{l}\} \subseteq S_{l} \}
\cup \mathit{gen}(\mathbf{t}^{l_1})
\end{aligned} \\
%
&
\begin{aligned}
\mathit{gen}((\mathbf{t}_1^{l_1} \enspace \mathbf{t}_2^{l_2})^{l})
= \enspace \mathit{gen}(\mathbf{t}_1^{l_1}) \cup \mathit{gen}(\mathbf{t}_2^{l_2})
&\cup \{
\{\mathbf{t}\} \subseteq S_{l_1} \Rightarrow S_{l_2} \subseteq S_{x}
\mid \mathbf{t} =
(\lambda x. \mathbf{t}_3^{l_3})^{l_4} \in \mathbf{T}
\} \\
&\cup \{
\{\mathbf{t}\} \subseteq S_{l_1} \Rightarrow S_{l_3} \subseteq S_{l}
\mid \mathbf{t} =
(\lambda x. \mathbf{t}_3^{l_3})^{l_4} \in \mathbf{T}
\}
\end{aligned} \\
%
&
\begin{aligned}
\mathit{gen}((\texttt{fix } \mathbf{t}^{l_1})^{l})
= \enspace \mathit{gen}(\mathbf{t}^{l_1})
&\cup \{
\{\mathbf{t}\} \subseteq S_{l_1} \Rightarrow S_{l_2} \subseteq S_{x}
\mid \mathbf{t} =
(\lambda x. \mathbf{t}^{l_2})^{l_3} \in \mathbf{T}
\} \\
&\cup \{
\{\mathbf{t}\} \subseteq S_{l_1} \Rightarrow S_{l_2} \subseteq S_{l}
\mid \mathbf{t} =
(\lambda x. \mathbf{t}^{l_2})^{l_3} \in \mathbf{T}
\} \\
\end{aligned} \\
%
&
\begin{aligned}
\mathit{gen}((\texttt{if } \mathbf{t}_1^{l_1} \texttt{ then }
\mathbf{t}_2^{l_2} \texttt{ else } \mathbf{t}_3^{l_3})^l) =
&\enspace \mathit{gen}(\mathbf{t}_1^{l_1}) \cup \mathit{gen}(\mathbf{t}_2^{l_2}) \cup \mathit{gen}(\mathbf{t}_3^{l_3}) \\
&\enspace \qquad \cup \{S_{l_2} \subseteq S_{l}\} \cup \{S_{l_3} \subseteq S_{l}\}
\end{aligned} \\
&
\begin{aligned}
\mathit{gen}((\texttt{sample } \mathbf{t}^{l_1})^l) = \mathit{gen}(\mathbf{t}^{l_1}) \cup
\{\{\mathbf{stoch}\} \subseteq S_{l}\} \\
\end{aligned} \\
&
\begin{aligned}
\mathit{gen}((\texttt{weight } \mathbf{t}^{l_1})^l) = \mathit{gen}(\mathbf{t}^{l_1})
\end{aligned} \\
\end{aligned}
\]
\caption{The constraint generation function $\mathit{gen}$.}
\label{fig:gen}
\end{figure}
The constraint generation function $\mathit{gen}$ is defined recursively in
Fig.~\ref{fig:gen}. The most intricate part of $\mathit{gen}$ is the constraint
generation for applications and fixpoints. Both produce two implication flow
constraints for each lambda in $\mathbf{T}$, which we defined earlier. The
application case is fairly intuitive: if a lambda can occur at the left hand
side of an application, it must be the case that the right hand side flows to
the variable bound by the lambda, and that the term enclosed in the lambda can
flow to the result of the application. Fixpoints are a bit more difficult. If a
lambda term $(\lambda x. \mathbf{t}^{l_2})^{l_3}$ can occur as the argument to a
\texttt{fix} operator, two things must hold. Because of how \texttt{fix} is
defined, the enclosed lambda term with label $l_2$ is the actual (recursive)
function being computed---\texttt{fix} simply binds the function itself to the
variable $x$. Therefore, label $l_2$ must flow to $x$ since we need to be able
to use the function recursively through this binding, and $l_2$ can also flow
to $l$, because $l_2$ is the actual function produced by the \texttt{fix}
operator.
\paragraph{Solving the constraints.}
In order to solve the constraints, we refer to the full description of 0-CFA in
Nielson et al. \cite{nielson1999principles}. For the constraints in
\eqref{eq:genex}, the minimal solution is given by
\begin{equation}\label{eq:gensol}
\begin{aligned}
&S_y = \varnothing &
&S_x = \{ (\lambda y. \cdot^9)^{10} \} &
&S_1 = \varnothing &
&S_2 = \{ \mathbf{stoch} \} \\
&S_3 = \{ (\lambda y. \cdot^9)^{10} \} &
&S_4 = \varnothing &
&S_5 = \varnothing &
&S_6 = \varnothing \\
&S_7 = \varnothing &
&S_8 = \{ (\lambda x. \cdot^7)^8 \} &
&S_9 = \varnothing &
&S_{10} = \{ (\lambda y. \cdot^9)^{10} \} \\
&S_{11} = \varnothing.
\end{aligned}
\end{equation}
This can easily be verified to be a minimal solution satisfying all the
constraints in \eqref{eq:genex}.
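While we refer to Nielson et al.\ for the full $O(n^3)$ graph-based algorithm, a simple way to obtain such a minimal solution is naive fixpoint iteration over the constraint list until no set changes. The OCaml sketch below illustrates this; it is a minimal illustration, not the implementation used in Section~\ref{sec:case}, and lambda abstractions are identified by their label only.
\begin{lstlisting}[basicstyle=\scriptsize\ttfamily]
type absval = Stoch | Lam of int      (* lambdas named by label *)
type setvar = L of int | V of string  (* S_l and S_x *)
type cstr =
  | Direct of absval * setvar                   (* {av} <= s *)
  | Flow   of setvar * setvar                   (* s1 <= s2  *)
  | Impl   of absval * setvar * setvar * setvar
                                (* {av} <= s1  =>  s2 <= s3  *)

module AV =
  Set.Make (struct type t = absval let compare = compare end)

let solve cstrs =
  let tbl : (setvar, AV.t) Hashtbl.t = Hashtbl.create 64 in
  let get s = try Hashtbl.find tbl s with Not_found -> AV.empty in
  (* Add the values avs to the set for s; report whether s grew. *)
  let add s avs =
    let old = get s in
    let updated = AV.union old avs in
    if AV.equal old updated then false
    else (Hashtbl.replace tbl s updated; true)
  in
  let changed = ref true in
  while !changed do            (* chaotic iteration to fixpoint *)
    changed := false;
    List.iter
      (fun c ->
         let grew = match c with
           | Direct (av, s) -> add s (AV.singleton av)
           | Flow (s1, s2) -> add s2 (get s1)
           | Impl (av, s1, s2, s3) ->
             AV.mem av (get s1) && add s3 (get s2)
         in
         if grew then changed := true)
      cstrs
  done;
  tbl

(* A fragment of the constraints above, sufficient to derive
   S_x = S_3 = { lambda labeled 10 } as in the minimal solution: *)
let example =
  [ Direct (Stoch, L 2);
    Direct (Lam 10, L 10);
    Direct (Lam 8, L 8);
    Flow (V "x", L 3);
    Impl (Lam 8, L 8, L 10, V "x") ]
\end{lstlisting}
Since the sets only grow and the universe of abstract values is finite, the iteration terminates; the result is the least solution because values are only added when forced by some constraint.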
\paragraph{Finding the dynamic terms.}
\begin{algorithm}[tb]
\caption{%
The final phase of the analysis. Uses the 0-CFA output to discover dynamic
parts of the program. The input consists of the labeled program $\mathbf{t}^l$,
and the results of the 0-CFA analysis $S$ (that is, all the sets produced
by the analysis). The function $\mathit{labels}$ returns all labels within
a term. The function $\mathit{subexpr}$ returns all direct subexpressions
of a term $\mathbf{t}$.
}
\label{alg:disc}
\begin{algorithmic}[1]
\Function{Dynamic}{$\mathbf{t}^l$, $S$}
\For{$l' \in \mathit{labels}(\mathbf{t}^l)$} $\mathit{Dyn}(l') \gets \mathit{false}$
\Comment{Initialization} \EndFor
\State $mod \gets \mathit{true}$
\While{$mod$} \Comment{Iterate until fixpoint}
\State $mod \gets \mathit{false}$; \Call{Recurse}{$\mathit{false}$, $\mathbf{t}^l$}
\EndWhile
\State \Return $\{ l \mid l \in \mathit{labels}(\mathbf{t}^l), \mathit{Dyn}(l) = \mathit{true} \}$
\EndFunction
\State
\Function{Recurse}{$\mathit{flag}$, $\mathbf{t}^l$}
\If{$\mathit{flag} \lor \mathit{Dyn}(l)$} \Comment{Mark dynamic terms}
\If{$\neg \mathit{Dyn}(l)$}
\State $\mathit{Dyn}(l) \gets \mathit{true}$
\State $mod \gets \mathit{true}$
\EndIf
\For{$(\lambda x. \cdot^{l_1})^{l_2} \in S_l$}
\If{$\neg \mathit{Dyn}(l_2)$}
\State $\mathit{Dyn}(l_2) \gets \mathit{true}$
\State $mod \gets \mathit{true}$ \EndIf
\EndFor
\EndIf
\State \textbf{match} $t$ \textbf{with}
\State \hspace{3mm}
$\texttt{if } \mathbf{t}_1^{l_1} \texttt{ then }
\mathbf{t}_2^{l_2} \texttt{ else } \mathbf{t}_3^{l_3}$:
\Comment{Detect stochastic branches}
\State \hspace{6mm} \Call{Recurse}{$\mathit{flag}$,$\mathbf{t}_1^{l_1}$}
\State \hspace{6mm} $\mathit{flag} \gets \mathit{flag} \lor \mathbf{stoch} \in S_{l_1}$
\State \hspace{6mm} \Call{Recurse}{$\mathit{flag}$,$\mathbf{t}_2^{l_2}$};
\Call{Recurse}{$\mathit{flag}$,$\mathbf{t}_3^{l_3}$}
\State \hspace{3mm}
$\lambda x. \mathbf{t}_1^{l_1}$:
\Comment{Detect previously marked lambdas}
\State \hspace{6mm} \Call{Recurse}{$\mathit{Dyn}(l) \lor \mathit{flag}$, $\mathbf{t}_1^{l_1}$}
\State \hspace{3mm} otherwise:
\textbf{for} $\mathbf{t}_1^{l_1} \in \mathit{subexpr}(\mathbf{t})$ \textbf{do}
\Call{Recurse}{$\mathit{flag}$, $\mathbf{t}_1^{l_1}$}
\EndFunction
\end{algorithmic}
\end{algorithm}
The last step is to use the 0-CFA results to find dynamic terms. To do this, we do a
depth-first left-to-right traversal of the program, flagging all terms (or,
equivalently, their labels) occurring in a branch of a stochastic \texttt{if} expression as
dynamic. We can identify stochastic branches by checking if $\mathbf{stoch}$ is
a member of $S_l$, where $l$ is the label of the condition term of an \texttt{if}
expression. In \eqref{eq:exlabel}, during traversal, we first go down the left
branch of the outermost application and eventually reach
\begin{equation}
(\texttt{if } (\texttt{sample} \enspace \mathit{dist}^1)^2 \texttt{ then }
(x^3 \enspace c_1^4)^5
\texttt{ else } c_2^6)^7.
\end{equation}
We see that $\mathbf{stoch}$ is in $S_2$; the branch is therefore
stochastic, and we recursively flag the terms in the branches. Additionally, we
flag the lambda term $(\lambda y. y^9)^{10}$, since it is in the set $S_3$.
Because of this, when we return to the outermost application and traverse down
the right hand side, we can see that $(\lambda y. y^9)^{10}$ is flagged.
Therefore, we also flag all terms enclosed in this lambda, which in this case
is $y^9$. To summarize, the result of performing this analysis on
\eqref{eq:exlabel} with the help of \eqref{eq:gensol} is
$\{ 3, 4, 5, 6, 9, 10 \}$,
which matches the result in \eqref{eq:ex1s} with the labels in
\eqref{eq:exlabel}. Note that $y^9$ would not have been flagged had we done
a right-to-left traversal. In general, we need to repeatedly traverse
the program until fixpoint, allowing all terms reachable from a stochastic
branch to be flagged as dynamic. The complete algorithm is shown in
Algorithm~\ref{alg:disc}. We can reason about the time complexity as follows:
on every iteration, at least one new label is flagged, or the algorithm terminates.
Since we have $n$ labels, where $n$ is the size of the program, and every
iteration is performed in $n$ steps, it follows that the algorithm (in the
worst case) terminates in $O(n^2)$ steps---less than the $O(n^3)$ for the 0-CFA
analysis. Therefore, the overall complexity is still $O(n^3)$.
\subsection{Limitations}\label{sec:limitations}
The main limitation of the algorithm presented in this paper is the lack of
context sensitivity in the analysis. In practice, this will cause problems when
reusing functions in both stochastic and non-stochastic contexts---the
non-stochastic contexts will sometimes be unnecessarily marked as stochastic.
As an example, consider running the analysis on a program written in the same
language as in Fig.~\ref{fig:introex} and Fig.~\ref{fig:probprog}:
\\[1em]
\texttt{function plus(a, b) \string{ a + b \string}} \\
\texttt{plus(sample(normal(0,1)), 2)} \\
\texttt{if plus(1, 3) < 5 then \underline{true} else \underline{false}}
\\[1em]
Our analysis has marked the branches of the \texttt{if} expression as
dynamic, even though the condition is clearly not stochastic. This is because
of context insensitivity: the analysis cannot distinguish between the two
applications of \texttt{plus}. Since one of the applications produces a stochastic
value, \emph{all} applications of \texttt{plus} in the program are marked as
stochastic---even if they are in fact not stochastic. In this paper, we avoid
this problem by using built-in operators, which cannot be passed around the
program as values in the same way as user-defined lambda abstractions. This
makes the analysis less conservative when using 0-CFA. An obvious direction for
future work is exploring other approaches to higher-order control flow analysis
that do take context into account \cite{midtgaard2012control}.
\section{Utilizing the analysis results for sequential Monte Carlo inference}
\label{sec:utilize}
In this section, we use the analysis result from Section~\ref{sec:disc} to
transform the input program, enabling \emph{aligned} \gls{smc} inference. Most
importantly, we indicate how to modify the semantics of Fig.~\ref{fig:sem} to
accommodate such inference, and also give the aligned \gls{smc} algorithm for
probabilistic programming. We use the program from Section~\ref{sec:prelim},
Fig.~\ref{fig:probprog} as a running example, assuming that the semantics
include proper extensions for arithmetic and comparison.
\paragraph{Transforming the program.}
We begin by extending our language with one additional construct:
\texttt{dweight} (dynamic weight). In contrast to \texttt{weight}, the
\texttt{dweight} construct will not cause resampling to be
performed. By using the information about dynamic terms from our static
analysis, we do a simple transformation of our input program: we replace all
dynamic \texttt{weight} terms with \texttt{dweight} (ignoring
labels, since they are no longer required). The remaining calls to
\texttt{weight} are now \emph{aligned}---they are (1) always executed, and (2) always
executed in the same order. This is a simple consequence of them not being
reachable from stochastic branches. The transformation allows \gls{smc}
inference to only resample at aligned calls to \texttt{weight} in the original
program. As an example, for the program in Fig.~\ref{fig:probprog}, the call
to \texttt{weight} at line 4 will be replaced by \texttt{dweight}, and the
\texttt{weight} at line 11 is left untouched; the transformed program is sketched below.
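Concretely, the transformed version of the program in Fig.~\ref{fig:probprog} reads as follows; only line 4 changes:
\begin{lstlisting}[basicstyle=\scriptsize\ttfamily]
function sim(stop, lambda) {
  t = sample(exponential(lambda))
  if t <= stop then {
    dweight(2.0)
    sim(stop-t, lambda+0.1)
  } else t
}
lambda = sample(gamma(1.0, 1.0))
stop = sample(gamma(1.0, 1.0))
sim = sim(stop, lambda)
weight(sim+lambda)
lambda
\end{lstlisting}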
\paragraph{Modifying the semantics.}
Next, we modify our semantics to support \gls{smc} inference. In
order to do this, we first need to do another program transformation to enable
\emph{pausing} and \emph{resuming} executions when resampling. We will not go
into detail about this transformation here, but the result is a program in
\emph{\gls{cps}} \cite{steele1978rabbit,appel2007compiling}. Such a
transformation is commonly used in \glspl{ppl}, for instance in WebPPL and
Anglican. The essential property of having the program in \gls{cps} is that
functions never return. Instead, every function takes an additional argument, a
\emph{continuation} function, which is applied to the result of the function
application in order to continue evaluation. The continuation can be thought of
as a representation of the call stack that is explicitly available at each
function call. In essence, this
enables us to modify our evaluation relation $\rightarrow$ so that we can pause
and resume evaluation at calls to \texttt{weight}. To enable pausing, we
explicitly add a \texttt{pause} term to our language. In the
\gls{cps} transformed language, the \texttt{weight}, \texttt{dweight}, and
\texttt{pause} terms all take one extra continuation argument $\mathbf{t}_c$. That
is, $ \mathbf{t} \Coloneqq \enspace \ldots \enspace | \enspace
\texttt{weight } \mathbf{t}_c \enspace \mathbf{t} \enspace | \enspace
\texttt{dweight } \mathbf{t}_c \enspace \mathbf{t} \enspace | \enspace \texttt{pause } \mathbf{t}_c$.
The key modification in the semantics for \texttt{weight} is shown in
Fig.~\ref{fig:modsem}. For \texttt{dweight}, we simply update the weight and take a step to $\mathbf{t}_1$, the body of the continuation. This is a \gls{cps}
equivalent of the previous rule for \texttt{weight} in Fig.~\ref{fig:sem}. For
the new \texttt{weight}, we instead want to indicate to the inference algorithm
that the program is paused. Therefore, we return a \texttt{pause} term, with
the continuation as argument. There is no evaluation rule for \texttt{pause},
so the evaluation halts, and it is up to the \gls{smc} inference algorithm to
decide the next course of action.
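For instance, with continuation $(\lambda x. \mathbf{t})$ and current weight $w$,
\[
\texttt{weight} \enspace (\lambda x. \mathbf{t}) \enspace 2.0 \mid w
\;\rightarrow\;
\texttt{pause } (\lambda x. \mathbf{t}) \mid w + 2.0,
\]
after which evaluation halts until the inference algorithm resumes the continuation, whereas $\texttt{dweight} \enspace (\lambda x. \mathbf{t}) \enspace 2.0 \mid w \rightarrow \mathbf{t} \mid w + 2.0$ continues evaluating immediately.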
\begin{figure}[tb]
\[
\begin{gathered}
\frac{c \in \mathbb{R}}
{\texttt{dweight} \enspace (\lambda x. \mathbf{t}_1) \enspace c \mid w \rightarrow
\mathbf{t}_1 \mid w + c}
(\textsc{DWeightCPS}) \\
\frac{c \in \mathbb{R}}
{\texttt{weight} \enspace (\lambda x. \mathbf{t}_1) \enspace c \mid w \rightarrow
\texttt{pause } (\lambda x. \mathbf{t}_1) \mid w + c}
(\textsc{WeightCPS})
\end{gathered}
\]
\caption{The \gls{cps} evaluation rules for \texttt{dweight} and
\texttt{weight}.}
\label{fig:modsem}
\end{figure}
\paragraph{Aligned sequential Monte Carlo.}
\begin{algorithm}[tb]
\caption{%
The algorithm for aligned \gls{smc} inference in probabilistic programs.
The $\mathit{eval}$ function repeatedly applies $\rightarrow$ on a program
$t$ with weight $w$ until no evaluation rule is applicable. The input
$n$ gives the number of executions, or particles.
}\label{alg:smcalign}
\begin{algorithmic}[1]
\Function{AlignedSMC}{$t$, $n$}
\For{$i \gets 1$ to $n$}
$t_i \gets t$ \Comment{Create $n$ copies of $t$}
\EndFor
\For{$i \gets 1$ to $n$}
$r_i \gets \mathit{eval}(t_i,0)$
\EndFor
\While{$r_1 = \texttt{pause } (\lambda x. \mathbf{t}) \mid w_1$}
\Comment{Check if \texttt{weight} has been encountered.}
\For{$i \gets 1$ to $n$}
$(\texttt{pause }t_i \mid w_i) \gets r_i$
\EndFor
\State $t_{1:n} \gets \mathit{resample}(t_{1:n}, w_{1:n})$
\For{$i \gets 1$ to $n$}
$r_i \gets \mathit{eval}(t_i \, (),0)$
\EndFor
\EndWhile
\For{$i \gets 1$ to $n$}
$(t_i \mid w_i) \gets r_i$
\EndFor
\State $t_{1:n} \gets \mathit{resample}(t_{1:n}, w_{1:n})$
\State \Return $\{ t_1, t_2, \ldots, t_n \}$
\EndFunction
\end{algorithmic}
\end{algorithm}
The algorithm for aligned \gls{smc} is shown in Algorithm~\ref{alg:smcalign}.
The intuition is quite simple: run $n$ executions of the program using
$\rightarrow$, stop whenever an aligned \texttt{weight} is encountered,
resample, and continue the executions by applying the continuations to $()$.
Note that we use the alignment property at line 4, assuming that
if $r_1$ is a \texttt{pause} term, then this will also be true for all other
$r_i$. Also note that we set all weights to 0 after resampling. This is because
resampling, by definition, produces a set of unweighted samples (in our case
executions) from a set of weighted samples.
When finished, the $\mathit{eval}$ function will return a final value
with an attached weight. After doing a final resample (the weights may have
been modified by calls to \texttt{dweight} since the last resample), the values
are returned as samples. For the program in Fig.~\ref{fig:probprog}, the
algorithm would run all particles until encountering the single call to
\texttt{weight} (line 11), accumulating the weights for each particle when
encountering differing numbers of calls to \texttt{dweight} (line 4). Hence,
there will only be two resamples: one at the \texttt{weight} at line 11, and
one at the end of the program.
\section{Case study}\label{sec:case}
In this section, we give the details on a case study for a probabilistic model
from phylogenetics, expressed as a probabilistic program. We begin by briefly
describing the implementation of the analysis presented in
Sections~\ref{sec:disc}~and~\ref{sec:utilize}. This is followed by a
description of the model, and the quantity of interest that we wish to estimate
using \gls{smc}. Lastly, we present the results of the case study in the form
of a comparison between aligned and unaligned \gls{smc}, and discuss the main
limitations of our algorithms. All source code used in this case study is
available at \url{https://github.com/miking-lang/pplcore}.
\paragraph{Implementation.}
The implementation language extends the abstract syntax and semantics of the
language in Fig.~\ref{fig:lang} and Fig.~\ref{fig:sem} with various operators
for arithmetic and comparison. Examples of the concrete syntax is given in
Fig.~\ref{fig:introex} and Fig.~\ref{fig:probprog}. Our implementation of
aligned \gls{smc} implements the analysis from Section~\ref{sec:disc}, and
follows Algorithm~\ref{alg:smcalign}. Additionally, we implement unaligned
\gls{smc}, based on the approach used in WebPPL~\cite{dippl}. In this version
of \gls{smc}, dynamic calls to \texttt{weight} also participate in resampling, as well
as executions that have already terminated (which can occur since alignment is
not guaranteed). We use \emph{systematic resampling} \cite{douc2005comparison}
for both versions of \gls{smc}. Everything is implemented in OCaml.
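A minimal OCaml sketch of systematic resampling, not necessarily identical to our implementation in every detail, is given below: a single uniform draw $u \in [0, 1/n)$ yields $n$ evenly spaced positions $u + k/n$, each of which is mapped to a particle by inverting the cumulative distribution of the normalized weights.
\begin{lstlisting}[basicstyle=\scriptsize\ttfamily]
let systematic_resample xs ws =
  let n = Array.length xs in
  let total = Array.fold_left (+.) 0.0 ws in
  let u = Random.float (1.0 /. float_of_int n) in
  let out = Array.make n xs.(0) in
  let i = ref 0 in                    (* current particle index *)
  let cum = ref (ws.(0) /. total) in  (* running normalized CDF *)
  for k = 0 to n - 1 do
    let pos = u +. float_of_int k /. float_of_int n in
    while !cum < pos && !i < n - 1 do
      incr i;
      cum := !cum +. ws.(!i) /. total
    done;
    out.(k) <- xs.(!i)
  done;
  out
\end{lstlisting}
Compared to multinomial resampling, only one random number is drawn per resampling step, which typically reduces the variance introduced by the resampling itself.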
\paragraph{The model and the inference problem.}
We test the performance of the algorithms by using an example from statistical
phylogenetics, in which a birth-death model is used to describe the rates of
speciation and extinction in a group of organisms. Such models are of
considerable interest to evolutionary biologists, as they can be used to study
many important phenomena, such as the effects of various life-history traits or
of environmental factors on net diversification rates~\cite{nee2006birth}. A famous
research problem that can be addressed using birth-death models is the question
of whether the extinction of dinosaurs at the end of the Cretaceous epoch caused
an increased diversification rate in mammals~\cite{ronquist2016closing}.
In typical cases, we only have reliable observations of the extant species of
the group, that is, the lineages that have survived until the present---the
extinct lineages are unknown to us. From DNA sequence data and calibration
fossils, we can reconstruct a time tree that describes how and when the extant
lineages diverged from each other; this is known as a \emph{reconstructed
tree}~\cite{nee1994reconstructed}. The task is now to estimate the speciation
(birth) and extinction (death) rates from such a reconstructed time tree.
We focus on the basic task of estimating the \emph{normalizing constant} for a
particular set of birth and death rates and a given reconstructed time tree.
That is, we force the model to always produce the same sample of the rates, and
instead produce estimates of how likely this sample is given the data. The
logarithm of this quantity can be estimated with \gls{smc} through
\begin{equation}\label{eq:norm}
\sum_{t=1}^T\left(
\log \sum_{i=1}^N \exp(w_t^i) - \log N
\right)
\end{equation}
where $t$ ranges over all resampling points in the program, and
$w_t^i$ denotes the weight of execution $i$ at resampling point $t$. $N$ is the
total number of executions.
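Since the $w_t^i$ are log-weights, a direct implementation of \eqref{eq:norm} should use the standard log-sum-exp trick for numerical stability. A minimal OCaml sketch:
\begin{lstlisting}[basicstyle=\scriptsize\ttfamily]
(* Numerically stable log of a sum of exponentials. *)
let log_sum_exp ws =
  let m = Array.fold_left max neg_infinity ws in
  m +. log (Array.fold_left (fun a w -> a +. exp (w -. m)) 0.0 ws)

(* weights.(t) holds the n log-weights of the executions at
   resampling point t; the result estimates the log of the
   normalizing constant. *)
let log_norm_const weights n =
  Array.fold_left
    (fun acc w_t -> acc +. log_sum_exp w_t -. log (float_of_int n))
    0.0 weights
\end{lstlisting}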
The normalizing constant can be used for Bayesian model comparison of different
scenarios; it can also be used in a nested particle \gls{mcmc}
approach, in which \gls{smc} is combined with \gls{mcmc} to estimate a
posterior distribution over birth and death rates.
Specifically, we use a consensus estimate of the divergence times of the
28 extant species of pitheciid monkeys provided by the TimeTree
project~\cite{timetree}. The tree has one trichotomy involving
\emph{Chiropotes albinasus}. We resolve this ambiguity by assuming that
\emph{C. albinasus} belongs to \emph{Chiropotes}, and that the stem lineage of
\emph{Chiropotes} existed for 0.2 Ma before branching into extant species. This
is similar to the shortest branch length observed in other parts of the tree.
The birth rate is set to 0.2~Ma$^{-1}$ and the death rate to 0.1~Ma$^{-1}$.
In summary, the input data is a tree over which we simulate a birth-death
process. Because of this, we have a mix of aligned and unaligned calls to
\texttt{weight}---aligned calls occur when traversing the nodes in the input
tree, and unaligned calls occur when simulating along edges in the tree.
\paragraph{Result.}
The result of our case study is shown in Fig.~\ref{fig:result}, using box plots
for $100$ estimates produced from \eqref{eq:norm} on our phylogenetic model for
different numbers of executions. The exact solution is available analytically
for this model and is shown with the dashed line. We see that the aligned
version gives better estimates in all cases. In addition, we measured aligned
\gls{smc} to be approximately 1.66 times faster on average than unaligned
\gls{smc} for this model.
\begin{figure}[tb]
\centering
\begin{tikzpicture}
\begin{axis}[trim axis left, trim axis right, width=0.77\textwidth,
height=5cm,
ymin=0,ymax=7,
ytick={1,2,3,4,5,6},
yticklabel style={align=right},
yticklabels={%
{Aligned, $10\,000$ executions},
{Aligned, $1000$ executions},
{Aligned, $100$ executions},
{Unaligned, $10\,000$ executions},
{Unaligned, $1000$ executions},
{Unaligned, $100$ executions},
}]
\addplot [boxplot]
table [y index=0] {case-study/smca10000};
\addplot [boxplot]
table [y index=0] {case-study/smca1000};
\addplot [boxplot]
table [y index=0] {case-study/smca100};
\addplot [boxplot]
table [y index=0] {case-study/smcu10000};
\addplot [boxplot]
table [y index=0] {case-study/smcu1000};
\addplot [boxplot]
table [y index=0] {case-study/smcu100};
\addplot [mark=none,dashed] coordinates{%
(-56.33242285520951, 0)
(-56.33242285520951, 7)};
\end{axis}
\end{tikzpicture}
\caption{The result of the case study, showing the increase in accuracy from
using aligned \gls{smc}.}
\label{fig:result}
\end{figure}
\paragraph{Discussion.}
Looking back at the nonoptimal results for unaligned \gls{smc} in
Section~\ref{sec:intro}, the improvement of aligned over unaligned \gls{smc}
(for the same number of executions) in this case study intuitively makes sense.
However, it seems that, even when using the unaligned version, the result
converges to the true value as the total number of executions increases. We can
make the same observation for the example in Fig.~\ref{fig:introex} if we reduce
the differences between the \texttt{weight}s. By, for instance, setting
\texttt{weight(85)} to \texttt{weight(5)}, and \texttt{weight(95)} to
\texttt{weight(15)}, we do get approximately the same number of executions
(taking the weights into account) for
each branch when running enough executions in total. As
long as a single execution from the \texttt{false} branch survives, it will
have much higher
weight in the end, thus offsetting the bias in the initial resampling. This
implies that unaligned \gls{smc} is most likely correct, but that an enormous
number of executions might be required, even for very simple models such as the
model in Fig.~\ref{fig:introex}.
The increase in speed from using aligned \gls{smc} most likely comes from
simply doing less resampling while running the \gls{smc} algorithm.
\section{Related work}
Naturally, the work most closely related to ours can be found in papers on
universal probabilistic programming languages using \gls{smc}, such as WebPPL
\cite{dippl}, Anglican \cite{wood2014a}, and Birch \cite{murray2018delayed}.
Both WebPPL and Anglican are higher-order, functional \glspl{ppl}, while Birch
is an imperative, object-oriented \gls{ppl}. Anglican includes many \gls{smc}
algorithms, including different variations of \emph{particle \gls{mcmc}
}~\cite{wood2014a}. Anglican also includes various \gls{mcmc} methods. WebPPL
includes fewer inference algorithms, but both \gls{smc} and \gls{mcmc} methods
are available. Birch performs \gls{smc} inference in combination with using
closed-form optimizations at runtime, automatically yielding a more optimized
version of \gls{smc} taking advantage of \emph{locally-optimal proposals} and
\emph{Rao--Blackwellization}. None of the languages above, however, address the
alignment issue presented in this article. In essence, the programmer needs to
be aware of the internals of the \gls{smc} inference algorithm to write
efficient models---\emph{the model and the inference algorithm have become
coupled}. Optimally, we would like the model and the inference to be as
independent as possible. This is the goal of the work in this paper.
There also exists more theoretical work on \gls{smc} for probabilistic
programming. One example is a recent denotational validation of \gls{smc} in
probabilistic programming given by Ścibior et
al.~\cite{scibior2017denotational}. This work also includes a denotational
validation of \emph{trace \gls{mcmc}}, another common inference algorithm for
\glspl{ppl}. Trace \gls{mcmc} has also been proven correct by Borgström et al.
\cite{borgstrom2016a} through an operational semantics for a probabilistic
untyped lambda calculus.
\section{Conclusion}
In this paper, we have introduced an approach for aligning \gls{smc} inference
in \glspl{ppl}. This approach consists of performing a static analysis using
0-CFA, and using this analysis result to automatically align \gls{smc}
inference through a program transformation. We have also evaluated this
approach on a phylogenetic model, showing significant improvements. In
conclusion, we have shown that alignment of \gls{smc} inference in
probabilistic programming can be done automatically, and that it also has a
significant effect on both execution time and accuracy.
\subsection*{Acknowledgements}
This project is financially supported by the Swedish Foundation for Strategic
Research (ASSEMBLE RIT15-0012). We would also like to thank Elias Castegren for
his helpful comments and support. |
1812.07260 | \section{Introduction}
\IEEEPARstart{I}{mage} segmentation is not a trivial task, especially for images that contain multiple objects and cluttered backgrounds. Interactive image segmentation, or image segmentation with a human in the loop, can make the region of interest more clearly defined and thus yields more accurate segmentation. Popular interactive image segmentation algorithms allow users to guide the segmentation with some feedback, \djc{in the form of seeds or line-drawings \cite{BoykovJ01,DongSSY15,FengPCC16,Grady06,GulshanRCBZ10,WangHC14,XuPCYH16,LiewWXOF17,ManinisCTG18}, contours \cite{KassWT88,MortensenB95,XianZCXD16,BadoualSUU17,XuPCYH17}, bounding boxes \cite{ChengPZTR15,RotherKB04,XuPCYH17},} or queries \cite{ChenCC16,RupprechtPN15}. In this article, we propose a novel interaction mechanism for acquiring labels via swipe gestures; see Fig.~\ref{fig:teaser} for an illustration. The main novelty is that, with the new mechanism, the user does not need to annotate meticulously to avoid crossing region boundaries while specifying the region of interest. It is particularly suitable for imprecise input interfaces such as touchscreens.
\begin{figure}[t]
\centering
\subfigure[] { \includegraphics[height=0.11\textwidth]{teaser1.png} }
\subfigure[] { \includegraphics[height=0.11\textwidth]{teaser2.png} }
\subfigure[] { \includegraphics[height=0.11\textwidth]
{teaser3.png} }
\subfigure[] { \includegraphics[height=0.11\textwidth]{teaser4.png} }
\vspace{-4mm}
\caption{\label{fig:teaser}
A label acquiring process of the SwipeCut algorithm. We present a seed proposal algorithm for assisting the user in an interactive image segmentation scenario. The algorithm singles out a few informative seeds as the queries so that the user only needs to \emph{swipe} through the relevant query seeds to specify the region of interest.
(a) An input image with five scattered crosses (in red) generated by our algorithm as the query seeds.
(b) The user's gesture (in yellow) swipes through the relevant query seeds that are inside the region of interest.
(c) The collected seeds of ROI labels (in green) and non-ROI labels (in blue) according to the user's gesture.
(d) The result of label propagation according to the labels in (c). }
\end{figure}
Consider the task of segmenting a given image to achieve acceptable segmentation accuracy. Three main factors may affect the overall processing time of the entire interactive segmentation process. The first factor is the algorithm's \emph{segmentation effectiveness} with respect to the user feedback. The second factor is the algorithm's \emph{response time} for completing one round of segmentation in the loop. The third factor is \emph{under-qualified annotations} during interaction. User annotations could be of insufficient quality due to constraints of the user interface or unfamiliarity with the segmentation algorithm; see Fig.~\ref{fig:failLabels}. Both cases increase the number of interactions required to revise the user annotations.
In addition, a careful user might intend to avoid bad annotations and thus spend more time finishing the segmentation. Therefore, under-qualified annotations could be the time bottleneck of the entire interactive segmentation process. Most existing approaches consider the first two factors to make the segmentation algorithms effective (the first factor) and efficient (the second factor) under the assumption that qualified user annotations are easy to obtain (the third factor). Our approach addresses the third factor to help users unambiguously and effortlessly label the query seeds, and thus can reduce the turnaround time of interaction. Note that some previous works propose error-tolerant segmentation algorithms for handling erroneous scribbles \cite{BaiW14,SubrPSK13}. Our approach, in contrast, attempts to prevent under-qualified annotations from being generated in the first place.
Using fingers to manipulate small touchscreen devices means every stroke contains many pixels: ``The average width of the index finger for most adults translates to about 45-57 pixels\footnote{\url{http://mashable.com/2013/01/03/tablet-friendly-website/}}.''
While it is inconvenient to draw scribbles with high precision on a touchscreen, swiping through a specified point on the touchscreen, in contrast, is much simpler.
Hence, for small touchscreen devices, an interaction mechanism that proposes a few pixels for the user to assign binary labels is more accessible. This fact motivates us to design an interaction mechanism tailored for segmenting images on small touchscreen devices. The mechanism proposes sparse pixels and acquires ROI or non-ROI labels according to whether the user swipes through those pixels or not. Since we only care about whether the proposed pixels are touched, the finger may pass through other, irrelevant pixels. This kind of interaction greatly reduces the chance of annotating wrong labels. A labeling example of the proposed algorithm is shown in Fig.~\ref{fig:teaser}.
To implement the novel interaction mechanism for segmenting images on small touchscreen devices, we first propose an effective query-seed proposal scheme built upon a two-layer graph. The two-layer graph consists of moderate-granularity vertices and large-granularity vertices, in which the large-granularity vertices formulate a higher-order soft constraint that encourages the covered moderate-granularity vertices to take the same label. We then diversify the seeds to make them sparse enough in the spatial domain for swiping with a finger. After acquiring the labels from the user's swipe gesture, we propagate the labels to all vertices by calculating the graph distance on the two-layer graph, and hence obtain the segmentation result.
One running example and an overview of the proposed approach are shown in Fig.~\ref{fig:illustration} and Fig.~\ref{fig:overview}, respectively.
The contributions and advantages of this work are summarized as follows:
\begin{enumerate}[]
\item The interaction mechanism of proposing multiple seeds for user labeling via \emph{swipe gestures}, which is tailored to small touchscreen devices, is new to the image segmentation problem.
\item The user is able to \emph{annotate unambiguously} via swiping through the seeds, since our approach makes the query seeds sparse and separated far enough from each other.
\item Our method is effective and efficient owing to the proposed informative query-seed selection and label propagation, both of which are improved by our higher-order two-layer graph that encodes label consistency.
\item The proposed approach has \emph{high flexibility} for devices with different touchscreen sizes. The number of query seeds can be adjusted according to the touchscreen size, and hence additional operations such as zoom-in/zoom-out or drag-and-drop are not needed.
\end{enumerate}
\begin{figure}[t]
\centering
\subfigure[] { \includegraphics[height=0.11\textwidth]{failLabels1.png} }
\subfigure[] { \includegraphics[height=0.11\textwidth]{failLabels2.png} }
\vspace{-3mm}
\caption{\label{fig:failLabels}
Examples of under-qualified annotations. Green strokes are ROI annotations and blue strokes are non-ROI annotations.
(a) The constrained interface might cause the blue strokes to straddle the object boundary, which would confuse a segmentation algorithm. Examples of constrained interfaces are smartphones or tablets, on which it is inconvenient to draw subtle scribbles with high precision using fingers.
(b) Unfamiliarity with the segmentation algorithm might result in redundant annotations like the green strokes. Increasing the number of ROI strokes on the dog does not improve the segmentation accuracy much. Adding a non-ROI stroke on the top corners of the image in this case would be more helpful for segmentation. }
\end{figure}
The rest of this article is organized as follows.
Section II reviews related methods on interactive image segmentation and object proposal generation.
Section III formulates the problem to be addressed.
Section IV introduces the proposed interactive image segmentation algorithm, SwipeCut.
Section V shows the experimental results.
Section VI concludes this paper.
\section{Related Work}
We roughly divide interactive image segmentation methods into two categories according to their interaction models: \emph{direct interactive image segmentation} and \emph{indirect interactive image segmentation}. We also refer to several proposal generation methods since some of the ideas and principles are shared with our algorithm.
\begin{figure*}[t]
\centering
\subfigure[] { \includegraphics[height=0.22\textwidth]{illustration1.png} }
\subfigure[] { \includegraphics[height=0.22\textwidth]{illustration2.png} }
\subfigure[] { \includegraphics[height=0.22\textwidth]{illustration3.png} }
\subfigure[] { \includegraphics[height=0.22\textwidth]{illustration5.png} }
\subfigure[] { \includegraphics[height=0.22\textwidth]{illustration10.png} }
\subfigure[] { \includegraphics[height=0.22\textwidth]{illustrationGT.png} }
\vspace{-3mm}
\caption{\label{fig:illustration} A running example of our approach. In (a)-(e), the top images show the scattered crosses (in red) corresponding to five query seeds per round, and the bottom images show the corresponding results of label propagation. (a) The result of the first round. (b) The third round. (c) The fifth round. (d) The seventh round. (e) The tenth round. (f) The ground truth (top) and the segmentation result of the 30th round (bottom).}
\end{figure*}
\paragraph{Direct interactive image segmentation.}
\djc{Many well-known interactive image segmentation algorithms are in this category, \textit{e.g.}, \cite{BoykovJ01,ChengPZTR15,DongSSY15,FengPCC16,WangHC14,Grady06,GulshanRCBZ10,KassWT88,MortensenB95,RotherKB04,XianZCXD16,BadoualSUU17,XuPCYH16,XuPCYH17,LiewWXOF17,ManinisCTG18}, in which the user directly specifies the location of each label via seeds/scribbles \cite{BoykovJ01,DongSSY15,FengPCC16,Grady06,GulshanRCBZ10,WangHC14,XuPCYH16,LiewWXOF17,ManinisCTG18}, contours \cite{KassWT88,MortensenB95,XianZCXD16,BadoualSUU17,XuPCYH17}, or bounding boxes \cite{ChengPZTR15,RotherKB04,XuPCYH17}. These algorithms use {\em graph cuts}, {\em random walks}, {\em level set}, {\em geodesic distance}, or {\em deep network} methods to segment the images according to the user annotations.}
In general, different assignments of label locations often yield different segmentation results, which means that the user has the responsibility to specify good label locations for generating satisfactory segmentation results.
In contrast, our algorithm takes the responsibility to actively explore the informative image regions as the query seeds for the user.
\paragraph{Indirect interactive image segmentation.}
Another line of work is indirect interactive image segmentation \cite{BatraKPLC10,FathiBRR11,KowdleCGC11,RupprechtPN15,StraehleKKBDH12}, in which the algorithms usually {\em recommend} several uncertain regions to the user, and then adopt the user-selected regions for updating the segmentation results.
Batra~\textit{et al.} \cite{BatraKPLC10} propose a co-segmentation algorithm that provides the suggestion about where the user should draw scribbles next.
Based on the active learning method, Fathi~\textit{et al.} \cite{FathiBRR11} present an incremental self-training video segmentation method to ask the user to provide annotations for gradually labeling the frames.
For scene reconstruction, Kowdle~\textit{et al.} \cite{KowdleCGC11} also employ an active learning algorithm to query the user's scribbles about the uncertain regions.
To segment a large 3D dataset, Straehl~\textit{et al.} \cite{StraehleKKBDH12} provide various uncertainty measurements to suggest candidate locations to the user, and then segment the dataset using the watershed cut according to the user-selected locations.
Rupprecht~\textit{et al.} \cite{RupprechtPN15} model the segmentation uncertainty as a probability distribution over a set of sampled figure-ground segmentations; the collected segmentations are then used to find the most uncertain region, whose label is requested from the user.
Chen~\textit{et al.} \cite{ChenCC16} select the query pixel with the highest uncertainty according to a transductive inference measurement.
This category of interactive image segmentation proposes candidate label locations for the user, which eases the user's responsibility of selecting good label locations for guiding the segmentation.
However, to provide the user with candidate label locations, the algorithms in this category usually take a perceptible amount of time to estimate the label locations, and the user usually has to carefully label these locations in several clicks per round. In contrast, our seed proposal is very efficient and the user only has to effortlessly and unambiguously provide one swipe stroke per round.
\paragraph{Proposal generation.}
The purpose of object proposal generation \cite{ArbelaezPBMM14,CarreiraS10,ManenGG13,UijlingsSGS13,WangZLZJW15,XiaoLTLT15} is to provide a relatively small set of bounding boxes or segments covering probable object locations in an image, so that an object detector does not have to examine exhaustively all possible locations in a sliding window manner.
To increase the recall rate for object detection, a common solution in proposal generation is to diversify the proposals.
For example, Carreira and Sminchisescu \cite{CarreiraS10} present a diversifying strategy, which is based on the maximal marginal relevance measure \cite{CarbonellG98}, to improve the object detection recall.
Besides diversifying the proposals in spatial domain, diversifying the proposals by their similarities in feature domain has also been adopted \cite{ManenGG13,UijlingsSGS13,WangZLZJW15,XiaoLTLT15}.
In a similar manner, we diversify the selected query seeds in spatial and feature domains to improve the segmentation recall derived from the relatively small set of query seeds.
\begin{figure*}[t]
\centering
\includegraphics[width=0.86\textwidth]{overview.png}
\vspace{-3mm}
\caption{An overview of our interactive image segmentation algorithm. Each input image is first represented by a weighted two-layer graph. At each interaction, we select the $\theta_k$ top-scored and diversified seeds to acquire true labels from the user. Then, the labeled seeds are used to propagate the labels to all other unlabeled vertices for generating the corresponding segmentation. The next interaction can then be launched with new query seeds derived from the information of label propagation.}
\label{fig:overview}
\end{figure*}
\section{Problem Statement}
Consider an image $\mathcal{I}$ that is represented as a graphical model over a vertex set $\mathcal{V} = \{ v_1, \cdots, v_{|\mathcal{V}|} \}$, where a vertex can be, for example, a pixel or a superpixel, or even an aggregation of neighboring superpixels. Assume that we have two kinds of labels $\mathsf{1}$ and $\mathsf{0}$, where $\mathsf{1}$ denotes the ROI and $\mathsf{0}$ denotes the non-ROI. If there exists a set of seeds $\mathcal{Q} = \{ q_1, \cdots, q_{|\mathcal{Q}|} \} \subseteq \mathcal{V}$ for the user to label, then the user's labeling can be defined as a mapping $\eta: \mathcal{Q} \rightarrow \{\mathsf{0}, \mathsf{1}\}$, and hence the labeled-seed set can be defined as $\mathcal{U} = \{ \eta(q_1), \cdots, \eta(q_{|\mathcal{Q}|}) \}$. Based on the information \djc{from} the labeled-seed set $\mathcal{U}$, an interactive image segmentation algorithm aims to automatically partition the entire vertex set $\mathcal{V}$ into a non-ROI vertex set $\mathcal{V}_\mathsf{0} \subseteq \mathcal{V}$ and an ROI vertex set $\mathcal{V}_\mathsf{1} \subseteq \mathcal{V}$. In general, a segmentation algorithm may include a label propagation procedure, which can be defined as another mapping $\pi:\mathcal{V} \rightarrow [\mathsf{0}, \mathsf{1}]$ with $\mathcal{U}$ as its hints. We denote the segmentation generated via the label propagation mapping $\pi$ as $\mathbf{s}^\mathbf{\pi}$, and the set of all possible segmentations of $\mathcal{V}$ as $\mathcal{S} =\{\mathsf{0}, \mathsf{1}\}^{|\mathcal{V}|}$.
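For concreteness, the objects above can be summarized in code. The following is a minimal, illustrative sketch of ours (the names and the placeholder propagation rule are not part of the formulation):
\begin{verbatim}
# Minimal sketch of the problem objects (illustrative only).
from typing import Callable, Dict, List

Vertex = int   # an element of V, e.g. a superpixel index
Label = int    # 1 = ROI, 0 = non-ROI

def user_labeling(queries: List[Vertex],
                  eta: Callable[[Vertex], Label]) -> Dict[Vertex, Label]:
    # The mapping eta: Q -> {0, 1} applied to a query set Q.
    return {q: eta(q) for q in queries}

def propagate(vertices: List[Vertex],
              labeled: Dict[Vertex, Label]) -> Dict[Vertex, float]:
    # The mapping pi: V -> [0, 1]; this placeholder keeps the
    # known labels and marks all other vertices as uncertain.
    return {v: float(labeled.get(v, 0.5)) for v in vertices}
\end{verbatim}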
\subsection{Interactive Image Segmentation}
The general interactive image segmentation problem can be formulated as follows: Given an image $\mathcal{I}$ and a label propagation algorithm $\pi$, the user specifies a labeled-seed set $\mathcal{U}$ to make the machine-generated segmentation $\mathbf{s}^\pi$ approach the expected segmentation $\mathbf{s}^*$.
We use a conditional probability $p( \mathbf{s}^\pi = \mathbf{s}^* |\mathcal{U})$ over $\mathcal{S}$ to state how likely it is for a segmentation $\mathbf{s}^\pi$ to approximate $\mathbf{s}^*$. For brevity, we denote the probability as $p( \mathbf{s}^\pi |\mathcal{U})$ in the rest of the paper. We would like to model the distribution over $\mathcal{S}$, but exhaustive computation of $p( \mathbf{s}^\pi |\mathcal{U})$ over $\mathcal{S}$ is intractable, since the cardinality of $\mathcal{S}$ is extremely large ($2^{|\mathcal{V}|}$). There are basically two strategies to make $\mathbf{s}^\pi$ approach $\mathbf{s}^*$ and hence to maximize the conditional probability $p( \mathbf{s}^\pi |\mathcal{U})$. The first is to improve the performance of the label propagation algorithm $\pi$; the second is to improve the quality of the seeds $\mathcal{Q}$. Improving label propagation may help to achieve better label inference for unlabeled vertices. \djc{Many interactive image segmentation algorithms have explored this direction~\cite{BoykovJ01,ChengPZTR15,DongSSY15,FengPCC16,WangHC14,Grady06,GulshanRCBZ10,KassWT88,MortensenB95,RotherKB04,XianZCXD16,BadoualSUU17,XuPCYH16,XuPCYH17,LiewWXOF17,ManinisCTG18}. With the aid of deep networks, the deep-learning-based algorithms \cite{XuPCYH16,XuPCYH17,LiewWXOF17,ManinisCTG18} show especially good performance in this direction. However, the issue of seed selection is left to the user.} Our approach, on the other hand, focuses on how to select the informative query seeds $\mathcal{Q}$ for the user to annotate and therefore eases the annotation burden.
\subsection{Diversified Seed Proposals}
Given a label propagation algorithm $\pi$, in order to make a segmentation $\mathbf{s}^\pi$ approach the expected $\mathbf{s}^*$ in fewer rounds, we propose to select the \emph{informative query seeds} $\mathcal{Q}$ under the criterion of maximizing the improvement on conditional probability $p(\mathbf{s}^\pi|\mathcal{U})$.
Our idea of selecting new query seeds $\mathcal{Q}'$ at each round of interaction can be formulated as the following optimization problem:
\begin{equation}\label{eq:funcProblem}
\begin{aligned}
\underset{\footnotesize \begin{array}{c} \mathcal{Q}' \subseteq \mathcal{V} \\ |\mathcal{Q}'|=\theta_k\end{array}}{\text{max}}
& \sum_{v_i \in \mathcal{Q}'} p(\mathbf{s}^\pi | \widetilde{\mathcal{U}} \cup \eta(v_i) ) - p(\mathbf{s}^\pi | \widetilde{\mathcal{U}}) \\
\text{s.t.} \quad
& d(v_i, v_j) \geq \theta_d, \, \forall v_i,v_j \in \mathcal{Q}' \,,
\end{aligned}
\end{equation}
where $\theta_k$ denotes the number of query seeds selected at each round, $\eta(v_i)$ defines the label of $v_i$, $\widetilde{\mathcal{U}}$ denotes all labels obtained in the previous rounds, $d(\cdot, \cdot)$ computes the Euclidean distance, and $\theta_d$ denotes the minimum spatial distance between seeds on the touchscreen. The objective function in Eq.~(\ref{eq:funcProblem}) aims to find a new query set $\mathcal{Q}'$ of size $\theta_k$ that yields the maximal improvement. The distance constraint guarantees that the query seeds are separated far enough from each other on the touchscreen so that the user is able to swipe through the seeds effortlessly and unambiguously.
Expanding the set of labeled seeds $\widetilde{\mathcal{U}}$ is always helpful for approximating the expected segmentation $\mathbf{s}^*$ by $\mathbf{s}^\pi$, because more hints can be obtained from the user. However, the difficulty in optimizing Eq.~(\ref{eq:funcProblem}) is how to select the most informative query seeds, i.e., those that increase the probability the most. Since the expected segmentation $\mathbf{s}^*$ is not given, it is hard to evaluate the contribution of each query seed. We tackle the problem through an observation: if the results of two label propagations are similar, their conditional probabilities should also be similar, and thus the corresponding seeds do not yield significant improvements. Therefore, we propose to select the vertices that have a higher chance of producing a greater change in label propagation as the informative query seeds. The selection is carried out via our query-seed proposal scheme, which is described later in Section \ref{sec:seedProposal}.
In \djc{summary}, to make the estimated segmentation $\mathbf{s}^\pi$ approach the expected segmentation $\mathbf{s}^*$ more quickly in fewer interactions, we select and propose the informative query seeds $\mathcal{Q}'$ to the user for acquiring reliable labels that can be used to guide the label propagation algorithm $\pi$. Furthermore, Eq.~(\ref{eq:funcProblem}) is designed not only for proposing the informative query seeds $\mathcal{Q}'$ but also for making sure that the new query seeds $\mathcal{Q}'$ are easy to label via swipe gestures.
\section{Approach}
An intuitive description of our interactive segmentation approach is as follows. We represent the input image as a weighted two-layer graph. At each round of user-machine interaction, we propose $\theta_k$ query seeds to acquire the true labels from the user. Then, the labeled vertices propagate their labels to the remaining unlabeled vertices, and thus yield the corresponding segmentation. According to the clues of label propagation, another $\theta_k$ query seeds are then proposed to the user for the next interaction. In the experiments, the user-machine interaction is repeated until a predefined number of rounds has been performed. An overview of our approach is shown in Fig.~\ref{fig:overview}.
As previously mentioned, we use the conditional probability $p(\mathbf{s}^\pi | \mathcal{U})$ to state how likely the estimated segmentation $\mathbf{s}^\pi$ is equal to the expected segmentation. We use an EM-like procedure to maximize the conditional probability by alternately performing {\em i}) a query-seed proposal scheme to find $\mathcal{Q}'$ with respect to Eq.~(\ref{eq:funcProblem}) and {\em ii}) a label propagation scheme to find $\mathbf{s}^\pi$ guided by $\mathcal{U}$.
\subsection{Query-seed Proposal Scheme} \label{sec:seedProposal}
Selecting query seeds is based on \emph{seed assessment} and \emph{seed diversification}, which jointly find approximate solutions to Eq.~(\ref{eq:funcProblem}).
The step of seed assessment ranks the query seeds according to \emph{proposal confidence} and \emph{proposal influence}.
The step of seed diversification is to satisfy the constraint in Eq.~(\ref{eq:funcProblem}).
We denote the labeled vertex set and the unlabeled vertex set as $\mathcal{V}_L$ and $\overline{\mathcal{V}_L}$, respectively. The query-seed proposal scheme extracts a $\theta_k$-element subset $\mathcal{Q}' \subseteq \overline{\mathcal{V}_L}$ per round to acquire their true labels from the user. After the labels of $\mathcal{Q}'$ are acquired, we merge $\mathcal{Q}'$ into $\mathcal{V}_L$.
\subsubsection{Seed Assessment} \label{sec:assessment}
While building the query seed set $\mathcal{Q}'$, we use an assessment function $f:\mathcal{V} \rightarrow \mathbb{R}$ to assign each unlabeled seed $v_i$ a value $f(v_i)$ that accounts for \djc{all labeled vertices in the previously labeled set $\widetilde{\mathcal{Q}}$, which is the union of two disjoint sets $\widetilde{\mathcal{Q}}_\mathsf{0}$ and $\widetilde{\mathcal{Q}}_\mathsf{1}$ according to the types of labels. The function $f$ has the following form:
\begin{equation}\label{eq:funcScore}
f(v_i|\widetilde{\mathcal{Q}}) = \Phi(v_i|\widetilde{\mathcal{Q}}) + \theta_s \Psi(v_i) \,,
\end{equation}
where $\widetilde{\mathcal{Q}}=\widetilde{\mathcal{Q}}_\mathsf{0} \cup \widetilde{\mathcal{Q}}_\mathsf{1}$ and $\theta_s$ is a weighting factor. We use $\theta_s = 0.7$ in the experiments.}
The first term $\Phi$ in Eq.~(\ref{eq:funcScore}) computes the proposal confidence. Let $\rho:\mathcal{V} \times \mathcal{V} \rightarrow \mathbb{R}$ denote a metric that estimates the graph distance\footnote{Note that the graph distance here is weighted with respect to the adopted features, not merely defined in the spatial domain.} of each unlabeled vertex to any specified vertex. Vertices within a short graph distance have a high chance of sharing the same label, and thus are more likely to be redundant queries. Hence, a vertex that is distant from the labeled vertices should be more informative and suitable to be selected as a query for acquiring a label. We therefore let the proposal confidence of a vertex be proportional to its graph distance to the nearest labeled vertex.
The second term $\Psi$ in Eq.~(\ref{eq:funcScore}) calculates the proposal influence. We use this term to define the influence of a vertex. This term is inspired by semi-supervised learning \cite{ChapelleWS02,ZhouBLWS03} with the assumption of \emph{label consistency}. Label consistency means that vertices on the same manifold structure, or nearby vertices, are likely to have the same label. A vertex with more similar vertices around it should have a larger influence, since more vertices are likely to share its label. We let the proposal influence of a vertex be proportional to the number of similar neighboring vertices around it.
Any graph distance measurement and clustering algorithm can be used to estimate the graph distance and to extract the similar neighboring vertices.
We choose to use the shortest path to estimate the graph distance for the proposal confidence term, and use a minimum spanning tree algorithm to extract similar neighboring vertices for the proposal influence term.
The two algorithms are chosen for their computational efficiency. Section \ref{sec:implementation} details the implementation of the two terms.
\subsubsection{Seed Diversification}
\label{sec:diversification}
Since we would like to acquire $\theta_k$ true labels from the user via swipe gestures per interaction, the multiple query seeds should be sufficiently distant from one another, as modeled in the constraint of Eq.~(\ref{eq:funcProblem}), so that the user is able to swipe through the seeds effortlessly and unambiguously.
The seed diversification step sorts the query vertices from high assessment values to low assessment values and then performs non-maximum suppression: if a vertex lies within a radius of $\theta_d$ pixels of any already selected higher-valued vertex, we skip it and move on to the next one, until we obtain $\theta_k$ vertices in total as the query seeds. Note that the skipped vertices may be reconsidered in subsequent rounds. A concrete sketch of this step is given below.
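The following Python sketch is our own rendering of this step, assuming each candidate vertex has an assessment score and a centroid pixel coordinate:
\begin{verbatim}
import math
from typing import Dict, List, Tuple

def diversify_seeds(scores: Dict[int, float],
                    coords: Dict[int, Tuple[float, float]],
                    theta_k: int, theta_d: float) -> List[int]:
    # Greedy non-maximum suppression: visit vertices from high
    # to low assessment value; keep a vertex only if it is at
    # least theta_d pixels from every seed kept so far.
    selected: List[int] = []
    for v in sorted(scores, key=scores.get, reverse=True):
        if all(math.dist(coords[v], coords[u]) >= theta_d
               for u in selected):
            selected.append(v)
            if len(selected) == theta_k:
                break
    return selected
\end{verbatim}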
\subsection{Label Propagation Scheme}\label{sec:labelProp}
Label propagation is used to propagate the known labels to all other unqueried vertices.
A segmentation result can be obtained by directly assigning each vertex the same label as its closest vertex that has already been labeled by the user.
Here we again use the shortest path on the graph, as in seed assessment, to compute the closeness between vertices for label propagation.
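A minimal sketch of this propagation rule, in our own notation, is given below; \texttt{dist} stands for the geodesic distance $\Phi$ detailed in Section~\ref{sec:implementation}, and at least one labeled vertex is assumed:
\begin{verbatim}
from typing import Callable, Dict, Iterable

def propagate_labels(vertices: Iterable[int],
                     labeled: Dict[int, int],
                     dist: Callable[[int, int], float]) -> Dict[int, int]:
    # Each unlabeled vertex copies the label of its geodesically
    # nearest user-labeled vertex (assumes labeled is non-empty).
    out = dict(labeled)
    for v in vertices:
        if v not in out:
            nearest = min(labeled, key=lambda q: dist(v, q))
            out[v] = labeled[nearest]
    return out
\end{verbatim}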
\subsection{Implementation Details}\label{sec:implementation}
\subsubsection{Graph Construction}
We design a two-layer weighted graph $\mathcal{G}=(\mathcal{V},\mathcal{E},\omega)$ for selecting the query seeds and generating the segmentation. The graph consists of moderate-granularity vertices (superpixel-level) and large-granularity vertices (tree-level). The tree-level vertices are used to \emph{guide the superpixel-level vertices} in query-seed assessment and label propagation. The tree-level vertices merely serve to encode the aforementioned assumption of \emph{label consistency}, and we do not consider them as query seeds.
\textbf{Vertices.}
We first over-segment an input image into a set of superpixels $\mathcal{R} = \{ r_1, r_2, \cdots, r_{|\mathcal{R}|} \}$ using the SLIC algorithm \cite{AchantaSSLFS12}. The set $\mathcal{R}$ is then partitioned into a minimum-spanning-tree (MST) set $\mathcal{T} = \{ t_1, t_2, \cdots, t_{|\mathcal{T}|} \}$ using the Felzenszwalb-Huttenlocher (FH) algorithm \cite{FelzenszwalbH04}. For each tree $t_i$ in the FH algorithm, the $\tau$ function is used as a threshold function for merging superpixels and is defined as
\begin{equation}\label{eq:tau}
\tau(t_i) = \frac{\theta_t}{|t_i|} \,,
\end{equation}
where $|t_i|$ is the size of $t_i$ in pixels, and $\theta_t$ controls the number of trees. \djc{In the FH algorithm, two spatially adjacent MSTs are merged if the feature difference between them is smaller than their respective internal feature differences. Since the value of $\tau$ is embedded as a fundamental internal feature difference of each MST, a larger value of $\theta_t$ yields a higher internal feature difference and thus encourages merging. Therefore, a larger $\theta_t$ means that fewer yet larger trees will be generated.}
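To make the role of $\tau$ explicit, the merge test can be sketched as follows. This is our reading of the standard FH criterion; \texttt{int\_i} and \texttt{int\_j} stand for the internal feature differences of the two trees, and the default $\theta_t$ follows the setting used in our experiments:
\begin{verbatim}
def tau(size_px: int, theta_t: float = 3000.0) -> float:
    # The threshold function tau(t) = theta_t / |t|.
    return theta_t / size_px

def should_merge(diff_between: float,
                 int_i: float, size_i: int,
                 int_j: float, size_j: int,
                 theta_t: float = 3000.0) -> bool:
    # Merge two spatially adjacent trees when the smallest
    # inter-tree feature difference does not exceed either
    # internal difference augmented by its tau term.
    return diff_between <= min(int_i + tau(size_i, theta_t),
                               int_j + tau(size_j, theta_t))
\end{verbatim}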
\djc{The minimum-spanning-tree algorithm is used to construct the tree-level vertices, in which each tree-level vertex is associated with some superpixel-level vertices. Fig.~\ref{fig:overview} shows a schematic diagram of the vertex sets $\mathcal{R}$ and $\mathcal{T}$. In our approach, the tree-level vertices provide shortcuts between superpixel-level vertices and information for calculating the proposal influence.}
Given the vertex set $\mathcal{V} = \{ \mathcal{R} \cup \mathcal{T} \}$, we have to compute the features of each vertex. We use the normalized color histograms $h_c$ with 25 bins for each CIE-Lab color channel. We also include the texture feature consisting of Gaussian derivatives in eight orientations, which are quantized by magnitude to form a normalized texture histogram $h_t$ with ten bins for each orientation per color channel. Hence, each vertex $v_i \in \mathcal{V}$ is represented by the histograms $h_c$ and $h_t$.
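The color part of this feature computation can be sketched as below (our own sketch; it assumes the pixels of a vertex are given in an 8-bit CIE-Lab encoding, and the texture histogram $h_t$ would be built analogously from the quantized Gaussian-derivative responses):
\begin{verbatim}
import numpy as np

def color_histogram(lab_pixels: np.ndarray) -> np.ndarray:
    # lab_pixels: shape (n_pixels, 3), 8-bit CIE-Lab values.
    # Returns the concatenation of one normalized 25-bin
    # histogram per channel, i.e. the color feature h_c.
    hists = []
    for c in range(3):
        h, _ = np.histogram(lab_pixels[:, c], bins=25,
                            range=(0.0, 255.0))
        hists.append(h / max(h.sum(), 1))
    return np.concatenate(hists)
\end{verbatim}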
\textbf{Edges.}
The edge set $\mathcal{E}$ is defined according to the vertex types. An edge $e_{ij} \in \mathcal{E}$ exists if 1) two vertices $r_i,r_j \in \mathcal{R}$ are adjacent, 2) two vertices $t_i,t_j \in \mathcal{T}$ are adjacent, or 3) vertex $r_i \in \mathcal{R}$ is included in its corresponding tree-level vertex $t_j \in \mathcal{T}$. \djc{Here, by `adjacent' we mean two vertices are adjacent spatially. Please refer to the green lines of the `Graph Construction' in Fig.~\ref{fig:overview}.}
Given two vertices $v_i, v_j \in \{\mathcal{R} \cup \mathcal{T}\}$, we use the following equation \cite{GrundmannKHE10a} to measure the inter-vertex feature distance:
\begin{equation} \label{eq:colorTexture}
\omega(v_i,v_j) = \bigg (\, 1-\left(1-\chi^2_{h_c}(v_i,v_j) \right) \left(1-\chi^2_{h_t}(v_i,v_j) \right) \, \bigg )^2~,
\end{equation}
where $\chi^2_{h_c}(v_i,v_j)$ is the $\chi^2$ color distance and $\chi^2_{h_t}(v_i,v_j)$ is the $\chi^2$ texture distance. The equation makes the inter-vertex feature distance close to zero only if both the color and texture distances are close to zero.
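For illustration, Eq.~(\ref{eq:colorTexture}) translates directly into code. The sketch below is ours and uses the common $\chi^2$ histogram distance normalized to $[0,1]$:
\begin{verbatim}
import numpy as np

def chi2(h1: np.ndarray, h2: np.ndarray,
         eps: float = 1e-12) -> float:
    # Chi-squared distance between two normalized histograms;
    # lies in [0, 1] when both histograms sum to one.
    return 0.5 * float(np.sum((h1 - h2) ** 2 / (h1 + h2 + eps)))

def omega(hc_i, hc_j, ht_i, ht_j) -> float:
    # Inter-vertex feature distance: close to zero only if both
    # the color and the texture distances are close to zero.
    return (1.0 - (1.0 - chi2(hc_i, hc_j))
                * (1.0 - chi2(ht_i, ht_j))) ** 2
\end{verbatim}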
Having defined the two-layer weighted graph $\mathcal{G}=(\mathcal{R} \cup \mathcal{T},\mathcal{E},\omega)$, we can now estimate the similarities between vertices and labeled seeds using the notion of the shortest path on a graph \cite{GulshanRCBZ10,KrahenbuhlK14,RupprechtPN15,WangSP15,WeiWZS12}.
\subsubsection{Graph Distance}
Given the two-layer graph $\mathcal{G}=(\mathcal{V},\mathcal{E},\omega)$ with a specified vertex $v_j \in \mathcal{V} = \{\mathcal{R} \cup \mathcal{T}\}$, the geodesic distance $\Phi(v_i|v_j)$ of the shortest path from vertex $v_i$ to the specified vertex $v_j$ is defined as the accumulated edge weights along the path.
The geodesic distance function $\Phi$ can be defined as
\begin{equation} \label{eq:geoDist}
\Phi(v_i|v_j) = \min_{v'_1=v_i,\ldots,v'_m=v_j} \sum_{k=1}^{m-1} \omega(v'_k,v'_{k+1}), \forall v'_k,v'_{k+1} \in \mathcal{V} \,,
\end{equation}
where $m$ denotes the path length.
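In practice, $\Phi$ can be computed with Dijkstra's algorithm on the weighted two-layer graph. The sketch below (ours) returns the geodesic distance from a source vertex to every reachable vertex:
\begin{verbatim}
import heapq
from typing import Dict, List, Tuple

def geodesic_from(adj: Dict[int, List[Tuple[int, float]]],
                  source: int) -> Dict[int, float]:
    # adj maps a vertex to its weighted neighbors (v, omega).
    # Returns Phi(v | source) for every reachable vertex v.
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
\end{verbatim}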
\subsubsection{Proposal Confidence and Segmentation}
According to the geodesic distance function $\Phi$ in Eq.~(\ref{eq:geoDist}), a shorter distance between two vertices means they have higher label similarity, and we use the $\Phi$ function to derive seed assessment in Eq.~(\ref{eq:funcScore}) and label propagation in Section \ref{sec:labelProp}, \textit{i.e.}, the mapping function $\pi$ from $\mathcal{V}$ to $[0,1]$.
\subsubsection{Proposal Influence}
The influence of each vertex $v_i$ is defined as
\djc{
\begin{equation}\label{eq:szTree}
\Psi(v_i) = \frac{ \max \big\{ |t_j| \, \big\rvert v_i \in t_j \big\} }{| \mathcal{I} |}, \forall v_i \in \mathcal{V}, t_j \in \mathcal{T} \,,
\end{equation}
}
where $|\cdot|$ denotes the size of a vertex in pixels. A larger numerator means that the superpixel-level vertex $v_i$ is included in a tree-level vertex $t_j$ together with more similar superpixel-level vertices, and hence $v_i$ has higher influence.
\subsubsection{The Seed Assessment Function}
By plugging Eq.~(\ref{eq:geoDist}) and Eq.~(\ref{eq:szTree}) into Eq.~(\ref{eq:funcScore}), we can define the assessment function for each vertex $v_i$ with respect to \djc{the previously labeled vertex set $\widetilde{\mathcal{Q}}$:
\begin{equation}\label{eq:obj}
f(v_i|\widetilde{\mathcal{Q}}) = \Phi(v_i|\widetilde{\mathcal{Q}}) + \theta_s \Psi(v_i), \; \forall v_i \in \mathcal{R} \,.
\end{equation} }
The seed assessment criterion of Eq.~(\ref{eq:obj}) and the seed diversification constraint described in Section \ref{sec:diversification} are used for choosing the \djc{superpixel-level} vertices as the query seeds that help to solve the optimization in Eq.~(\ref{eq:funcProblem}).
Each selected \djc{superpixel-level} vertex should have a distinct feature, belong to a tree of larger size, and be at a larger spatial distance from the previously labeled vertex set $\widetilde{\mathcal{Q}}$.
Note that we always use the centroid pixel of the selected vertex to represent the query seed on the display. The first query in our algorithm is the centroid superpixel of the largest tree, since the labeled vertex set is initially empty. The selection of subsequent seeds then follows the rule of Eq.~(\ref{eq:obj}), as sketched below.
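Putting the pieces together, the per-vertex assessment of Eq.~(\ref{eq:obj}) can be sketched as follows (our own sketch; \texttt{tree\_size} maps a superpixel-level vertex to the pixel size of its containing tree, and \texttt{dist} is the geodesic distance $\Phi$):
\begin{verbatim}
from typing import Callable, Dict, Iterable

def assessment(v: int,
               labeled: Iterable[int],
               dist: Callable[[int, int], float],
               tree_size: Dict[int, int],
               image_px: int,
               theta_s: float = 0.7) -> float:
    # f(v | Q~) = Phi(v | Q~) + theta_s * Psi(v): the distance to
    # the nearest labeled vertex plus the influence term, i.e.
    # the containing tree's size relative to the whole image.
    labeled = list(labeled)
    phi = min(dist(v, q) for q in labeled) if labeled else 0.0
    psi = tree_size[v] / image_px
    return phi + theta_s * psi
\end{verbatim}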
\section{Experimental Results}
We conduct four kinds of experiments to evaluate our approach in depth.
The first experiment compares the segmentation accuracy of different parameter settings in our approach.
The second and the third experiments compare our approach with state-of-the-art algorithms in terms of segmentation accuracy and computation time, where the interaction scenario involves either one seed or multiple seeds per interaction.
The fourth experiment provides the user study.
More experimental results can be found in the supplementary material.
\textbf{Datasets.}
Fig.~\ref{fig:datasetGt} illustrates some examples of the ground-truth segments of the six datasets used in our experiments.
\begin{enumerate}[]
\item \emph{SBD} \cite{GouldFK09}: This dataset contains 715 natural images. Each image has on average $4.22$ ground-truth segments in each individual annotation.
\item \emph{ECSSD} \cite{ShiYXJ16}: This dataset contains 1000 natural images. Each image has on average $1.0$ ground-truth segment in each individual annotation.
\item \emph{MSRA}\footnote{The individual annotations are provided by Achanta~\textit{et~al.} \cite{AchantaHES09}. The natural images are provided by Liu~\textit{et al.} \cite{LiuSZTS07}.} \cite{AchantaHES09}: This dataset contains 1000 natural images. Each image has on average $1.0$ ground-truth segment in each individual annotation.
\item \emph{VOC} \cite{VOC07}: We use the {\em trainval} segmentation set, which contains 422 images. Each image has on average $2.87$ ground-truth segments in each individual annotation.
\item \emph{BSDS} \cite{FowlkesMM07}:
This dataset contains 300 natural images.
Each image has several hand-labeled segmentations as the ground truths, with on average $20.37$ ground-truth segments in each individual annotation.
\item \emph{IBSR}\footnote{The MR brain data sets and their manual segmentations were provided by the Center for Morphometric Analysis at Massachusetts General Hospital and are available at http://www.cma.mgh.harvard.edu/ibsr/.}: There are 18 subjects in this dataset. For each subject we extract 90 brain slices, ranging from the 20th to the 109th slice.
\end{enumerate}
\begin{figure}[t]
\centering
\subfigure[SBD] { \includegraphics[height=0.09\textwidth]{SBDgt.png} }
\subfigure[{\tiny ECSSD}]{ \includegraphics[height=0.09\textwidth]{ECSSDgt.png} }
\subfigure[MSRA] { \includegraphics[height=0.09\textwidth]{MSRAgt.png} }
\subfigure[VOC] { \includegraphics[height=0.09\textwidth]{VOCgt.png} }
\subfigure[BSDS] { \includegraphics[height=0.09\textwidth]{BSDSgt.png} }
\subfigure[IBSR] { \includegraphics[height=0.09\textwidth]{IBSRgt.png} }
\vspace{-3mm}
\caption{\label{fig:datasetGt} Examples of the ground-truth segments from each dataset. Each color denotes a ground-truth ROI except the black color in (b-d) and the creamy-white color in (d). Note that we only show one human annotation in (e) for better visualization.}
\end{figure}
\begin{figure*}[t]
\centering
\subfigure[VOC-$|\mathcal{R}|$]
{ \includegraphics[width=0.23\textwidth]{f2_VOC_medianDice.png} }
\subfigure[VOC-$\theta_t$]
{ \includegraphics[width=0.23\textwidth]{f3_VOC_medianDice.png} }
\subfigure[VOC-$|t|$]
{ \includegraphics[width=0.23\textwidth]{f4_VOC_medianDice.png} }
\subfigure[VOC-$\theta_s$]
{ \includegraphics[width=0.23\textwidth]{f5_VOC_medianDice.png} }
\vspace{-3mm}
\caption{\label{fig:comparison1} Comparison results on various parameter settings of our approach. Each sub-figure depicts the median Dice score as the segmentation accuracy against the number of interactions. The complete results are shown in the supplementary material.}
\end{figure*}
\textbf{Evaluation metric.}
For evaluating the segmentation accuracy, every segment in each individual annotation is considered an ROI. For each ROI, we perform 30 rounds of interactive segmentation. The evaluation metric used to measure the segmentation quality is the median of the Dice score\footnote{For a fair comparison, we evaluate our approach with the same metric as \cite{ChenCC16,RupprechtPN15}.}. The Dice score \cite{Soensen48} is defined as
\begin{equation}\label{eq:funcDice}
\mathrm{dice}(C,G) = \frac{2|C \bigcap G|}{|C|+|G|} \,,
\end{equation}
where $C$ denotes the computer-generated segmentation and $G$ denotes the ground-truth segmentation.
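For reference, the metric is straightforward to compute on binary masks; the following sketch is ours:
\begin{verbatim}
import numpy as np

def dice(seg: np.ndarray, gt: np.ndarray) -> float:
    # Dice score between a binary machine-generated
    # segmentation C and a binary ground truth G.
    c, g = seg.astype(bool), gt.astype(bool)
    inter = np.logical_and(c, g).sum()
    return 2.0 * float(inter) / max(int(c.sum() + g.sum()), 1)
\end{verbatim}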
\subsection{Effects of Different Parameter Settings}
We compare four different parameter settings on the MSRA dataset to explore the properties of the proposed algorithm. The performance is evaluated by the segmentation accuracy against the number of interactions.
Fig.~\ref{fig:comparison1}a shows the comparison results of choosing different settings for the number of superpixels during graph construction. For reference, we plot the optimal segmentation accuracy that can be achieved by the different settings in dashed lines. The optimal segmentation accuracy is obtained by assigning all superpixels the `correct' labels, which is equivalent to performing infinite rounds of interactions. Based on this experiment, we choose to use 700 superpixels in the subsequent experiments. Fig.~\ref{fig:comparison1}b and Fig.~\ref{fig:comparison1}c compare different settings for building the minimum spanning trees. Setting a larger value of $\theta_t$ favors constructing larger trees. The value $|t|$ denotes the minimum size constraint on each tree, which merges trees smaller than a certain multiple of the average superpixel size into their adjacent trees. Based on the experimental results, we set $\theta_t=3{,}000$ and require that the minimum tree must contain at least three superpixels. Fig.~\ref{fig:comparison1}d compares different settings of the weighting factor in the seed assessment function in Eq.~(\ref{eq:funcScore}). The legend `only' in Fig.~\ref{fig:comparison1}d means that the seed assessment function contains only the proposal influence term. We set $\theta_s=0.7$.
Fig.~\ref{fig:comparison1} demonstrates that our approach is not sensitive to the parameter setting.
\begin{figure*}[t]
\centering
\subfigure[{\tiny VOC - one seed per interaction}] { \includegraphics[width=0.31\textwidth]{f1_VOC_medianDice.png} }
\subfigure[{\tiny BSDS - one seed per interaction}] { \includegraphics[width=0.31\textwidth]{f1_BSDS_medianDice.png} }
\subfigure[{\tiny IBSR - one seed per interaction}] { \includegraphics[width=0.31\textwidth]{f1_IBSR_medianDice.png} }
\vspace{-3mm}
\caption{\label{fig:comparison2} Comparison results on the state-of-the-art binary-query interactive segmentation algorithms \cite{ChenCC16,RupprechtPN15}. This figure shows the comparison of the segmentation accuracy. Each sub-figure depicts the median Dice score against the number of interactions. The results of Rupprecht~\textit{et al.} are reproduced according to \cite{RupprechtPN15}. Note that all methods in comparison propose seeds without the ground truth.}
\end{figure*}
\begin{figure*}[t]
\centering
\subfigure[{\tiny SBD - multi-seed per interaction}] { \includegraphics[width=0.31\textwidth]{f7_SBDR_medianDice.png} }
\subfigure[{\tiny ECSSD - multi-seed per interaction}]{ \includegraphics[width=0.31\textwidth]{f7_ECSSD_medianDice.png} }
\subfigure[{\tiny MSRA - multi-seed per interaction}] { \includegraphics[width=0.31\textwidth]{f7_MSRA_medianDice.png} }
\vspace{-3mm}
\caption{\label{fig:comparison3} Comparisons of the variants of our approach and state-of-the-art methods. The performance is evaluated by the segmentation accuracy with respect to the number of interactions on six datasets. Each sub-figure depicts the median Dice score against the number of interactions. Note that all methods except SwipeCut are annotated with the ground truth.}
\end{figure*}
\subsection{One Query Seed Per Interaction}
If we select only one seed per interaction, \textit{i.e.}, $\theta_k=1$, our approach is similar to the two binary-query interactive segmentation algorithms of \cite{ChenCC16,RupprechtPN15}. For comparison, at each interaction, all three methods actively propose one seed to the user for acquiring a binary label.
We also compare with three additional baselines.
\djc{The first baseline `SwipeCut (1-layer)' uses the assessment function Eq.~(\ref{eq:funcScore}) on the superpixel-level graph.
The second baseline `Random (1-layer)' just randomly proposes seeds on the superpixel-level graph.
The third baseline `Random (2-layer)' randomly proposes seeds on the two-layer graph. In the baseline `SwipeCut (1-layer),' we also partition the superpixel-set $\mathcal{R}$ into a tree-set $\mathcal{T}$ for computing the proposal influence. However, its proposal confidence can only be calculated on the superpixel-level graph. }
\subsubsection{Segmentation Accuracy}
Note that all variants of our method implement the segmentation on the superpixel-level graph. They differ only in the strategy of selecting query seeds. From Fig.~\ref{fig:comparison2} we can see that selecting the query seed in the two-layer graph is better than selecting the query seed in the single-layer graph. \djc{Notice that our approach `SwipeCut (1-layer),' which uses both color and texture features, performs only marginally better than \cite{ChenCC16} and \cite{RupprechtPN15}. However, our approach `SwipeCut (2-layer),' which uses both features and the different graph structure, shows a noticeable improvement in segmentation accuracy. Therefore, we conclude that the improvement mainly comes from the use of the two-layer graph.} This is because the redundant query seeds (satisfying the label consistency assumption) are greatly suppressed.
The comparisons of our algorithm `SwipeCut (2-layer)' with the two previous methods of Chen~\textit{et al.}~\cite{ChenCC16} and Rupprecht~\textit{et al.}~\cite{RupprechtPN15} also show that our algorithm performs significantly better on all three datasets. The results imply that our approach is better than the existing methods at selecting the most informative query seed.
\subsubsection{Response Time}
The average response time per iteration of our method is less than $0.002$ seconds, which is far less than that of \cite{RupprechtPN15} ($< 1$ second) and slightly more than that of \cite{ChenCC16} ($<0.001$ seconds). The computation bottleneck of Rupprecht~\textit{et al.} is the MCMC sampling used to approximate the image segmentation probability. The computation cost of Chen~\textit{et al.} is quite low, but it is outperformed by our approach in segmentation accuracy. Notice that the preprocessing step for over-segmentation and for building the MST takes about $0.7$ seconds. However, it only needs to be done once before interaction, and thus the efficiency of the entire algorithm is not degraded. The time measurement is done on an Intel i7 $3.40$ GHz CPU with 8GB RAM.
\begin{table*}
\caption{\label{tab:comTimeTab} The average response time (seconds) per round of different algorithms. The measurement is done on an Intel i7-4770 $3.40$ GHz CPU with 8GB RAM. The timing results are obtained using the MSRA dataset. }
\vspace{-3mm}
\normalsize
\begin{center}
\begin{tabular}{|c||c|c|c|c|c|c|}
\hline
\textbf{Algorithm} & LazySnapping & RandomWalks & InteractiveGraphCuts & GeodesicStar & OneCut & SwipeCut \\
\hline \hline
\textbf{Seconds} & 0.33 & 0.72 & 0.34 & 0.61 & 0.44 & 0.002 \\
\hline
\end{tabular}
\end{center}
\end{table*}
\begin{figure*}[t]
\centering
\subfigure[{\tiny ECSSD - various number of seeds}]
{ \includegraphics[width=0.31\textwidth]{f6_ECSSD_medianDice.png} }
\subfigure[{\tiny ECSSD - median Dice score}]
{ \includegraphics[width=0.31\textwidth]{f8_ECSSD_medianDice_user.png} }
\subfigure[{\tiny ECSSD - average time}]
{ \includegraphics[width=0.31\textwidth]{f8_ECSSD_avgTime_user.png} }
\vspace{-3mm}
\caption{\label{fig:comparison4} The evaluation of the number of query seeds per interaction for SwipeCut, and the user study comparing different interactive segmentation algorithms on the ECSSD dataset. (a) SwipeCut's performance for various numbers of query seeds per interaction. (b) The user study on the segmentation accuracy against the number of interactions. (c) The user study on the time cost against the number of interactions. The average time includes the user response time, I/O time, and computation time. }
\end{figure*}
\begin{figure*}[t]
\centering \vspace{-2mm}
\begin{tabular}{m{1.4cm}m{0.95\textwidth}}
& $\quad queries, GT \;\;\quad SwipeCut \qquad\quad\;\; LS \qquad\qquad\quad RW \qquad\qquad\; IGC \qquad\qquad\; GSC \qquad\qquad\; OC$ \\
Round 5 & \multirow{5}{*}{ \includegraphics[width=0.87\textwidth]{f9_ECSSD_visualCom_easy.png} } \\[10mm]
Round 10 & \\[11mm]
Round 15 & \\[13mm]
Round 20 & \\[10mm]
Round 30 & \\[18mm]
Round 5 & \multirow{5}{*}{ \includegraphics[width=0.87\textwidth]{f9_ECSSD_visualCom_hard.png} } \\[13mm]
Round 10 & \\[14mm]
Round 15 & \\[14mm]
Round 20 & \\[14mm]
Round 30 & \\[15mm]
\end{tabular}
\caption{\label{fig:visualCom} Visualization of running examples of our approach. The top image set and the bottom image set respectively show the segmentation results using a simple background image and a cluttered background image. In the two image sets, the rows show the query seeds (in red crosses) and the segments from each method at the 5th, 10th, 15th, 20th, and 30th round. The image in red box shows the ground truth of the corresponding example image. Abbreviation of the methods in comparison: Lazy Snapping (LS), Random Walks (RW), Interactive Graph Cuts (IGC), Geodesic Star Convexity sequential (GSC), OneCut with seeds (OC).}
\end{figure*}
\subsection{Multiple Query Seeds Per Interaction}
We evaluate our approach on three datasets in this experiment. Our approach is compared with five state-of-the-art interactive segmentation algorithms, which are seed/scribble based algorithms listed as follows\footnote{The programs of Lazy Snapping are implemented by Gupta and Ramnath \url{http://www.cs.cmu.edu/~mohitg/segmentation.htm}.
The code of Random Walks is from \url{http://cns.bu.edu/~lgrady/software.html}.
The programs of InterGraphCuts and GeodesicStar are from \url{http://www.robots.ox.ac.uk/~vgg/research/iseg/}. The code of OneCut is from \url{http://vision.csd.uwo.ca/code/}.}:
Lazy Snapping \cite{LiSTS04},
Random Walks \cite{Grady06},
Interactive Graph Cuts \cite{BoykovJ01},
Geodesic Star Convexity \cite{GulshanRCBZ10},
OneCut with seeds \cite{TangGVB13}.
Except for our approach, the other five methods in this experiment do not have a seed-proposal mechanism. Therefore, we use the procedure in \cite{FengPCC16} to automatically synthesize the next seed position as a new user input: in each round, each algorithm will \emph{ideally} select the centroid of the largest connected component among the exclusive-or regions between the current segmentation and the ground-truth segmentation, as if guided by an oracle. \djc{Note that our approach determines the seeds without the ground-truth segmentation.}
Fig.~\ref{fig:comparison3} shows the experimental results on segmentation accuracy. The notation `$(\cdot)$' in Fig.~\ref{fig:comparison3} means the number of query seeds per interaction of the proposed approach, SwipeCut. The first line in Fig.~\ref{fig:comparison3} depicts the upper bound of our superpixel-level segmentation accuracy using 700 superpixels. Fig.~\ref{fig:comparison4}(a) shows the comparison of different settings on the number of query seeds per interaction.
It can be seen from Fig.~\ref{fig:comparison3} and Fig.~\ref{fig:comparison4}(a) that the multiple-queries-per-interaction version of our algorithm greatly boosts the segmentation accuracy. Collecting multiple labels per interaction makes our approach reach the segmentation-accuracy upper bound within fewer rounds.
In our approach, proposing five query seeds per interaction is sufficient to obtain better segmentation accuracy than the other methods, even though they all rely on the ideal oracle to select seeds for them in the experiment.
It is also worth emphasizing that the interaction mechanism of multiple query seeds per round is made viable owing to the specific formulation in Eq.~(\ref{eq:funcProblem}).
Furthermore, our seed-proposal and swipe-based mechanisms can be combined with other segmentation algorithms for acquiring multiple labels from the user.
We also present the average response time per round of the different algorithms in Table~\ref{tab:comTimeTab}. Additional visualizations of running examples of the compared methods are shown in Fig.~\ref{fig:visualCom}. The advantage of our approach in interaction efficiency is clearly demonstrated.
\subsection{Evaluating User Interactions}
Fig.~\ref{fig:comparison4}(b) and Fig.~\ref{fig:comparison4}(c) depict the user study on the segmentation efficiency of various interactive image segmentation algorithms. We ask ten users to segment twenty images from the ECSSD dataset in ten rounds of interactions over five algorithms. Each image is shown to the users together with the corresponding ground truth.
The goal of each user is to segment the given images and reproduce the corresponding ground truth as closely as possible. Our approach selects the query seeds for the user to swipe through, and all other algorithms show the user the segmentation derived from the user's previous annotations. Note that the users are not restricted to inputting merely the seed labels. Hence, each user can provide pixel-wise seed labels or longer line-drawing labels for guiding each segmentation algorithm.
In the interactive image segmentation scenario, the results in Fig.~\ref{fig:comparison4}(b) and Fig.~\ref{fig:comparison4}(c) indicate that the users spend less time on average and achieve better segmentation accuracy with our algorithm. Therefore, the advantage of acquiring labels using the `multiple query seeds with swipe gestures' strategy is evident, and our implementation carries out the strategy effectively and efficiently.
\section{Conclusion}
We have presented an effective approach to interactive segmentation for small touchscreen devices. In our approach, the user only needs to swipe through the ROI-relevant query seeds, which is a common type of gesture for multi-touch user interfaces. Since the number of queries per interaction is constrained, the user has less burden in swiping through the query seeds. Our label collection mechanism is flexible, and therefore other segmentation algorithms can also adopt our approach for acquiring multiple labels from the user in one round of interaction. \djc{Recently, deep learning based algorithms \cite{XuPCYH16,XuPCYH17,LiewWXOF17,ManinisCTG18} have demonstrated good segmentation performance. The proposed interaction mechanism can be integrated with deep features, in addition to simple features like color and texture, for improvements in segmentation accuracy.}
The experiments show that our interactive segmentation algorithm achieves the preferable properties of high segmentation accuracy and low response time, which are important for building user friendly applications of interactive segmentation.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran} |
1812.07363 | \section{Introduction}
Face detection is one of the important topics in the field of computer vision. It plays a fundamental role in virtually all face-related applications. Face detection is the problem of determining the presence of faces in images and their precise locations. Face detection is confronted with different challenges, such as variations in scale, pose, expression, occlusion and illumination, which all may have a negative influence on the performance of face detection methods. In Table \ref{table1}, we summarize the characteristics of various face detection benchmarks. From the table, it can be seen that many datasets are limited in representing extreme poses, different scales and heavy occlusions. However, datasets containing face images under a wide variety of imaging conditions are required to develop face detectors that are robust to all variations of the image formation process.
Face detectors are often designed to address only a limited set of variations in real-world situations. For example, FAN\cite{wang2017face1} uses an attention-based structure and data augmentation to cope with facial occlusion. PCN \cite{shi2018real} proposes rotation-invariant face detection in a coarse-to-fine manner by dividing the calibration process into several progressive steps. The HR detector \cite{hu2016finding} combines both feature and image pyramids to make the algorithm robust to extreme face scales. In Table \ref{table2}, we show that different face detectors are designed to cope with different imaging conditions. These face detectors heavily rely on the availability of large-scale annotated datasets. Collecting and annotating real-world datasets with different imaging conditions is tedious, time-consuming and in some cases even infeasible. Furthermore, it is difficult to systematically vary the imaging parameters and to avoid errors during the annotation process. Errors in the ground truth may have a far-reaching impact on the training and testing of the networks. Therefore, our contribution is to generate synthetic data, complementary to real data, to create fully controlled datasets by means of automatic and error-free annotation. To validate our methodology, we train different face detectors on a combination of real data and a fully controlled synthetic dataset to systematically address the imaging variations. Our synthetic images are rendered versions of real 3D faces with changes in viewpoint, scale, illumination, occlusion and background. Hence, the variation of imaging conditions is performed in 3D space.
Our contributions are: (1) we provide a new face dataset (3DU-Face) with a large variety of imaging conditions such as scale, pose, occlusion and blur; (2) we conduct large-scale experiments to systematically study the impact of data augmentation on the performance of face detection; and (3) we present a comparative study of state-of-the-art face detectors (Faster RCNN, SSH and HR) on different face benchmarks (MAFA, UFDD and Wider Face).
\begin{table}
\begin{center}
\begin{tabular}{|l|ccc|}
\hline
\diagbox{Feature}{Datasets}
&MAFA &UFDD&Wider\\
\hline\hline
landmark occlusion &\checkmark&\checkmark&\checkmark\\
complex background & & &\checkmark\\
extreme pose & &&\checkmark \\
extreme scale & &\checkmark&\checkmark \\
heavy occlusion &\checkmark&\checkmark&\checkmark\\
blur &\checkmark &\checkmark&\checkmark \\
extreme illumination &&\checkmark&\checkmark\\
misleading objects&&\checkmark&\checkmark\\
\hline
\end{tabular}
\end{center}
\caption{Three face detection benchmarks and their characteristics.
}
\label{table1}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{|l|ccc|}
\hline
\diagbox{Feature}{Detector}&Faster RCNN&SSH&HR\\
\hline\hline
landmark occlusion &\checkmark&\checkmark&\checkmark\\
complex background &\checkmark &\checkmark &\checkmark\\
extreme pose & & \checkmark &\checkmark\\
extreme scale & &\checkmark&\checkmark \\
heavy occlusion &&&\checkmark\\
blur && &\checkmark\\
extreme illumination &&&\checkmark\\
misleading objects&&&\\
\hline
\end{tabular}
\end{center}
\caption{Three advanced face detectors and their characteristics.
}
\label{table2}
\end{table}
\section{Related Work}
\subsection{Face Detection}
Face detection is often considered a special case of object detection. Object detectors are, in general, categorized as one-step (e.g. SSD, YOLO) and two-step (e.g. Faster R-CNN) detectors. Two-step detectors mostly use region proposals followed by classification, while one-step detectors rely on a single feed-forward convolutional network without a separate proposal-classification stage.
Most face detectors are designed to address specific variations in real-world scenarios, e.g. scale \cite{yang2016multi, yang2017face, zhang2017s, zhu2017cms, hao2017scale, zhu2018seeing, tang2018pyramidbox}, occlusion \cite{chen2017masquer, ge2017detecting, wang2017face1, sface2018}, pose \cite{shi2018real} or lighting conditions \cite{zhou2018hybrid}. Therefore, face detectors are mostly suitable for datasets with corresponding characteristics and may lack generalization power across datasets.
For example, a single face detector may be restricted in handling a wide range of face scales (e.g., 10 px vs. 1000 px tall faces). Therefore, HR uses extremely large receptive fields to locate tiny faces. It also applies multi-scale testing by using an image pyramid to capture extreme-scale features \cite{hu2016finding}. To detect occluded faces, \cite{ge2017detecting} uses a local linear embedding method to reduce noise and recover the lost cues. For blurry scenes, Bai et al. \cite{bai2018finding} propose a GAN-based super-resolution and refinement network that restores high-resolution faces from blurry ones. For face detection, blurry and low-resolution faces offer only few features to extract, and the boundary between object and background is often difficult to distinguish.
\subsection{Face Synthesis}
Synthetic data is useful to improve the performance of face-related applications \cite{masi2016we, osadchy2017genface, abbasnejad2017using, kortylewski2018training}. The generation of synthetic face images can be achieved by face editing methods including shape morphing \cite{blanz1999morphable}, relighting \cite{wang2009face, shu2017neural}, pose normalization \cite{hassner2015effective, zhu2015high, yim2015rotating}, and expression modification \cite{thies2016face2face, pumarola2018ganimation}. Recently, GAN-based methods provide realistic results for facial attribute manipulation \cite{choi2017stargan, shen2017learning, lu2017recent}, but they are bounded by the limitations of the training images: these merely cover a narrow range of variations, which may cause artifacts during generation. Face synthesis methods are widely used in face recognition tasks \cite{masi2016we, kortylewski2018training}, because such tasks rely on extensive face attribute information. In this paper, however, we focus on generating face images for face detection, also considering extreme variations such as large scale and heavy blur. Existing methods mostly perform subtle face attribute manipulation and directly manipulate faces in 2D, which may hinder their application to extreme imaging conditions. Our synthetic images are rendered versions of real 3D faces. Data generation in 3D space enables the inclusion of extreme changes in viewpoint, scale, illumination, occlusion and background.
\subsection{Influence of detection characteristics}
The performance of detectors is affected by both the network architecture and the object characteristics. Hoiem et al. \cite{hoiem2012diagnosing} provide an extensive analysis of the influence of different variations on different detectors. Another comparative study \cite{huang2017speed} focuses on the trade-off between speed and performance of meta-architecture detectors.
Karaoglu et al. \cite{karaoglu2016detect2rank} exploit the correlation between different object detectors by means of high-level contextual information. In this paper, we focus on faces and their characteristics. The aim is to study the influence of different data augmentation methods on face detection performance.
\begin{figure*}
\centering
\includegraphics[width=0.8\textwidth]{render.eps}
\caption{An overview of the rendering pipeline, showing the generation of real-synthetic images. First, pose, background and occlusion are manipulated on the original 3D models. Then, the 3D models are rendered into 2D images. The figure of the pose variation is taken from \cite{ariz2016novel}.}
\label{fig1}
\end{figure*}
\section{Challenges for Face Detection}
\textbf{Pose} can significantly change the appearance of a face. Extreme poses may result in heavy occlusion or skewed aspect ratio of the face bounding box. Most (deep) face detectors use augmented data by rotating faces over different angles, or jointly estimate the pose\cite{osadchy2007synergistic} to obtain robustness against pose variations\cite{farfade2015multi}. PCN\cite{shi2018real} calibrates the orientation of faces to upright at different stages progressively.
\textbf{Scale} changes may have a negative influence on the performance of face detection. For example, the image features of a 10 px face are essentially different from those of a 1000 px face. Combining feature pyramids and multi-scale testing is used to detect faces of extreme scales \cite{hu2016finding}.
\textbf{Context} information can play a crucial role in determining the precise location of faces. Faces in unconstrained settings may be surrounded or occluded by different objects. Round-shaped, background objects may result in false positives. For example, HR\cite{hu2016finding} uses large-scale context information to locate tiny faces. SSH \cite{najibi2017ssh} applies a context module to effectively use background features.
\textbf{Facial occlusion} may hide valuable information for detection \cite{ge2017detecting}. Facial occlusion can be divided into two categories: landmark occlusion and heavy occlusion. Landmark occlusion means that only a few landmarks like the eyes or mouth are occluded, while most of the face is still visible. In contrast, heavy occlusion means that more than half of the face is missing due to occlusion, the image border, an extreme pose \cite{chen2017masquer}, or another face covering it.
\textbf{Illumination} changes may substantially influence the appearance of faces. For example, under extreme lighting conditions it is difficult to distinguish faces from the background. Zhou et al. \cite{zhou2018hybrid} use multi-spectrum sensing to detect faces under low lighting conditions.
\textbf{Blur and low resolution} usually impede face detectors from retrieving available information. For example, images may be distorted during collection, storage, or transmission, leading to degraded image quality \cite{lowquality}. In some extreme cases, only the outline of the faces can be identified. Bai et al. \cite{bai2018finding} use GANs to refine blurry faces and improve performance. Refinement networks or multi-scale testing are feasible solutions to detect blurry or low-resolution faces.
\section{Face Detection: Datasets}
In this section, we discuss a number of well-known face detection benchmarks and their characteristics.
\textbf{MAFA}
is a representative dataset of face images with occlusion. The dataset is mainly composed of occluded samples with different types of occlusions \cite{ge2017detecting}. To limit the interference of pose, MAFA only includes a narrow range of head poses. MAFA has three types of annotations: masked, unmasked and ignored. Faces that are extremely blurry or tiny, i.e., with a side length of less than 32 pixels, are labeled as 'Ignored'.
\textbf{UFDD} contains faces in different weather conditions and other challenging variations concerning lens impediments, motion blur and defocus blur \cite{nada2018pushing}. Additionally, it has a collection of distracting images. For the UFDD dataset, the most challenging aspects are extreme lighting and blur.
\textbf{Wider Face} is the most challenging benchmark for face detection~\cite{yang2016wider}. It includes various events (e.g., basketball, football) with a variety of backgrounds. It contains a large number of faces with extreme poses, exaggerated expressions, heavy occlusion and extreme lighting conditions. The most challenging aspect of Wider Face is the extreme scale. Wider Face has three difficulty levels: easy, medium, and hard. The criteria used to categorize faces into these levels are vague. Table \ref{table4} shows the basic characteristics of faces, irrespective of invalid faces, in the Wider Face validation partition.
\setlength{\tabcolsep}{4pt}
\begin{table}[h]
\begin{center}
\begin{tabular}{|l|ccc|}
\hline
Partition & Large & Medium & Tiny\\
\hline\hline
Height&50-400(96.6\%)&30-50(99\%)&10-30(99\%)\\
Width&20-300(96.3\%)&10-70(99.7\%)&8-20(95\%)\\
Number &7211 &6108 &18636\\
\hline
\end{tabular}
\end{center}
\caption{Face scale information of the validation set in Wider Face. We distinguish three face categories based on height and width. The proportion represents the percentage of faces that fit within the scale interval.}
\label{table4}
\end{table}
\setlength{\tabcolsep}{1.4pt}
\subsection{Face Detectors}
In this section, we outline the different face detection algorithms used for comparison.
\textbf{Faster RCNN} is one of the most widely used object detectors in the literature. It is not designed to be robust against challenging variations in face detection \cite{ren2015faster}.
\textbf{SSH} is an extremely fast one-step face detector designed to be scale invariant \cite{najibi2017ssh}. To accelerate inference, it selectively removes a number of parameters from the structure. This strategy has a negative influence on the detector's performance: SSH requires additional multi-scale processing to detect faces with extreme scales.
\textbf{HR} performs well on tiny faces by using wide-range contextual information and testing on multiple resolutions \cite{hu2016finding}. Its architecture resembles RPN \cite{ren2015faster} and uses both feature pyramids and image pyramids. However, the HR face detector trained on Wider Face is extremely sensitive to tiny, round objects in the background, since HR heavily relies on contextual information to locate faces. For faces with limited information (e.g., heavily occluded, extremely small or blurry faces), a complex background may hinder precise detection.
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.4\textwidth]{basic.pdf}
\caption{Performance comparison on the basic settings of our training data. It includes the effect of training data size on both (a) Wider Face validation set and (b) MAFA test set. (c) and (d) respectively show the effects of objects and background on Wider Face validation with HR.}
\label{fig3}
\end{figure}
\section{Data Augmentation}
Data augmentation is based on our new 3D face dataset (3DU-Face). It contains 700 3D face mesh models with high-resolution texture of 435 different individuals. Some individuals have multiple recordings taken at different times and under different imaging conditions. Most of the 3D models are captured in the wild under uncontrolled conditions. The 3D models contain 50 facial landmarks annotated by human experts.
Using the 3D models, the 2D images (projections) are rendered. The rendering pipeline is built in Blender 2.78. To change the viewpoint, the model is rotated over different Euler angles while the camera is kept in the same position. The pitch, yaw and roll parameters are selected randomly from different ranges. For face scale variation, we change the distance between the camera and the face models within a fixed range. The ground truth for face detection is generated from the 3D landmarks: the bounding box is generated to tightly encompass the forehead, chin, and cheeks. The size of the faces is larger than $10\times8$ pixels.
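To make the viewpoint and scale manipulation concrete, the following is a minimal sketch of one randomization step in Blender's Python API (bpy); the object name, the camera axis convention and the output path are illustrative assumptions, not the exact settings of our pipeline.
\begin{verbatim}
# Minimal sketch of one pose/scale randomization step in Blender's
# Python API (bpy). The object name "FaceModel", the viewing axis
# and the output path are illustrative assumptions.
import math
import random
import bpy

scene = bpy.context.scene
face_obj = bpy.data.objects["FaceModel"]   # hypothetical 3D face mesh

# Random Euler angles for pitch, roll and yaw (degrees -> radians).
pitch = math.radians(random.uniform(-15, 15))
roll  = math.radians(random.uniform(-15, 15))
yaw   = math.radians(random.uniform(-60, 60))
face_obj.rotation_euler = (pitch, roll, yaw)

# Scale variation: vary the camera-to-model distance within a fixed
# range; the camera stays fixed and is assumed to look along -Y.
face_obj.location = (0.0, -random.uniform(1.0, 20.0), 0.0)

scene.render.filepath = "//render_0001.png"
bpy.ops.render.render(write_still=True)
\end{verbatim}
The 2D ground truth box is then obtained by projecting the forehead, chin and cheek landmarks into the image and taking their tight bounding rectangle.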
In this paper, the generated data includes face occlusion. Current approaches focus on (1) cropping face images or (2) using GANs to generate occluded faces. However, both approaches have drawbacks. Cropping reduces facial information and may not generalize sufficiently to real occlusion samples. GAN-based methods are able to generate subtle face attributes, but for more global image/face changes, GANs may produce blurry results and artifacts. Therefore, our aim is to generate occluded face images in 3D space, as sketched below. We randomly add different 3D objects like sunglasses, hats, and helmets to the 3D scene before rendering. All objects are placed at selected locations to simulate landmark occlusion. To simulate occlusion, face regions are divided into three parts: head, eyes and mouth. More than 1000 different combinations of occlusion are generated for each model.
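A sketch of this occlusion step, under the same assumptions, could look as follows; the occluder names, the region-to-landmark mapping and the scene-linking call (Blender 2.7x API) are illustrative.
\begin{verbatim}
# Sketch of the landmark-occlusion step: attach a random 3D occluder
# to a chosen face region before rendering. Object names and the
# region->landmark mapping are assumptions; scene linking is 2.7x API.
import random
import bpy

OCCLUDERS = ["Sunglasses", "Hat", "Helmet"]
REGIONS = {"head": 0, "eye": 17, "mouth": 34}  # landmark indices (assumed)

def add_occlusion(face_obj, landmarks_3d):
    occluder = bpy.data.objects[random.choice(OCCLUDERS)].copy()
    bpy.context.scene.objects.link(occluder)   # Blender 2.7x linking
    region = random.choice(list(REGIONS))
    occluder.location = landmarks_3d[REGIONS[region]]
    occluder.parent = face_obj                 # follow the face pose
    return occluder
\end{verbatim}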
\section{Results}
Experiments are conducted to systematically study the influence of data augmentation on the performance of face detection. We compare three face detection methods on the following face detection benchmarks: MAFA, UFDD and Wider Face. The face detection methods are Faster RCNN\cite{ren2015faster}, SSH\cite{najibi2017ssh} and HR\cite{hu2016finding}.
\begin{figure}[t]
\centering
\includegraphics[width=0.4\textwidth]{pose_resize.pdf}
\caption{Performance comparison on pose variation. Across pitch, yaw, and roll, different colors represent the rotation range in degrees, as labeled at the top right (e.g., "15" means "-15" to "15").}
\label{fig4}
\end{figure}
\subsection{Implementation Details}
\subsubsection{Settings of the rendering process}
This section describes the basic settings of our rendering process. The 3D models are not changed in terms of shape or texture in the experiments. We choose 100 fixed 3D models as experimental subjects and set the defaults for pitch, roll and yaw randomly in the ranges (-15, 15), (-15, 15) and (-60, 60), respectively. For each face model, the rotation origin is the center of its landmarks, irrespective of invalid landmarks. All face models are aligned to an anchor model by using the landmarks; the anchor model is aligned to the global axes in Blender. We randomize the number of faces in each image within a fixed range (from 1 to 48 faces). The distance between each model and the camera is randomly selected within the range of 0 to 20 meters. The size of the faces is larger than $10\times8$ pixels. 50 HDR images (without humans) are taken from ShapeNet \cite{shapenet2015} and used as backgrounds. These images provide environment lighting and background variation for the synthetic images. The background dataset includes both indoor and outdoor scenes. Back-face culling is applied to avoid artifacts in the rendered results.
\subsubsection{Settings of the face detectors}
In the following experiments, we test our data augmentation methods with different face detectors on three benchmarks. We first train the face detectors on synthetic data and test them on real data. We also validate the methods on a subset of real data for the training part. Every dataset has its own domain, and our rendering pipeline has many parameters, so finding the optimal setting for a specific real dataset is infeasible. However, by comparing the performance on real data under different rendering parameters, we attain suitable and effective configurations for testing. We use the augmented synthetic data to improve the performance on real data. Different face detectors have their own approaches to augment data, such as flipping, cropping, or transforming images. For a fair comparison, we keep their original operations and hyper-parameters. For SSH and HR, we deploy their algorithms on a single GPU \cite{bal2016medium}. For Faster RCNN, we use the implementation from \cite{frcnn_pytorch}.
\subsection{The influence of data augmentation} \label{DataAugmentation}
In this section, we study the influence of various data augmentation formats. For this purpose, HR is considered. We do not employ Faster RCNN or SSH for the extensive analysis but rather use them to test the performance of data augmentation: Faster RCNN is a generic detector without multi-scale testing, whose performance may not reflect all the changes in variations, and SSH is designed to be a scale-invariant one-step detector. For all experiments, we study one variation at a time; the other variations are kept the same.
\textbf{The influence of basic training data settings}:
We change the number of images to test the influence of training dataset size. As shown in Figure \ref{fig3}.(a), increasing the number of synthetic images continuously improves the performance on the hard level, but not on the easy and medium levels. This is because more than half of the faces at the hard level are tiny faces. In the training process, large faces generate many more positive samples than tiny faces. Hence, larger training sets are more useful for tiny faces. Simply increasing the number of synthetic images may lead to over-fitting. Because most faces in the Wider Face dataset have extreme scales, a test is performed on MAFA to examine our conclusion about the training dataset size in Figure \ref{fig3}.(b). It shows that the performance on both the occluded faces and the full dataset (including occluded and unoccluded faces) saturates after adding more training images. Moreover, the background is crucial. HDR images provide higher resolution but less sharp results than real images. HDR images have their own bias with respect to other images; increasing the number of HDR images does not consistently improve the performance (see Figure \ref{fig3}.(d)). In Figure \ref{fig3}.(c), we study the influence of the number of 3D face models. It shows that this number does not strongly influence the performance. This may be because the face detection task does not heavily rely on the attribute or identity information of the faces. Also, our 3D models may have their own bias (e.g. annotation bias, mesh corruption or scanner noise).
\textbf{The influence of pose}:
To investigate the effect of head pose, we render faces with different ranges of pitch, roll and yaw, and test on the Wider Face validation set. As shown in Figure \ref{fig4}, for pitch, roll and yaw, the minimal range gives better performance than the others. The reason is that the majority of faces in the dataset do not contain extreme poses, and the face detectors need sufficient data to learn the representation of faces. Next, different portions of extreme orientations are added to the training dataset, see Figure \ref{fig5}.(b). A small range of extreme poses boosts the performance on large and medium faces, because most of the tiny faces (hard level) are extremely blurry and pose-agnostic. For the HR detector, most of the tiny faces are detected by multi-scale testing; features from generic faces are more useful for detecting tiny faces.
\textbf{The influence of occlusion}:
We test two different types of face occlusion. The first type of occlusion is caused by other objects in the scene. MAFA focuses on the occlusion of face images. We test our augmentation methods for this first type of occlusion on the MAFA test set with three different occlusion settings: the baseline condition with no extra occlusion, landmark occlusion, and mixed occlusion (both landmark and heavy occlusion). For all occluded cases, none of the 2D faces is occluded by parts of the other 3D face models. As shown in Figure \ref{fig5}.(c), the performance on the MAFA test set improves drastically after adding occlusion to the synthetic training dataset. HR becomes more robust after training on synthetic faces with landmark and heavy occlusions. Some occlusion examples from our synthetic data are shown in Figure \ref{fig1}.
\begin{figure}[t]
\centering
\includegraphics[width=0.4\textwidth]{noise.pdf}
\caption{Performance comparison on other variations. Only (c) is evaluated on the MAFA test set, while the remaining panels are on the Wider Face validation set. (a) shows the results of adding different noise levels from the down-sampling or up-sampling process; (b) shows the results after adding small portions of extreme poses to the training dataset; (c) shows the results of adding different types of occlusion; (d) shows the results of adding occlusion from other faces.}
\label{fig5}
\end{figure}
The second type of occlusion is caused by other faces or human body parts. We choose the Wider Face validation set to study how this second type of occlusion in our synthetic training data influences the performance of HR, because most images in Wider Face are acquired in unconstrained settings; many of them are group pictures, and each image may contain hundreds of tiny faces. We set a threshold on the overlap between faces to prevent large faces from covering the tiny ones. As shown in Figure \ref{fig5}.(d), after adding occlusion from other faces to the training dataset, the performance of HR on the hard level of the Wider Face validation set improves substantially. The results on the first type of occlusion demonstrate the effectiveness of our occlusion augmentation: our synthetic data provides suitable noise to simulate the patterns of occlusion. As for the second type of occlusion, such samples in the training dataset are needed for detectors to learn to distinguish the boundary between different faces.
\textbf{The influence of noise}:
Each benchmark has its own configuration. For the Wider Face dataset, the original high-resolution images were downloaded using a search engine and resized to a predetermined width of 1024 pixels. This process introduces noise caused by down-sampling or up-sampling. Therefore, we first render images at multiple resolutions (Sets A, B and C below), and then resize them to one fixed resolution (1024$\times$768). Set A includes only high-resolution images (4096, 3072, 2048). Set B has both high- and low-resolution images (4096, 3072, 2048, 512, 256, 128). Set C has only low-resolution images (512, 256, 128). We demonstrate the influence of noise for the different difficulty levels of Wider Face in Figure \ref{fig5}.(a). The performance on all difficulty levels of Wider Face improves, especially for tiny faces. Set A achieves the best performance on Wider Face. This is because the resampling process also changes the size of the original faces in the rendered images: the tiny faces in the real data are resized from large faces, as in our operation for Set A. For Sets B and C, a part of the large and medium faces is resized from originally tiny faces in the rendered images.
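As an illustration, the resampling step itself can be as simple as the following OpenCV sketch; the interpolation choices are our assumption, not a setting prescribed by the pipeline.
\begin{verbatim}
# Sketch of the resolution-noise augmentation: images rendered at
# several source resolutions are all resized to one fixed target.
import cv2

TARGET = (1024, 768)  # fixed output resolution (width, height)

def resize_to_target(img):
    h, w = img.shape[:2]
    # Area interpolation when shrinking, linear when enlarging
    # (an assumed choice to limit resampling artifacts).
    interp = cv2.INTER_AREA if w > TARGET[0] else cv2.INTER_LINEAR
    return cv2.resize(img, TARGET, interpolation=interp)
\end{verbatim}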
\subsection{Performance comparison on synthetic data}
In Table \ref{table5}, we show the performance comparison of three synthetic datasets on the Wider Face validation set. These three synthetic datasets $s_1, s_2, s_3$ are combined with real data to improve detection performance on UFDD\cite{nada2018pushing} and Wider Face in Sections \ref{UFDD} and \ref{Wider Face}, respectively.
\setlength{\tabcolsep}{4pt}
\begin{table}[h]
\begin{center}
\begin{tabular}{|c|ccc|}
\hline
Set & Easy & Medium & Hard\\
\hline\hline
$s_1$&0.795&0.742 &0.502\\
$s_2$&0.818&0.774 &0.53\\
$s_3$&0.828&0.796 &0.627\\
\hline
\end{tabular}
\end{center}
\caption{Average precision of HR\cite{hu2016finding} trained on different sets of synthetic data. These three sets are combined with real data to improve the detectors' performance. $s_1$ is our basic rendering setting with light occlusion; $s_2$ combines $s_1$ with extra occlusion from other faces in the rendering process; $s_3$ adds blurry results from down-sampled high-resolution images to $s_2$.}
\label{table5}
\end{table}
\setlength{\tabcolsep}{1.4pt}
\subsection{Improving face detector performance by data augmentation}
In this section, we study how to use our synthetic data to improve the performance of the face detectors (i.e., Faster RCNN, SSH and HR) on real datasets. We train on a combination of Wider Face and synthetic data and then test on MAFA, UFDD and Wider Face. Visualizations of our detection results are shown in Figure \ref{fig9}.
\begin{figure}[t]
\centering
\includegraphics[width=0.4\textwidth]{mafa_resize.pdf}
\caption{Performance comparison for different amounts of synthetic training data on the MAFA test set with different detectors.}
\label{fig6}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.4\textwidth]{ufdd_resize.pdf}
\caption{Performance comparison of different data augmentations on the UFDD test set with different detectors.}
\label{fig7}
\end{figure}
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.4\textwidth]{wider_resize.pdf}
\caption{Performance comparison of different data augmentations on the Wider Face validation set with different detectors.}
\label{fig8}
\end{figure}
\begin{figure*}[h]
\centering
\includegraphics[width=0.8\textwidth]{success_low.pdf}
\caption{Qualitative results on different variations of real dataset. We visualize examples of each variation.}
\label{fig9}
\end{figure*}
\begin{figure*}[h]
\centering
\includegraphics[width=0.8\textwidth]{fp.pdf}
\caption{False positive examples from our detection results. In both Figure \ref{fig9} and Figure \ref{fig10}, the green bounding boxes are the ground truth; the bounding boxes in other colors are predictions with different confidence scores; and the red bounding boxes are false negatives. (a) is a square annotation example from the MAFA test set. (b) and (c) are occlusion annotations from UFDD and Wider Face, respectively. (f) has only three ground truth annotations but actually contains more unlabeled faces. (d), (e) and (g) are further examples from the real datasets.}
\label{fig10}
\end{figure*}
\subsubsection{MAFA}
We only use mixed face occlusion (that is, landmark and heavy occlusion as in Section \ref{DataAugmentation}) for data augmentation on MAFA. This is because, during the collection and annotation process, MAFA constrained the other types of variations. The synthetic images for data augmentation follow the same setting as the MAFA training set. Our hypothesis is that the training data size affects the performance. As shown in Figure \ref{fig6}, the performance of the different detectors improves as the number of synthetic images increases. However, after adding more and more data, the performance saturates and then drops. The reason could be the domain gap between the synthetic and real data: our synthetic data may not be as complex as real data, and it inherits biases (e.g. annotation bias or mesh corruption) from the original (real) 3D models. Therefore, there is an inverted U-shaped relationship between the amount of synthetic training data and performance.
\subsubsection{UFDD} \label{UFDD}
In this section, our data augmentation is combined with Wider Face training data only, because (1) the training dataset of UFDD is not public, and (2) the UFDD baseline itself is trained on Wider Face \cite{nada2018pushing}. The three synthetic datasets $s_1, s_2, s_3$ described in Table \ref{table5} are combined with real data to improve the detectors' performance. The performance of the augmented training sets is shown in Table \ref{table5}, and the influence of our data augmentation is shown in Figure \ref{fig7}. After merging synthetic and real data, the performance of Faster RCNN, which is trained on real data, improves significantly for $r + s_1$ and $r + s_2$. Given that Faster RCNN is not trained for different scales, the noise of $s_3$ impedes its performance. As for SSH, its architecture and parameters heavily rely on Wider Face; on UFDD, its performance saturates after training on real data, and after adding synthetic data its performance is even worse than that of Faster RCNN. Faces in UFDD are not very challenging for HR; its performance therefore changes only slightly with our data augmentation.
\subsubsection{Wider Face} \label{Wider Face}
We use the same settings $s_1, s_2, s_3$ as in Section \ref{UFDD} to perform data augmentation for Wider Face. The performance comparison is shown in Figure \ref{fig8}. Faster RCNN is a generic object detector without multi-scale testing and is hence expected to generate fewer predictions than HR and SSH. After we add synthetic data, its performance improves substantially on all levels. The performance of HR and SSH nearly saturates after training on real data.
\subsection{Analysis}
\subsubsection{Analysis of synthetic data}
The advantage of synthetic data is that the variations in the dataset can be fully controlled. Although there is always a domain gap between synthetic and real data, this paper shows that synthetic augmentation can conveniently and precisely provide large-scale datasets with annotations. Our results show the applicability of synthetic data as an alternative to real data.
\subsubsection{Analysis of false positives}
False positives are the primary factor that decreases performance. In Figure \ref{fig10}, we show a number of false positive examples. There are two major sources of false positives in our detection results.
The first source is annotation. Different datasets have their own annotation process, which may negatively influence our predictions. For example, the annotation of occluded faces in Wider Face is based on the region of the entire face (Figure \ref{fig10}.(c)). In comparison, UFDD often annotates only the visible part of occluded faces (Figure \ref{fig10}.(b)). MAFA uses square annotations, which may contain background information of surrounding faces (Figure \ref{fig10}.(a)). Moreover, human annotators may not be able to annotate all the tiny and blurry faces in the background, see Figure \ref{fig10}.(f).
The second source of false positives is confusing objects, such as round-shaped objects and human body parts, because real data has a much more diverse and complex background than synthetic data. Our synthetic images are rendered from 3D face models, most of which only depict the upper part of the human body. Therefore, our rendering results are inherently unrepresentative of other human body parts. As a result, other human body parts (see Figure \ref{fig10}.(e)) and accessories can be a source of false positives.
\subsubsection{Analysis of face detectors}
Based on our detection results, we analyze the characteristics of the three detectors. (1) Faster RCNN is an object detector, not a face detector. It does not adjust its anchor settings to the face detection benchmarks, and it generates fewer predictions without multi-scale testing. Despite that, our synthetic data augmentation substantially improves its performance on multiple challenging datasets. (2) SSH is a face-targeted detector. However, with our synthetic data augmentation it is not able to outperform Faster RCNN in most of the detection tasks, except for the hard level of Wider Face. Designed to cope with scale, SSH performs worse when other variations are encountered. Detectors have a trade-off between speed and performance \cite{huang2017speed}: SSH pursues fast inference, so its lightweight architecture is ill-equipped to handle other variations. (3) Although the HR face detector already performs excellently under different kinds of variations, our synthetic data still improves its performance. However, HR has a drawback: it is extremely sensitive to small round-shaped objects, given its tiny-face-targeted architecture. HR generates more false positives than the other detectors, which restricts its generalization to regular faces.
\section{Conclusion}
In this paper, we provide a synthetic data generation methodology with fully controlled, multifaceted variations based on a new 3D face dataset (3DU-Face). We customize synthetic datasets to address specific types of variations (scale, pose, occlusion, blur, etc.), and systematically investigate the influence of different variations on face detection performance. We validate our synthetic data augmentation with different face detectors (Faster RCNN, SSH and HR) on various face datasets (MAFA, UFDD and Wider Face).
{\small
\bibliographystyle{ieee}
\section{Introduction and problem setting}
Let $\Omega\subset\R^n$ be bounded, $p\geq2$ and $g\in W^{1,p}(\Omega)$. Consider the following minimization problem
\begin{equation}\label{eq:BL:min}
\EuScript{E}(u) \coloneqq\frac1p\int_\Omega|\nabla u|^p \d x \longrightarrow \min
\end{equation}
in the class $\EuScript{C}\coloneqq\{u : u-g\in W^{1,p}_0(\Omega)\}$. The minimizer, denoted by $u^*(x)$, satisfies the following Euler-Lagrange equation in the weak sense:
\begin{equation}\label{eq:BL:stationary}
\begin{cases}
-\Delta_p u^{*} &= 0 \quad \text{in $\Omega$,}\\
\hfill u^{*} &= g \quad \text{on $\partial\Omega$.}
\end{cases}
\end{equation}
The first order flow of $\EuScript{E}(v)$, i.e. $v_t +\partial_{v} \EuScript{E}(v) = 0,$ can be considered as a classical steepest descent flow for solving the minimization problem \eqref{eq:BL:min}. In the degenerate case $p>2$ the authors of \cite{BL:Lindqvist} obtained the sharp decay rate
\[
\sup_{x\in\Omega} |v(t,x)-u^{*}(x)| = O\left(t^{-\frac{1}{p-2}}\right) \quad \text{as $t\to\infty$.}
\]
Their proof is based on the Moser iteration, applied to the difference $v(t,x)-u^{*}(x)$, which
itself is not a solution, thus bounding the $L^\infty$-norm in terms of the $L^p$-norm.
It is well known that an improvement in the convergence rate may be gained by considering the corresponding second order damped problem, cf. \cite{BL:Frankel,BL:Polyak,BL:BSGZ1} and references therein. Moreover, second order damped problems naturally appear in the modeling of mechanical systems. For instance, the motion of a material point with positive mass sliding on a profile defined by a function $\Phi$, under the action of gravity, the reaction force, and friction, can asymptotically be approximated by the following second order dynamical system
\begin{equation}
\ddot{x}(t) + \lambda\dot{x}(t)+\nabla \Phi(x(t))=0 \end{equation}
called the \emph{heavy ball with friction system (HBF)}, cf. \cite{BL:AGR}. We refer to \cite{BL:GOOZ} and \cite{BL:ENEO} for numerical algorithms based on the HBF system for solving special problems, e.g. large systems of linear equations, eigenvalue problems, nonlinear Schr\"odinger problems, inverse source problems, and ill-posed problems. In \cite{BL:ENEO} the authors have shown advantages and superior convergence properties of such a dynamical functional particle method compared to a first order dynamical system and to several other iterative methods. So it is hardly surprising that second order dynamical equations play an important role in accelerating convergence to steady state solutions. In fact, the power of the damped $p$-Laplace equation in image denoising was investigated in \cite{BL:BSGZ1}. However, an analysis as in \cite{BL:Lindqvist} of the asymptotic behavior, for $t\to\infty$, of the solutions to a damped $p$-Laplace equation has not been carried out so far.
Our purpose here is to obtain the large-time decay rate of $u-u^*$, where $u$ denotes the solution to the evolutionary damped $p$-Laplace equation:
\newcommand{\negphantom}[1]{\settowidth{\dimen0}{$#1$}\hspace*{-\dimen0}}
\begin{equation}\label{eq:BL:telegraph}
\begin{cases}
u_{tt}+a\, u_t &=\Delta_p u\hphantom{u_{0}(x)}\negphantom{\Delta_p u} \quad \text{in $(0,\infty)\times\Omega$,}\\
\hfill u(0,x)&=u_{0}(x) \quad\text{in $\{0\}\times\Omega$,}\\
\hfill u_t(0,x)&=0\hphantom{u_{0}(x)}\negphantom{0} \quad \forall x\in\Omega,\\
\hfill u(t,x)&=g(x)\hphantom{u_{0}(x)}\negphantom{g(x)} \quad \text{on $[0,\infty)\times\partial\Omega$,}
\end{cases}
\end{equation}
where $a>0$ is a constant and $u_0\in W^{1,p}(\Omega)$ is such that $u_0-g\in W^{1,p}_0(\Omega)$.
It is clear that the solution of the damped equation \eqref{eq:BL:telegraph} behaves for large time like the stationary solution of \eqref{eq:BL:stationary}. Moreover, we have the following decay rate for the $W^{1,p}_0$-norm of their difference:
\begin{theorem}\label{thm:BL:main}
Let $p\geq2$, $u^*$ denote a solution to \eqref{eq:BL:stationary} and $u$ a solution to \eqref{eq:BL:telegraph}. For large time we have
\begin{equation*}
\|u-u^*\|_{W^{1,p}_0(\Omega)} \leq C \cdot t^{-\frac{1}{(p-1)p}},
\end{equation*}
with a constant $C=C(p,\Omega,u_0,a)>0$.
\end{theorem}
Our proof is based on a careful analysis of the following error term:
\begin{equation}\label{eq:BL:errordef}
\mathrm{e}(t)\coloneqq\int_\Omega \frac{a^2}{2}w^2+a\,w\,w_t + w_t^2+2\cdot\left(\frac1p|\nabla u|^p-\frac1p|\nabla u^*|^p\right)\d x
\end{equation}
where we have set $w=u-u^*$. Note that our error term is chosen in such a way that it is compatible with our problem and we can estimate the error in terms of its derivative. Moreover, the fact
\[
\frac{\d}{\d t}\int_\Omega\frac1p|\nabla u|^p \d x = - \int_\Omega u_t\, \Delta_p u \d x,
\]
cf. page \pageref{eq:BL:integrated}, justifies the appearance of the last term in the error. It is worth mentioning that with our argumentation scheme we can improve the decay rate in the linear case $p=2$ and obtain the classical result from \cite{BL:HZ}, cf. the discussion in section \ref{sec:BL:improved}.
\section{Basic results}
Let us briefly introduce the notations used throughout this work.
The Euclidean norm in $\mathbb{R}^n$ is denoted by $|\cdot|$, a generic positive constant is represented by capital or small letter $c$ possibly varying from line to line, and we often write $u(t)(x)$ for $u(t,x)$.
Given a real Banach space $X$, the (Banach) space $L^{p}(0,T; X)$ consists of
all measurable
functions $u:[0,T]\to X$ such that
\[
\|u\|_{L^p(0,T;X)}=\left(\int_{0}^{T} \|u(t)\|_{X}^{p}\, \d t\right)^{\frac{1}{p}}
<\infty\, , \qquad 1\leq p<\infty\, ,
\]
$L^{\infty}(0,T; X)$ is the space of all measurable
$u:[0,T]\to X$ such that
\[
\|u\|_{L^{\infty}(0,T; X)}=\underset{t\in [0,T]}{\text{ess sup}} \|u(t)\|_{X}
<\infty.
\]
The (Banach) space $W^{1,p}(0,T; X)$, for $1\leq p \leq \infty$, consists of all $u\in L^{p}(0,T; X)$ such that $\partial_{t}u$ exists in the weak sense and belongs to $L^{p}(0,T; X)$.
Recall that for $u\in W^{1,p}(0,T; X)$ we have $u\in C^0([0, T]; X)$ and
$$\max_{0\le t\le T}\|u(t)\|_{X}\leq c(T)\cdot \|u\|_{W^{1,p}(0,T; X)}.$$
For further reading and elaborated clarifications on spaces involving time we refer the reader to \cite[Sec. 5.9.2]{BL:Evans}.
Throughout this work, we make use of the following inequalities:
\begin{itemize}
\item let $p\geq 2$. For all $a,b\in \R^n$ we have
\begin{equation}\label{eq:BL:VectIne}
2^{2-p}|a-b|^p\leq \skalarProd{|a|^{p-2}a-|b|^{p-2}b}{a-b}, \tag{A1}
\end{equation}
\item
for $p\ge 2$ and with an adequate constant $c(p)\in(0,1]$:
\begin{equation}\label{eq:BL:VectIne2}
|b|^p\geq |a|^p+p\skalarProd{|a|^{p-2}a}{b-a}+c(p)|b-a|^p, \tag{A2}
\end{equation}
\item
furthermore, for ~ $\pnorm{f}\leq M$ ~ and ~ $\pnorm{g}\leq M$ ~ the estimate
\begin{equation}\label{eq:BL:Rudin}
\int_\Omega \left | \vphantom{x^{x^{x^x}}}|f|^p -|g|^p\right| \d x \leq c(p,\Omega) M^{p-1} \pnorm{f-g} \tag{A3}
\end{equation}
holds, cf. \cite[p.~75]{BL:Rudin}.
\end{itemize}
Firstly, let us define the concept of weak solutions to the evolutionary damped $p$-Laplace equation:
\begin{definition}
We say that $u \in W^{1,p}_{\textrm{loc}}( 0,\infty ; W^{1,p}(\Omega)) $ is a solution to \eqref{eq:BL:telegraph} if
\[
\int_{0}^{\infty} \int_\Omega - u_{t}\, \phi_{t}- a\, u\, \phi_{t} + |\nabla u|^{p-2}\skalarProd{\nabla u}{\nabla \phi}\d x \d t =0,
\]
for each $\phi \in C^{\infty}_{0}( (0, \infty) \times \Omega).$
\end{definition}
In the following, let us denote by $u^*$ a solution to \eqref{eq:BL:stationary} and by $u$ a solution to \eqref{eq:BL:telegraph}. Moreover, we set
\[
E(t)\coloneqq\int_\Omega\frac12\,u_t^2(t,x) + \frac1p\,|\nabla u(t,x)|^p \d x.
\]
\begin{corollary}
$E(.)$ is non-increasing; more precisely, we have
\begin{equation}\label{eq:BL:Lemma1}
E'(t)= -a \int_\Omega u_t^2 \d x.
\end{equation}
\end{corollary}
\begin{proof}
A multiplication of
$$
u_{tt}+a\, u_t =\Delta_p u
$$
with $u_t$ followed by an integration over $\Omega$ gives
\begin{equation} \label{eq:BL:integrated}
\int_{\Omega} u_{tt} \, u_t \d x -\int_{\Omega} (\Delta_p u ) \, u_t \d x = -a \int_{\Omega} {u_t^2} \d x .
\end{equation}
Further, an integration by parts (note that $u$ is time-independent on the boundary, hence $u_t=0$ there) yields
\[
- \int_\Omega u_t\, \Delta_p u \d x = \frac{\d}{\d t}\int_\Omega\frac1p|\nabla u|^p \d x,
\]
so that we can rewrite \eqref{eq:BL:integrated} to the desired relation \eqref{eq:BL:Lemma1}:
\begin{equation*}
E'(t)=\frac{\d}{\d t} \int_{\Omega} \frac12{u_t^2} \d x+ \frac{\d}{\d t}\int_\Omega\frac1p|\nabla u|^p \d x=-a \int_\Omega u_t^2 \d x.\qedhere
\end{equation*}
\end{proof}
\begin{remark}
The above computations are formal and can all be made rigorous.
\end{remark}
In view of \eqref{eq:BL:Lemma1}, we show that the gradient of $u$ (with respect to space) is bounded by the initial data and that $u_t$ tends to zero for large times:
\begin{corollary}\label{cor:BL:ut} Let $u$ be a solution to \eqref{eq:BL:telegraph}. Then
\begin{enumerate}[a)]
\item we have \quad $\|u_t(T)\|_{L^2(\Omega)}\xrightarrow{T\to\infty}0$.
\item for all $T\geq 0$ we have \quad $ \|\nabla u(T) \|_{L^p(\Omega)} \leq \pnorm{\nabla u_0}.$
\end{enumerate}
\end{corollary}
\begin{proof}
Integrating \eqref{eq:BL:Lemma1} over $(0,T)$ we obtain
\begin{equation}\label{sun3}
\int_{\Omega} \frac12u_t^2(T,x) \d x+ \int_\Omega\frac1p|\nabla u(T, x)|^p \d x + a \int_{0}^{T}\int_\Omega u_t^2(\tau,x) \d x \d \tau \le \int_\Omega\frac1p|\nabla u_{0}(x)|^p\d x.
\end{equation}
Note that the right hand side of inequality \eqref{sun3} is independent of $T$, hence, the statement follows with $T\to\infty$.
\end{proof}
\begin{remark}\label{rem:BL:reg}
Taking the essential supremum with respect to time on both sides of \eqref{sun3} shows
\[
u_t \in L^{\infty}( 0, \infty; L^{2}(\Omega)), \quad \text{and}\quad u \in L^{\infty}( 0, \infty; W^{1,p}(\Omega)). \qedhere
\]
\end{remark}
Recall that $u^*$ is the minimizer of $\EuScript{E}(.)$. Hence, Corollary \ref{cor:BL:ut} ensures the boundedness of the gradients of $u$ and $u^*$; more precisely,
\begin{equation}\label{eq:BL:gradientbound}
\pnorm{\nabla u^*}\le\pnorm{\nabla u}\le M
\end{equation}
where we have set $M\coloneqq\pnorm{\nabla u_0}$.
Next, let us focus on the behavior of the energies. Since the dependence of $u$ on time diminishes for large time, cf. Cor. \ref{cor:BL:ut}, the convergence of the energies should follow from the uniqueness of $p$-harmonic functions, and indeed we have
\begin{lemma}\label{lem:BL:convergenceGrad}
Let $u^{*}$ and $u$ be the solutions of \eqref{eq:BL:stationary} and \eqref{eq:BL:telegraph}, respectively, then
\[
\EuScript{E}(u)\xrightarrow{t\to \infty}\EuScript{E}(u^*).
\]
\end{lemma}
\begin{proof}
Since $u^*$ is the unique minimizer of $\EuScript{E}(.)$, it suffices to show that
\begin{equation} \label{eq:BL:AlvarezGoal}
\limsup_{t\to\infty} \frac1p\int_\Omega|\nabla u|^p\d x \leq \frac1p\int_\Omega|\nabla v|^p\d x
\end{equation}
for all $v$ such that $v-g\in W^{1,p}_0(\Omega)$. For that purpose we will basically follow the proof of Theorem 2.1 from \cite{BL:Alvarez}:
Let $v\in W^{1,p}(\Omega)$ with $v-g\in W^{1,p}_0(\Omega)$ be given. Consider the following auxiliary function
\[
\varphi(t)\coloneqq \frac12\int_\Omega \left(u(t,x)-v(x)\right)^2\d x.
\]
Then $\varphi\in W^{2,1}(0,\infty)$, cf. Remark \ref{rem:BL:reg}, and, as $u$ fulfills \eqref{eq:BL:telegraph}, we have
\begin{align*}
\varphi''(t)+a\,\varphi'(t)&=\int_\Omega (u-v)\,\Delta_p u +u_t^2\d x\\
&= -\int_\Omega |\nabla u|^{p-2} \skalarProd{\nabla u}{\nabla u - \nabla v}+u_t^2\d x \\
&\overset{\eqref{eq:BL:VectIne2}}{\leq}\int_\Omega \frac1p|\nabla v|^p-\frac1p|\nabla u|^p+u_t^2\d x\\
&\leq \int_\Omega \frac1p|\nabla v|^p+\frac32u_t^2\d x -E(T)
\end{align*}
for all $t\in[0,T]$, where we have used that $E(.)$ is non-increasing. A multiplication of both sides with $e^{at}$, followed by an integration yields
\[
\varphi'(t)\le e^{-at} \varphi'(0) + \frac{1}{a}(1- e^{-at}) \left(\int_\Omega \frac1p|\nabla v|^p\d x- E(T)\right)+ \frac{3}{2} \int_0^t\int_\Omega e^{-a(t-\tau)}{u_t^2(\tau,x)}\d x \d\tau.
\]
Integrating once more and using the fact that $\displaystyle E(T)\ge \frac1p\int_\Omega|\nabla u|^p\d x$ implies
\begin{equation}\label{eq:BL:Alvarez}
\begin{split}
\varphi(T)+\frac{1}{a^2}\left(aT-1 +e^{-aT}\right)\frac1p\int_\Omega|\nabla u|^p\d x \leq \frac{1}{a^2}&\left(aT-1+e^{-aT}\right)\frac1p\int_\Omega|\nabla v|^p\d x\ +\\
& + \varphi(0)+\frac1a(1-e^{-aT})\,\varphi'(0)+h(T)
\end{split}
\end{equation}
where we have set
\begin{align*}
h(T)&\coloneqq\frac{3}{2}\int_0^T\int_0^t\int_\Omega e^{-a(t-\tau)}{u_t^2(\tau,x)}\d x \d \tau \d t\\
&=\frac{3}{2a}\int_0^T\int_\Omega {u_t^2(\tau,x)}(1-e^{-a(T-\tau)})\d x\d \tau.
\end{align*}
Due to Remark \ref{rem:BL:reg} the term $h(T)$ is bounded. Hence, dividing \eqref{eq:BL:Alvarez} by $\displaystyle \frac{1}{a^2}\left(aT-1 +e^{-aT}\right)$ and letting $T\to\infty$ gives the desired estimate \eqref{eq:BL:AlvarezGoal}.
\end{proof}
On account of the convergence of the energies, we get the $W^{1,p}$ convergence of $u$ to $u^*$:
\begin{corollary}\label{cor:BL:u}
Let $u$ and $u^*$ be as before, then we have $$\|{u-u^*}\|_{W^{1,p}_0(\Omega)}\xrightarrow{t\to\infty}0.$$
\end{corollary}
\begin{proof}
By Poincar\'e's inequality
\[
\int_\Omega |u-u^*|^p \d x \leq \tilde{c}(p,\Omega) \int_\Omega |\nabla u - \nabla u^*|^p \d x.
\]
Furthermore, by \eqref{eq:BL:VectIne2} we have
\begin{equation}\label{eq:BL:estimate}
c(p) \int_\Omega |\nabla u - \nabla u^*|^p \d x\leq \int_\Omega|\nabla u|^p -|\nabla u^*|^p \d x,
\end{equation}
where we used the fact, that $u^*$ is a $p$-harmonic function which gave us
\begin{equation*}
\int_\Omega |\nabla u^*|^{p-2}\skalarProd{\nabla u^*}{\nabla u - \nabla u^*} = 0.
\end{equation*}
The claim follows then using Lemma \ref{lem:BL:convergenceGrad}.
\end{proof}
\section{Proof of the decay rate}
We are now prepared to prove our main result:
\begin{proof}[Proof of Theorem \ref{thm:BL:main}]
A multiplication of
$$
u_{tt}+a\, u_t =\Delta_p u - \Delta_p u^*
$$
with $w=w(t,x)\coloneqq u(t,x)-u^*(x)$, and integrating by parts (note that $\left.w\right|_{\partial\Omega}=0$) yields
\begin{align*}
\int_\Omega w_{tt}\,w+a\,w_t\,w \d x &= - \int_\Omega\skalarProd{|\nabla u|^{p-2}\nabla u -|\nabla u^*|^{p-2}\nabla u^* }{\nabla u - \nabla u^*}\d x\\
& \overset{\eqref{eq:BL:VectIne}}{\leq} -2^{2-p}\int_\Omega |\nabla w|^p \d x.
\end{align*}
Hence, multiplying both sides of the last inequality by $a>0$ and adding ~ $\int_\Omega a\,w_t^2 \d x$ ~ we end up with
\begin{equation}\label{eq:BL:8}
\frac{\d}{\d t}\int_\Omega \frac{a^2}{2}w^2+a\,w\,w_t \d x \leq \int_\Omega a\,w_t^2-2^{2-p}a|\nabla w|^p\d x.
\end{equation}
Recall the definition of our error term
\begin{equation}\tag{\ref{eq:BL:errordef}}
\mathrm{e}(t)\coloneqq \int_\Omega \frac{a^2}{2}w^2+a\,w\,w_t + w_t^2+2\cdot\left(\frac1p|\nabla u|^p-\frac1p|\nabla u^*|^p\right)\d x.
\end{equation}
So, $\mathrm{e}\in W^{1,1}(0,\infty)$ and due to the minimizing properties of ~$u^*=u^*(x)$,~ we have that ~ $\mathrm{e}(t)\geq0$ for all $t>0$. ~ Moreover, relation \eqref{eq:BL:Lemma1} and inequality \eqref{eq:BL:8} show
\begin{align}
\mathrm{e}'(t) &\leq \int_\Omega a\,w_t^2-2^{2-p}a|\nabla w|^p\d x + \frac{\d}{\d t}\int_\Omega u_t^2 + 2\cdot\frac1p|\nabla u|^p \d x \notag\\
&= -a \int_\Omega w_t^2+2^{2-p}|\nabla w|^p \d x\label{eq:BL:abl} \\
&\leq 0.\notag
\end{align}
Furthermore, with \eqref{eq:BL:Lemma1} we have:
\begin{align}
\left|\frac{\mathrm{e}'(t)}{a}\right|&=\left|\int_\Omega a\,w\,w_t+w\,w_{tt}-w_t^2\d x\right| \le \left|\int_\Omega (\Delta_p u - \Delta_p u^*)w \d x\right|+\|w_t\|_{L^2(\Omega)}^2\notag\\\notag
&\le \int_\Omega |\nabla u|^{p-1}|\nabla w|\d x + \int_\Omega |\nabla u^*|^{p-1}|\nabla w|\d x+\|w_t\|_{L^2(\Omega)}^2\\\notag
&\le \left(\pnorm{\nabla u}^{p-1}+\pnorm{\nabla u^*}^{p-1}\right)\pnorm{\nabla w}+\|w_t\|_{L^2(\Omega)}^2\\\notag
&\le 2 M^{p-1}\pnorm{\nabla w}+\|w_t\|_{L^2(\Omega)}^2\\\label{eq:BL:derivative}
&\xrightarrow{t\to\infty}0, \quad \text{by Cor. \ref{cor:BL:u} and Cor. \ref{cor:BL:ut}, respectively,}
\end{align}
where in the intermediate steps we have used integration by parts, the Cauchy-Schwarz inequality, the H\"older inequality and the boundedness of the gradients, cf. \eqref{eq:BL:gradientbound}.
Our next goal is to estimate the error in terms of its derivative. In view of \eqref{eq:BL:Rudin} we arrive at
\begin{equation*}
\mathrm{e}(t) \leq \int_\Omega\left(\frac{a^2}{2}+a\right)w^2 + \left(\frac{a}{4}+1\right)w_t^2 \d x + c(p,\Omega,u_0)\cdot\pnorm{\nabla w}\,.
\end{equation*}
Using Lebesgue embedding and Poincar\'e's inequality for the first term we get
\begin{equation*}
\mathrm{e}(t) \leq c_1(p,\Omega,a)\cdot\pnorm{\nabla w}^2 + \left(\frac{a}{4}+1\right)\int_\Omega w_t^2 \d x + c(p,\Omega,u_0)\cdot\pnorm{\nabla w}\,.
\end{equation*}
Furthermore, in \eqref{eq:BL:abl} we already obtained
\[
\int_\Omega w_t^2+2^{2-p}|\nabla w|^p \d x \leq -\frac{\mathrm{e}'(t)}{a}.
\]
All in all, we get
\begin{equation*}
\mathrm{e}(t) \leq c_2(p,\Omega,a)\cdot \left(-\frac{\mathrm{e}'(t)}{a}\right)^{\frac{2}{p}} + \left(\frac{a}{4}+1\right)\cdot \left(-\frac{\mathrm{e}'(t)}{a}\right) + c_3(p,\Omega,u_0)\cdot\left(-\frac{\mathrm{e}'(t)}{a}\right)^{\frac{1}{p}}.
\end{equation*}
Since $$ -\frac{\mathrm{e}'(t)}{a} \xrightarrow{t\to\infty} 0,$$ cf. \eqref{eq:BL:derivative}, the error term $\mathrm{e}(t)\geq0$ satisfies for large time a differential inequality of type
\begin{align}
\mathrm{e}(t) &\leq c_4(p,\Omega,u_0,a)\cdot (-\mathrm{e}'(t))^{\frac1p},\notag\\
\shortintertext{and we may rewrite this}
\mathrm{e}'(t) &\leq - c_5(p,\Omega,u_0,a)\cdot \mathrm{e}(t)^p,\notag\\
\shortintertext{respectively, so by Lemma 1.6 from \cite{BL:HZ} we obtain}
\mathrm{e}(t)&\leq c_6(p,\Omega,u_0,a)\cdot t^{-\frac{1}{p-1}}.
\label{eq:BL:decayRate1}
\end{align}
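For the reader's convenience, the rate in \eqref{eq:BL:decayRate1} can also be seen directly by separation of variables (a sketch, assuming $\mathrm{e}(t)>0$ on the relevant interval $(t_0,\infty)$): from $\mathrm{e}'(t)\leq -c_5\,\mathrm{e}(t)^p$ we get
\[
\frac{\d}{\d t}\,\mathrm{e}(t)^{1-p} = (1-p)\,\mathrm{e}(t)^{-p}\,\mathrm{e}'(t) \geq (p-1)\,c_5,
\]
and an integration over $(t_0,t)$ together with $\mathrm{e}(t_0)^{1-p}>0$ yields
\[
\mathrm{e}(t)^{1-p} \geq (p-1)\,c_5\,(t-t_0), \qquad\text{i.e.}\qquad \mathrm{e}(t)\leq \big((p-1)\,c_5\,(t-t_0)\big)^{-\frac{1}{p-1}}.
\]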
By \eqref{eq:BL:decayRate1}, \eqref{eq:BL:estimate} and the Poincar\'e inequality we finally arrive at
\begin{equation*}
\|u-u^*\|_{W^{1,p}_0(\Omega)}^p \leq c_7(p,\Omega,u_0,a) \cdot t^{-\frac{1}{p-1}}\,.\qedhere
\end{equation*}
\end{proof}
\subsection{Enhancement of the decay rate for \boldmath $p=2$}\label{sec:BL:improved}
A crucial ingredient in our proof of the decay rate was inequality \eqref{eq:BL:Rudin} which we applied to estimate the difference of the energies. In fact, for $p=2$ this relation can be improved to the \textit{equality}
\begin{equation*}
\int_\Omega |\nabla u|^2 -|\nabla u^*|^2 \d x = \int_\Omega |\nabla u - \nabla u^*|^2 \d x
\end{equation*}
where we used the harmonicity of $u^*$. Hence, we obtain for the error term
\begin{align}
\mathrm{e}(t) & \leq \int_\Omega \left(\frac{a^2}{2}+a\varepsilon\right)w^2 + \left(\frac{a}{4\varepsilon}+1\right)w_t^2+|\nabla w|^2\d x\notag\\
&\le \int_\Omega\left(\left(\frac{a^2}{2}+a\,\varepsilon\right)\tilde{c}(\Omega)+1\right)|\nabla w|^2 +\left(\frac{a}{4\varepsilon}+1\right)w_t^2\d x\notag\\
& = c(a,\Omega) \int_\Omega w_t^2+|\nabla w|^2 \d x\le c(a,\Omega)\left(-\frac{\mathrm{e}'(t)}{a}\right)\label{eq:BL:p2}
\end{align}
where in the intermediate steps we used the Poincar\'e inequality, and $\varepsilon>0$ was chosen in such a way that the prefactors coincide. Relation \eqref{eq:BL:p2} may be rewritten as
\[
\mathrm{e}'(t)\le -\frac{a}{c(a,\Omega)}\,\mathrm{e}(t) \quad \text{for all $t>0$},
\]
so, by Gronwall's inequality, the error term fulfills
\[
\mathrm{e}(t) \le c\cdot \exp\left(-\frac{a}{c(a,\Omega)}\ t\right)
\]
and for the decay rate we arrive at
\[
\|u-u^*\|^2_{W^{1,2}_0(\Omega)}\le C \cdot \exp\left(-\frac{a}{c(a,\Omega)}\ t\right) \quad \text{for all $t>0$},
\]
a well known result, cf. e.g. Theorem 2.1 a) in \cite{BL:HZ}.
\vspace{2em}
\begin{acknowledgement}
The authors are grateful to the Hausdorff Research Institute for Mathematics (Bonn) for support
and hospitality during the trimester program \emph{Evolution of Interfaces}, where work on
this article was undertaken. Moreover, the authors would
like to thank John Andersonn for helpful suggestions and discussions.
\end{acknowledgement}
\subsection{Primer on CSS Processing}
\label{subsec:primer}
In our system, the terminal transmits a linear upchirp signal with bandwidth $BW$ to the MAV carrying backscatter tags. A tag backscatters the signal with a frequency shift $f_0$ to prevent interference from the excitation signal; multiple tags use different frequency shifts. Marvel uses a chirp signal that is compatible with the LoRa protocol. We adopt the configuration options provided by the Semtech LoRa chipset~\cite{sx1276}, which defines a fixed set of options for each parameter, {\em e.g.}, $SF \in \{6, 7, 8, 9, 10, 11, 12\}$, together with recommendations for using these parameters. The chirp duration $T$ depends on the spreading factor ($SF$) and bandwidth~\cite{liando2019known, note2015loratm}, {\em i.e.}, $T = 2^{SF}/BW$. The most prominent recommendation is to use $SF$ settings of $7$ to $12$ and $BW$ of $125$, $250$, or $500$ kHz. These recommendations ensure an acceptable tradeoff between transmission distance and data rate.
In our context, to achieve state estimation in a long-range/through-wall setting, the CSS signal needs good decoding capability. This capability is proportional to the product of the signal duration $T$ and the bandwidth $BW$. As $T\times BW = 2^{SF}$, we choose $SF = 12$. Meanwhile, to improve the range resolution of the backscatter-based pose sensing (\cref{subsec:pose}), we need the signal bandwidth to be as large as possible. Thus, we set $BW = 500$ kHz. At this configuration, the chirp duration is $8.192$ ms. Such a short chirp duration, which is within the channel coherence time, is required by the channel phase extraction (\cref{subsec:phase}).
To decode the chirp, the receiver first multiplies the received signal with a synthesized downchirp whose frequency varies linearly from $BW/2 + f_0$ to $-BW/2 + f_0$. Then, it takes a fast Fourier transform (FFT) of this product (Fig.~\ref{fig:overview}). This operation sums the energy across all the frequencies of the chirp, producing a {\em peak} at an FFT bin.
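For illustration, below is a minimal numerical sketch of this dechirp-and-FFT step with the settings from the text; sampling at rate $BW$ (one sample per chip) and omitting the tag shift $f_0$ are our simplifying assumptions.
\begin{verbatim}
# Minimal numpy sketch of the dechirp-and-FFT step (SF = 12,
# BW = 500 kHz). Sampling at rate BW and omitting the tag shift f0
# are simplifying assumptions.
import numpy as np

SF, BW = 12, 500e3
N = 2 ** SF                  # samples per chirp
T = N / BW                   # chirp duration: 8.192 ms
t = np.arange(N) / BW

# Downchirp: frequency falls linearly from +BW/2 to -BW/2.
downchirp = np.exp(-1j * 2 * np.pi * (-BW / 2 * t + BW / (2 * T) * t**2))

def decode(rx_chirp):
    # Dechirp, then FFT; the peak bin encodes the combined
    # timing-offset and Doppler frequency shift.
    spectrum = np.fft.fft(rx_chirp * downchirp)
    return int(np.argmax(np.abs(spectrum)))
\end{verbatim}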
\subsection{Below-Noise Channel Phase Extraction}
\label{subsec:phase}
Since MAVs are expected to carry out emergency tasks like fire rescue, the system requires localizability with a single anchor (its terminal) and without prior knowledge of the workspace, so that it is instantly deployable and operable wherever required. The position of a target relative to a single anchor can be represented in polar coordinates by the angle $\phi$ and the range $r$ of the target to the anchor. Both parameters can be inferred from the channel phase of the signal.
The channel phase extraction for chirp signals has two steps, as shown in Fig.~\ref{fig:phase_workflow}: we first combat the Doppler effect to estimate the beginning of the chirp, and then we extract the channel phase leveraging the linearity of the chirp frequencies.
To estimate the beginning of the chirp, we leverage a key property of the chirp signal: a time delay of the chirp translates into a frequency shift. Ideally, decoding the original upchirp with a downchirp produces a peak in the first FFT bin (see Fig.~\ref{fig:overview}). When a tag is separated from the terminal, the backscatter signal handler receives the signal with a timing offset equal to the signal's round trip. The peak then appears in a shifted bin $f_s$. If we move the beginning of the received chirp $f_s$ samples closer to its real beginning and repeat the decoding operation, a new peak appears at the first FFT bin again, and the symbol at this instant is the beginning of the transmission. However, under the MAV's mobility, the signal additionally experiences a Doppler frequency shift. The shifted bin $f_s$ is then a mixed result of the timing offset and the Doppler effect, and the above operation can no longer recover the beginning of the chirp.
Our solution leverages the kinetics and the structure of a MAV. We attach four backscatter tags to the landing gear of the MAV. As shown in Fig.~\ref{fig:tags}, the Doppler frequency shift of a tag, {\em e.g.}, tag $T_1$, is a combinatorial result of translation and rotation. The shift $\Delta f(t)$ can be expressed as,
\begin{equation}
\Delta f(t) = \frac{f_c}{c} \mathbf{u}_p(t)\cdot\left[\mathbf{v}_t(t) + \mathbf{v}_r(t)\right] = \Delta f_t(t) + \Delta f_r(t),
\label{eqn:doppler}
\end{equation}
where $\mathbf{u}_p(t)$ is the unit vector that represents the direction from the MAV to the terminal, $f_c$ the carrier frequency, and $c$ the speed of RF signals in the medium. $\mathbf{v}_t(t)$ and $\mathbf{v}_r(t)$ are the translational and rotational velocities, and $\Delta f_t(t)$ and $\Delta f_r(t)$ correspond to the translational and rotational shifts. To estimate the beginning of the chirp, we need to isolate the frequency shift caused by the timing offset by eliminating the Doppler shift.
{\bf Eliminating the effect of Doppler shift}.
We first eliminate the effect of the rotational shift based on the key observation that any pair of opposing tags on the landing gear, {\em e.g.}, tags $T_1$ and $T_1^\prime$ in Fig.~\ref{fig:tags}, always has rotational velocities with {\em the same magnitude but opposite directions}, while all tags share {\em the same translational velocity}. Thus, averaging the shifted peaks of two opposing tags eliminates the rotational shift, as shown in Fig.~\ref{fig:peaks}. Specifically, decoding the backscattered signals from a pair of opposing tags, we obtain the FFT bin indices, $\hat{B}_i$ and $\hat{B}_i^\prime$,
\begin{equation}
\hat{B}_i = f_T^i + \Delta f_t^i + \Delta f_r^i, \; \hat{B}_i^\prime = f_T^{i^\prime} + \Delta f_t^i - \Delta f_r^i,
\label{eqn:shift}
\end{equation}
where $f_T^i$ and $f_T^{i^\prime}$ are the frequency shifts translated by the timing offsets, and $\Delta f_t^i$ and $\Delta f_r^i$ the translational shift and the rotational shift of tag $i$. Note that $f_T^i \approx f_T^{i^\prime}$ since their maximal difference is the shift translated from the extra travel time over the distance between a pair of opposing tags, {\em i.e.}, the diameter $D$ of the MAV, which is negligible as $D$ ($66$ cm for the DJI M100) is tiny compared with the speed of RF signal propagation. Thus, averaging them, {\em i.e.}, $1/2(\hat{B}_i + \hat{B}_i^\prime) = f_T^i + \Delta f_t^i$, eliminates the rotational shift.
Since the two pairs of tags on the MAV are structurally symmetric, when we perform the above operation on each pair, the results are expected to be identical. However, they exhibit a slight difference, as shown in Fig.~\ref{fig:peaks}. This is because the micro-controllers of the tags are not synchronized with the terminal, which introduces an additional carrier frequency offset (CFO) for each tag; this offset is a constant. In our approach, $\Delta f_{\text{CFO}}$ is the difference of CFOs upon averaging the two pairs of tags, which is still a constant. We simply apply this correction to the rest of the transmission to estimate the correct chirp phase.
Now we eliminate the translational shift $\Delta f_t^i$ to isolate the frequency shift $f_T^i$ translated from the timing offset. Then, we can obtain the signal at the real beginning of the transmission by moving the beginning of the received chirp by $f_T^i$ samples. $\Delta f_t^i$ can be tracked using the accelerations measured by the onboard IMU. Initially, before the MAV takes off, there is no motion, so $1/2(\hat{B}_i + \hat{B}_i^\prime)$ is already the frequency shift $f_T^i$. Thus, the channel phase can be obtained according to the workflow (Fig.~\ref{fig:phase_workflow}). Then, we specify $\mathbf{u}_p(t)$ in Eqn.~\eqref{eqn:doppler} by our angle estimation algorithm in \cref{subsec:pose}. When the MAV takes off, the accelerations measured by the IMU track the translational velocity $\mathbf{v}_t(t)$. Thus, $\Delta f_t^i = f_c/c\cdot\mathbf{u}_p(t)\cdot\mathbf{v}_t(t)$ and $f_T^i = 1/2(\hat{B}_i + \hat{B}_i^\prime) - \Delta f_t^i$. Note that integrating the accelerations to obtain the velocity suffers from temporal drift. The super-accuracy algorithm in \cref{sec:pose} corrects the drift and feeds back to the flight control system.
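The following minimal C++ sketch summarizes this step under our assumptions: the rotational shift cancels when the peak bins of an opposing pair are averaged, and the translational shift predicted from the IMU-tracked velocity is subtracted. The vector type, function name, and arguments are illustrative.
\begin{verbatim}
#include <cmath>

struct Vec3 { double x, y, z; };
double dot(const Vec3& a, const Vec3& b) {
  return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Recovers f_T, the frequency shift due to the timing offset, per
// Eqn. (shift): average the peak bins of an opposing tag pair (the
// rotational shifts cancel), then subtract the predicted translational
// shift f_c/c * u_p . v_t obtained from the IMU-tracked velocity.
double timing_shift(double B_i, double B_i_prime,
                    const Vec3& u_p,   // unit vector, MAV -> terminal
                    const Vec3& v_t,   // translational velocity (IMU)
                    double f_c,        // carrier frequency (Hz)
                    double c = 3e8) {  // RF propagation speed (m/s)
  double avg = 0.5 * (B_i + B_i_prime);
  double df_t = f_c / c * dot(u_p, v_t);
  return avg - df_t;
}
\end{verbatim}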
{\bf Extracting channel phase}.
At this stage, we have corrected the signal to the symbol at the beginning of the transmission. Now we compute the channel phases of all frequencies in the chirp by the method proposed in~\cite{nandakumar20183d}. We have
\begin{equation}
\hat{\theta}_\Sigma = \theta_1 + \theta_2 + \cdots + \theta_N = \theta_1 + \theta_1 \frac{f_2}{f_1} + \cdots + \theta_1 \frac{f_N}{f_1},
\label{eqn:phase_sum}
\end{equation}
where $f_1, \cdots, f_N$ are explicitly defined when generating the chirp signal. Solving the above equation yields the channel phases of all frequencies in the chirp.
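Concretely, since Eqn.~\eqref{eqn:phase_sum} expresses every term through $\theta_1$, the solution follows in closed form:
\begin{equation*}
\theta_1 = \frac{\hat{\theta}_\Sigma}{\sum_{k=1}^{N} f_k/f_1} = \frac{f_1\,\hat{\theta}_\Sigma}{\sum_{k=1}^{N} f_k}, \qquad \theta_k = \theta_1\,\frac{f_k}{f_1}, \; k = 2, \cdots, N.
\end{equation*}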
Notice that this method requires a short chirp duration, within the channel coherence time. As mentioned in \cref{subsec:primer}, we choose the parameters of CSS signals that conform to the LoRa standard as $SF = 12$ and $BW = 500$ KHz. According to the signal duration $T = 2^{SF}/BW$, the chirp duration is about $8$ ms, which is within the channel coherence time. Moreover, $SF = 12$ ensures the best decoding capability of CSS signals, and $BW = 500$ KHz benefits the range estimation of the pose sensing in the next subsection.
\subsection{Below-Noise Pose Sensing}
\label{subsec:pose}
{\bf Range estimation}.
Assume that the terminal is separated from a tag on the MAV by a distance of $r$. A linear chirp signal with $N$ frequencies transmitted by the terminal propagates a total distance of $2r$ for the round trip to and from the tag. The wireless channel of such a signal is $\mathbf{H} = \left[\gamma_1 e^{-j2\pi f_1\frac{2r}{c}}, \gamma_2 e^{-j2\pi f_2\frac{2r}{c}}, \cdots, \gamma_N e^{-j2\pi f_N\frac{2r}{c}}\right]$, where $\gamma_i$ is the attenuation corresponding to frequency $f_i$ in the chirp, $i = \{1, \cdots, N\}$. In the absence of multipath, we can use the obtained channel phases of the backscatter signal to estimate the range $r$. However, due to multipath, the obtained phases are actually a mixture of the phases of the direct-path signal and the multipath-reflected signals.
To combat multipath while conforming to the LoRa protocol, we dynamically send multiple chirps in the channels of the $900$ MHz band and combine the phase information across all these channels to emulate a wideband transmission. At a high level, a wideband signal can be used to disambiguate the multipath. There are $13$ channels, each separated from its neighbors by $2.16$ MHz. The four tags on the MAV are configured with different frequency shifts to prevent interference from the excitation signal. Hence, the terminal can transmit excitation signals in $2$ channels and receive backscatter signals across $8$ channels. Combining them, the terminal sends the phases at all the channels to the MAV through LoRa. The MAV then computes the range estimate by applying an inverse FFT to the phases to obtain the time-domain multipath profile. We use a fixed energy threshold over this profile to identify the closest (most direct) path from the MAV.
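As a minimal sketch of this step, the C++ snippet below treats the phases measured across the combined channels as samples of a wideband channel, applies an inverse DFT to obtain a coarse multipath profile, and returns the delay of the earliest bin exceeding a fixed threshold. The uniform frequency spacing, the threshold, and all names are simplifying assumptions.
\begin{verbatim}
#include <cmath>
#include <complex>
#include <vector>

// phases[i]: measured channel phase at frequency f0 + i*df (radians).
// Returns the round-trip delay (s) of the earliest path above `threshold`,
// or -1 if none; the range estimate is then r = c * delay / 2.
double direct_path_delay(const std::vector<double>& phases,
                         double df,           // channel spacing (Hz)
                         double threshold) {  // fixed energy threshold
  const double PI = std::acos(-1.0);
  const int N = static_cast<int>(phases.size());
  for (int t = 0; t < N; ++t) {      // delay bin t corresponds to t/(N*df) s
    std::complex<double> acc;
    for (int i = 0; i < N; ++i)      // inverse DFT with unit amplitudes
      acc += std::polar(1.0, phases[i] + 2.0 * PI * i * t / N);
    if (std::abs(acc) / N > threshold)
      return t / (N * df);           // earliest (most direct) path
  }
  return -1.0;
}
\end{verbatim}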
{\bf Angle estimation}.
The angle of incident signals $\phi$ is also encoded in the phases of the signals. The backscattered chirp signal received by a linear array with $M$ antennas from $K$ propagation paths has the measurement matrix $\mathbf{X}$,
\begin{equation}
\begin{aligned}
& \mathbf{X} = \left[\mathbf{x}_1 \dotsc \mathbf{x}_N \right] = \mathbf{S}\left[\mathbf{F}_1 \dotsc \mathbf{F}_N \right], \\
& \mathbf{S}\mathbf{F}_i = \left[ \mathbf{s}(\phi_1) \dotsc \mathbf{s}(\phi_K) \right]\left[ \gamma_{i1} \dotsc \gamma_{iK} \right]^\top, i = \{1, \cdots, N\}, \\
& \mathbf{s}(\phi_k) = \left[ 1 \; e^{-j\eta \sin(\phi_k)} \dotsc e^{-j(M-1)\eta \sin(\phi_k)} \right]^\top,
\end{aligned}
\label{eqn:angle}
\end{equation}
where $k = \{1, \cdots, K\}$, $\mathbf{F}_i$ denotes the attenuation factors of the $K$ paths at frequency $i$ in the chirp, and $\gamma_{ij}$ the attenuation factor of path $j$ at frequency $i$. $\mathbf{S}$ is the steering matrix, where $\mathbf{s}(\phi_k)$ denotes the steering vector of path $k$, and the constant $\eta = 2\pi d\frac{f_c}{c}$ where $d$ is the antenna spacing. $\phi_k$ is the angle of interest. We can see that the angle appears only in the steering matrix, contributing to the phases of the complex elements of matrix $\mathbf{X}$.
\begin{figure}
\centering
\begin{minipage}[b]{0.23\textwidth}\centering
\center
\includegraphics[width=1\textwidth]{phase_workflow.pdf}\vspace{-0.3cm}
\caption{Phase extraction workflow.} \label{fig:phase_workflow}
\end{minipage}
\hspace{0.1cm}
\begin{minipage}[b]{0.23\textwidth}\centering
\center
\includegraphics[width=1\textwidth]{tags3.pdf}\vspace{-0.3cm}
\caption{The motion of a MAV consists of translation and rotation.} \label{fig:tags}
\end{minipage}
\vspace{-4mm}
\end{figure}
Thus, even without the attenuation information, we can use the obtained phases to construct a {\em virtual measurement matrix} whose complex elements all have unit attenuation and carry the phases of the frequencies in the chirp, enabling angle estimation. The virtual measurement matrix $\hat{\mathbf{X}}$ can be written as
\begin{equation}
\hat{\mathbf{X}} =
\begin{bmatrix}
e^{j\theta_{11}} & e^{j\theta_{12}} & \cdots & e^{j\theta_{1N}} \\
e^{j\theta_{21}} & e^{j\theta_{22}} & \cdots & e^{j\theta_{2N}} \\
\vdots & \vdots & \ddots & \vdots \\
e^{j\theta_{M1}} & e^{j\theta_{M2}} & \cdots & e^{j\theta_{MN}}
\end{bmatrix},
\end{equation}
where $\theta_{ij}$ denotes the phase of antenna $i$ at frequency $j$. Applying $\hat{\mathbf{X}}$ to the super-resolution angle estimation technique~\cite{kotaru2015spotfi}, we obtain the direct-path angle of a tag to the terminal. The four tags provide four angles for every chirp. We compute the harmonic mean of the four angles as the final result.
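Constructing $\hat{\mathbf{X}}$ is straightforward; the following minimal C++ sketch, with an illustrative matrix type and function name, fills every element with a unit-magnitude complex exponential carrying the measured phase.
\begin{verbatim}
#include <complex>
#include <vector>

using Matrix = std::vector<std::vector<std::complex<double>>>;

// theta[i][j]: extracted phase of antenna i at chirp frequency j
// (M antennas, N frequencies). Every element of the virtual measurement
// matrix has unit attenuation and carries only the measured phase.
Matrix virtual_measurement_matrix(
    const std::vector<std::vector<double>>& theta) {
  const std::size_t M = theta.size();
  const std::size_t N = M ? theta[0].size() : 0;
  Matrix X(M, std::vector<std::complex<double>>(N));
  for (std::size_t i = 0; i < M; ++i)
    for (std::size_t j = 0; j < N; ++j)
      X[i][j] = std::polar(1.0, theta[i][j]);
  return X;
}
\end{verbatim}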
{\bf Rotation estimation}.
The real challenge in determining a MAV's orientation is anchoring the yaw, {\em a.k.a.}, the heading. The orientation can be represented by Euler angles: roll $\alpha$, pitch $\beta$, and yaw $\psi$ for rotations around the $x$, $y$, and $z$ axes (Fig.~\ref{fig:tags}). It can be computed by integrating the 3D angular velocity readings from the onboard IMU. The results, however, suffer from temporal drifts due to the inherent noise of the IMU. The drifts of roll and pitch tilt the vehicle and cause unintended translations. They can be corrected by the position, which has been obtained from the above range and angle estimates, as it reveals these unintended translations to the MAV. However, the drift of heading causes no translation but rotation. We need drift-free rotation estimates to fix the heading.
Our idea is that the rotational shift is solely determined by the rotation, so we can use it to recover the rotation. According to Eqn.~\eqref{eqn:shift}, subtracting the indices of the peaks from two opposing tags, $\hat{B}_i$ and $\hat{B}_i^\prime$, gives the rotational frequency shift,
\begin{equation}
\Delta \hat{B}_i = \hat{B}_i - \hat{B}_i^\prime = f_T^i - f_T^{i^\prime} + 2\times \Delta f_r^i \approx 2\times \Delta f_r^i.
\label{eqn:rotational_shift}
\end{equation}
Now we model the rotational shift. We denote the angle of the MAV to its terminal as $\phi$ and the MAV's rotation as $\psi$ (refer to Fig.~\ref{fig:tags}), then $\mathbf{u}_p = \left[\cos\phi \; \sin\phi \right]^\top, \; \mathbf{v}_r = \frac{D}{2}\omega\left[ \cos(\psi + \frac{\pi}{2}) \; \sin(\psi + \frac{\pi}{2}) \right]^\top$, where $\omega$ is the angular velocity during the rotation. The rotational shift can be expressed as
\begin{equation}
\Delta f_r^i = \frac{f_c}{c} \mathbf{u}_p\cdot \mathbf{v}_r = \frac{f_cD}{2c} \omega \times \sin\left(\phi - \psi\right).
\label{eqn:relative_shift}
\end{equation}
The terminal computes $\Delta f_r^i$ by Eqn.~\eqref{eqn:rotational_shift} and sends it to the MAV. $\phi$ can be obtained by the angle estimation algorithm. The gyroscope in the IMU measures the angular velocity $\omega$. The remaining parameters are known constants. Thus, the rotation $\psi$ can be solved from Eqn.~\eqref{eqn:relative_shift}.
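A minimal C++ sketch of this inversion, under our assumptions, is shown below. Note that the arcsine leaves a two-fold branch ambiguity; resolving it by tracking $\psi$ continuously with the gyroscope is an assumption of this sketch, and the function name and arguments are illustrative.
\begin{verbatim}
#include <cmath>

// Inverts Eqn. (relative_shift): df_r = f_c*D/(2c) * omega * sin(phi - psi).
// df_r: rotational shift from Eqn. (rotational_shift); phi: angle to the
// terminal; omega: angular velocity (gyroscope); D: MAV diameter (m);
// f_c: carrier frequency (Hz); c: RF propagation speed (m/s).
double solve_yaw(double df_r, double phi, double omega,
                 double D, double f_c, double c = 3e8) {
  double s = 2.0 * c * df_r / (f_c * D * omega);   // sin(phi - psi)
  if (s > 1.0) s = 1.0;                            // guard numerical noise
  if (s < -1.0) s = -1.0;
  return phi - std::asin(s);                       // one branch of psi
}
\end{verbatim}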
\begin{figure}
\centering
\begin{minipage}[b]{0.22\textwidth}\centering
\center
\includegraphics[width=1\textwidth]{peaks.pdf}\vspace{-0.3cm}
\caption{Rotational shift elimination.} \label{fig:peaks}
\end{minipage}
\hspace{0.1cm}
\begin{minipage}[b]{0.24\textwidth}\centering
\center
\includegraphics[width=1\textwidth]{extrinsic_big_font.pdf}\vspace{-0.3cm}
\caption{The reference frames of two sensing components in our system.} \label{fig:extrinsic}
\end{minipage}
\end{figure}
\subsection{Implementation and Evaluation Methodology}
The terminal is built from two colocated NI USRP-2943 nodes, each with a UBX160 daughterboard. Their four channels are configured as a data handler with one antenna and a backscatter signal handler with three antennas. The USRPs are driven by a host computer. We configure the USRPs to work in the $900$ MHz band. Specifically, the data handler sends $500$ KHz bandwidth signals at a $902$ MHz center frequency, which lies in the US $902$--$928$ MHz ISM band. The backscatter signal handler receives backscattered signals for the channel phase extraction (\cref{subsec:phase}). The three antennas for the backscatter signal handler are mounted on an acrylic pole with a spacing of $16$ cm. To ease the prototype implementation, we use a Semtech SX1276MB1LAS long-range transceiver driven by the host computer to send phases to another LoRa transceiver on the MAV for pose sensing (\cref{subsec:pose}). The USRP nodes are synchronized using an NI CDA-2990 8-channel clock distribution accessory as an external clock. We run the CSS decoding and the channel phase extraction on the terminal.
\begin{figure}
\centering
\begin{minipage}[b]{0.28\textwidth}\centering
\center
\includegraphics[width=1\textwidth]{platform3_big_font.jpg}\vspace{-0.3cm}
\caption{Experiment platform.} \label{fig:sys}
\end{minipage}
\hspace{0.1cm}
\begin{minipage}[b]{0.18\textwidth}\centering
\center
\includegraphics[width=1\textwidth]{indoor_setup_big_font.pdf}\vspace{-0.3cm}
\caption{Through-wall setup.} \label{fig:wall_setup}
\end{minipage}
\vspace{-4mm}
\end{figure}
The MAV system is built by attaching an Intel NUC, a LORD MicroStrain 3DM-GX4-45 IMU, and an SX1276MB1LAS long-range transceiver to the DJI Matrice 100. In addition, four customized LoRa backscatter tags are attached to the landing gear of the MAV. Each tag uses ADG919 and ADG904 RF switches to enable backscatter communications. The four backscatter tags are controlled by an Altera STEP-MAX10 FPGA, which configures them to shift their frequencies by $1$ MHz relative to each other when backscattering the linear chirps with $500$ KHz bandwidth. We run Marvel on the Intel NUC with a $1.3$ GHz quad-core Core i5 processor, $8$ GB of RAM, and a $120$ GB SSD, running Ubuntu Linux. The backscatter-based pose sensing module and the backscatter-inertial super-accuracy state estimation algorithm are written in C++. We use the Robot Operating System (ROS) as the robotics middleware. The experimental platform is shown in Figure~\ref{fig:sys}. All system models and parameters of our experiments are summarized in Table~\ref{tab:experiment}.
\begin{table}[h]
\footnotesize
\centering
\caption{Parameters of our experiments.}
\begin{tabular}{ | p{3.6cm} | p{4cm} |}
\hline
Component/Configuration & Parameter/Model \\ \hline
MAV platform & DJI Matrice 100 \\ \hline
USRP & NI USRP-2943 \\ \hline
Daughterboard & UBX160 \\ \hline
External clock & NI CDA-2990 \\ \hline
LoRa transceiver & Semtech SX1276MB1LAS \\ \hline
IMU & LORD MicroStrain 3DM-GX4-45 \\ \hline
Onboard computer & Intel NUC \\ \hline
FPGA & Altera STEP-MAX10 \\ \hline
Backscatter switches & ADG919, ADG904 \\ \hline
Center frequency of CSS signals & $902$ MHz \\ \hline
Signal bandwidth & $500$ KHz \\ \hline
Frequency shift of tags & $1$ MHz \\ \hline
\end{tabular}
\label{tab:experiment}
\vspace{-2mm}
\end{table}
We conduct experiments both outdoors and indoors to evaluate the long-range and through-wall settings. The outdoor experiments are conducted in an open field in front of an office building, with no obstacle between the MAV and the terminal. The indoor experiments are conducted in a $12\times 8$ square-meter MAV test site located on the basement level of an office building, as shown in Fig.~\ref{fig:wall_setup}. Multiple rooms are separated by concrete walls and wooden doors, and contain office furniture including tables and computers. To conduct indoor experiments safely, we equip the MAV with DJI Guidance~\cite{djiguidance} to detect obstacles. DJI Guidance is a vision-based navigation aid that can perform hovering and obstacle detection in GPS-denied environments. It takes over control from Marvel and hovers whenever it detects obstacles, {\em e.g.}, walls and pillars, within $2$ meters of the MAV's surroundings.
\subsection{Micro-benchmark Evaluation}
\label{subsec:microbenchmark}
We evaluate the performance of positioning and rotation estimation, respectively. To evaluate the positioning approach, we build a sliding rail driven by the stepper motor ROB-09238~\cite{steppermotor}, which supports motion at a controllable speed, and place the MAV on a plate mounted on this rail. To evaluate the rotation estimation, we place the MAV on a plate mounted on the stepper motor and control the rotating speed. In long-range experiments, we place the terminal at one end of the field and move the MAV away from the terminal in increments of $10$ m. In through-wall experiments, we place the MAV in the test site and move the terminal to different rooms (Fig.~\ref{fig:wall_setup}). There are three concrete walls between the terminal and the MAV at location $5$. At each location, we repeat the experiment multiple times and compute the errors. Notice that the backscatter-based pose sensing produces outliers when we test at position $4$ in Fig.~\ref{fig:wall_setup}. When testing at this position, the doors are open, and the MAV occasionally flies near the door with no obstacle between the terminal and the MAV at that moment. Therefore, we believe the outliers in this case are due to the change of the channel path within the duration of a chirp. Nevertheless, these outliers hardly affect the system performance, as the majority of inliers contribute reliable information to the state estimation, making the optimization, subject to the multi-view constraint, insensitive to these outliers.
{\bf Positioning accuracy}.
We first validate the positioning capability of Marvel at different speeds. We compare Marvel with the state-of-the-art CSS-based localization system, $\mu$locate~\cite{nandakumar20183d}, which operates correctly in semi-stationary scenarios. As shown in Fig.~\ref{fig:comparison}, the accuracies of the two approaches are similar in the stationary case, with a mean error of around $0.8$ m. However, the error of $\mu$locate scales with the speed since its channel phase estimates are distorted by the Doppler frequency shift. The red stars in Fig.~\ref{fig:comparison} denote the best and worst errors over each setting. From the stars, we can see that the worst position error reaches $2.45$ m for $\mu$locate, while Marvel's accuracy remains steady. We also statistically analyze the results and plot the $95\%$ confidence intervals over the bars, which show that the positioning is quite reliable. In the worst case, running $\mu$locate at a speed of $0.3$ m/s, the interval is $1.783 \pm 0.109$ m.
The positioning results in different settings are shown in Fig.~\ref{fig:position_error}. The blue dashed lines denote mean errors, and the red stars denote the best and worst errors for each setting. We also plot the $95\%$ confidence interval for each setting. To demonstrate that our approach is resilient to the Doppler effect under mobility, we move the MAV along the rail at a speed of $3$ m/s, the maximum speed allowed.
\begin{figure}
\centering
\begin{minipage}[b]{0.24\textwidth}\centering
\center
\includegraphics[width=1\textwidth]{comparison_big_font.pdf}\vspace{-0.3cm}
\caption{Positioning vs. speed.} \label{fig:comparison}
\end{minipage}
\hspace{0.1cm}
\begin{minipage}[b]{0.22\textwidth}\centering
\center
\includegraphics[width=1\textwidth]{position2_big_font.pdf}\vspace{-0.3cm}
\caption{Positioning vs. setting.} \label{fig:position_error}
\end{minipage}
\vspace{-4mm}
\end{figure}
The long-range result shows that the error scales with the MAV-terminal distance. The position error is $0.58$ m at a distance of $1$ m, increasing to $0.79$ m at $5$ m and further to $1.44$ m at $20$ m. This is because an angle estimate of limited accuracy maps to a growing uncertainty region for the MAV's position as the distance increases. Our customized backscatter works up to $50$ m, at which the worst-case position error is $2.66$ m and the confidence interval is $1.863 \pm 0.176$ m. Beyond that distance, the power of the received signal is too low to decode, even with the CSS coding.
The through-wall result shows that the accuracies at different locations are similar because the MAV-terminal distance does not vary much. However, the indoor accuracy is worse than that at a distance of $1$ m in open space due to multipath fading. The worst-case error at location $5$, where three walls block the path between the MAV and the terminal, is $1.22$ m, with a confidence interval of $1.216 \pm 0.067$ m. Our terminal is unable to receive the backscatter signal when it passes through more than three walls.
In summary, the position accuracy is limited to the meter level both outdoors and indoors due to the limited signal bandwidth of the $900$ MHz band that we use. Nevertheless, with the aid of the IMU, Marvel achieves decimeter-level accuracy, as shown in \cref{subsec:systemlevel}.
{\bf Rotation estimation accuracy}.
We evaluate the rotation estimation by controlling the stepper motor, whose angular velocity starts at $0.2$ rad/s, increases in steps of $0.05$ rad/s up to $1.5$ rad/s, and then decreases at the same rate back to $0.2$ rad/s. The whole process takes $52$ seconds, as shown in Fig.~\ref{fig:rotation_accuracy}. We repeat the experiment $60$ times and analyze the data. As expected, the result in the through-wall setting is worse ($95\%$ confidence interval $18.8\degree \pm 1.55\degree$, standard deviation $13.3\degree$) than in the long-range setting ($95\%$ confidence interval $9.2\degree \pm 0.83\degree$, standard deviation $6.3\degree$) due to the larger angle estimation error in the presence of multipath. Fig.~\ref{fig:rotation_accuracy} also shows that our rotation estimation algorithm closely tracks the MAV's rotation with varying angular velocities in both settings, providing drift-free results.
\begin{figure}
\centering
\begin{minipage}[b]{0.29\textwidth}\centering
\center
\includegraphics[width=1\textwidth]{microbench_rotation_big_font.pdf}\vspace{-0.3cm}
\caption{Rotation accuracy.} \label{fig:rotation_accuracy}
\end{minipage}
\hspace{0.1cm}
\begin{minipage}[b]{0.17\textwidth}\centering
\center
\includegraphics[width=1\textwidth]{time_accuracy_big_font.pdf}\vspace{-0.3cm}
\caption{Accuracy vs. latency.} \label{fig:computation_time}
\end{minipage}
\vspace{-4mm}
\end{figure}
{\bf Latency}.
Latency is key to real-time operation, a must for any state estimator on aerial vehicles. The closed-form solution (Eqn.~\eqref{eqn:relative_shift}) to the rotation estimation makes its computation efficient. However, the other two submodules of Marvel are time-consuming: the angle estimation requires a parameter search (\cref{subsec:pose}), and the super-accuracy algorithm incorporates a number of states within the optimization framework (\cref{sec:pose}). The computational complexity of the angle estimation is $\mathcal{O}((N_a+ N_t) \times L)$~\cite{kotaru2015spotfi}, where $N_a$ and $N_t$ are the numbers of steps for each path parameter and $L$ is the number of paths. The computational cost depends on the parameter search steps and ranges. The super-accuracy algorithm is solved by the Gauss-Newton algorithm, which is an iterative method with no guaranteed computational complexity in theory; its cost depends on the initialization point, the scale of the problem, and the rate of convergence. The whole system is implemented in multiple threads. Thus, the overall system latency depends on the larger time cost of these two submodules.
On one hand, our tests show that the average latency of computing an angle is $37.55$ ms, so the angle estimation does not hinder real-time processing. On the other hand, the super-accuracy algorithm trades off accuracy against latency: the more states involved, the more accurate the result, but also the higher the latency, because a larger state vector and the corresponding measurements enter the optimization framework.
We tune the number of states in the sliding window from $10$ to $50$ for testing. The result in Fig.~\ref{fig:computation_time} shows that when incorporating $50$ states, the positioning accuracy is $0.554$ m, which is only $7\%$ better than the accuracy with $30$ states. However, the $95\%$ confidence interval of the latency is $146.088 \pm 7.564$ ms, which is $2.5\times$ slower than the latency with $30$ states. Therefore, we set $30$ states for the rest of our experiments, yielding an average latency of $57.07$ ms per update. The update rate can reach about $17$ Hz, which is greater than the data rate ($10$ Hz) of the backscatter-based pose sensing, ensuring real-time processing.
\subsection{System-level State Estimation}
\label{subsec:systemlevel}
\begin{figure}[t!]
\centering
\shortstack{
\includegraphics[width=0.16\textwidth]{circle_traj_s.pdf}\\
{\footnotesize (a) Circular trajectory}
}\quad
\shortstack{
\includegraphics[width=0.15\textwidth]{circle_pos_error_s.pdf}\\
{\footnotesize (b) Position error}
}
\shortstack{
\includegraphics[width=0.15\textwidth]{circle_euler_error_s.pdf}\\
{\footnotesize (c) Orientation error}
}
\caption{Long-range state estimation.}
\label{fig:losflight}
\vspace{-4mm}
\end{figure}
We program the MAV to fly in different trajectories for evaluating the overall performance of Marvel. The ground truth of the flight trajectories is provided by OptiTrack~\cite{optitrack}. The maximum linear velocity reaches $2.53$ m/s in this experiment.
\subsubsection{Manual initialization and extrinsic calibration}
We first conduct experiments by manually setting the extrinsic parameters, using a vernier caliper and a protractor to measure the relative pose between the IMU and the backscatter sensing. The initial state can be measured by OptiTrack.
In long-range experiments, the MAV flew in a circular trajectory. Since the backscattered signal cannot be decoded when the distance exceeds $50$ m, the terminal is placed $20$ m away from the MAV before takeoff to ensure that the MAV cannot exceed the distance limit during the flight. As shown in Fig.~\ref{fig:losflight}, the average error of state estimation is $33.66$ cm for positioning and $4.99\degree$ for orientation estimation. This demonstrates that the super-accuracy algorithm significantly increases the accuracy of pose tracking, enabling accurate state estimation.
In through-wall experiments, for safety reasons, the MAV has to fly in the test site. We placed the terminal at location $5$, and the MAV flew in a square trajectory due to the limited area. As shown in Fig.~\ref{fig:nlosflight}, the average position error over the trajectory is $52.56$ cm and the average orientation error is $6.64\degree$. The accuracy is slightly worse than in the open field due to multipath fading and the more aggressive motions around the corners of the square trajectory.
\begin{figure}[t!]
\centering
\shortstack{
\includegraphics[width=0.16\textwidth]{square_traj_2_s.pdf}\\
{\footnotesize (a) Square trajectory}
}\quad
\shortstack{
\includegraphics[width=0.15\textwidth]{square_pos_error_2_s.pdf}\\
{\footnotesize (b) Position error}
}
\shortstack{
\includegraphics[width=0.15\textwidth]{square_euler_error_s.pdf}\\
{\footnotesize (c) Orientation error}
}
\caption{Through-wall state estimation.}
\label{fig:nlosflight}
\vspace{-4mm}
\end{figure}
\subsubsection{Online initialization and extrinsic calibration}
We conduct experiments in a setting similar to the previous one, except that the online initialization and extrinsic calibration module is active. At the start, we run the initialization procedure: we hold the MAV by hand and move it with sufficient rotation for about $10$ seconds. At this point, the initialization procedure is complete, and the system proceeds to the super-accuracy nonlinear state estimator.
In long-range experiments, we program the MAV to fly a figure-eight trajectory. As shown in Fig.~\ref{fig:initialize}, the average error is $39.18$ cm for positioning and $5.11\degree$ for orientation estimation, similar to the case with manual settings. In through-wall experiments, the setup and the flight trajectory are the same as in the manually configured experiments due to safety concerns in the confined area. The resulting indoor trajectory is nearly identical to that in Fig.~\ref{fig:nlosflight} (the result with the manual solution), with only small differences visible on close inspection. We therefore compare the statistics of positioning and orientation estimation, as shown in Fig.~\ref{fig:initialize_throughwall}. The result shows that the online initialization procedure achieves performance similar to manual initialization. The $95\%$ confidence intervals are $62.23 \pm 2.82$ cm for the positioning error and $7.12\degree \pm 0.18\degree$ for the orientation error, respectively.
In principle, the online solution cannot match the manual solution because it optimizes the initial state and extrinsic parameters from noisy sensor measurements. Accordingly, Fig.~\ref{fig:initialize_throughwall} shows that the online solution is slightly worse than the manual solution, but this minor difference does not affect the overall performance.
\begin{figure}[t!]
\centering
\shortstack{
\includegraphics[width=0.16\textwidth]{2circle_traj_s.pdf}\\
{\footnotesize (a) Eight-shape trajectory}
}\quad
\shortstack{
\includegraphics[width=0.15\textwidth]{2circle_pos_error_s.pdf}\\
{\footnotesize (b) Position error}
}
\shortstack{
\includegraphics[width=0.15\textwidth]{eight_euler_error_s.pdf}\\
{\footnotesize (c) Orientation error}
}
\caption{Long-range state estimation with online initialization and extrinsic estimation.}
\label{fig:initialize}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=3in]{initialization_comparison2.pdf}
\caption{Positioning and orientation estimation accuracies with/without the initialization procedure in the through-wall setting.}
\label{fig:initialize_throughwall}
\vspace{-4mm}
\end{figure}
\subsubsection{Comparison with Other RF-based State Estimators}
\begin{figure}[t!]
\centering
\shortstack{
\includegraphics[width=0.23\textwidth]{different_approach_comparison_outdoor.pdf}\\
{\footnotesize (a) Outdoor performance}
}\quad
\shortstack{
\includegraphics[width=0.23\textwidth]{different_approach_comparison_indoor.pdf}\\
{\footnotesize (b) Indoor performance}
}
\caption{Marvel outperforms the state-of-the-art RF-based state estimators.}
\label{fig:different_approach}
\vspace{-4mm}
\end{figure}
In this experiment, we compare Marvel with the state-of-the-art RF-based MAV state estimation systems, CWISE~\cite{li2016csi} and WINS~\cite{zhang2020wifi_full}. They use WiFi signals to estimate MAV states and are therefore incapable of working in a long-range or through-wall setting. For a fair comparison, we place a WiFi access point (AP) at a moderate range, namely at a distance of $20$ m from the MAV outdoors. In indoor experiments, we place the AP at location $1$, where one wall blocks the vehicle from the AP. Moreover, we also compare with a modified $\mu$locate~\cite{nandakumar20183d} that fuses the position estimates obtained by $\mu$locate with IMU measurements. Due to the inferior accuracies of the other approaches, we still use the state estimates from Marvel to control the vehicle and run the other approaches along the same trajectory, namely a circular trajectory outdoors and a square trajectory in the indoor MAV test site. All experiments are conducted by flying the MAV for $90$ seconds. CWISE and WINS use active radios, while Marvel and $\mu$locate use the more challenging RF backscatters for state estimation. Since CWISE and $\mu$locate cannot address orientation estimation, we take the position error as the evaluation metric.
Fig.~\ref{fig:different_approach} shows that Marvel outperforms all other approaches. In the outdoor scenario, the mean position error of Marvel is $0.235$ m. CWISE's performance is close to WINS's because the multipath is insignificant, but CWISE (mean error $1.073$ m) is still worse than WINS (mean error $0.525$ m), as some signals reflect off the ground and CWISE is very sensitive to multipath. Marvel benefits from LoRa's high sensitivity and resilience to environmental noise, making the phases more faithful to the position. $\mu$locate, in contrast, does not eliminate the Doppler effect of fast MAVs, which increases the positioning error as shown in Fig.~\ref{fig:comparison}. Fusing such erroneous estimates with IMU measurements cannot effectively correct the IMU drift. Therefore, its performance is the worst, with a mean error of $2.591$ m that tends to grow over time.
In the indoor scenario, the mean position error of Marvel is $0.314$ m, slightly worse due to the presence of multipath. $\mu$locate's performance is similar to the outdoor case because one wall of blockage does not affect the decoding capability of LoRa; its mean error is $2.741$ m. WINS is much worse, as expected (mean error $1.056$ m), because the wall's blockage reduces the amplitude of the received WiFi signals, making the channel state information (CSI) reported by the WiFi card less accurate and thus increasing the estimation error. Unfortunately, CWISE completely fails indoors due to multipath: it brings no information to correct the IMU drift, so its position error grows rapidly without bound.
\subsection{Discussion}
Here we first briefly discuss the limitations of our system with respect to mapping and path planning. Then, we discuss the energy consumption of Marvel.
{\bf Limitations.} The CSS signals we use for state estimation cannot observe obstacles due to their large wavelength, since the resolution of environmental sensing depends on the wavelength of the interacting medium~\cite{kotaru2019light}. Obstacle detection and avoidance relate to the mapping and path planning problems, which are also critical to autonomous flight: the system maps the sizes and positions of obstacles and then generates proper trajectories to circumvent them. Optical signals, {\em e.g.}, visible light captured by cameras, with nanometer-scale wavelengths are more effective for detecting obstacles. However, cameras fail in the harsh environments with smoke and fog in our context. Single-chip millimeter wave (mmWave) radar can penetrate airborne obscurants~\cite{lu2020see} and robustly detect obstacles behind smoke and fog.
{\bf Energy consumption.} Marvel uses low-power backscatters to enable robust state estimation in long-range or through-wall settings. However, it does not reduce the energy consumption of our implementation, as the additional payload heavily impacts energy consumption. Specifically, according to the DJI Matrice 100 specifications~\cite{djim100}, the power consumption for hovering is $19$ W per $100$ g. One of our customized backscatter tags weighs $6$ g; Marvel attaches $4$ tags to the landing gear, adding a payload of $24$ g, which additionally consumes $4.56$ W. On the other hand, the power consumption of each backscatter tag is $400$ $\mu$W, so the four tags on the MAV consume $1.6$ mW, whereas the power consumption of commercial WiFi is $2.1$ W~\cite{halperin2010demystifying}. In terms of the power consumed by communications, we indeed save more than $1000\times$. However, the total energy saving is about $2.1$ W, which is less than the additional $4.56$ W consumed by hovering the tags against gravity. Even if we designed integrated circuits (ICs) for our backscatter tags, significantly reducing their weight, the power saved on communications would still be negligible for the flight.
According to our test, the DJI M100 can hover for $1188$ seconds without any added hardware or software. Our prototype adds about $970$ g of additional payload, including the expansion bay for attaching extra hardware, an Intel NUC, four backscatter tags, a LoRa transceiver, the DJI Guidance system, an FPGA, several TTL cables, and a USB expansion. The payload reduces the hovering time from $1188$ to $668$ seconds (a $43.8\%$ reduction) without running our algorithms. Running Marvel with the same payload, the vehicle can hover for $650$ seconds. This shows that the energy consumed by the software algorithms is negligible compared with that consumed by the additional payload.
Note that our implementation is a prototype that demonstrates the effectiveness of our design. With more advanced hardware, {\em e.g.}, the more compact and powerful DJI Manifold 2, and engineering efforts to miniaturize Marvel's components, {\em e.g.}, the LoRa transceiver and backscatters, the additional payload can be substantially reduced. In addition, running our software has negligible impact on the flight time. Therefore, Marvel has no inherent impact on the MAV's flight time.
\subsection{Extrinsic Rotation Estimation}
\label{subsec:extrinsic}
The online calibration and initialization procedure requires sufficient rotations. In practice, we hold the MAV and manually rotate it for this procedure. The extrinsic calibration and the initialization can be formulated as two linear systems. We first estimate the relative rotation $\mathbf{q}_b^i$ by aligning two rotation sequences from the IMU and the backscatter sensing. Then we use this rotation to further estimate the relative position $\mathbf{p}_b^i$ as well as other initial values in \cref{subsec:init}.
Typically, the IMU data rate ($100$ Hz) is much higher than the data rate of the backscatter sensing ($\approx 10$ Hz). Thus, multiple IMU measurements are buffered in the interval $[k, k+1]$. We first pre-integrate such IMU data between two sets of pose features from the backscatter sensing. The IMU pre-integration technique has been developed in~\cite{shen2015tightly, lupton2012visual}; we describe its usage in our system.
The raw measurements of the IMU include the acceleration $\hat{\mathbf{c}}_t$ and the angular velocity $\hat{\bm{\omega}}_t$ at time $t$. Given two time instants that correspond to two sets of backscatter-based pose features, we can pre-integrate the buffered IMU readings as~\cite{shen2015tightly}
\begin{equation}
\begin{aligned}
\hat{\bm{\alpha}}_{i_{k+1}}^{i_k} & = \iint_{t\in[k, k+1]}R(\mathbf{q}_t^{i_k})\hat{\mathbf{c}}_t\,\mathrm{d}t^2, \; \hat{\bm{\beta}}_{i_{k+1}}^{i_k} = \int_{t\in[k, k+1]}R(\mathbf{q}_t^{i_k})\hat{\mathbf{c}}_t\,\mathrm{d}t, \\
\hat{\bm{\gamma}}_{i_{k+1}}^{i_k} & = \int_{t\in[k, k+1]}\bm{\gamma}_t^{i_k} \otimes \begin{bmatrix}
0 & \frac{1}{2}\hat{\bm{\omega}}_t
\end{bmatrix}^\top \mathrm{d}t,
\end{aligned}
\label{eqn:integration}
\end{equation}
where $\otimes$ denotes the quaternion multiplication operation. $R(\mathbf{q}_t^{i_k}) \in \text{SO}(3)$ is the conversion from the quaternion to the rotation matrix. We use the quaternion representation for modelling the odometry as a vector. $\bm{\gamma}_t^{i_k}$ is the incremental rotation from $i_k$ to current time $t$, which is available through short-term integration of gyroscope measurements. Then we can write the IMU propagation model for position, velocity, and rotation as~\cite{shen2015tightly}
\begin{equation}
\begin{bmatrix}
\mathbf{p}_{i_{k+1}}^{i_0} \\
\mathbf{v}_{i_{k+1}}^{i_{k+1}} \\
\mathbf{q}_{i_{k+1}}^{i_0} \\
\end{bmatrix} =
\begin{bmatrix}
\mathbf{p}_{i_k}^{i_0} + R(\mathbf{q}_{i_k}^{i_0})\mathbf{v}_{i_k}^{i_k}\Delta t_k - \frac{1}{2}\mathbf{g}^{i_0}\Delta t_k^2 + R(\mathbf{q}_{i_k}^{i_0})\hat{\bm{\alpha}}_{i_{k+1}}^{i_k} \\
R(\mathbf{q}_{i_k}^{i_{k+1}})\mathbf{v}_{i_k}^{i_k} - R(\mathbf{q}_{i_0}^{i_{k+1}})\mathbf{g}^{i_0}\Delta t_k + R(\mathbf{q}_{i_k}^{i_{k+1}})\hat{\bm{\beta}}_{i_{k+1}}^{i_k} \\
\mathbf{q}_{i_k}^{i_0} \otimes \hat{\bm{\gamma}}_{i_{k+1}}^{i_k}
\end{bmatrix},
\label{eqn:preimu}
\end{equation}
where $\Delta t_k$ denotes the time interval between two consecutive states and $\mathbf{g}^{i_0}$ is the gravity in the initial IMU frame. Note that IMU measurements combine the force countering gravity $\mathbf{g}^{i_0}$ and the MAV dynamics. $\mathbf{g}^{i_0}$ is initially unknown since the initial attitude of a MAV is unknown. The gravity must be tracked so that it can be aligned with the local frame through the attitude.
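For concreteness, the following minimal C++ sketch shows a discrete form of the pre-integration in Eqn.~\eqref{eqn:integration}. It assumes the Eigen library, uses simple Euler integration for brevity (a midpoint scheme is common in practice), and all type and function names are illustrative.
\begin{verbatim}
#include <Eigen/Dense>
#include <Eigen/Geometry>
#include <vector>

struct ImuSample { Eigen::Vector3d acc, gyro; double dt; };

struct PreIntegrated {
  Eigen::Vector3d alpha = Eigen::Vector3d::Zero();  // position increment
  Eigen::Vector3d beta = Eigen::Vector3d::Zero();   // velocity increment
  Eigen::Quaterniond gamma = Eigen::Quaterniond::Identity();  // rotation
};

// Discrete Euler integration of Eqn. (integration) over the IMU samples
// buffered between two backscatter-based pose features.
PreIntegrated preintegrate(const std::vector<ImuSample>& buf) {
  PreIntegrated p;
  for (const ImuSample& s : buf) {
    Eigen::Vector3d a = p.gamma * s.acc;      // rotate acc into frame i_k
    p.alpha += p.beta * s.dt + 0.5 * a * s.dt * s.dt;
    p.beta += a * s.dt;
    Eigen::Vector3d h = 0.5 * s.gyro * s.dt;  // quaternion increment
    p.gamma = (p.gamma *
               Eigen::Quaterniond(1.0, h.x(), h.y(), h.z())).normalized();
  }
  return p;
}
\end{verbatim}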
After obtaining the IMU rotation $\mathbf{q}_{i_{k+1}}^{i_k}$, we need to know the rotation $\mathbf{q}_{b_{k+1}}^{b_k}$ measured by the backscatter sensing. The two terms can be connected through the extrinsic rotation $\mathbf{q}_b^i$.
The backscatter sensing estimates the yaw angle $\hat{\psi}_{b_{k+1}}^{b_k}$. The IMU drifts in only four degrees of freedom, corresponding to the 3D position and the yaw angle (rotation around the gravity direction)~\cite{qin2017vins}; in other words, the IMU has no drift in the roll and pitch rotations. It is therefore easy to obtain the incremental rotation of the backscatter sensing with the help of the IMU readings. Specifically, we first compute the roll $\hat{\vartheta}_{i_{k+1}}^{i_k}$ and the pitch $\hat{\varphi}_{i_{k+1}}^{i_k}$ from $\hat{\bm{\gamma}}_{i_{k+1}}^{i_k}$ (Eqn.~\eqref{eqn:integration}). Note that the incremental roll and pitch in the two frames are identical because the MAV is rigid, {\em i.e.}, $\hat{\vartheta}_{b_{k+1}}^{b_k} = \hat{\vartheta}_{i_{k+1}}^{i_k}$ and $\hat{\varphi}_{b_{k+1}}^{b_k} = \hat{\varphi}_{i_{k+1}}^{i_k}$. Then we convert $(\hat{\vartheta}_{b_{k+1}}^{b_k}, \hat{\varphi}_{b_{k+1}}^{b_k}, \hat{\psi}_{b_{k+1}}^{b_k})$ to $\mathbf{q}_{b_{k+1}}^{b_k}$.
With $\mathbf{q}_{i_{k+1}}^{i_k}$ and $\mathbf{q}_{b_{k+1}}^{b_k}$, $\mathbf{q}_{i_{k+1}}^{i_k} \otimes \mathbf{q}_b^i = \mathbf{q}_b^i \otimes \mathbf{q}_{b_{k+1}}^{b_k}$ holds for any $k$. Restructuring this equation gives
\begin{equation}
\left[
\mathcal{G}_1\left(\mathbf{q}_{i_{k+1}}^{i_k}\right) - \mathcal{G}_2\left(\mathbf{q}_{b_{k+1}}^{b_k}\right)
\right] \cdot \mathbf{q}_b^i = \mathbf{G}_{k+1}^k \cdot \mathbf{q}_b^i = \mathbf{0},
\end{equation}
where
\begin{equation*}
\begin{aligned}
\mathcal{G}_1\left(\mathbf{q}\right) & =
\begin{bmatrix}
q_c\mathbf{I}_3 + \lfloor\mathbf{q}_{xyz}\times\rfloor & \mathbf{q}_{xyz} \\
-\mathbf{q}_{xyz}^\top & q_c
\end{bmatrix}, \mathcal{G}_2\left(\mathbf{q}\right) =
\begin{bmatrix}
q_c\mathbf{I}_3 - \lfloor\mathbf{q}_{xyz}\times\rfloor & \mathbf{q}_{xyz} \\
-\mathbf{q}_{xyz}^\top & q_c
\end{bmatrix}.
\end{aligned}
\end{equation*}
$\lfloor\mathbf{q}_{xyz}\times\rfloor$ is the skew-symmetric matrix from the first three elements $\mathbf{q}_{xyz}$ of the quaternion $\mathbf{q}$. $q_c$ is the fourth element.
With $N$ incremental rotations along the pose features from the backscatter sensing, we have the following over-constrained linear system
\begin{equation}
\begin{bmatrix}
\mathbf{G}_1^0 \\ \mathbf{G}_2^1 \\ \vdots \\ \mathbf{G}_N^{N-1}
\end{bmatrix} \cdot \mathbf{q}_b^i = \mathbf{G}_N\cdot \mathbf{q}_b^i = \mathbf{0}.
\end{equation}
Solving the above system obtains the extrinsic rotation $\mathbf{q}_b^i$. Next, we take this to estimate the extrinsic translation $\mathbf{p}_b^i$ and the initial position, attitude, and velocity of the vehicle together.
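As a minimal sketch, assuming the Eigen library, the over-constrained system can be solved by taking the right singular vector of the stacked matrix associated with its smallest singular value; the function name and the $(x, y, z, w)$ element order are illustrative.
\begin{verbatim}
#include <Eigen/Dense>

// G: the (4N x 4) vertical stack of the G_{k+1}^k blocks. The solution of
// G * q = 0 in the least-squares sense is the right singular vector for the
// smallest singular value (Eigen sorts singular values in decreasing order).
Eigen::Vector4d solve_extrinsic_rotation(const Eigen::MatrixXd& G) {
  Eigen::JacobiSVD<Eigen::MatrixXd> svd(G, Eigen::ComputeFullV);
  return svd.matrixV().col(3).normalized();  // unit quaternion, up to sign
}
\end{verbatim}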
\subsection{Initialization}
\label{subsec:init}
We adopt a sensor fusion method to obtain the initial state and employ a sliding window formulation that incorporates a fixed number of IMU and backscatter sensing measurements to ensure constant computational complexity~\cite{kaess2012isam2}. We recover the initial state in the first IMU frame. The state vector within the window is defined as,
\begin{equation}
\begin{aligned}
\mathbf{\mathcal{S}} & = \left[ \mathbf{s}_0; \quad \mathbf{s}_1; \quad \cdots; \quad \mathbf{s}_n; \quad \mathbf{p}_b^i; \quad \bm{\rho} \right] \\
\mathbf{s}_k & = \left[ \mathbf{p}_{i_k}^{i_0}; \quad \mathbf{v}_{i_k}^{i_k}; \quad \mathbf{g}^{i_k} \right], \; \mathbf{p}_{i_0}^{i_0} =
\begin{bmatrix}
0 & 0 & 0
\end{bmatrix}^\top,
\end{aligned}
\label{eqn:init}
\end{equation}
where $\mathbf{s}_k$ denotes the $k$\textsuperscript{th} state in the window, which contains the position $\mathbf{p}_{i_k}^{i_0}$, the velocity $\mathbf{v}_{i_k}^{i_k}$, and the gravity in the IMU frame $\mathbf{g}^{i_k}$; $n$ is the number of states in the sliding window, $\bm{\rho}$ denotes the position of the terminal, and $\mathbf{p}_b^i$ the relative translation of the IMU with respect to the backscatter sensing.
The initialization solves a maximum likelihood problem by minimizing the sum of the Mahalanobis norms of all measurement errors within the sliding window
\begin{equation}
\min_{\bm{\mathcal{S}}} \left\{\sum_{j\in\mathcal{L}}\left\| \hat{\mathbf{z}}_{b_j} - \mathbf{H}_{b_j}\bm{\mathcal{S}} \right\|_{\mathbf{P}_{b_j}}^2 + \sum_{k\in\mathcal{I}}\left\| \hat{\mathbf{z}}_{i_{k+1}}^{i_k} - \mathbf{H}_{i_{k+1}}^{i_k}\bm{\mathcal{S}} \right\|_{\mathbf{P}_{i_{k+1}}^{i_k}}^2\right\},
\label{eqn:linear}
\end{equation}
where $\mathcal{L}$ is the set of backscatter-based pose features and $\mathcal{I}$ denotes the set of IMU measurements. We choose the Mahalanobis norm as the optimization objective because it takes into account the correlations of the data set. These correlations amongst the internal states of different sensing modalities are key for any high-precision inertial-based autonomous system~\cite{leutenegger2015keyframe}. $\mathbf{H}_{b_j}$ and $\mathbf{H}_{i_{k+1}}^{i_k}$ are the corresponding measurement matrices. Since the initialization procedure does not take a long time, the gyroscope drift is not significant. We integrate the gyroscope measurements to compute the rotations $\mathbf{q}_{i_{k+1}}^{i_0}$ and $\mathbf{q}_{i_{k+1}}^{i_k}$, and thus system \eqref{eqn:linear} can be solved in a linear fashion.
We first define the measurement model $\left\{\hat{\mathbf{z}}_{b_j}, \mathbf{H}_{b_j}, \mathbf{P}_{b_j}\right\}$ for the $j$\textsuperscript{th} observation of the backscatter sensing as
\begin{equation}
\begin{aligned}
\hat{\mathbf{z}}_{b_j} = \hat{\mathbf{0}} = \left\lfloor \left(\hat{d}_{b_j} \hat{\mathbf{a}}_{b_j}\right) \times \right\rfloor f_i^b\left(f_{i_0}^{i_j}\left(f_b^i\left(\mathbf{p}_{b_j} - \bm{\rho}\right)\right)\right) = \mathbf{H}_{b_j}\bm{\mathcal{S}} + \mathbf{n}_{b_j},
\end{aligned}
\end{equation}
where $\hat{d}_{b_j}$ and $\hat{\mathbf{a}}_{b_j}$ are the $j$\textsuperscript{th} range and angle measurements from the backscatters. The function $f_X^Y(\mathbf{t})$ denotes the transformation of a vector $\mathbf{t}$ from frame $X$ to frame $Y$. We define $f_X^Y(\mathbf{t})$ and its inverse $f_Y^X(\mathbf{t})$ as
\begin{equation}
\begin{aligned}
f_X^Y(\mathbf{t}) & = R(\mathbf{q}_X^Y)\cdot\mathbf{t} + \mathbf{p}_X^Y, \quad f_Y^X(\mathbf{t}) = R(\mathbf{q}_Y^X)\cdot\left(\mathbf{t} - \mathbf{p}_X^Y\right).
\end{aligned}
\end{equation}
Note that $f_{i_0}^{i_j}(\cdot)$ follows the same rule and all rotations are known.
$\mathbf{n}_{b_j}$ is the additive Gaussian noise for the backscatter sensing. Its covariance matrix $\mathbf{P}_{b_j}$ can be estimated by statistically analyzing the pose features.
Then we can derive the IMU measurement model $\left\{\hat{\mathbf{z}}_{i_{k+1}}^{i_k}, \mathbf{H}_{i_{k+1}}^{i_k}, \mathbf{P}_{i_{k+1}}^{i_k}\right\}$ between consecutive frames $k$ and $k+1$ from Eqn.~\eqref{eqn:preimu} (with known $\mathbf{q}_{i_k}^{i_0}$ and $\mathbf{q}_{i_k}^{i_{k+1}}$) as~\cite{shen2016initialization}
\begin{equation}
\begin{aligned}
\hat{\mathbf{z}}_{i_{k+1}}^{i_k} =
\begin{bmatrix}
\hat{\bm{\alpha}}_{i_{k+1}}^{i_k} \\
\hat{\bm{\beta}}_{i_{k+1}}^{i_k} \\
\hat{\mathbf{0}}
\end{bmatrix} & =
\mathbf{H}_{i_{k+1}}^{i_k} \bm{\mathcal{S}} + \mathbf{n}_{i_{k+1}}^{i_k},
\end{aligned}
\end{equation}
where $\mathbf{n}_{i_{k+1}}^{i_k}$ is the additive Gaussian noise of the IMU measurement model. The covariance $\mathbf{P}_{i_{k+1}}^{i_k}$ can be computed recursively by first-order discrete-time propagation within $\Delta t_k$; see~\cite{qin2017vins} for more details. Finally, we solve the above linear system to initialize the vehicle's state, the extrinsic translation, and the terminal's position.
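As a minimal sketch, assuming the Eigen library, the weighted linear system can be solved in one shot via its normal equations once the measurement blocks are stacked; the variable names and the dense representation are illustrative simplifications.
\begin{verbatim}
#include <Eigen/Dense>

// H: vertically stacked measurement matrices; z: stacked measurements;
// W: block-diagonal information matrix (the inverse measurement
// covariances), so the objective is the sum of Mahalanobis norms.
// Solves the normal equations (H^T W H) S = H^T W z for the state S.
Eigen::VectorXd solve_linear_init(const Eigen::MatrixXd& H,
                                  const Eigen::VectorXd& z,
                                  const Eigen::MatrixXd& W) {
  return (H.transpose() * W * H).ldlt().solve(H.transpose() * W * z);
}
\end{verbatim}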
\section{Introduction}
\label{sec:intro}
\input{intro}
\section{Background and System Overview}
\label{sec:overview}
\input{overview}
\section{Backscatter-based Pose Sensing}
\label{sec:lr}
\input{chirp}
\section{Extrinsic Calibration and Initialization}
\label{sec:init}
\input{init.tex}
\section{Backscatter-inertial Super-accuracy State Estimation}
\label{sec:pose}
\input{state_estimation}
\section{Implementation and Evaluation}
\label{sec:evaluation}
\input{eval}
\section{Related Work}
\label{sec:related}
\input{related}
\section{Conclusion}
\label{sec:conclusion}
\input{conclusion}
\bibliographystyle{IEEEtran}
\subsection{Background}
Fig.~\ref{fig:background} shows a typical framework of autonomous aerial vehicles. A terminal on the ground is the user interface that sends desired goals to one or multiple aerial vehicles. The level of vehicle autonomy determines the types of user goals that can be supported. Semi-autonomous flight allows users to give desired actions, {\em e.g.}, forward, backward, and turning left/right. Most commercial photographic drones work this way: a user is trained to control the vehicle's actions via the remote terminal. Fully autonomous flight, on the other hand, can accept high-level goals, {\em e.g.}, cruising on a certain track, which requires the integration of multiple technologies.
Four components are required to enable fully autonomous flight~\cite{mohta2018fast}. The first component is state estimation, which refers to the ability of a vehicle to estimate its position, orientation, and velocity (the rate of change of position and orientation). Second, the vehicle must be able to compute its control commands. Based on where it needs to go and the estimate of its current state, the vehicle must compute the commands sent to the motors or rotors so that they rotate at the appropriate speeds to achieve the desired action. Third, the vehicle needs some basic capability to map its environment: if it does not know what its surroundings look like, it is incapable of reasoning about and planning safe trajectories in the environment. Finally, the vehicle should be able to compute safe paths, given a set of obstacles and a destination.
\begin{figure}[t]
\centering
\includegraphics[width=3.5in]{background.pdf}
\caption{A framework of autonomous aerial vehicles.}
\label{fig:background}
\vspace{-4mm}
\end{figure}
The terminal sends a desired goal to the planner of the vehicle to start a task. The planner generates a path using the map from the mapping module and sends the path to the trajectory generator, which converts the path into a trajectory and sends it to the attitude controller. The attitude controller derives the desired state based on the trajectory and the current estimated state, and sends the desired state to the flight control system. The flight control system takes the desired state to compute control commands that adjust the rotation speed of the rotors. The inputs to the mapping module and the state estimation module are sensor measurements. This paper focuses on the fundamental and most important module -- state estimation.
\subsection{System Overview}
The system overview is shown in Fig.~\ref{fig:overview}. It has two components, the user's terminal and the MAV system.
\begin{figure*}[htp]
\centering
\includegraphics[width=7in]{overview2.pdf}
\caption{System architecture. The terminal receives backscattered signals and extracts their phases. The MAV system runs three components. Among them, the system only runs the initialization and extrinsic calibration at the initialization stage for bootstrapping the super-accuracy algorithm.}
\label{fig:overview}
\end{figure*}
The terminal excites the backscatter tags on the MAV's landing gear and extracts channel phases. It has four antennas. We take one antenna as the data handler that alternately sends dummy chirps and data, such as channel phases. Dummy chirps excite the backscatter tags; data are received by the LoRa transceiver on the MAV for pose sensing. We take the other three antennas as the backscatter signal handler that receives the signals backscattered by the tags and extracts the channel phases. The core module in the terminal is the {\em channel phase extraction} (see details in \cref{subsec:phase}), which provides channel phases for the backscatter-based pose sensing running on the MAV.
The MAV system runs the state estimator that takes channel phases from backscattered signals and measurements from the onboard IMU to estimate the state. The estimator consists of three modules:
\begin{itemize}
\item {\bf Backscatter-based pose sensing} uses the phases from the LoRa transceiver to compute the range, angle and rotation of the MAV to the terminal, enabling pose tracking in long range or through occlusions (see details in \cref{subsec:pose}).
\item {\bf Online initialization and extrinsic calibration} takes the above backscatter-based pose estimates and the IMU measurements, which include 3D accelerations and angular velocities, to estimate the initial state and the extrinsic parameters, {\em i.e.}, the relative pose between the backscatter sensing and the IMU (see details in \cref{sec:init}). The obtained initialization point and extrinsic parameters properly bootstrap the state estimation algorithm in the next component.
\item {\bf Backscatter-inertial super-accuracy state estimation algorithm} fuses the measurements from the backscatter-based sensing and the IMU through a graph-based optimization framework. It models each module's estimates as a Gaussian mixture and computes a Gaussian approximation of the posterior over the MAV trajectory. The algorithm updates the state in a sliding window fashion for real-time processing (see details in \cref{sec:pose}).
\end{itemize}
Finally, the state estimator sends the state to the MAV's flight control system, {\em i.e.}, DJI N1 flight control system~\cite{djim100} in our implementation. With the current estimated state and a desired goal, the flight control system computes the commands that adjust the power to rotors to achieve desired actions.
\subsection{Problem Formulation}
\label{subsec:formulation}
The graph representation of our state estimation problem is shown in Fig.~\ref{fig:graphslam}. Let $\mathbf{s}_k$ denote the state at time $k$. At each $k$, the MAV observes a set of backscatter sensing measurements $\mathbf{z}_k$ which include range $\hat{d}_k$, angle $\hat{\mathbf{a}}_k \in \mathbb{R}^{3}$ and yaw rotation $\hat{\psi}_k$. $\mathbf{u}_{k+1}^k = \left[\hat{\bm{\alpha}}_{i_{k+1}}^{i_k}; \; \hat{\bm{\beta}}_{i_{k+1}}^{i_k}; \; \hat{\bm{\gamma}}_{i_{k+1}}^{i_k}\right]$ is the preintegrated result over IMU measurements (defined in Eqn.~\eqref{eqn:integration}) that represents the odometry between two consecutive states, {\em i.e.}, $\mathbf{s}_k$ and $\mathbf{s}_{k+1}$.
To achieve real-time processing, we employ an incremental state update scheme~\cite{kaess2012isam2} that takes IMU and backscatter-based measurements in a fixed time interval for state estimation. Whenever a new state with its backscatter-based measurements is available, our approach works in a sliding window fashion that incorporates the new state and marginalizes the oldest one. The marginalization follows an approach similar to~\cite{zhang2020robot}: it converts the estimated information from the marginalized measurements into a new prior $\{\mathbf{b}_p, \mathbf{H}_p\}$ to constrain later estimates. The full state vector within the window is defined similarly to the linear initialization, with a two-fold difference: 1) the extrinsic transformation is included for refinement; 2) the gravity is replaced by the rotation $\mathbf{q}_{i_k}^{i_0}$ to combat the IMU drift of rotations.
\begin{equation}
\begin{aligned}
\bm{\mathcal{S}} & = \left[ \mathbf{s}_0; \quad \mathbf{s}_1; \quad \cdots; \quad \mathbf{s}_n; \quad \mathbf{s}_b^i; \quad \bm{\rho} \right], \\
\mathbf{s}_k & = \left[ \mathbf{p}_{i_k}^{i_0}; \quad \mathbf{v}_{i_k}^{i_k}; \quad \mathbf{q}_{i_k}^{i_0} \right], k \in [1, n], \; \mathbf{s}_b^i = \left[\mathbf{p}_b^i; \quad \mathbf{q}_b^i\right].
\end{aligned}
\end{equation}
In the state vector, we consider variables with different units, {\em e.g.}, meters for position, m/s for velocity, and radians for orientation. Therefore, we choose the Mahalanobis norm to rescale them with their covariance matrices; the covariance matrices of the measurements must be updated accordingly. The objective is to minimize the sum of the Mahalanobis norms of the backscatter sensing and IMU residuals to obtain a maximum a posteriori estimate given the prior produced by the marginalization:
\begin{equation}
\min_{\bm{\mathcal{S}}} \left\{\left\|\mathbf{b}_p - \mathbf{H}_p\bm{\mathcal{S}}\right\|^2 + \sum_{j\in\mathcal{L}}\left\|\mathbf{e}_{\mathcal{L}}\left( \hat{\mathbf{z}}_j, \bm{\mathcal{S}} \right) \right\|_{\mathbf{P}_j}^2 + \sum_{k\in\mathcal{I}}\left\|\mathbf{e}_\mathcal{I}\left(\hat{\mathbf{u}}_{k+1}^k, \bm{\mathcal{S}} \right) \right\|_{\mathbf{P}_{k+1}^k}^2\right\},
\label{eqn:nonlinear}
\end{equation}
where $\mathbf{e}_{\mathcal{L}}\left( \hat{\mathbf{z}}_j, \bm{\mathcal{S}} \right)$ (briefly denoted as $\mathbf{e}_{\mathcal{L}}^j$) and $\mathbf{e}_\mathcal{I}\left(\hat{\mathbf{u}}_{k+1}^k, \bm{\mathcal{S}} \right)$ (briefly denoted as $\mathbf{e}_\mathcal{I}^k$) are measurement residuals for LoRa backscatter and IMU, respectively.
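To make the structure of the objective concrete, the following is a minimal Python/NumPy sketch of how Eqn.~\eqref{eqn:nonlinear} is assembled. The residual functions \texttt{e\_lora} and \texttt{e\_imu} are hypothetical placeholders for the models defined in \cref{subsec:estimation}, and the flat state vector is an illustrative simplification of $\bm{\mathcal{S}}$; this is a sketch, not our implementation.
\begin{verbatim}
import numpy as np

def mahalanobis_sq(e, P):
    # Squared Mahalanobis norm ||e||_P^2 = e^T P^{-1} e
    return float(e.T @ np.linalg.solve(P, e))

def total_cost(S, b_p, H_p, lora_meas, imu_meas, e_lora, e_imu):
    # Prior term from marginalization, Eq. (nonlinear)
    r = b_p - H_p @ S
    cost = float(r @ r)
    for z, P in lora_meas:          # backscatter sensing residuals
        cost += mahalanobis_sq(e_lora(z, S), P)
    for u, P in imu_meas:           # preintegrated IMU residuals
        cost += mahalanobis_sq(e_imu(u, S), P)
    return cost
\end{verbatim}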
\subsection{Backscatter-inertial State Estimation}
\label{subsec:estimation}
We now solve the nonlinear system~\eqref{eqn:nonlinear} for state estimation via the Gauss-Newton algorithm. This involves linearizing the residuals in~\eqref{eqn:nonlinear} by a first-order Taylor expansion around the initial values provided by \cref{sec:init}, {\em i.e.}, computing Jacobians. We use Ceres Solver~\cite{ceres-solver}, an open-source C++ library for solving complicated optimization problems, to compute these Jacobians automatically and solve~\eqref{eqn:nonlinear} for the state estimates. To use this tool, we only need to define the measurement residuals as template functors.
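For intuition, the following schematic Python sketch (reusing the NumPy import above) shows the single Gauss-Newton step that the solver performs internally, with automatic differentiation supplying the Jacobians; the residual and Jacobian callables are hypothetical placeholders standing in for the template functors, not the Ceres API.
\begin{verbatim}
def gauss_newton_step(S_hat, blocks, dim):
    # blocks: list of (residual_fn, jacobian_fn, covariance P)
    H = np.zeros((dim, dim))
    b = np.zeros(dim)
    for e_fn, J_fn, P in blocks:
        e = e_fn(S_hat)            # residual at the linearization point
        J = J_fn(S_hat)            # Jacobian w.r.t. the error state
        W = np.linalg.inv(P)       # Mahalanobis weight
        H += J.T @ W @ J           # accumulate normal equations
        b += J.T @ W @ e
    return np.linalg.solve(H, -b)  # error-state increment
\end{verbatim}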
{\bf Backscatter sensing residual}.
Given range $\hat{d}_{b_j}$, angle $\hat{\mathbf{a}}_{b_j}$, and rotation $\hat{\mathbf{q}}_{b_j}^{b_0}$, the residual is defined as,
\begin{equation}
\mathbf{e}_{\mathcal{L}}^j =
\begin{bmatrix}
\delta d_{b_j} \\
\delta\mathbf{a}_{b_j} \\
\delta\bm{\theta}_{b_j}
\end{bmatrix} =
\begin{bmatrix}
\left|\hat{d}_{b_j}^2 - \left( \mathbf{p}_{b_j}^{b_0} - \bm{\rho} \right)^\top \left( \mathbf{p}_{b_j}^{b_0} - \bm{\rho} \right)\right| \\
\hat{\mathbf{a}}_{b_j} \times \left( \mathbf{p}_{b_j}^{b_0} - \bm{\rho} \right) \\
2\left[\inv{(\hat{\mathbf{q}}_{b_j}^{b_0})}\otimes \mathbf{q}_{b_j}^{b_0} \right]_{xyz}
\end{bmatrix},
\label{eqn:backscatter_r}
\end{equation}
where $[\cdot]_{xyz}$ extracts the vector part of a quaternion, which approximates the error-state representation; $\delta\bm{\theta}_{b_j}$ is thus the 3D error-state representation of the quaternion. The covariance matrix $\mathbf{P}_{b_j}$ is the measurement noise matrix, which can be estimated by statistically analyzing the pose features.
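As a sanity check of Eqn.~\eqref{eqn:backscatter_r}, a direct Python transcription follows. It assumes quaternions stored as $[x, y, z, w]$ (vector part first), which is an implementation choice rather than something fixed by the formulation.
\begin{verbatim}
def quat_mul(q1, q2):
    # Hamilton product, quaternions stored as [x, y, z, w]
    x1, y1, z1, w1 = q1
    x2, y2, z2, w2 = q2
    return np.array([w1*x2 + w2*x1 + y1*z2 - z1*y2,
                     w1*y2 + w2*y1 + z1*x2 - x1*z2,
                     w1*z2 + w2*z1 + x1*y2 - y1*x2,
                     w1*w2 - x1*x2 - y1*y2 - z1*z2])

def quat_inv(q):
    # Conjugate = inverse for unit quaternions
    return np.array([-q[0], -q[1], -q[2], q[3]])

def backscatter_residual(d_hat, a_hat, q_hat, p, q, rho):
    v = p - rho
    e_d = abs(d_hat**2 - v @ v)                   # squared-range residual
    e_a = np.cross(a_hat, v)                      # angle residual
    e_q = 2.0 * quat_mul(quat_inv(q_hat), q)[:3]  # rotation error-state
    return np.concatenate(([e_d], e_a, e_q))
\end{verbatim}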
{\bf IMU residual}.
Based on the kinematics, the residual of IMU measurements can be defined as,
\begin{equation}
\begin{aligned}
\mathbf{e}_\mathcal{I}^k & =
\begin{bmatrix}
\delta\bm{\alpha}_{i_{k+1}}^{i_k} \\
\delta\bm{\beta}_{i_{k+1}}^{i_k} \\
\delta\bm{\gamma}_{i_{k+1}}^{i_k}
\end{bmatrix}
=
\begin{bmatrix}
R(\mathbf{q}_{i_0}^{i_k})\left( \mathbf{p}_{i_{k+1}}^{i_0} - \mathbf{p}_{i_k}^{i_0} + \frac{1}{2}\mathbf{g}^{i_0}\Delta t_k^2 \right) - \mathbf{v}_{i_k}^{i_k}\Delta t_k - \hat{\bm{\alpha}}_{i_{k+1}}^{i_k} \\
R(\mathbf{q}_{i_0}^{i_k})\left( R(\mathbf{q}_{i_{k+1}}^{i_0})\mathbf{v}_{i_{k+1}}^{i_{k+1}} + \mathbf{g}^{i_0}\Delta t_k \right) - \mathbf{v}_{i_k}^{i_k} - \hat{\bm{\beta}}_{i_{k+1}}^{i_k} \\
2\left[ \inv{(\mathbf{q}_{i_k}^{i_0})} \otimes \mathbf{q}_{i_{k+1}}^{i_0} \otimes \inv{(\hat{\bm{\gamma}}_{i_{k+1}}^{i_k})} \right]_{xyz}
\end{bmatrix},
\end{aligned}
\label{eqn:imu_r}
\end{equation}
where $\hat{\bm{\alpha}}_{i_{k+1}}^{i_k}$, $\hat{\bm{\beta}}_{i_{k+1}}^{i_k}$, and $\hat{\bm{\gamma}}_{i_{k+1}}^{i_k}$ are the preintegrated results defined in Eqn.~\eqref{eqn:integration}. The covariance $\mathbf{P}_{i_{k+1}}^{i_k}$ can be computed in the same way as in \cref{subsec:init}. At this stage, the residuals of the nonlinear system~\eqref{eqn:nonlinear} have been explicitly defined. We next define the error-state representation~\cite{leutenegger2015keyframe} of our system to clarify the linearization process.
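For reference, a matching Python transcription of Eqn.~\eqref{eqn:imu_r}, reusing the quaternion helpers above; again a sketch under the same $[x, y, z, w]$ convention.
\begin{verbatim}
def rot(q):
    # Rotation matrix of a unit quaternion [x, y, z, w]
    x, y, z, w = q
    return np.array([[1-2*(y*y+z*z), 2*(x*y-w*z),   2*(x*z+w*y)],
                     [2*(x*y+w*z),   1-2*(x*x+z*z), 2*(y*z-w*x)],
                     [2*(x*z-w*y),   2*(y*z+w*x),   1-2*(x*x+y*y)]])

def imu_residual(p_k, v_k, q_k, p_k1, v_k1, q_k1,
                 alpha, beta, gamma, g, dt):
    Rinv = rot(q_k).T                          # R(q_{i0}^{ik})
    e_a = Rinv @ (p_k1 - p_k + 0.5*g*dt**2) - v_k*dt - alpha
    e_b = Rinv @ (rot(q_k1) @ v_k1 + g*dt) - v_k - beta
    e_g = 2.0 * quat_mul(quat_mul(quat_inv(q_k), q_k1),
                         quat_inv(gamma))[:3]
    return np.concatenate((e_a, e_b, e_g))
\end{verbatim}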
The residuals of the Euclidean part in the state vector such as vehicle's position, velocity, and terminal's position can be written as
\begin{equation}
\mathbf{p} = \hat{\mathbf{p}} + \delta\mathbf{p}, \quad \mathbf{v} = \hat{\mathbf{v}} + \delta\mathbf{v}, \quad \bm{\rho} = \hat{\bm{\rho}} + \delta\bm{\rho}.
\end{equation}
Since the rotation is non-Euclidean, its residual is modeled as the perturbation in the tangent space of the rotation manifold,
\begin{equation}
\mathbf{q} = \hat{\mathbf{q}} \otimes \delta\mathbf{q}, \quad \delta\mathbf{q} \approx
\begin{bmatrix}
\frac{1}{2}\delta\bm{\theta} \\
1
\end{bmatrix},
\end{equation}
where $\delta\bm{\theta}$ is the minimal representation of the rotation residual. Thus the full error-state vector can be written as
\begin{equation}
\begin{aligned}
\delta\bm{\mathcal{S}} & = \left[ \delta\mathbf{s}_0; \quad \delta\mathbf{s}_1; \quad \cdots; \quad \delta\mathbf{s}_n; \quad \delta\mathbf{s}_b^i; \quad \delta\bm{\rho} \right], \\
\delta\mathbf{s}_k & = \left[ \delta\mathbf{p}_{i_k}^{i_0}; \quad \delta\mathbf{v}_{i_k}^{i_k}; \quad \delta\bm{\theta}_{i_k}^{i_0} \right], k \in [1, n], \; \delta\mathbf{s}_b^i = \left[\delta\mathbf{p}_b^i; \quad \delta\bm{\theta}_b^i\right].
\end{aligned}
\end{equation}
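In code, applying $\delta\bm{\mathcal{S}}$ to the nominal state is additive for the Euclidean blocks and multiplicative for the quaternions. A minimal sketch follows (the state held in a dictionary for readability; not the layout of our implementation):
\begin{verbatim}
def boxplus(s, ds):
    dq = np.append(0.5 * ds['dtheta'], 1.0)  # delta_q ~ [dtheta/2; 1]
    dq /= np.linalg.norm(dq)                 # keep the quaternion unit
    return {'p': s['p'] + ds['dp'],
            'v': s['v'] + ds['dv'],
            'q': quat_mul(s['q'], dq)}
\end{verbatim}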
In each Gauss-Newton iteration, system~\eqref{eqn:nonlinear} is linearized at the current state estimate $\hat{\bm{\mathcal{S}}}$ with respect to the error-state vector $\delta\bm{\mathcal{S}}$. Taking the derivative of residual $\mathbf{e}_\mathcal{L}^j$ and $\mathbf{e}_\mathcal{I}^k$ with respect to $\delta\bm{\mathcal{S}}$ produces the corresponding Jacobian matrices. Then we can solve the nonlinear system~\eqref{eqn:nonlinear} by Ceres Solver~\cite{ceres-solver}.
We summarize the super-accuracy state estimation algorithm in Algorithm~\ref{alg:state}. Its goal is to continuously estimate the MAV state by solving the nonlinear system~\eqref{eqn:nonlinear} (Line 1). The pseudocode lists the major steps using Ceres Solver. First, we need the initialization point obtained by solving Eqn.~\eqref{eqn:linear} (Line 2). We set the initial state as the current state (Line 3) and create template functors of the residuals based on their measurement models (Lines 4--6). In the loop, whenever new backscatter-based pose measurements arrive, we first carry out the marginalization from~\cite{zhang2020robot} (Lines 8--9), and then update the vehicle's state (Line 11) with the error-state vector obtained by evaluating the residuals in Ceres Solver (Line 10).
\begin{algorithm}
\caption{Super-accuracy State Estimation}
\label{alg:state}
\begin{algorithmic}[1]
\STATE Goal: Continuously estimate the vehicle's state by solving Eqn.~\eqref{eqn:nonlinear} using Ceres Solver~\cite{ceres-solver}
\STATE Given the initial state and extrinsic parameters $\bm{\mathcal{S}_0}$ obtained by solving Eqn.~\eqref{eqn:linear}
\STATE Current state $\hat{\bm{\mathcal{S}}} \leftarrow \bm{\mathcal{S}_0}$
\STATE Create the template functor of the backscatter sensing residual $T_b$ by model~\eqref{eqn:backscatter_r}
\STATE Create the template functor of the IMU residual $T_i$ by model~\eqref{eqn:imu_r}
\STATE Create the template functor of the prior $T_p$ by the marginalization method~\cite{zhang2020robot}
\WHILE {true}
\IF {Receiving new backscatter-based pose features}
\STATE Marginalize the states~\cite{zhang2020robot}
\STATE Error-state vector $\delta\bm{\mathcal{S}}$ $\leftarrow$ Evaluate $T_b$, $T_i$, and $T_p$ at the current state $\hat{\bm{\mathcal{S}}}$
\STATE $\hat{\bm{\mathcal{S}}} \leftarrow \hat{\bm{\mathcal{S}}} + \delta\bm{\mathcal{S}}$
\ENDIF
\ENDWHILE
\end{algorithmic}
\end{algorithm}
\section{Introduction}
\label{sec:intro}
The non-local correlations of a quantum mechanical system are encoded in the behavior of the
entanglement properties of its wave functions. A pure quantum state of a bipartite system $A\cup B$
defines a mixed state in the observed region $A$ obtained from tracing out the degrees of freedom in the unobserved region $B$.
The non-local correlations connecting regions $A$ and $B$ are encoded in the behavior of the
von Neumann entanglement entropy, $ S = - \textrm{Tr} \rho_{A} \ln \rho_{A}$, where $\rho_A$
is the reduced density matrix of region $A$.
The entanglement entropy of a local quantum field theory, relativistic or not,
is known to exhibit an ``area law'' scaling of the form $S \sim \mu \ell^{D-1}$ in spatial dimensions $D>1$
where $\mu$ is a non-universal coefficient\cite{Srednicki1993,Bombelli1986}.
There has been growing interest in the scaling behavior
of the entanglement entropy at quantum critical points and in topological phases.
The entanglement entropy of quantum critical systems in $D>1$ should contain universal subleading terms, whose
structure for a general quantum critical system is not yet known.
The scaling behavior of the entanglement entropy has only been studied in detail
in quantum critical systems in $D=1$ space dimension. Such systems are described by a
$(1+1)$-dimensional conformal field theory (CFT).
In a $1+1$-dimensional CFT,
the entanglement entropy of a subsystem $A$
of linear size $\ell$ of an otherwise infinite system ({\it i.e.\/} of linear size $L \to \infty$)
obeys a logarithmic scaling law,
\cite{Callan1994,Holzhey1994,Calabrese2004,Vidal2003,Latorre2004}
$S\sim \frac{c}{3} \ln (\frac{\ell}{a})+\ldots$, where $c$ is the {\em central charge} of the CFT, and $a$
is the short distance cutoff.
There have been a number of studies on topics related to this 1D logarithmic scaling form. For instance,
a possible connection between this result and gravitational physics was suggested\cite{Ryu2006}.
A similar logarithmic scaling behavior was found at infinite disorder fixed points of
1D random spin chains\cite{Refael2004,Refael2007}.
The quantum entanglement of quantum impurity systems has also been studied.
\cite{Kopp2007,Kopp2007a,Laflorencie2006,Sorensen2007,Affleck2008}
In this paper, we consider
the universal scaling form of the
entanglement entropy
at 2D conformal quantum critical points (QCP) -- two-dimensional quantum critical systems
with scale-invariant many body wave functions. At a 2D conformal QCP,
equal-time correlators of local operators coincide with
the correlation functions
of an appropriate 2D {\em classical} system
at criticality (which is
described by an Euclidean 2D CFT)\cite{Ardonne2004}.
The entanglement entropy of 2D conformal QCPs was first considered in Ref.[\onlinecite{Fradkin2006}], where a
scaling form was found:
$S=\mu \ell-\frac{c}{6} (\Delta \chi) \ln (\ell/a)+ \ldots$, where $c$ is the central charge of the
2D Euclidean CFT associated with the norm squared of the wave function and $\Delta \chi$ is the
change of the Euler characteristic $\chi$, $\Delta \chi=\chi_{A\cup B}-\chi_A-\chi_B$.
Notice that for a region $A \subset B$ with a smooth boundary, $\Delta\chi=0$, and hence the universal logarithmic term vanishes. In this case, we will show that instead there is a {\em finite},
${\cal O}(1)$, {\em universal term} $\gamma_{QCP}$ in the entanglement entropy at these quantum critical points,
{\it i.e.\/}
\begin{equation}
S_{QCP}=\mu \ell + \gamma_{QCP}+\ldots.
\label{eq:SQCP}
\end{equation}
Through explicit calculations and using general arguments based on CFT, we will show that $\gamma_{QCP}$
has a topological meaning in the sense that it is determined by the contributions of the winding modes of the underlying CFT.
In a topological phase in 2D, the entanglement entropy scales as\cite{Kitaev2006a,Levin2006}
\begin{equation}
S_{\rm topo}=\alpha \ell -\gamma_{\rm topo} + O(\ell^{-1}),
\label{eq:Stopo}
\end{equation}
where $\alpha$ is a non-universal coefficient and
$\gamma_{\rm topo}$, the {\em topological entanglement entropy}, is a topological invariant,
the logarithm of the so-called total quantum dimension $\mathcal{D}
$ of the underlying topological field theory describing the topological phase.\cite{Kitaev2006a, Levin2006}
Topological phases have non-trivial ground state degeneracies on
surfaces of non-trivial topology.
The topological entanglement entropy $\gamma_{\rm topo}$ also depends on the
global topology of the manifold, and on surfaces with non-trivial topology, on the degenerate ground state on that
surface.\cite{Dong2008}
Although superficially similar, the finite universal contributions to the entanglement entropy in topological phases
and conformal quantum critical points, $\gamma_{\rm topo}$ and $\gamma_{QCP}$, have a different origin and structure.
In the case of a topological phase, $\gamma_{\rm topo}$ is in general determined by the modular $S$-matrix of the
topological field theory of the topological phase.\cite{Kitaev2006a,Levin2006,Dong2008}
This modular $S$-matrix governs the transformation properties of the (degenerate) ground states of the topological phase
on a torus under modular transformations, $\tau \to -1/\tau$, where $\tau$ is the modular parameter of the torus.\cite{Witten1989}
However, we show below that for a general conformal quantum critical point, whose ground state wave function is given by the Gibbs weights of a Euclidean rational unitary CFT, the universal term $\gamma_{QCP}$ is determined by the modular $S$-matrix associated with the norm squared of the wave function. Thus, the modular $S$-matrix of the topological phase and that of the wave functions of 2D conformal quantum critical points have a conceptually different origin. In particular, in all the cases we checked here, $\gamma_{QCP}$ and $\gamma_{\rm topo}$ contribute with opposite signs to their respective entanglement entropies, as implied by the conventions we used in Eq.\eqref{eq:SQCP} and Eq.\eqref{eq:Stopo}.
We will show that, when the logarithmic terms
in the
entanglement entropy cancel, the
finite terms $\gamma_{QCP}$ are universal and are determined not only by the central charge but also by the restrictions on the states
imposed by the compactification conditions.
Furthermore, the form of the result for the entanglement entropy of Eq.\eqref{eq:SFM} implies a
connection with boundary CFT, as developed by Cardy.\cite{Cardy1986a,Cardy1986b} Thus,
in addition to being determined by the central charge $c$, it must also depend on
the operator content of the CFT. For the same reason, the structure of Eq.\eqref{eq:SFM}
also suggests a direct connection between this problem and the Affleck-Ludwig boundary entropy of 1D quantum CFTs.
\cite{Affleck1991}
The paper is organized as follows.
In Section \ref{sec:qdm} we apply this approach first to the simpler
case of the quantum Lifshitz model (and the related quantum dimer models, QDMs) on planar, cylindrical and toroidal geometries.
These results apply to the QCPs of (generalized) quantum dimer model on bipartite lattices \cite{Rokhsar1988,Moessner2001,Fendley2002,Fradkin2004,Papanikolaou2007b,Castelnovo2005} and in
quantum eight-vertex models\cite{Ardonne2004}.
Through explicit calculations for various geometries, we show that,
when the logarithmic terms in the entanglement entropy cancel, the subleading
finite terms $\gamma_{QCP}$ are universal, determined not only by the central charge but also by the restrictions
imposed by the compactification conditions.
In Section \ref{sec:generalizedqcps} we
generalize this result to all 2D conformal QCPs whose scale-invariant wave functions have norms that are the partition functions of 2D Euclidean Rational CFTs (RCFT), CFTs
with a finite number of primary fields\cite{Ginsparg:1988nr,yellow}.
More specifically, we show that the finite term in the
entanglement entropy of the 2D wave function is determined by the change of the Affleck-Ludwig boundary
entropy of the 1D CFT -- a quantity determined by the modular $S$-matrix of the associated CFT and by the coefficients
in the fusion
rules.
We also discuss specific examples of this class including
2D quantum loop models \cite{Freedman2004b} which, with the naive inner product,
are known to be quantum critical. \cite{Troyer2008,Fendley2008}
We also briefly discuss the quantum net models.\cite{Freedman2004b,Levin2005,Troyer2008,Fendley2008}
In Section \ref{sec:conclusions} we conclude with a summary and a discussion on open questions. In particular, we comment on the implications of our results to the nature of related topological phases.
\section{Quantum Lifshitz model universality class}
\label{sec:qdm}
The quantum Lifshitz model\cite{Ardonne2004} (QLM) in two space dimensions
is defined by the following Hamiltonian with an arbitrary parameter $k$:
\begin{equation}
H = \int d^{2} x \left[ \frac{\Pi^2}{2} +\frac{1}{2} \left(\frac{k}{4\pi}\right)^2 (\nabla^{2} \phi)^{2} \right],
\label{eq:qlm}
\end{equation}
where $\phi$ is a scalar field and $\Pi = \dot{\phi}$ is its canonically conjugate momentum. The QLM Hamiltonian Eq.\eqref{eq:qlm} defines a class of
QCP's with dynamic critical exponent $z=2$, and a continuous parameter $k$.
This remarkable property of the model is evident in the exactly known wave function for the ground state $|GS\rangle$, which is a superposition of all field configurations $\phi(x,y)$
with the configuration dependent weight\cite{Ardonne2004}:
\begin{equation}
\Psi_{GS}[\phi] = \langle[\phi]|GS\rangle=\frac{1}{\sqrt{Z} }
e^{\displaystyle{-S[\phi]/2}},
\label{eq:Psi0qlm}
\end{equation}
with
\begin{equation}
S[\phi]=\int d^2x \; \frac{k}{4\pi} \left({\vec \nabla} \phi(x)\right)^2
\label{eq:Sofphi}
\end{equation}
and the norm squared of the state
\begin{equation}
Z=||\Psi_{GS}||^2=\int D\phi \;\; e^{\displaystyle{-S[\phi]}}.
\label{eq:norm}
\end{equation}
Notice that $Z$ is identical to
the partition function of the Gaussian model, which defines a free boson Euclidean CFT\cite{Nienhuis1987}, albeit with the ``stiffness'' $k$.
Hence Eq.\eqref{eq:qlm} defines an infinite class of 2D conformal QCP's all associated with free boson CFTs.
The QLM can be viewed as a low energy effective field theory capturing universal aspects of various microscopic lattice models,
with $\phi$ playing the role of a coarse grained height field\cite{Moessner2002,Henley1997,Ardonne2004} and with the ``stiffness'' $k$
determined by the appropriate ``microscopic'' coupling constants\cite{Ardonne2004,Papanikolaou2007b}.
For such a mapping to work, the constraints of the lattice models should be built in through the compactification of the boson field $\phi$, by demanding that all physical operators be invariant under the shift $\phi\rightarrow \phi+2\pi r$ or, equivalently, that all physical operators take the form of vertex operators $e^{in\phi/r}$ for integer $n$.
In subsection \ref{subsec:micro} we will discuss specific examples of this mapping corresponding to particular values of $k$ using the convention of fixing $r=1$.
The examples will include the so-called Rokhsar-Kivelson (RK) point of the quantum dimer model\cite{Rokhsar1988} and its generalizations,
\cite{Castelnovo2005,Alet2005,Papanikolaou2007}
and the {\em quantum} eight-vertex model\cite{Ardonne2004} at
special choices of the Baxter weights\cite{Baxter1982}.
Since $k$ can be varied in the QLM, this theory has an exactly {\em marginal} operator, resulting in continuously varying
critical exponents (scaling dimensions) of the allowed (vertex) operators.\cite{Papanikolaou2007}
\subsection{Entanglement entropy and partition functions for 2D conformal QCPs}
\label{sec:FM-summary}
To investigate the universal finite terms in the entanglement entropy at
2D conformal QCPs, we will rely on the approach described in the work of Fradkin and Moore.\cite{Fradkin2006}
They showed that $\textrm{tr} \rho_A^n$, where $\rho_A$ is
the (normalized) reduced density matrix of a
region $A$, with $A \subset B$ separated by the boundary $\Gamma$,
for the ground state $\Psi_0$ on $A \cup B$,
is given by
\begin{equation}
\textrm{tr}\rho_A^n= \frac{Z_n}{Z^n} = \left(\frac{Z_A Z_B}{Z_{A \cup B}}\right)^{n-1}.
\label{eq:rhoA^n}
\end{equation}
Here $Z_n$ is the partition function of $n$ copies of the equivalent 2D classical statistical mechanical
system
satisfying the constraint
that their degrees of freedom are identified on the boundary $\Gamma$, and
$Z^n$ is the partition function for $n$ decoupled systems. The partition functions on the r.h.s of
Eq.\eqref{eq:rhoA^n} are
$Z_A=||\Psi_0^A||^2$ with support in region $A$ and $Z_B=||\Psi_0^B||^2$ with support in region $B$,
both satisfying
generalized Dirichlet ({\it i.e.\/} fixed) boundary conditions on $\Gamma$ of $A$ and $B$, and
$Z_{A \cup B}=||\Psi_0||^2$ is the norm squared for the full system.
The entanglement entropy $S$ is then obtained by an analytic continuation in $n$,
\begin{eqnarray}
S&=&-\textrm{tr}
\left(\rho_A \ln \rho_A\right) \nonumber \\
&=& - \lim_{n \to 1} \frac{\partial}{\partial n}
\textrm{tr} \rho_A^n \nonumber \\
&=& - \log \left(\frac{Z_A Z_B}{Z_{A \cup B}}\right)\nonumber\\
&&
\label{eq:SFM}
\end{eqnarray}
Hence, the computation of the entanglement entropy is reduced to the computation of a ratio of
partition functions in a 2D classical
statistical mechanical problem, an Euclidean CFT in the case of a critical wave function,
each satisfying specific boundary conditions.
In order to construct $\textrm{tr} \rho_A^n$,
we need an expression for the matrix elements of the reduced density matrix
$ \me{\phi^A}{\rho_A}{{\phi^\prime}^A}$.
Since the ground state wave function Eqs.\eqref{eq:Psi0qlm} and \eqref{eq:Sofphi}
is a local function of the field $\phi(x)$,
a general matrix element of the reduced density matrix is a trace of the density
matrix of the pure state $\Psi_{GS}[\phi]$ over the degrees of freedom of the ``unobserved'' region $B$,
denoted by $\phi^B(x)$. Hence the matrix elements of $\rho_A$ take the form
\begin{eqnarray}
&&\me{\phi^{A}}{ \hat{\rho}_{A}}{ {\phi^\prime}^A }
= \nonumber \\
&&\frac{1}{Z} \int [D\phi^{B} ] \,\, e^{\displaystyle{-\left(\frac{1}{2} S^{A}(\phi^{A}) +
\frac{1}{2} S^{A}({\phi^\prime}^A )
+S^B(\phi^B)\right)}},
\nonumber \\
&&
\end{eqnarray}
where the degrees of freedom satisfy the {\em boundary condition} at the common boundary
$\Gamma$:
\begin{equation}
BC_\Gamma:\quad \phi^B|_\Gamma=\phi^A|_\Gamma={{\phi^\prime}^A}|_\Gamma.
\label{eq:BCphiGamma}
\end{equation}
Proceeding with the computation of
$\textrm{tr}\rho_A^n$, it is immediate to see that the matrix product requires the condition $\phi^A_i={\phi^\prime}^A_{i-1}$
for $i=1,\cdots,n$, and ${{\phi^\prime}^A_n}=\phi^A_1$ from the trace condition.
Hence, $\textrm{tr}{\rho_A^n}$
takes the form
\begin{eqnarray}
\textrm{tr} \rho_A^n&\equiv& \frac{Z_n}{Z^n}
\nonumber \\
&=&\frac{1}{Z^n} \int_{BC_\Gamma} \prod_i D \phi_i^A D\phi_i^B \; e^{\displaystyle{-\sum_{i=1}^n
\left(S(\phi_i^A)+S(\phi_i^B)\right)}}
\nonumber \\
&&
\label{eq:trrhoAn1}
\end{eqnarray}
{\em subject to the boundary condition $BC_\Gamma$} of Eq.\eqref{eq:BCphiGamma}.
Notice that the numerator, $Z_n$, is the partition
function of $n$ systems whose degrees of freedom are identified on $\Gamma$ but are
otherwise independent. Also notice the absence of the factors of $1/2$ in the exponentials
of Eq.\eqref{eq:trrhoAn1}.
The other important
consideration is that the compactification condition requires that two fields that differ by
$2\pi r$ be equivalent. Hence, the
boundary condition of Eq.\eqref{eq:BCphiGamma} is defined {\em modulo $2\pi r$}.
(Equivalently, the proper form of the
degrees of freedom is $e^{i\phi}$.) This means that one can alternatively define
$Z_n$ as a partition function for $n$
systems which are decoupled {\em in the bulk} but have a boundary coupling of the
form (in the limit $\lambda_\Gamma \to \infty$,
which enforces the boundary condition)
\begin{equation}
S_\Gamma=-\oint_\Gamma \lambda_\Gamma \sum_{i=1}^n
\cos(\phi_i-\phi_{i+1}).
\label{eq:SGamma}
\end{equation}
Here the fields $\phi_i$ extend over the entire region $A \cup B$.
Thus, this problem maps onto a boundary CFT for a system with $n$ ``replicas''
coupled only through the boundary condition on the closed contour $\Gamma$, the boundary
between the $A$ and $B$ regions.
For the special case of the free scalar field, one can simplify this further by taking
linear combinations of the replica fields.
Then the condition that the scalar fields $\phi_{i}$ agree with each other on $\Gamma$
can be satisfied by forming $n-1$ relative
coordinates $\varphi_i\equiv \phi_{i}-\phi_{i+1}$ ($i=1,\ldots,n-1$)
that vanish ({\em mod} $2\pi r$) on $\Gamma$, and one ``center of mass
coordinate'' field $\phi\equiv \frac{1}{\sqrt{n} } \sum_{i=1}^n \phi_{i} $
that is unaffected by the boundary $\Gamma$ (reflecting the fact that nothing physical
takes place at $\Gamma$).
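As a simple check of this change of variables, consider $n=2$: with the orthonormal combinations $\phi_\pm = (\phi_1\pm\phi_2)/\sqrt{2}$, the bulk action decomposes as
\begin{equation}
(\nabla\phi_1)^2 + (\nabla\phi_2)^2 = (\nabla\phi_+)^2 + (\nabla\phi_-)^2,
\end{equation}
with $\phi_- \propto \phi_1-\phi_2$ pinned (mod $2\pi r$, up to the normalization of the relative coordinate) on $\Gamma$, while $\phi_+$ is the unconstrained ``center of mass'' field.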
Hence, the computation of $\textrm{tr} \rho_A^n$ reduces to the product of two partition functions:
\begin{enumerate}
\item
The partition function for the ``center of mass'' field $\phi$; since $\phi$ does not see the
boundary $\Gamma$, this is just the partition function $Z_{A \cup B}$ for a single field in the
entire system.
\item
The partition function for the $n-1$ fields $\varphi_i$ which are independent from each other
and vanish ({\em mod} $2\pi r$) on $\Gamma$. We denote this by $\left({Z^D_\Gamma}\right)^{n-1}$.
However, the fields $\varphi_i$ on the $A$ and $B$ regions are effectively decoupled from each other.
Hence, this partition function further factorizes to $Z^D_\Gamma=Z_A^D Z_B^D$, where $Z_A^D$ and $Z_B^D$
are the partition functions for a single field $\phi$ on $A$ and $B$ respectively,
satisfying in each case Dirichlet (fixed) boundary conditions ({\em mod} $2\pi r$)
at their common boundary $\Gamma$.
\end{enumerate}
Thus, we can write the trace $\textrm{tr} \rho_A^n$ as
\begin{equation}
\textrm{tr} \rho^{n}_{A} = \frac{\left(Z^{D}_\Gamma\right)^{n-1} Z_{A \cup B}}{Z^{n}_{A \cup B}} =
\left( \frac{Z^{D}_\Gamma }{Z_{A \cup B} }\right)^{n-1} =
\left( \frac{ Z_{A}^D Z_{B}^D }{Z_{A \cup B} } \right)^{n-1}.
\label{eq:trrho^n}
\end{equation}
Here the denominator factor, $Z_{A \cup B}^n$ comes from
the normalization factors, and represents the partition function over the entire system.
The entanglement entropy is then\cite{Fradkin2006}
\begin{equation}
S = -\log Z_{A}^D - \log Z_{B}^D+ \log Z_{A \cup B}\equiv F_A^D+F_B^D-F_{A \cup B},
\label{eq:SFM2}
\end{equation}
which, as indicated in the r.h.s of Eq. \eqref{eq:SFM2} reduces to the computation of the free energies
$F_A^D$, $F_B^D$ and $F_{A \cup B}$, for the equivalent 2D Euclidean CFT on regions $A$ and $B$,
each satisfying Dirichlet (fixed) boundary conditions on the common boundary $\Gamma$, and on the
full system, $A \cup B$, respectively.
The behavior of the free energy of a CFT as a function of the system size $\ell$ has been studied in detail.
The divergent terms, as $\ell \to \infty$, have the form\cite{Kac1966, privman88, Cardy-Peschel1988}
\begin{equation}
F(\ell)=f_0 \ell^2+ \sigma \ell -\frac{c}{6} \chi \ln \left(\frac{\ell}{a}\right) + {\cal O}(1)
\label{eq:cardy-peschel}
\end{equation}
provided the boundary $\Gamma$ is smooth (and differentiable). Here, $f_0$ and $\sigma$ are two
non-universal quantities, and $a$ is the short-distance cutoff; $c$ and $\chi$ are, respectively,
the central charge of the CFT and the Euler characteristic of the manifold.
It follows from this result that the entanglement entropy for region $A$ takes the form\cite{Fradkin2006}
\begin{equation}
S=\alpha \ell - \frac{c}{6} (\Delta \chi) \ln \left(\frac{\ell}{a}\right)+{\cal O}(1),
\label{eq:FM06}
\end{equation}
provided the boundary $\Gamma$ is smooth.
In all the geometries we discuss, the change in the Euler characteristic
vanishes, $\Delta \chi=0$, and there is no logarithmic term. However, we will show below that, when the logarithmic terms cancel, there exists a universal finite $O(1)$ term,
as well as other universal
dependences on the geometry (such as aspect ratios).
We will now extract these universal finite terms.
\subsection{The Entanglement Entropy of the Quantum Lifshitz Universality Class}
\label{sec:entropy-qlm}
Here we calculate $\gamma_{QCP}$ at QCPs of the QLM universality class defined by Eq.\eqref{eq:qlm}
for three different geometries: (i) a cylindrical geometry, (ii) a toroidal geometry,
and (iii) a disk geometry. For the cylinder and disk we assume the
Dirichlet boundary conditions at the open ends.
We use the known results on the free boson partition function Eq.\eqref{eq:norm}
for different topologies and boundary conditions\cite{Polchinski86,Weisberger87,Ginsparg:1988nr,Fendley1994,Eggert1992,yellow},
which are necessary for the calculation of entanglement entropy.
It is useful to note that the action Eq.\eqref{eq:Sofphi}, for a general value of the ``stiffness'' $k$, takes the standard form:
\begin{equation}
S[\varphi]= \frac{1}{8\pi} \int d^2x \; \left(\partial_\mu \varphi \right)^2,
\label{eq:rescaling}
\end{equation}
upon a rescaling of the field $\sqrt{2k} \phi=\varphi$. If $\phi$ is compactified with radius $r=1$,
the rescaled field $\varphi$ has an effective compactification radius $R=\sqrt{2 kr^2}$.
We find $\gamma_{QCP}$ to depend linearly on $\ln R$ in all cases we consider.
\subsubsection{The Cylinder}
\label{subsec:Cylinder}
\begin{figure}[hb]
\begin{center}
\includegraphics[width=0.5\textwidth]{cylinderAB.eps}
\end{center}
\caption{Cylinder}
\label{fig:Cylinder}
\end{figure}
Let us begin by considering first a system on a long cylinder of linear size $L$ and circumference $\ell$ with
$L \gg \ell$.
Region $A$, the region to be observed, is a cylinder of length $L_A$ and circumference
$\ell$. The complement region, $B$,
is a cylinder of length $L_B$ (see Fig.\ref{fig:Cylinder}), also with circumference $\ell$. We assume that the QLM wave function Eq.\eqref{eq:Psi0qlm}
and hence the associated 2D partition function Eq.\eqref{eq:norm}
obey the Dirichlet boundary conditions at both ends of the cylinder, $A \cup B$.
From Eq.\eqref{eq:SFM2}, the entanglement entropy $S_A=S_B\equiv S$ is given by
\begin{equation}
S= -\ln Z^{A}_{DD}(L_A,\ell)-\ln Z^{B}_{DD}(L_B,\ell) +\ln Z^{A\cup B}_{DD}(L_A+L_B,\ell)
\label{eq:S-cyl}
\end{equation}
Here $Z_{DD}(L,\ell)$ is the partition function of Eq.\eqref{eq:norm} for a boson with compactification radius $R$ on cylinder of length $L$ and circumference $\ell$ with Dirichlet boundary conditions on both ends, which is well known:\cite{Fendley1994}
\begin{equation}
Z_{DD}(L,\ell)=\mathcal{N}\; \frac{1}{R} \frac{\vartheta_3\left(\frac{2\tau}{R^2}\right)}{\eta(q^2)}
\label{eq:ZDD-cylinder}
\end{equation}
where $R=\sqrt{2r^2k}$ is the effective compactification radius (as before), and $\mathcal{N}$ is a non-universal regularization-dependent prefactor, responsible for the area and
perimeter dependent terms in the free energy shown in Eq.\eqref{eq:cardy-peschel}.
(There are no logarithmic terms for a cylinder or a torus as their Euler characteristic $\chi$ vanishes.)
In Eq.\eqref{eq:ZDD-cylinder} $\tau=i\frac{L}{\ell}$ is the modular parameter, encoding the geometry of the cylinder,
and $q=e^{2\pi i \tau}$. The elliptic theta-function $\vartheta_3(\tau)$ and the Dedekind eta-function $\eta(q)$
are given by
\begin{equation}
\vartheta_3(\tau)=\sum_{n=-\infty}^\infty q^{\frac{n^2}{2}}, \quad
\eta(q)=q^{\frac{1}{24}} \prod_{n=1}^\infty (1-q^n).
\label{eq:theta3-eta}
\end{equation}
The important feature of Eq.\eqref{eq:ZDD-cylinder} is the factor $1/R$, the contribution of the winding modes of the compactified boson on the cylinder
with Dirichlet boundary conditions.
Putting it all together, it is straightforward to find an expression for the entanglement entropy using Eq.\eqref{eq:SFM}.
In general, the entanglement entropy depends on the geometry ({\it e.g.\/} the aspect ratios $L/\ell$) of the cylinders,
encoded in ratios of theta and eta functions. However,
in the limit $L_A\gg \ell$, in which the length of the cylinders are long compared to their circumference,
the entanglement entropy given by Eq.\eqref{eq:S-cyl} and Eq.\eqref{eq:ZDD-cylinder} takes a simple form
\begin{equation}
S=\mu \ell + \ln R,
\label{eq:entropy-cylinder}
\end{equation}
where $\mu$ is a non-universal constant that depends on the regularization-dependent pre-factor $\mathcal{N}$
of Eq.\eqref{eq:ZDD-cylinder}. Hence, there is an $\mathcal{O}(1)$ universal contribution to the entanglement entropy,
$\gamma_{QCP}= \ln R$, for the cylindrical geometry.
The explicit dependence of $\gamma_{QCP}$ on the effective compactification radius $R=\sqrt{2kr^2}$ shows that it is determined by the winding modes of the compactified boson
and thus it is a universal quantity determined by the topology of the surface.
In particular we find that the universal piece of the entanglement entropy, $\gamma_{QCP}$, for a compactified boson is a
continuous function of the radius $R$, a consequence of the existence of an exactly marginal operator at this QCP.
We find similar relations for all the topologies we considered.
We will come back to this point in section \ref{subsec:micro}, in the context of several microscopic models of interest.
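As a numerical illustration, the following short Python sketch (pure NumPy, truncating the series of Eq.\eqref{eq:theta3-eta}) evaluates the cylinder result; the non-universal prefactor $\mathcal{N}$, which only feeds the $\mu\ell$ term, is dropped, so the output is $\gamma_{QCP}$ directly.
\begin{verbatim}
import numpy as np

def theta3(t, nmax=200):
    # theta_3(i t) = sum_n exp(-pi t n^2), Eq. (theta3-eta)
    n = np.arange(-nmax, nmax + 1)
    return np.sum(np.exp(-np.pi * t * n * n))

def log_eta_q2(t, nmax=200):
    # log eta(q^2) with q = exp(-2 pi t)
    q2 = np.exp(-4.0 * np.pi * t)
    n = np.arange(1, nmax + 1)
    return np.log(q2) / 24.0 + np.sum(np.log1p(-q2**n))

def log_ZDD(L, ell, R):
    # Universal part of Eq. (ZDD-cylinder), dropping the prefactor N
    t = L / ell
    return -np.log(R) + np.log(theta3(2.0 * t / R**2)) - log_eta_q2(t)

R, LA, LB, ell = 2.0, 40.0, 40.0, 1.0   # free-fermion radius, long cylinders
gamma = -(log_ZDD(LA, ell, R) + log_ZDD(LB, ell, R)
          - log_ZDD(LA + LB, ell, R))
print(gamma, np.log(R))                  # both approach ln 2
\end{verbatim}
The extensive $c$-dependent pieces carried by the $\eta$ functions cancel because $L_A + L_B = L$, leaving $\gamma_{QCP} \to \ln R$ in the long-cylinder limit, as in Eq.\eqref{eq:entropy-cylinder}.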
\subsubsection{The Torus}
\begin{figure}[hb]
\begin{center}
\includegraphics[width=0.5\textwidth]{torusAB.eps}
\end{center}
\caption{Torus}
\label{fig:Torus}
\end{figure}
We now consider the case in which the full system $A\cup B$ is a torus for which the real part of the modulus $L/\ell \gg 1$,
as shown in Fig.\ref{fig:Torus}. The two subsystems, $A$ and $B$ are now two cylinders,
of length $L_A$ and $L_B$ respectively ($L=L_A+L_B$),
both with the same circumference $\ell$. We will thus need the partition function
on a torus and on two cylinders (with both ends of the cylinders obeying Dirichlet boundary conditions.)
The trace $\textrm{tr} \rho_A^n$ now becomes
\begin{equation}
\textrm{tr } \rho_{A}^{n} =
\left( \frac{Z^{A}_{DD}(L_A,\ell) Z^{B}_{DD}\left(L_B,\ell\right)}{Z^{A\cup B}_{\rm torus}(L,\ell) } \right)^{n-1}.
\end{equation}
The partition functions for the two cylinders, $A$ and $B$, have the form of Eq. \eqref{eq:ZDD-cylinder}.
The partition function
for the torus is\cite{yellow,Ginsparg:1988nr}
\begin{equation}
Z_{\rm torus}(L,\ell) = \left(Z_{\rm cylinder}^{NN}\left(\frac{L}{2},\ell \right)\right)^2,
\label{eq:torus-cylinder}
\end{equation}
where $Z_{\rm cylinder}^{NN}(\frac{L}{2},\ell)$ is the partition function on a cylinder of length $\frac{L}{2}$ and
circumference $\ell$, with Neumann boundary conditions at both ends:
\begin{equation}
Z_{\rm cylinder}^{NN}\left(\frac{L}{2},\ell \right)=\mathcal{N}\; \sqrt{\frac{kr^2}{2}}\; \frac{\vartheta_3\left(\tau k r^2\right)}{\eta(q^2)},
\label{eq:ZNN-cylinder}
\end{equation}
where $\tau=i\frac{L}{\ell}$ and $q=\exp(2\pi i \tau)$.
In the limit $L_A \gg \ell \gg a$ and $L_B \gg \ell \gg a$, the entanglement entropy for the toroidal geometry is
\begin{equation}
S=\mu \ell +
2 \ln \left(\frac{R^2}{2}\right).
\label{eq:entropy-torus}
\end{equation}
Hence, for the toroidal geometry, the universal term is $\gamma_{QCP}=2 \ln \left(kr^2 \right)=2 \ln (R^2/2)$.
In Eq.\eqref{eq:entropy-torus} $\mu$ is, once again, a non-universal factor which depends on both the short distance regularization and boundary
conditions (in fact, it is not equal to the constant we also called ``$\mu$'' in the entanglement entropy
for the case of the
cylinder, Eq.\eqref{eq:entropy-cylinder}.) As was the case for the cylindrical geometry,
in the case of the torus
$\gamma_{QCP}$ is also determined by the contribution of the zero modes of the compactified boson to the partition functions.
Thus, here too, $\gamma_{QCP}$ depends on the
effective boson radius $R=\sqrt{2kr^2}$. However, the difference between the values of
$\gamma_{QCP}$ in Eq.\eqref{eq:entropy-torus} and Eq.\eqref{eq:entropy-cylinder}
arises because on the torus all
three partition functions have contributions from the zero modes.
\subsubsection{The Disk}
\label{subsec:Disk}
\begin{figure}[hb]
\begin{center}
\includegraphics[width=0.4\textwidth]{diskAB.eps}
\end{center}
\caption{Disk}
\label{fig:Disk}
\end{figure}
Finally, we compute the entanglement entropy for the disk geometry, shown in Fig.\ref{fig:Disk}.
The line of argument used
above applies here as well. This is the case discussed in Ref.[\onlinecite{Fradkin2006}],
where it was found that the logarithmic
term in the entanglement entropy cancels exactly if the boundary $\Gamma$ is smooth. Here we compute the (subleading)
finite universal piece.
To compute the entanglement entropy we need to compute three partition functions, on the two disks $A$ and
$A \cup B$,
and on the annulus $B$, all with Dirichlet boundary conditions. These partition functions were computed in the literature
long
ago for an uncompactified boson.\cite{Polchinski86,Weisberger87} They can be obtained from the partition functions on
cylinders,
with Dirichlet-Dirichlet (for the annulus) and
Dirichlet-Neumann (for the disks) boundary conditions by a conformal mapping $w=\frac{\ell}{2\pi} \ln z$, from the $z$
complex
plane to the cylinder (labeled by $w$). The partition function for the annulus (region $B$) of inner circumference $\ell$
and
outer circumference $L$ (with Dirichlet boundary conditions) is
\begin{equation}
Z_{DD}^B(L,\ell)=\mathcal{N} \; \sqrt{\frac{\pi}{\ln \left(L/\ell\right)}}\;
\frac{1}{\sqrt{2kr^2}}\frac{\vartheta_3\left(\frac{\tau_B}{r^2k}\right)}{\eta(q_B^2)}.
\label{eq:Z-annulus}
\end{equation}
Except for the factor of $1/\sqrt{2kr^2}$, which is due to the zero modes of the compactified boson, this result agrees with
those of Ref.[\onlinecite{Weisberger87}]. In Eq.\eqref{eq:Z-annulus} we have used $q_B=e^{2\pi i \tau_B}=\frac{\ell}{L}$
(with the modular parameter $\tau_B=-\frac{i}{2\pi} \; \ln \left(\frac{L}{\ell}\right)$).
Similarly, the partition functions on the two disks, regions $A$ and $A\cup B$, are conformally mapped to two infinitely long
cylinders (as the UV cutoff $a \to 0$) with Neumann-Dirichlet boundary conditions. These partition functions are
\begin{equation}
Z_{\rm disk}=2^{-5/12} \pi^{1/4} \frac{\vartheta_4 \left(\tau \right)}{\eta(q^2)},
\label{eq:ZNN}
\end{equation}
where $q=\left(\frac{a}{\ell}\right)^4,\left(\frac{a}{L}\right)^4$ for regions $A$ and $A \cup B$, respectively,
and $\tau$ is their corresponding modular
parameter; $\vartheta_4(\tau)$ is the elliptic theta-function
\begin{equation}
\vartheta_4(\tau)=\sum_{n=-\infty}^\infty (-1)^n q^{\frac{n^2}{2}}.
\label{eq:theta4}
\end{equation}
The resulting entanglement entropy for the planar (disk) geometry is found to be
\begin{equation}
S=\frac{1}{2} \; \ln \left[\frac{1}{\pi} \ln \left(\frac{L}{\ell} \right)\right]+ \ln R.
\label{eq:SDisk}
\end{equation}
Hence, for the case of the disk there is also a universal finite piece in the entanglement entropy,
$\gamma_{QCP}=\ln \sqrt{2kr^2}\equiv \ln R$.
As in the cases discussed above (the cylinder and the torus), here too $\gamma_{QCP}$ has a
topological origin as it is due to the winding modes of the compactified boson.
However, unlike the case of the cylinder and toroidal geometries, in the case of the disk there is
also a dependence on the aspect ratio $L/\ell$ (the double logarithmic term), as already noted in
Ref.[\onlinecite{Fradkin2006}]. (Note that we included the factor of $1/\pi$ in the double logarithm since it arises from the conformal mapping.)
\subsection{Entanglement Entropy of Quantum Dimer Models and Related Systems}
\label{subsec:micro}
The results on the entanglement entropy of the preceding subsections apply to several ``microscopic'' systems of interest. The
simplest of them is the quantum dimer model on bipartite lattices at the RK point (associated with the RK wave function of the QDM).
As noted in Ref.[\onlinecite{Ardonne2004}],
the RK point of the QDM maps onto the quantum Lifshitz model for a particular value of the radius $r=1$ and stiffness $k=2$ (in
the notation used here.) This corresponds to a 2D Euclidean boson CFT at the free fermion radius. Of course, this is not an
accident, since in
this case the lattice partition functions can also be computed exactly by pfaffian methods,
\cite{Fisher1961,Samuel1980,Fendley2002} and hence it is a free Dirac fermion system.
Generalized quantum dimer models have
been discussed recently.\cite{Alet2005,Papanikolaou2007b,Castelnovo2005} In these models the wave functions correspond to
dimer models with weights that depend on the number of dimer pairs on the plaquettes. For a considerable range of values of these
weights the system remains critical and can also be mapped onto a quantum Lifshitz model, albeit with a different stiffness
connected with the presence of an exactly marginal operator. Thus, in these models the stiffness varies continuously as a
function of the microscopic weights. This dependence, discussed in detail in Ref.[\onlinecite{Papanikolaou2007b}], is of course
non-universal, as it depends on the microscopic structure of the system. Nevertheless, the
critical exponents have a universal dependence on the stiffness. The same applies to the universal piece of the
entanglement entropy $\gamma_{QCP}$, which can be read-off from the results presented in this section.
Similarly, the {\em quantum} eight-vertex model wave function\cite{Ardonne2004} also maps onto a
free fermion problem for a special choice of weights.\cite{Baxter1982}
For general values of $k$ the fermions are interacting (see the discussion below) but the effects only
enter through an exactly marginal operator. The mapping of the quantum 2D eight-vertex model to the quantum Lifshitz model
was shown in detail in Ref.[\onlinecite{Ardonne2004}] where the relation between the stiffness $k$ of the compactified boson
and the Baxter weights is given explicitly. $k$ and the weight $c$ in the Baxter
wave function (along the six vertex line) are related by
\begin{equation}
\frac{\pi}{2k}=\cot^{-1}\sqrt{\frac{4}{c^4}-1}
\label{eq:6v}
\end{equation}
for a boson with compactification radius $r=1$ or, equivalently, an effective radius $R=\sqrt{2kr^2}$.
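As a quick check of Eq.\eqref{eq:6v}: at the free-fermion point $c^2=\sqrt{2}$ one has $\sqrt{4/c^4-1}=1$, so $\pi/2k=\cot^{-1}(1)=\pi/4$ and hence $k=2$, {\it i.e.\/} $R=2$, reproducing the RK values quoted below.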
The results of the preceding subsections on the entanglement entropy for the quantum Lifshitz model apply to the lattice
models almost without change. Once the mapping of the stiffness to the microscopic parameters
(as in the case of the quantum eight vertex
model) is known, the universal piece,
$\gamma_{QCP}$, can be read-off immediately. The only caveat here is that in lattice models it is impossible to have
closed simply connected regions with smooth boundaries. The resulting paths of the effective coarse grained quantum Lifshitz
model will always have singularities, such as corners, which contribute with a logarithmic dependence to the entanglement entropy
(as discussed in Ref.[\onlinecite{Fradkin2006}]) rendering the finite terms generally non-universal. The cylinder and torus
geometries are exceptional in this sense, and allow for a direct check of these ideas in microscopic models, either through an
exact solution or by means of numerical computations.
We end this discussion by giving the results for the universal entanglement entropies $\gamma_{QCP}$
for the Lifshitz universality class at the free
fermion (or dimer) and Kosterlitz-Thouless transition of the dimer and Baxter (six vertex) wave functions for all three geometries.
(See the summary of Table
\ref{table:entropies}.) At the ``free dimer'' point (the free fermion point of the dimer
models) the stiffness $k=2$ (corresponding to $c^2=\sqrt{2}$ in the Baxter wave function), and the universal term of the
entanglement entropy for a disk geometry is $\gamma_{QCP}^{\rm disk}=\ln \sqrt{2kr^2}=\ln 2$.
For the cylinder at the free dimer point we also found
$\gamma_{QCP}^{\rm cylinder}= \ln 2$, while for the torus we obtained
$\gamma_{QCP}^{\rm torus}=2\ln 2$. (Below we will discuss the relation of these results with the {\em topological} entanglement entropy of
the nearby $\mathbb{Z}_2$ topological phase.)
Away from the free dimer (or fermion) points, the stiffness $k$ changes and so does the
entanglement entropy. Thus, at the Kosterlitz-Thouless transition point of both the dimer and six vertex wave functions (where the
Baxter weight is $c=\sqrt{2}$), the stiffness is $k=1$.
(At this point the associated $c=1$ CFT has an $SU(2)_1$ Kac-Moody current algebra, and the effective compactification
radius here is $R=\sqrt{2}$.) The (finite) entanglement entropies now are $\gamma_{QCP}^{\rm torus}=0$,
$\gamma_{QCP}^{\rm cylinder}=\ln \sqrt{2}$, and $\gamma_{QCP}^{\rm disk}=\ln \sqrt{2}$.
\begin{table}[h]
\newcolumntype{Y}{>{\centering\arraybackslash$}m{1cm}<{$}}
\newcolumntype{C}{>{\centering\arraybackslash$}m{2cm}<{$}}
\renewcommand{\arraystretch}{2}
\begin{tabular}{|C||C|C|C|}
\hline
R &{\rm cylinder}&{\rm torus}& {\rm disk}\\
\hline
2 \ ({\rm RK point}) & \ln 2 & 2 \ln 2 & \ln 2 \\
\hline
\sqrt{2} \ ({\rm KT point}) & \ln \sqrt{2} & 0 & \ln \sqrt{2}\\
\hline
\end{tabular}
\caption{Universal entanglement entropies $\gamma_{QCP}$ of the lattice models in QLM universality class
in the cylinder, torus, and disk
geometries. $\gamma_{QCP}$ based on calculations from QLM is quoted at the free fermion point (or RK point) $R=2$, and at the Kosterlitz-Thouless ($SU(2)_1$) point, $R=\sqrt{2}$.}
\label{table:entropies}
\end{table}
\section{Generalized conformal QCPs associated with RCFT}
\label{sec:generalizedqcps}
We now generalize the application of Eq.\eqref{eq:SFM} to the computation of the entanglement entropy to the more general case of conformal QCPs, specifically those whose wave functions are associated with a 2D Euclidean RCFT (a CFT with a finite number of primary fields).
\subsection{Entanglement entropy and Boundary Conformal Field theory}
The ground state wave function for a conformal quantum critical point can be expressed as a
Gibbs weight associated with a 2D Euclidean CFT:
\begin{equation}
\Psi_{GS}[\phi]=\frac{1}{\sqrt{Z}} e^{\displaystyle{-S[\phi]/2}}
\end{equation}
as in the case of the QLM discussed in the previous section. Hence there is a one-to-one mapping between
the norm squared of the wave function
and the partition function of a local 2D Euclidean CFT, and also between the equal-time correlators
of the operators of the 2D conformal QCP
and the correlators of primary fields of the 2D Euclidean CFT.
Furthermore, we will also assume that the
associated Euclidean CFT is {\em unitary} (the $S$-matrix to be defined below
is unitary) and that it is an {\em RCFT}. The restriction to unitary RCFTs allows us to exploit the well developed technology for this large class of CFTs\cite{Ginsparg:1988nr,yellow}, especially the
operator product expansion (OPE) and the {\em modular $S$-matrix}, in the calculation of $\gamma_{QCP}$.
The behavior of RCFTs with specified boundary conditions (especially their partition functions)
is the subject of boundary conformal field theory, and was discussed extensively by Cardy\cite{Cardy1986a,Cardy1989}.
We will follow the approach and results of Cardy in this section.
We also need to specify the boundary conditions at the ends of the cylinder, {\it i.e.\/} the {\em boundary states}
of the boundary CFT.\cite{Cardy1986a} Let us denote these conformal boundary conditions by $(a,b)$.
The associated (conformally invariant) boundary states $\bra{a}$ and $\ket{b}$ can be constructed for each CFT.
On the other hand, at the common boundary $\Gamma$ between the regions $A$ and $B$, all $n-1$
fields must obey {\rm fixed} (`Dirichlet') boundary conditions. As shown by Cardy,\cite{Cardy1986a} this boundary condition is quite generally given
by the boundary state $\ket{0}$ in the conformal block of the identity ${\bf 1}$.
For simplicity, we will consider here only the geometries of a cylinder (with specific boundary conditions at each end) and a torus. As in
Eq.\eqref{eq:SFM} we will need to compute the free energies of region
$A$, $B$ and $A \cup B$ with fixed boundary conditions.
The partition function for a RCFT on a cylinder
of length $L$ and circumference $\ell$, with boundary conditions $a$ and $b$ on the left and right ends respectively,
$Z_{a/b}$, can be expressed in terms of the characters $\chi_i$ of the RCFT:
\begin{equation}
Z_{a/b}=\sum_j N^j_{ab} \chi_j\left(e^{\displaystyle{-\pi \ell/L}}\right),
\label{eq:Zbc}
\end{equation}
where the integers $N^j_{ab}$ are the fusion constants, the coefficients in the OPE of the RCFT,
\begin{equation}
\Phi_a \times \Phi_b=\sum_j N^j_{ab} \Phi_j.
\label{eq:OPE}
\end{equation}
The Virasoro characters $\chi_j$ are given by the trace over the descendants of the highest weight state $\ket{\Phi_j}$, which are obtained by acting on it
with the Virasoro generators $\hat{L}_{-n}$ ($n>0$):
\begin{equation}
\chi_j(e^{-\pi \ell/L})=e^{{\pi \ell c }/{24 L}} \;
\textrm{tr}_j\left(e^{-\frac{\pi \ell}{L} \hat{L}_0}\right),
\label{eq:characters}
\end{equation}
where $c$ is the central charge of the CFT and $\hat{L}_0$ is the $n=0$ Virasoro generator. Here the modular parameter is $\tau\equiv i\ell/2L$.
Under a modular transformation $\tau\rightarrow-1/\tau$, which exchanges the Euclidean ``space'' and ``time''
dimensions of the cylinder ({\it i.e.\/} it flips the cylinder from the ``horizontal'' to the ``vertical'' position),
the characters transform as
\begin{equation}
\chi_i\left(e^{\displaystyle{-\pi \ell/L}}\right)=S^j_i\; \chi_j\left(e^{\displaystyle{-4\pi L/\ell}}\right),
\label{eq:modular}
\end{equation}
where $S^j_i$ is the {\em modular $S$-matrix} of the RCFT. The modular $S$-matrix and
the fusion coefficients are related by the Verlinde formula \cite{Verlinde:1988sn}
\begin{equation}
N^j_{ab}=\sum_i \frac{S^i_a\, S^i_b\, \left(S^i_j\right)^*}{S^i_0}.
\label{eq:verlinde}
\end{equation}
The limit of interest here is, once again, $L \gg \ell$. Under a modular transformation, the partition function
of Eq.\eqref{eq:Zbc} becomes
\begin{equation}
Z_{a/b}=\sum_{i,j} N^i_{ab} \; S^j_i\; \chi_j\left(e^{\displaystyle{-4\pi L/\ell}}\right).
\label{eq:Zbc2}
\end{equation}
In the limit $\frac{\ell}{L} \to 0$, $Z_{a/b}$ is dominated by
the descendants of the identity $\bf{1}$ (up to exponentially small corrections). Hence, in this limit,
\begin{equation}
Z_{a/b} \to \sum_i N^i_{ab} \; S_i^0 \; \chi_0\left(e^{-4\pi L/\ell}\right) \to
e^{{\frac{\pi L c}{6\ell}}} \; \sum_i N^i_{ab} \; S^0_i
\label{eq:lowT}
\end{equation}
and $\ln Z_{a/b}$ becomes
\begin{equation}
\ln Z_{a/b}=\frac{\pi L c}{6\ell}+\ln g_{ab},
\label{eq:Zbc-g}
\end{equation}
dropping UV singular (non-universal) terms. The quantity $\ln g_{ab}$ in Eq.\eqref{eq:Zbc-g} is the
{\em boundary entropy} of a boundary RCFT introduced by Affleck and Ludwig\cite{Affleck1991}, where the ``ground state degeneracy''
$g_{ab}$ is given by
\begin{equation}
g_{ab}=\sum_i N^i_{ab} S_i^0.
\label{eq:g}
\end{equation}
Using Eq.\eqref{eq:SFM}, these standard results yield the
entanglement entropy of the 2D rational conformal QCP
for a cylindrical geometry (see Fig.\ref{fig:Cylinder}). For boundary conditions $a$ and $b$
at the two ends, associated with regions $A$ and $B$ respectively, the entanglement entropy is
\begin{eqnarray}
S&=& -\ln \left(\frac{Z_A^{a0}Z_B^{0b}}{Z_{A\cup B}^{ab}}\right)
\nonumber \\
&=&\mu \ell -\ln \left(\frac{\left(\sum_j N_{a0}^j \; S_j^0\right) \;
\left( \sum_k N_{0b}^k\;S_k^0\right)}{\sum_l N_{ab}^l\; S_l^0}\right)
\nonumber \\
&=&\mu \ell-\ln \left(\frac{g_{a0} g_{0b}}{g_{ab}}\right),
\label{eq:S-cylinder-rcft}
\end{eqnarray}
where we used the fact that the boundary condition at the common boundary $\Gamma$ is the {\em fixed} one, with boundary state $\ket{0}$.
The result Eq.\eqref{eq:S-cylinder-rcft} provides an explicit way to compute $\gamma_{QCP}$
for the entire class of many-body wave functions at QCPs associated with RCFT
in terms of the data of the RCFT:
\begin{equation}
\gamma_{QCP}=-\ln \left(\frac{\left(\sum_j N_{a0}^j \; S_j^0\right) \;
\left( \sum_k N_{0b}^k\;S_k^0\right)}{\sum_l N_{ab}^l\; S_l^0}\right).
\label{eq:gammaQCP}
\end{equation}
This is the main result of this section. It shows that $\gamma_{QCP}$ is in general determined by the OPE coefficients $N_{ab}^j$
(which encode the boundary conditions on the partition functions) and by the modular $S$-matrix, $S_i^j$, of the RCFT associated with
the {\em norm squared of the many-body wave function} at the given QCP.
It is important to note that it is also possible to define a unitary $S$-matrix that governs the transformation properties
of the {\em wave function} itself under a modular transformation. This modular $S$-matrix plays a central role in 2D topological phases and in topological field theories.\cite{Witten1989,Kitaev2006a,Bonderson2006b} However, these two $S$-matrices coincide only for topological theories; in general they are different, or one of them may not even be defined. We will come back to this issue in the discussion section.
A particularly simple result is obtained for the case of a cylinder with fixed boundary conditions on both ends. In this case, $Z_A$,
$Z_B$ and $Z_{A \cup B}$ are cylinders with fixed boundary conditions, and hence the boundary states for all three cases are in the
conformal block of the identity ${\bf 1}$. Since in this case the only non-vanishing OPE coefficient is $N_{0 0}^0=1$, the
universal term of the entanglement entropy, $\gamma_{QCP}$, depends only on the element $S_0^0$ of the modular $S$-matrix of the RCFT:
\begin{equation}
\gamma_{QCP}=-\ln S_0^0.
\label{eq:gammaQCP-simple}
\end{equation}
For the case in which the full region $A \cup B$ is a torus, we can use an analogue of Eq.\eqref{eq:S-cylinder-rcft}
by writing the partition function $Z_{A \cup B}$ in the denominator of Eq.\eqref{eq:S-cylinder-rcft}
as a modular invariant. In the limit of interest $L \gg \ell$, the denominator $g_{ab}$ of Eq.\eqref{eq:S-cylinder-rcft}
is replaced by a sum of terms with similar structure corresponding to a sum over boundary conditions (and twists) needed to
represent the torus (see, for instance, Ref.[\onlinecite{yellow}]). Similarly, Eq.\eqref{eq:S-cylinder-rcft} can also be applied
to the disk geometry upon a conformal mapping as it was done for the case of the compactified boson in section \ref{subsec:Disk}.
\subsection{Applications}
We will now discuss some examples of interest.
In applying the result Eq.\eqref{eq:gammaQCP} to specific systems, one should keep in mind that
the choice of the inner product of the 2D quantum theory can play a subtle role.
As pointed out recently by Fendley\cite{Fendley2008}, a scale invariant wave function
does not necessarily imply scale invariance of the correlators. Their actual behavior depends also on the choice of inner product. Here we have assumed that the states labeled by the set of field configurations $\phi(x,y)$ form an orthogonal basis. Hence, the norm of the wave function is a sum over states with the local weights squared. However what matters is that the {\em matrix elements} (and in particular the norm of the states) be scale-invariant. A number of interesting counterexamples are known.\cite{foot1}
The QLM is a special case where such a ``naive'' inner product maintains scale invariance. This is due to the existence of
exactly marginal operators in the QLM.
Below we discuss four cases where the ground state wave function with the ``naive'' inner product describes QCPs:
(i) a QCP associated with the 2D Ising CFT, (ii) the QCPs associated with compactified boson CFT, (iii) QCPs in quantum loop models\cite{Freedman2004b,Troyer2008}, and
(iv) quantum net models\cite{Levin2005,Fendley2005,Fidkowski2006,Fendley2008}. (See footnote Ref.[\onlinecite{foot2}].)
\subsubsection{The 2D Ising wave function}
As an example of a system described by an RCFT we consider a 2D quantum spin system whose ground state wave function has for
amplitudes the Gibbs weights of the 2D classical Ising model. This system is quantum critical if the squares of the weights
(which also have the form of Gibbs weights of the 2D Ising model) are at the critical point of the 2D Ising model,
the Onsager value.
The critical point of the 2D Ising model is the simplest RCFT. It has central charge $c=1/2$, and three (bulk) primary fields:
1) the identity ($\bf{1}$, with conformal weight $h=0$), 2) the energy density ($\varepsilon$, with conformal weight $h=1/2$), and 3)
the spin field ($\sigma$, with conformal weight $h=1/16$), which obey the operator algebra (OPE)
\begin{eqnarray}
&&\varepsilon \times \varepsilon={\bf 1}\nonumber \\
&&\varepsilon \times \sigma=\sigma \nonumber \\
&&\sigma \times \sigma={\bf 1}+\varepsilon.
\label{eq:Ising-OPE}
\end{eqnarray}
The critical Ising model has three possible boundary states:\cite{Cardy1986a} 1) the {\em spin up} state $\ket{+}$,
2) the {\em spin down} state $\ket{-}$, and 3) the {\em free} state $\ket{f}$. (Either the up or the down state can
be regarded as the fixed boundary state.) These three boundary states, $\ket{+}$, $\ket{-}$, and $\ket{f}$ are
in the conformal blocks of the identity ${\bf 1}$ (denoted by $\ket{\tilde{0}}$), the energy density $\varepsilon$ (denoted by
$\ket{\tilde{\frac{1}{2}}}$), and the spin field $\sigma$ (denoted by $\ket{\tilde{\frac{1}{16}}}$), respectively.
The boundary states are given by\cite{Cardy1986a}
\begin{eqnarray}
\ket{+}&\equiv&\ket{\tilde 0}=\frac{1}{\sqrt{2}} \ket{0}+\frac{1}{\sqrt{2}}\ket{\varepsilon}+
\frac{1}{\sqrt[4]{2}}\ket{\sigma}\nonumber\\
\ket{-}&\equiv&\ket{\tilde {\frac{1}{2}}}=\frac{1}{\sqrt{2}} \ket{0}+\frac{1}{\sqrt{2}}\ket{\varepsilon}-
\frac{1}{\sqrt[4]{2}}\ket{\sigma}\nonumber\\
\ket{f}&\equiv&\ket{\tilde{\frac{1}{16}}}= \ket{0}-\ket{\varepsilon}.
\label{eq:boundary-states-Ising}
\end{eqnarray}
The modular $S$-matrix is
\begin{equation}
S=
\left(
\begin{array}{ccc}
\frac{1}{2} & \frac{1}{2} & \frac{1}{\sqrt{2}} \\
\frac{1}{2} & \frac{1}{2} & -\frac{1}{\sqrt{2}} \\
\frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} & 0
\end{array}
\right),
\label{eq:Ising-S}
\end{equation}
where the rows and columns are labeled by the highest weights $0$, $1/2$, and $1/16$, in that order.
The entanglement entropy for this wave function can now be computed, using the result of Eq.\eqref{eq:S-cylinder-rcft}.
We will take region $A \cup B$ to be a long cylinder of length $L$ and circumference $\ell$, and regions $A$ and $B$ to be
two cylinders of lengths $L_A$ and $L_B$ respectively, with the same circumference $\ell$, and with $L=L_A+L_B$.
Let us take the boundary conditions at both ends of $A \cup B$ to be free. (By a conformal mapping, this case can also be mapped onto the disk.) On the cylinder, the free boundary condition is described by the boundary state $\ket{f}$,
which is in the conformal block of the primary field $\sigma$.
On the other hand, at the boundary $\Gamma$ between regions $A$ and $B$, we have the fixed boundary condition,
the up state $\ket{+}$.
We readily find
\begin{eqnarray}
&&g_{\sigma, 0}=N_{\sigma,0}^\sigma S_\sigma^0=\frac{1}{\sqrt{2}} \nonumber \\
&&g_{0,\sigma}=N_{0,\sigma}^\sigma S_\sigma^0=\frac{1}{\sqrt{2}}\nonumber\\
&&g_{\sigma,\sigma}=N_{\sigma,\sigma}^0 S_0^0+N_{\sigma,\sigma}^\varepsilon S_\varepsilon^0=1.
\end{eqnarray}
The universal term of the entanglement entropy, $\gamma_{QCP}$, is now
\begin{equation}
\gamma_{QCP}=-\ln \frac{g_{a0} g_{0b}}{g_{ab}}=-\ln \frac{\left(S_\sigma^0\right)^2}{S_0^0+S_\varepsilon^0}=\ln 2.
\label{eq:entropy-cylinder-Ising-free}
\end{equation}
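As a concrete illustration (and only as such), the arithmetic behind Eq.\eqref{eq:entropy-cylinder-Ising-free}, as well as the fixed-fixed case treated next, can be reproduced in a few lines of code, with the Ising modular $S$-matrix of Eq.\eqref{eq:Ising-S} and the fusion coefficients read off from Eq.\eqref{eq:Ising-OPE} hard-coded:
\begin{verbatim}
import numpy as np

# Ising modular S-matrix; rows/columns ordered as (1, eps, sigma),
# i.e. highest weights 0, 1/2, 1/16.
S = np.array([[0.5, 0.5, 1/np.sqrt(2)],
              [0.5, 0.5, -1/np.sqrt(2)],
              [1/np.sqrt(2), -1/np.sqrt(2), 0.0]])

# Fusion coefficients N[a][b][c] from the Ising fusion algebra.
N = np.zeros((3, 3, 3), dtype=int)
N[0, :, :] = np.eye(3)           # 1 x a = a
N[:, 0, :] = np.eye(3)           # a x 1 = a
N[1, 1, 0] = 1                   # eps x eps = 1
N[1, 2, 2] = N[2, 1, 2] = 1      # eps x sigma = sigma
N[2, 2, 0] = N[2, 2, 1] = 1      # sigma x sigma = 1 + eps

def g(a, b):
    # g_{ab} = sum_j N_{ab}^j S_j^0
    return sum(N[a, b, j] * S[j, 0] for j in range(3))

ID, EPS, SIG = 0, 1, 2
# free b.c. at both ends of A u B, fixed b.c. on Gamma:
print(-np.log(g(SIG, ID)*g(ID, SIG)/g(SIG, SIG)))  # ln 2 = 0.693...
# fixed b.c. everywhere:
print(-np.log(g(ID, ID)*g(ID, ID)/g(ID, ID)))      # -ln S_0^0 = ln 2
\end{verbatim}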
On the other hand, we could consider instead the case of fixed boundary conditions at both ends of the cylinder $A \cup B$. This corresponds to the boundary state $\ket{\tilde{0}}$. Since the boundary condition on $\Gamma$ is always {\em fixed},
$\gamma_{QCP}$ is now
\begin{equation}
\gamma_{QCP}=-\ln S_0^0=\ln 2.
\label{eq:entropy-Ising-cylinder-fixed}
\end{equation}
In the case where $A \cup B$ is a torus of large circumference $L$ and small circumference $\ell$
(hence with modular parameter
$\tau=i\ell/L$), the regions $A$ and $B$ are cylinders of lengths $L_A$ and $L_B$ and circumference $\ell$, with fixed boundary
conditions at both ends. The partition function for the torus, $Z_{A\cup B}^{\rm torus}$, is\cite{yellow,Ginsparg:1988nr}
\begin{equation}
Z_{A\cup B}^{\rm torus}=\frac{1}{2}\left(\bigg\vert \frac{\vartheta_2(\tau)}{\eta(\tau)}\bigg\vert+\bigg\vert \frac{\vartheta_3(\tau)}{\eta(\tau)} \bigg\vert+ \bigg\vert\frac{\vartheta_4(\tau)}{\eta(\tau)}\bigg\vert\right).
\label{eq:Z-Ising-torus}
\end{equation}
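The modular invariance of this expression, invoked in the next step, is easy to verify numerically for purely imaginary $\tau=it$. The following minimal sketch (in Python with the {\tt mpmath} library, evaluating the theta functions at $z=0$ with nome $q=e^{i\pi\tau}$ and using the identity $\eta^3=\theta_2\theta_3\theta_4/2$) is illustrative only:
\begin{verbatim}
from mpmath import mp, exp, pi, jtheta, cbrt

mp.dps = 30

def Z_ising(t):                            # Z at tau = i*t
    q = exp(-pi * t)                       # nome q = exp(i*pi*tau)
    th = [jtheta(n, 0, q) for n in (2, 3, 4)]
    eta = cbrt(th[0] * th[1] * th[2] / 2)  # eta^3 = th2*th3*th4/2
    return sum(th) / (2 * eta)

t = mp.mpf('0.31')                         # t = ell/L
print(Z_ising(t))                          # equals ...
print(Z_ising(1 / t))                      # ... Z at tau' = -1/tau
\end{verbatim}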
Using the modular invariance of $Z$ on the torus ($\tau \to -1/\tau$), one finds that in the limit $L\gg \ell$,
$Z_{A \cup B}^{\rm torus}
\to \frac{3}{2}$. Hence, in the case of the torus, $\gamma_{QCP}$ is
\begin{equation}
\gamma_{QCP}^{\rm torus}=-\ln \frac{\left(S_0^0\right)^2}{\frac{3}{2}}=\ln 6.
\label{eq:gammaQCP-torus}
\end{equation}
\subsubsection{The compactified boson wave function}
We can also use this approach to compute the entanglement entropy for the compactified boson wave function (the quantum Lifshitz
state) discussed in the previous Section.
However, unlike the explicit computation of the boson determinant presented in the previous section,
a computation that can be done for any compactification radius $R$,
the boundary CFT approach used in this section applies only to rational CFTs.
This restricts the compactification radius to
be such that $R^2$ is a rational number. (The general case can be regarded as a limit.)
It is now straightforward to compute the entanglement entropy using Eq.\eqref{eq:S-cylinder-rcft}.
For this case we find $\gamma_{QCP}=-\ln S_0^0=\ln R$, consistent with the results of the preceding section.
\subsubsection{Quantum loop models}
Quantum loop models are two-dimensional quantum systems whose Hilbert space is spanned by states labelled by
loop configurations (or coverings) of a two-dimensional lattice. We will denote by $\{\mathcal{L}\}$ the set of these configurations.
Conventionally, this set of states
is taken to be a basis of the loop Hilbert space, and hence the states are assumed to be linearly independent, complete, and orthonormal
(with respect to the naively defined inner product).
Quantum loop models were originally proposed as candidates for time-reversal invariant topological phases.
\cite{Freedman2001,Freedman2004,Freedman2004b} Wave functions in the Hilbert space of (multi) loop configurations have the form
\begin{equation}
\ket{\Psi_{(x,d)}}=\sum_{\mathcal{L}} x^{L[\mathcal{L}]} d^{N[\mathcal{L}]} \ket{\mathcal{L}}.
\label{eq:loop-zd}
\end{equation}
Here $N[\mathcal{L}]$ is the number of loops in state (configuration) $\mathcal{L}$, $L[\mathcal{L}]$ is the total length of the loops in the
configuration,
$d$ is the ``loop fugacity'', and $x$ is the weight (fugacity) of a unit length of loop.
The candidate wave functions of a quantum loop model in a putative topological phase
depend on the loop configuration but not on the length of the loops. The simplest such state is the ``$d$-isotopy''
(multi) loop wave function \cite{Freedman2001,Freedman2004}
\begin{equation}
\ket{\Psi_d}=\sum_{\mathcal{L}} d^{N[\mathcal{L}]} \ket{\mathcal{L}}
\label{eq:loop-d}
\end{equation}
obtained from $\ket{\Psi_{(x,d)}}$ by setting the fugacity of a unit length of loop to $x=1$.
This is a generalization of Kitaev's ``Toric Code'' wave function\cite{Kitaev2003} ($d=1$), {\it i.e.\/} a $\mathbb{Z}_2$ gauge theory
deep in its deconfined phase in $2+1$ dimensions.
Another limit of interest is the ``fully packed'' state
\begin{equation}
\ket{\Psi_{(\infty,d)}}={\lim_{ x \to \infty}} \sum_{\mathcal{L}} x^{L[\mathcal{L}]}d^{N[\mathcal{L}]} \ket{\mathcal{L}}
\label{eq:loop-infty-d}
\end{equation}
obtained by setting $x \to \infty$, which forces the constraint that the loops cover the maximal allowable set of links on the
lattice.
With the naively defined inner product, the norm squared of the $d$-isotopy state $\ket{\Psi_d}$, Eq.\eqref{eq:loop-d}, is
\begin{equation}
Z(d^2)\equiv||\Psi_d||^2=\sum_{\mathcal{L}} d^{2N[\mathcal{L}]},
\label{eq:Zd}
\end{equation}
which is the same as the partition function of a 2D classical loop model on the same lattice, with a weight $d^2$ per loop. Likewise, the norm
squared of the fully packed loop state $\ket{\Psi_{(\infty,d)}}$ is the partition function $Z(\infty,d^2)$
of the classical fully packed loop model, with fugacity $d^2$, on the same lattice.
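The identification of the norm squared with a classical partition function is purely combinatorial. A toy sketch (with made-up loop counts for a handful of hypothetical configurations) makes this explicit:
\begin{verbatim}
# N[L] for five made-up loop configurations:
loop_counts = [0, 1, 1, 2, 3]
d = 2 ** 0.5

# <L|Psi_d> = d^{N[L]}; the basis {|L>} is orthonormal:
amplitudes = [d ** n for n in loop_counts]
norm_sq = sum(a * a for a in amplitudes)

# classical loop gas with weight d^2 per loop:
Z = sum((d ** 2) ** n for n in loop_counts)
assert abs(norm_sq - Z) < 1e-12
\end{verbatim}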
The partition functions of classical loop models on a 2D lattice have been studied extensively, particularly on the honeycomb lattice
(for a detailed review see Refs.[\onlinecite{Nienhuis1987,Kondev96,Kondev96b}].) In the fully packed limit, the partition function
$Z(\infty,d^2)$ is critical for $d\leq \sqrt{2}$. The universality classes of the fully packed loop models (on the honeycomb lattice)
are rational {\it{unitary}} CFTs only for $d=1$ (the $SU(2)_1$ RCFT) and $d=\sqrt{2}$ (the $SU(3)_1$ RCFT).
For finite $x$, the partition function for the dense loop gas $Z(x,d^2)$ is also critical for $d\leq \sqrt{2}$.
The universality classes are again rational unitary CFTs only for $d=1$ and $d=\sqrt{2}$. The fixed point for the case $d=1$ is equivalent to the statistics of the proliferated domain walls of the classical 2D Ising model at infinite temperature.\cite{Nienhuis1987} For $d=\sqrt{2}$ the dense and dilute loop gases have the same critical theory, the Kosterlitz-Thouless critical point, and hence also the $SU(2)_1$
RCFT.
We can now use the results in Eqs.\eqref{eq:gammaQCP} and \eqref{eq:gammaQCP-simple} to compute the universal term of
the entanglement entropy for the loop wave functions with $d=1,\sqrt{2}$, on a cylinder with fixed boundary conditions (for the loops). The modular $S$-matrices are
known,\cite{Ginsparg:1988nr,yellow,Dong2008} and the needed $S_0^0$ matrix elements are
$S_0^0=\frac{1}{\sqrt{2}},\frac{1}{\sqrt{3}}$, for
$SU(2)_1$ and $SU(3)_1$, respectively. The universal term $\gamma_{QCP}$ of the entanglement entropy for each case is
$\gamma_{QCP}=\ln \sqrt{2}, \ln \sqrt{3}, -\ln 2$ for the fully packed state at $d=1$ (and also for the loop gas at $d=\sqrt{2}$), the fully
packed loop state at $d=\sqrt{2}$, and the dense loop gas at $d=1$ (corresponding to the Kitaev state), respectively.
Here we have used a recent result on the behavior of the dense loop model by Cardy,\cite{Cardy2006} who showed (among many other things) that for $d=1$ the partition function of the dense loop model on the cylinder is $Z=2$. We will see in the discussion section that this {\em negative} value, $\gamma=-\ln 2$, coincides with the direct computation of the {\em topological} entanglement entropy in the Kitaev wave function.\cite{Hamma2005,Levin2006,Kitaev2006a}
\subsubsection{Quantum net models}
Finally, we will briefly discuss the more interesting, but less understood, problem of the wave functions for {\em quantum net models}\cite{Levin2005,Fendley2005,Fidkowski2006,Fendley2008}. These states were proposed as candidates for a time-reversal invariant non-Abelian topological phase. The Hilbert space of quantum net models is spanned by the coverings of a lattice by configurations of nets, {\it i.e.\/} branching loops (with trivalent vertices). An interesting example is the chromatic polynomial state.\cite{Fendley2005} In this state, the nets are regarded as a configuration of domain walls of a $Q$-state Potts model, and the weight of a given state $\ket{\mathcal{L}}$ is the chromatic polynomial $\chi_Q[\mathcal{L}]$ of the configuration. The chromatic polynomial counts the number of ways of coloring the regions of the lattice separated by the domain walls of a $Q$-state 2D Potts model. Chromatic polynomials were first introduced in the computation of the low-temperature expansion of the 2D Potts models (see, for instance, Ref.[\onlinecite{Baxter1982}]). For non-integer $Q$, the chromatic polynomial can be computed by an iterative procedure.\cite{Fendley2005} The 2D Potts model is known to have a critical point for $Q\leq 4$.
Following Ref.[\onlinecite{Fendley2005}], we consider the norm of the chromatic polynomial state with $Q\leq 4$. In order to compute the norm, we have to square the weight, resulting in a partition function involving the sum of the {\em square} of the chromatic polynomial. It is then natural to ask for a value of $Q$ such that $\chi_Q^2[\mathcal{L}] \propto \chi_{Q_{{\rm eff}}}[\mathcal{L}]$, for some $Q_{\rm eff}$. Then the nets will be critical provided $Q_{{\rm eff}} \leq 4$. It turns out\cite{Fendley2005} that, up to a suitably chosen fugacity for trivalent vertices\cite{Fidkowski2006}, this property holds only for $\sqrt{Q}=\frac{1+\sqrt{5}}{2}$, the {\em Golden Ratio}, with $Q_{\rm eff}=2+\frac{1+\sqrt{5}}{2}<4$. Thus, for this state the nets are critical.
This case is interesting for several reasons. One is that strong arguments\cite{Fendley2005} suggest that it is possible to define for this wave function an excitation (a defect), denoted by $\tau$, which is a Fibonacci anyon (not to be confused with the modular parameter!) with the fusion rule $\tau \times \tau={\bf 1} + \tau$. Fibonacci anyons are of prime interest in the topological approach to quantum computation.\cite{Freedman2002} However, for this approach to work, this state must describe a topological state, which requires that its local excitations (not the nets) be gapped. Fendley\cite{Fendley2008} has recently given strong arguments implying that this state, with the naive inner product we use here, is not topological but rather a quantum critical state.
Another feature that makes this state interesting is that the correlations encoded in the norm of the state for $\sqrt{Q}=\frac{1+\sqrt{5}}{2}$ are described by an RCFT, the minimal model of the Friedan-Qiu-Shenker\cite{Friedan1984} series of unitary RCFTs at level $m=9$, with central charge $c=\frac{14}{15}$. This minimal model has a large number of primaries (36) and has not been studied in detail. Nevertheless, its modular $S$-matrix is known (as it is for the entire series\cite{Ginsparg:1988nr}). Although, to the best of our knowledge, the boundary CFT of this minimal model has not been investigated, we conjecture that the boundary state corresponding to the fixed boundary condition is the analog of the state $\ket{\tilde 0}$ in the 2D critical Ising model (the $m=3$ member of the same series), {\it i.e.\/} the state in the conformal block of the identity.\cite{Cardy1989} Thus, if we consider this state on a cylinder with fixed boundary conditions, the entanglement entropy for observing only half of the system has a universal term $\gamma_{QCP}$ of the form given in Eq.\eqref{eq:gammaQCP-simple}, and hence is given in terms of the $S_0^0$ element of the modular $S$-matrix of this RCFT:\cite{Ginsparg:1988nr}
\begin{equation}
\gamma_{QCP}=-\ln S_0^0=-\ln \left(\frac{2\sin (\frac{\pi}{9})}{15+3\sqrt{5}}\right).
\label{eq:gammaQCP-fibonacci}
\end{equation}
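For completeness, this matrix element follows from the standard $S$-matrix of the unitary minimal series (see, e.g., Ref.[\onlinecite{yellow}]):
\begin{equation}
S_0^0=\sqrt{\frac{8}{m(m+1)}}\,\sin\left(\frac{\pi}{m}\right)\sin\left(\frac{\pi}{m+1}\right)
\;\stackrel{m=9}{=}\;\sqrt{\frac{8}{90}}\,\sin\frac{\pi}{9}\,\sin\frac{\pi}{10}\approx 0.0315,
\nonumber
\end{equation}
where $\sin(\pi/10)=(\sqrt{5}-1)/4$ yields the closed form quoted above.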
\section{Conclusions and Discussion}
\label{sec:conclusions}
We have shown that at 2D conformal QCPs (with dynamical exponent $z=2$), the entanglement entropy for a region with a smooth boundary quite generally has universal finite contributions which we denoted by $\gamma_{QCP}$:
\begin{equation}
S_{QCP}=\mu \ell+\gamma_{QCP}.
\nonumber
\end{equation}
We studied the universal nature of $\gamma_{QCP}$ with two complementary approaches for large classes of 2D conformal QCPs.
First, for the QLM universality class, we calculated $\gamma_{QCP}$ explicitly in terms of
the partition functions (those of the compactified boson) associated with the norm squared of the wave function.
We then used known results from boundary CFT to show that $\gamma_{QCP}$ is determined by the detailed structure of the associated RCFT, encoded in the modular $S$-matrix and in the OPE fusion coefficients of the primary fields.
We also applied these general results to compute $\gamma_{QCP}$ in several systems of interest: the quantum Lifshitz model, the generalized quantum dimer and quantum eight-vertex models, and quantum loop and net models.
We showed (cf. Eq.\eqref{eq:gammaQCP-simple}) that for a general conformal quantum critical point, whose ground state wave function is given by the Gibbs weights of a Euclidean rational unitary CFT, the universal term $\gamma_{QCP}$ is determined by the modular $S$-matrix associated with the norm squared of the wave function. Thus, the modular $S$-matrix of a topological phase and that of the wave function of a 2D conformal quantum critical point have conceptually different origins.
We note that while our result for the entanglement entropy has the {\em same form} as the entanglement entropy for a {\em topological phase}, \cite{Kitaev2006a,Levin2006} the finite universal terms $\gamma_{QCP}$ and $\gamma_{\rm topo}$ have a different origin and structure. In the case of a topological phase, $\gamma_{\rm topo}$ is in general determined by the modular $S$-matrix of the
topological field theory of the topological phase, and it is given in terms of topological invariants of the effective topological field theory that describes this phase.\cite{Kitaev2006a,Levin2006,Dong2008}
This modular $S$-matrix governs the transformation properties of the ground state within the degenerate ground state Hilbert space of the topological phase
under modular transformations on a torus: $\tau \to -1/\tau$, where $\tau$ is the modular parameter of the torus\cite{Witten1989}.
On the other hand, for 2D conformal QCPs whose ground state wave function is given by the Gibbs weights of a Euclidean rational unitary CFT,
the universal term $\gamma_{QCP}$ is determined by the modular $S$-matrix associated with the norm squared of the wave function; here the $S$-matrix connects different boundary conditions. Hence the modular $S$-matrix plays conceptually different roles in the computation of the universal $\mathcal{O}(1)$ terms of the two entanglement entropies. Moreover, $\gamma_{QCP}$ and $\gamma_{topo}$ enter with opposite signs in their contributions to their respective entanglement entropies.
In fact, in all the cases we examined we found $\gamma_{QCP}>0$, except for the Kitaev state, which is topological and for which we recovered the known result. (It is unclear to us how general this difference actually is and, more importantly, whether it has a deeper meaning.) In any case, the fact that the entanglement entropy has the universal form of Eq.\eqref{eq:Stopo} has led to the widespread assumption that this scaling is a signature of a topological phase. However, we have shown here that this is not necessarily the case, as this scaling is also obeyed at conformal quantum critical points in 2D.
It is also interesting to note the striking similarity of the structure of Eq.\eqref{eq:gammaQCP} (with its dependence on the $S$-matrix and the fusion rules) with the results of Fendley, Fisher and Nayak\cite{Fendley2007c} for the change in the entanglement entropy of a 2D topological fluid, a fractional quantum Hall state, by the action of a point contact. Recently, Refs.[\onlinecite{Caraglio2008,Furukawa2008}] found finite universal terms in the entanglement entropy for $1+1$ dimensional CFTs with a similar structure to what we found here in 2D conformal QCPs. Calculations of quantum fidelity in 1D also find a similar structure.\cite{Abasto2008,Venuti2008} Recent work by Li and Haldane\cite{Haldane2008} also raises the interesting possibility of computing the entanglement spectrum for a theory with a wave function described by a known CFT, but this is beyond the scope of this paper.
Finally, given the close connection between the universal piece of the entanglement entropy $\gamma_{QCP}$ and the Affleck-Ludwig entropy of the associated 2D classical partition functions, it is interesting to ask whether $\gamma_{QCP}$ may flow under some perturbation. Clearly this cannot happen under the action of a {\em boundary perturbation} (as in the Affleck-Ludwig case), as that would require a physical change of the wave function on the boundary $\Gamma$, rather than a measurement. However, it is interesting to consider instead how the entanglement entropy (and in particular the finite term $\gamma_{QCP}$) would
evolve as one perturbed the (bulk) system either by a finite non-zero temperature into the quantum critical regime,
or by a relevant operator that drives the system into a nearby topologically ordered phase that can be accessed by local perturbations
\cite{Moessner2001,Fendley2002,Ardonne2004,Fendley2005,Fendley2008} and to investigate possible connections with RCFT.\cite{Dorey:1999cj, Dorey:2004xk, Green:2007wr}
\begin{acknowledgments}
We thank John Cardy, Paul Fendley, Greg Moore, and Joel Moore for their comments and suggestions. BH and MM thank the
Les Houches Summer
School for its hospitality. The work of EF and BH was supported by the
National Science Foundation Grant No. DMR 0758462 and DMR 0442537 at the University of Illinois. MM was supported by the Stanford Institute for Theoretical Physics, the NSF under grant PHY-0244728, the DOE under contract DE-AC03-76SF00515, and the ARCS Foundation. EAK was supported by the Stanford Institute for Theoretical Physics during a part of this work.
\end{acknowledgments}
\section{Introduction}
\label{intro}
Circumstellar disks around young Pre-Main Sequence (PMS) stars have been studied in depth since the discovery of their infrared emission \citep{Mendo66,Mendo68}. To date, several disk models have been developed and a large number of star-forming regions have been observed, in order to understand the physical properties and the evolution of circumstellar disks. Several authors (e.g. \citealt{Ha01}) claim that the emission from the inner region of the disks, responsible for the NIR excess in T-Tauri stars, declines on a timescale between $\sim$1 and $\sim$10 Myr. Important clues about the typical timescale of disk evolution come from studies of PMS stars that have cleared the inner region of their circumstellar disks, and thus show excesses in mid- and far-infrared but not in near-infrared bands. These stars with disk are usually considered to be in a transitional phase between Class~II and disk-less Class~III PMS stars.\par
The evolution of circumstellar disks can be influenced by nearby massive stars, as shown by observations of the evaporating protoplanetary disks in the Trapezium, near the massive star $\Theta^1$ Ori (see, for example, \citealp{Stor99} and references therein). In this case, the photoevaporation process induced by the UV radiation emitted by $\Theta^1$ Ori dissipates the nearby disks on timescales shorter than $10^6$ years. On the other hand, several authors (for example \citealp{Eis06}) claim that externally induced photoevaporation is not an efficient mechanism for the truncation of circumstellar disks around low-mass PMS stars. The debate on this topic is still open.\par
By comparing the spatial variation of the disk frequency with the positions of massive stars in young open clusters, it is possible to study how the radiation from massive stars affects the evolution of circumstellar disks and the formation of new generations of stars. \citet{Bal07} and \citet{io07}, hereafter GPM07, applied this approach to two young clusters, NGC~2244 and NGC~6611, respectively. In their Spitzer/IRAC-MIPS study of NGC~2244 (at a distance of 1.4-1.7 kpc from the Sun and with an age of about 2-3 Myr), \citet{Bal07} found that the disk frequency drops quickly within a distance of 0.5 parsec from the massive cluster members. At larger distances, however, stars with disk do not experience effects induced by massive stars, since the disk frequency is not spatially correlated with their positions. \par
The young open cluster NGC~6611, in the Eagle Nebula, is also an ideal target for this kind of study, thanks to its rich PMS population and its large number of massive members (56 with spectral classes earlier than B5, \citealp{Hil93}), which are distributed irregularly in the central region of the cluster. In order to obtain a reliable list of cluster members, GPM07 used a membership criterion based on infrared excesses (to select members with disk) and X-ray emission (to select members without disk). In fact, thanks to the high X-ray luminosity of PMS stars, this criterion yields an unbiased sample of PMS cluster members. With this method, a total of 1122 candidate PMS stars associated with NGC~6611 or with the outer part of the Eagle Nebula were identified. In NGC~6611, GPM07 found evidence that the UV radiation from massive stars has dissipated the disks around nearby PMS members on short timescales, since the disk frequency declines at small distances from massive members. To obtain this result, GPM07 calculated the flux emitted by all the massive stars incident on each cluster member, and then obtained the disk frequency in various bins of incident flux. In this calculation the projected distances were used instead of the real ones; however, GPM07 showed that this approximation does not affect their result. \par
The effects of the energetic radiation from massive stars on the evolution of nearby circumstellar disks are also studied with simulations of the evolution of open clusters. For example, \citet{Ada06} claim that externally induced photoevaporation does not affect the evolution of disks in clusters with fewer than 1000 members. Instead, more populated clusters, as shown by \citet{Fatu08}, can be environments in which disks are efficiently evaporated on short timescales, since such clusters have intense UV fields that are practically independent of the number of stellar members. \par
This paper is the follow-up of the GPM07 study, enriching the list of cluster members by using Spitzer/IRAC data and confirming its principal results; both works, therefore, will be the basis of a subsequent paper, focused on the analysis of the Spectral Energy Distributions of selected cluster members. \par
The Spitzer/IRAC observations of NGC~6611 used in this work were obtained from the Galactic Legacy Infrared Mid-Plane Survey Extraordinaire (GLIMPSE; \citealt{Ben03}). The {\it Infrared Array Camera} (IRAC), mounted on the Spitzer Space Telescope \citep{Faz04}, allows the study of stars with circumstellar disk in four infrared bands, centered on 3.6~$\mu m$, 4.5~$\mu m$, 5.8~$\mu m$ and 8.0~$\mu m$. In this region of the spectrum, the contribution from the stellar photosphere is usually small compared to the emission from the disk and envelope, and the effects of extinction by the interstellar medium are smaller than at shorter wavelengths. These facts permit a more reliable selection of stars with circumstellar disks than using 2MASS data alone. \par
Our paper is organized as follows: in Sect. \ref{catal} we present the results of the cross-identification between the GLIMPSE catalog and that compiled in GPM07; in Sects. \ref{colcolsp} and \ref{Qpar} we describe the identification of stars with disk, using the IRAC color-color diagram and suitable reddening-free color indices that combine optical and NIR colors, respectively; in Sect. \ref{compat} we compare the two diagnostics used in this paper; in Sect. \ref{memcat} we describe the catalog of the cluster members; in Sect. \ref{final} we re-examine the results obtained in GPM07 using the updated member list.\par
\section{Multi-wavelength catalog}
\label{catal}
\subsection{Description of GPM07 multiband catalog and NGC~6611 parameters}
\label{gpmcat}
GPM07 compiled a multi-band catalog of NGC~6611, using optical observations in the BVI bands of a $33^{\prime}\times34^{\prime}$ field of view centered on the cluster, obtained with the ESO 2.2m/WFI camera (PI: Mundt), 2MASS public data for the same sky region, and an X-ray observation performed with Chandra/ACIS-I in a $17^{\prime}\times17^{\prime}$ region (obs.ID 978; \citealt{Lin07}). Hereafter, we will call the fields of view of the respective instruments the WFI FOV and the ACIS FOV; the latter is contained in the former, approximately at its center. The catalog compiled in GPM07 consists of 38995 sources falling in the WFI FOV. \par
Using this catalog, GPM07 estimated the age of the PMS members (mostly younger than 3 Myr), the distance of the cluster ($\sim$1750 parsec), the anomalous reddening law (with $R_V\sim3.3$), the average extinction for cluster members ($A_V=2.6^m$) and the relaxation time of the core (4.2 Myr, longer than the age of the PMS members). The age of the cluster, however, is not as well constrained as the other cluster parameters. For example, several authors claim that star formation in this nebula started about 6 Myr ago (see \citealp{Gvara08} for a detailed discussion of the age spread in this cluster). \par
In the ACIS FOV we found 997 X-ray sources using {\it PWDetect}, a wavelet-based source detection algorithm \citep{Dami97}, with 10 expected spurious sources. Among the X-ray sources with an optical counterpart, 31 have $V$ magnitudes and $V-I$ colors compatible with foreground main-sequence stars. This fraction of X-ray sources ($\sim 3\%$) is comparable to the estimated contamination in similar X-ray observations (see, for example, \citealt{Dami06}). However, we will not exclude these sources a priori from our list of candidate cluster members, since they can be Class~II YSOs with optical colors altered by effects related to their PMS nature (such as accretion of disk gas onto the stellar surface). Instead, they will be marked with a specific tag. \par
\subsection{Cross identification with GLIMPSE catalog}
\label{glimcat}
\begin{table*}[ht]
\centering
\caption {Results of the cross-identification.}
\vspace{0.5cm}
\begin{tabular}{ccccc}
\hline
\hline
Number of stars& WFI detection & 2MASS detection & IRAC detection & Number of X-ray sources \\
\hline
$146 $ &$no$& $no$& $no $& $146$\\
$20476$ &$no$& $no$& $yes$& $0 $\\
$2732 $ &$no$& $yes$& $no$& $26 $\\
$17768$ &$no$& $yes$& $yes$& $64 $\\
$12535$ &$yes$& $no$& $no$& $196$\\
$1071 $ &$yes$& $no$& $yes$& $178$\\
$2648 $ &$yes$& $yes$& $no$& $74 $\\
$2782 $ &$yes$& $yes$& $yes$& $370$\\
\hline
\hline
\multicolumn{5}{l} {}
\end{tabular}
\label{cross}
\end{table*}
GLIMPSE observations in the field containing NGC~6611 cover the entire $33^{\prime}\times34^{\prime}$ WFI FOV, with the exception of a region of about $7^{\prime}\times10^{\prime}$ to the North-West. Following the explanatory manual\footnote{available at http://www.astro.wisc.edu/sirtf/docs.html}, we selected 41985 IRAC sources falling in the covered area. \par
We identified the sources common to the GLIMPSE catalog and the catalog compiled by GPM07, using an identification radius of 0\Sec3; this value is equal to the astrometric precision of the GLIMPSE catalog relative to 2MASS, as reported in the explanatory manual (in GPM07 we used the 2MASS Point Source Catalog as the astrometric reference). In this work we also include the 146 X-ray sources without any optical or 2MASS counterpart, which were not included in GPM07, in order to identify their possible IRAC counterparts. With the chosen identification radius, we expect about 26 spurious identifications, evaluated as in \citet{Dami06}.\par
The results of the cross-identification are summarized in Table \ref{cross}. We found 2782 stars with WFI, 2MASS and IRAC detections. This sample shows a significant spatial clumping at the position of the cluster and includes 370 X-ray sources, suggesting that it is mostly composed of cluster members. \par
Table \ref{cross} also shows that the 146 X-ray sources without any optical or 2MASS counterpart are not detected even in the IRAC observations. Fig. \ref{xmiss} shows the spatial distribution of these sources, evidently clustered near the center of the ACIS FOV. They can be low-mass cluster members with masses below the limits of our optical-IR data, since our catalog includes X-ray sources down to the limiting magnitudes of the WFI, 2MASS and IRAC observations. However, it is also possible that some of these objects are extragalactic sources, detectable in X-rays because the nebula is less dense in the cavity cleared by the cluster itself.
\begin{figure}[]
\centering
\includegraphics[width=9cm]{Xmiss.ps}
\caption{WFI image, in the I band, of the central region of NGC~6611. Crosses mark the X-ray sources with neither an optical nor an infrared counterpart. The box outlines the $17^{\prime}\times17^{\prime}$ ACIS FOV.}
\label{xmiss}
\end{figure}
GPM07 used the detection in the ACIS observation as a membership criterion. This was justified by the fact that almost all our X-ray sources with optical emission are PMS stars compatible with an age between 0.1 and 3 Myr. Fig. \ref{vvi3} shows the $V$ vs. $V-I$ diagram for the stars in the WFI FOV with errors in $V$ smaller than 0.1$^m$ and in $V-I$ smaller than 0.15$^m$. In this diagram, diamonds mark the optical sources also detected at 5.8$\mu m$. About 54\% of these objects are bright stars ($V \leq 16^m$), while the remaining 46\% are fainter and mainly concentrated in the PMS region of the color-magnitude diagram. This region overlaps with that traced by the X-ray sources, confirming the youth of these optical-IRAC stars. The $V$ vs. $V-I$ diagrams of the sources detected in the other IRAC bands have similar characteristics. Hereafter, the color-magnitude and color-color diagrams of this paper will include only stars with errors in the specific magnitude smaller than 0.1$^m$ and in color smaller than 0.15$^m$.
\begin{figure}[]
\centering
\includegraphics[width=9cm]{VVI3.ps}
\caption{$V$ vs. $V-I$ diagram of stars in the WFI FOV. The thick solid line is the ZAMS (from \citealt{Sie00}), at the distance of 1750~pc and with $A_V=1.45^m$ (the average extinction of field stars, evaluated in GPM07). The dashed lines are the isochrones at 0.1, 0.25, 1, 2.5, 3 and 5 Myr, with the average extinction appropriate for cluster members ($A_V=2.6^m$). The extinction vector is obtained from the law of \citet{Muna96}. The diamonds are optical sources with emission at 5.8 $\mu m$. }
\label{vvi3}
\end{figure}
In the next two sections we describe the criteria adopted to select stars with circumstellar disk, based on the excesses in the IRAC colors, and we compare them with the criterion based on 2MASS data used in GPM07. These criteria are applied to stars with magnitude errors smaller than $0.1^m$; this is a very stringent requirement when applied to faint stars: at 3.6$\mu m$, more than 50\% of the stars fainter than $12.5^m$ have errors greater than $0.1^m$. The same 50\% fraction is reached at $12^m$ at 4.5$\mu m$, and at $10.5^m$ at both 5.8$\mu m$ and 8.0$\mu m$. It is evident, then, that a criterion for the selection of stars with disk using all the IRAC bands simultaneously does not allow us to select faint sources with disk, since most of them have large errors in the IRAC bands at longer wavelengths. We will partially overcome this problem by using two independent disk diagnostics: the usual IRAC color-color diagram (Sect. \ref{colcolsp}) and a diagnostic based on suitable reddening-free color indices (Sect. \ref{Qpar}), involving optical+2MASS photometry and only a subset of the IRAC bands.
\section{T-Tauri stars from IRAC color-color diagram}
\label{colcolsp}
The [3.6]-[4.5] vs. [5.8]-[8.0] diagram is an excellent tool to identify stars with circumstellar disk (see, for example, \citealt{Alle04}). In this diagram, stars with photospheric colors cluster around the origin, while stars with disk show colors significantly different from zero. The two populations can be easily distinguished even if the stars are affected by large interstellar extinction. This is possible because the reddening vector is almost vertical (due to the very small reddening in [5.8]-[8.0]) or points toward bluer [5.8]-[8.0] colors (due to the partial overlap of the IRAC 8.0 $\mu m$ band with the interstellar silicate feature), depending on the adopted extinction law; see, for example, \citet{Mege04} and \citet{Fla07}. Unlike in other possible infrared diagrams, then, in the IRAC color-color plane the locus of reddened photospheres and that of sources with intrinsic red colors are not blended, even for large extinctions.\par
With this diagram it is also possible to roughly distinguish embedded Class~I from Class~II T-Tauri YSOs (Young Stellar Objects): the former have both colors redder than the latter, owing to the presence of the collapsing envelope surrounding the star and the disk; see, for example, \citet{Alle04}, who used the models developed by \citet{dale98,dale99,dale01} for Class~II stars and by \citet{Ken93} and \citet{Calve94} for embedded Class~I stars. \par
\begin{figure}[]
\centering
\includegraphics[width=9cm]{CCsp.ps}
\caption{Color-color diagram for the IRAC sources in the WFI FOV (dots). The box marks the approximate locus of Class~II YSOs (crosses). The dashed lines roughly separate reddened photospheres (left), reddened Class~II YSOs (center) and Class~I YSOs (in the upper right part of the diagram, marked with triangles). Squares mark stars classifiable either as Class~II or as Class~I YSOs. The reddening vectors with A$_V$=30$^m$ and A$_K$=5$^m$ are obtained from \citet{Mege04} and \citet{Fla07}, respectively.}
\label{CCsp}
\end{figure}
\subsection{Classification of selected YSOs}
\label{ysoclass}
GLIMPSE data of the Eagle Nebula were already analyzed, in combination with 2MASS data, by \citet{In07}. We reanalyze the data here, also with different methods, for several reasons. First, we note that the selections of stars with $K$ excess performed by \citet{In07} and by GPM07 are different. The latter used reddening-free color indices, a method much more efficient than the use of the T-Tauri locus \citep{Mey97} in infrared color-color diagrams, adopted by the former. Moreover, in GPM07 only stars with color errors ($\sigma_{colors}$) smaller than 0.15$^m$ were used, and we apply the same approach here in order to be as conservative as possible. With this condition we produce an IRAC color-color diagram different from that of \citet{In07}. In the present paper, this diagram is used together with reddening-free color indices defined with the IRAC bands, in order to perform a selection of the cluster members with disk that is as model-independent as possible. Furthermore, the 2MASS and IRAC data are complemented by optical and X-ray data, which are fundamental to assess the nature of the candidate members. \par
Fig. \ref{CCsp} shows the color-color diagram for the IRAC sources in the WFI FOV used to classify the YSOs. Stars with colors typical of photospheres cluster around the origin, with a spread consistent with the photometric uncertainties and reddening, and are clearly separated from the sources with intrinsic red colors. \par
To select and classify stars with disk, we used the reddening vector obtained by \citet{Mege04} from the reddening law of \citet{Mat90}, shown as the inclined vector in Fig. \ref{CCsp}. The problem of the correct extinction law in the IRAC bands is still open. For this reason, in Fig. \ref{CCsp} we also show the extinction vector (the $A_K=5^m$ arrow) obtained from the mean reddening law recently estimated by \citet{Fla07} in the direction of five nearby star-forming regions. It is evident that the difference between the two extinction laws is relevant only for very large extinctions.\par
Using this diagram, we identify 147 Class~II YSOs as the stars within the box in Fig. \ref{CCsp} (crosses), taking into account photometric uncertainties and extinction. This box was obtained from the models of stars with circumstellar disk developed by \citet{dale98,dale99,dale01}. In addition, 13 stars are unambiguously classified as Class~I YSOs (triangles in Fig. \ref{CCsp}). \par
22 stars (marked with squares in Fig. \ref{CCsp}) have only one IRAC color redder than the Class~II locus. Those with $[3.6]-[4.5]\geq0.8^m$ can be either Class~I sources or heavily embedded Class~II YSOs; those with $[5.8]-[8.0]\geq1.2^m$ are usually interpreted as Class~II YSOs with no emission detected from the inner region of the disk, due to a larger inner hole or to the inclination of the disk with respect to the line of sight. However, as suggested by \citet{Ken93}, Class~I YSOs with large centrifugal radii\footnote{the distance from the central star at which the infalling material from the envelope with the most angular momentum, i.e. near the equatorial plane, hits the circumstellar disk.} and optically thin envelopes, which give a silicate band in emission, may also have a spectral energy distribution compatible with the latter colors. For these reasons, these 22 stars remain unclassified. The possible contamination by extragalactic sources is discussed in the next section.\par
The IRAC color-color diagram allows us to select a total of 182 members with excesses, 120 of which are new identifications with respect to GPM07, where the BVIJHK bands were used. This result is not in disagreement with GPM07, since in that work it was not possible to classify most of these stars: among these 120 stars, in fact, only 48 have good measurements in $K$, 25 in both $V$ and $I$ (shown in Fig. \ref{vvisp}), and just 9 in all three of these bands simultaneously. We also note that 19 of these stars are also X-ray sources (out of the total of 57 that fall inside the ACIS FOV) and that the $V-I$ and $V$ values of the subsample with good WFI photometry are compatible with the cluster. \par
Fig. \ref{CCsp2m} shows the IRAC colors of the 76 X-ray sources with good photometry in the four IRAC bands. A large number of these sources (55) are classified as Class~II YSOs, while none is classified as a Class~I source. This is likely due to the lower X-ray luminosity of Class~I objects with respect to more evolved YSOs, in accordance with previous studies of star-forming regions based on X-ray observations deeper than ours (e.g. \citealt{Lore08}).
\begin{figure}[!h]
\centering
\includegraphics[width=7cm]{VVI_disksp.ps}
\caption{$V$ vs. $V-I$ diagram analogous to that in Fig. \ref{vvi3}. Diamonds mark candidate stars with disk selected with the IRAC color-color diagram but not in GPM07.}
\label{vvisp}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[width=7cm]{CCsp_Xray.ps}
\caption{IRAC color-color diagrams of the X-ray sources (crosses). Dots mark the stars with disk selected from the IRAC diagram of Fig. \ref{CCsp}, for the sky region falling in the ACIS FOV. The Class~II locus and the reddening vectors are analogous to those in Fig. \ref{CCsp}.}
\label{CCsp2m}
\end{figure}
\subsection{Contaminating sources}
\label{cont}
In order to evaluate the contamination due to extragalactic sources, we use the criteria defined by \citet{Gute08}. These criteria allow us to identify candidate AGNs, or galaxies dominated by PAH emission, using various color-color and color-magnitude IRAC diagrams. With these criteria, we identify only 4 stars that could be extragalactic sources, but their $A_V$ values predicted by the optical colors are too small (between 2.5$^m$ and 5$^m$) to confirm this classification.\par
However, in the IRAC color-color plane, PAH-rich (Polycyclic Aromatic Hydrocarbon) star-forming galaxies can be found in the locus at $-0.1^m<[3.6]-[4.5]<0.6^m$ and $[5.8]-[8.0] > 1^m$ (the part of the diagram redder than the Class~II locus). We have selected 25 candidate disk-bearing members in this region of the diagram. Some of them could be PAH-rich star-forming galaxies, and we will classify them definitively in our subsequent paper.\par
\subsection{Spatial distribution of Class~II and Class~I YSOs}
\label{classdis}
\begin{figure}[]
\centering
\includegraphics[angle=0,width=9cm]{80classe1nnid_massive.ps}
\caption{IRAC image of the Eagle Nebula at 8.0 $\mu m$, centered on the cluster and covering approximately a region of $22^{\prime} \times 32^{\prime}$. The circles mark Class~II YSOs, while the crosses mark Class~I YSOs and stars compatible with both classifications. The small boxes mark massive stars. The box delimits the structures known as ``elephant trunks''. North is up, East is to the left.}
\label{classspadis}
\end{figure}
Fig. \ref{classspadis} shows the IRAC image of the Eagle Nebula at 8.0$\mu m$, with the Class~II objects (circles) and the sources classified either as Class~I or as reddened Class~II YSOs (crosses) overplotted. In the center of the cluster (approximately at the center of the image) there is a lack of sources of the latter group with respect to the outer regions. This is not a real effect: it is due to the intense diffuse emission of the dense structures in the central region of the nebula, such as the elephant trunks inside the box in Fig. \ref{classspadis}, which complicates the extraction of IRAC point sources. These structures, in fact, are heated directly by the radiation from the massive members of NGC~6611, and are very bright in the IRAC bands at longer wavelengths. Moreover, these diffuse structures are very dense, accounting for a very high extinction inside them. \par
Evidence that star-formation activity is still ongoing in the central region of NGC~6611 has been provided by several authors. For example, \citet{Mcc02}, using observations at high spatial resolution, showed that the trunks are stellar ``nurseries'', with several embedded YSOs inside them. The trunks are continuously eroded by the intense incident UV flux emitted by the nearby massive cluster members. As a result, some of the young sources formed inside the trunks are emerging from the photodissociation regions that delimit these structures. Because of the intense diffuse emission of the nebula, we identify only one Class~I YSO emerging from the trunks, at $\alpha=18:18:52$ and $\delta=-13:49:38$ (the cross inside the box in Fig. \ref{classspadis}). \par
As shown in Fig. \ref{classspadis}, 12 sources classified either as Class~I or as reddened Class~II are clustered in a region to the North-West. Only one of the few Class~II stars present in this region has a faint optical counterpart (with $V=23.34^m$). Therefore, this small cluster may be associated with a denser intra-cluster medium and/or with more recent star-formation events, compared with the center of NGC~6611. The presence of this rich star-formation site, already suggested by \citet{In07}, shows that the star-formation activity is still ongoing in the whole nebula, and not only in its central region. \par
\section{Cluster members from reddening-free color indices}
\label{Qpar}
In Sect. \ref{catal} we discussed the different sensitivities of the four IRAC bands, which can limit the selection of stars with disk based on the IRAC color-color diagram. To partially overcome this problem, we also select the stars with infrared excesses using four suitable reddening-free color indices, described in detail in the following (see also GPM07 and \citealt{Dami06}). \par
\subsection{Properties of the $Q$ color indices}
\label{Qprop}
\begin{figure*}[]
\centering
\includegraphics[width=19cm]{indici_noX.ps}
\caption{Diagrams $Q_{VIJ[sp]}$ vs. the $J-[sp]$ colors for the stars in WFI FOV (points). Circles mark stars with excess in the corresponding index; the horizontal dotted-dashed lines are the lower limits for photospheric indices; the inclined dashed lines separate the locus of normal stars from that of reddened sources (or stars with small excesses). Crosses mark the optical sources used to define these limits (as explained in the text). The squares mark stars with multiple identifications.}
\label{Qdiag}
\end{figure*}
The $Q$ indices are defined in order to compare infrared colors with $V-I$, the latter being taken as representative of the photospheric emission:
\begin{equation}
Q_{VIJ[sp]} = \left( V-I \right) - \left( J-[sp] \right) \times E_{V-I}/E_{J-[sp]}
\label{Qdef}
\end{equation}
where $Q_{VIJ[sp]}$ is the index; $V$, $I$ and $J$ are the magnitudes in the respective bands; $[sp]$ is the magnitude in a specific IRAC band; and $E_{V-I}$ and $E_{J-[sp]}$ are the reddenings in the corresponding colors. $E_{V-I}$ has been obtained from the reddening law of \citet{Muna96}, while the extinction in the IRAC bands has been taken from the reddening law of \citet{Mat90} (see Fig. \ref{CCsp}). The advantage of this approach is that the $Q_{VIJ[sp]}$ indices allow us to determine excesses for stars whose magnitudes are not well measured in all the IRAC bands simultaneously, provided the $V$, $I$ and $J$ magnitudes are known. \par
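As a minimal illustration (not the pipeline actually used, and with a placeholder value for the color-excess ratio), the index of Eq. (\ref{Qdef}) can be evaluated as follows:
\begin{verbatim}
def q_index(v, i, j, sp, k):
    # Q_{VIJ[sp]} as defined in the text;
    # k = E_{V-I}/E_{J-[sp]} is the color-excess ratio
    # from the adopted reddening laws.
    return (v - i) - (j - sp) * k

# hypothetical star; k = 4.0 is a placeholder, not the
# value adopted in the text:
print(q_index(18.3, 16.5, 14.9, 13.6, 4.0))   # -> -3.4
\end{verbatim}
A star with an intrinsic excess in the $[sp]$ band is pushed toward more negative $Q_{VIJ[sp]}$ than any reddened photosphere, which is the basis of the selection described below.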
The identification of stars with excesses is performed using the diagrams of $Q_{VIJ[sp]}$ vs. $J-[sp]$ in Fig. \ref{Qdiag}. In these diagrams, extinction shifts a star only along the $x$-axis, while an excess in the $[sp]$ band shifts it downward along the $y$-axis. In this way, stars with excesses and reddened photospheres can be separated, even if a significant number of stars remain unclassified between reddened photospheres and stars with excesses. In Fig. \ref{Qdiag} the stars with excesses are marked by circles; they are defined as the stars with $Q_{VIJ[sp]}$ indices significantly smaller (i.e. by more than 3 times the error in the $Q$ indices) than the horizontal dotted-dashed lines, which mark the lower limits of the photospheric $Q_{VIJ[sp]}$ indices. We define these limits as the lower boundaries of the loci, in these diagrams, of the IRAC sources with optical counterparts that cluster around the origin in the IRAC color-color plane of Fig. \ref{CCsp}. The spatial distribution of these stars and their positions in the other color-color and color-magnitude diagrams confirm that they are normal field stars or cluster members without disk. In Fig. \ref{Qdiag} they are marked with crosses, and it is evident that they are separated both from the stars with excesses and from the reddened sources. Hereafter we will call {\it NS locus} the locus of the normal stars in the diagrams of Fig. \ref{Qdiag}, {\it EX locus} that of the stars with excesses, and {\it UNC locus} that of the stars compatible with both interpretations. \par
The minimum excess necessary for a star with disk to be selected, through the $Q_{VIJ[sp]}$ indices used here and the $Q_{2MASS}$ indices used in GPM07, is mass and age dependent. To analyze this property of the $Q$ indices, we computed these minimum excesses in [3.6] and $K$ for stars with masses equal to 0.5, 1, 1.5, 2, 2.5, 3, 4 and 5 solar masses and with ages of 0.1, 0.5, 1, 2.5, 3.5 and 5 Myr. Their {\it photospheric} indices are computed with the colors predicted by the evolutionary tracks of \citet{Sie00}, using the color transformations of \citet{KH95}. The minimum excesses are then the excesses in [3.6] or $K$ necessary for the $Q$ indices to become more negative than the lower limit of the photospheric colors by more than 3$\sigma$, where $\sigma$ is the mean error in $Q$ of the sources in the WFI FOV. \par
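In terms of the definition of Eq. (\ref{Qdef}), this criterion can be phrased explicitly: an excess $\Delta[sp]$ (a brightening of the $[sp]$ magnitude) lowers the index by $\Delta[sp]\times E_{V-I}/E_{J-[sp]}$, so the minimum detectable excess is
\begin{equation}
\Delta[sp]_{min}=\frac{Q_{VIJ[sp]}^{phot}-Q_{VIJ[sp]}^{lim}+3\sigma}{E_{V-I}/E_{J-[sp]}},
\end{equation}
where $Q_{VIJ[sp]}^{phot}$ is the photospheric index predicted by the evolutionary tracks and $Q_{VIJ[sp]}^{lim}$ is the lower limit of the photospheric locus.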
\begin{figure*}[]
\centering
\includegraphics[width=11cm]{plot_iso_2x1.ps}
\caption{Trend of the minimum excesses in $K$ and [3.6] necessary to identify stars with excesses, for the studied masses and ages. The excesses in $K$ are detected with the $Q_{VIJK}$ index defined in GPM07; the excesses in [3.6] with $Q_{VIJ[3.6]}$.}
\label{Qtrend}
\end{figure*}
Fig. \ref{Qtrend} shows the variation of the minimum excesses for one of the four indices used in GPM07 to detect excesses in $K$ and for the index used here to detect excesses in [3.6]. It is evident that, for each mass and for each $Q$ index, the minimum excesses vary irregularly with increasing age. This is due to the irregular behavior of the indices when both of the involved colors become redder (or bluer), as when comparing normal stars with different effective temperatures. \par
Fig. \ref{Qtrend} also shows that $Q_{VIJK}$ is more efficient than the $Q_{VIJ[3.6]}$ index in detecting excesses in stars of lower mass. This behavior depends on the shape of the isochrones in the $Q$ diagrams. For example, the minimum of the 5 Myr isochrone in the $Q_{VIJ[3.6]}$ diagram of Fig. \ref{Qdiag} occurs at $\sim$B9 spectral type, corresponding to 2.5 $M_{\odot}$. This implies that stars of lower mass with small excesses easily fall in the {\it UNC} region, where we are unable to distinguish between small excesses and reddening, and therefore stronger excesses are needed to unambiguously assess the presence of disks. The corresponding minimum in the $Q_{VIJK}$ diagram occurs at $\sim$K4 spectral type (using the 5 Myr isochrone), making this index much more efficient in detecting excesses in stars with smaller masses.
\subsection{Stars with $Q$ excesses}
\label{Qres}
\begin{table}[]
\centering
\caption {The first column lists the $Q_{VIJ[sp]}$ indices used in this work; the second column gives the number of stars with excesses detected with each $Q_{VIJ[sp]}$ index; the third column gives the total number of stars for which we computed each index.}
\vspace{0.5cm}
\begin{tabular}{ccc}
\hline
\hline
$Q$ index& Number of stars with excess & Number of stars \\
\hline
$Q_{VIJ[3.6]}$ &113 &1746\\
$Q_{VIJ[4.5]}$ &124 &1474\\
$Q_{VIJ[5.8]}$ & 79 & 357 \\
$Q_{VIJ[8.0]}$ &104 & 297 \\
\hline
\hline
\multicolumn{3}{l} {A total of 174 stars with excesses are selected}
\end{tabular}
\label{comQ}
\end{table}
Table \ref{comQ} gives the total number of stars for which we computed the indices and the number of stars with excesses. These numbers may be lower limits, since the criterion we adopt to detect the stars with excesses is very conservative. \par
If we consider the total number of stars in our catalog with good photometry in the bands used to define the indices (third column of Table \ref{comQ}), we find a higher percentage of stars with excesses in the IRAC bands at longer wavelengths. This confirms that a large number of optical sources are detected in the IRAC bands at the longest wavelengths thanks to their PMS nature, as suggested in Sect. \ref{glimcat}. \par
Table \ref{comQ} also shows a drop in the number of selected stars with excesses at [5.8], due to the smaller number of stars with good measurements in this band. This is an effect of the decrease in sensitivity with increasing wavelength. Except for 2 stars, the excesses in the three IRAC bands at shorter wavelengths are correlated. This suggests that they are related to the same physical region of the disk. This region can only be the inner rim at which the disk dust sublimates. As proposed by several authors (e.g. \citealt{dale98,dale99,dale01}), in fact, this region is optically thick and it emits as a blackbody at the dust sublimation temperature (between 1500 K and 2000 K). 16 stars have an excess only at [8.0]. This is a consequence of the fact that [8.0] can be affected by the 10$\mu m$ silicate emission feature. A Class~II YSO, then, can show an excess at [8.0] even if the inner disk is mostly evacuated, as we can hypothesize for at least 4 of the 16 members of NGC~6611 with excesses detected only with $Q_{VIJ[8.0]}$ (since the other 12 fall in the {\it UNC} loci of the other $Q_{VIJ[sp]}$ diagrams).
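Consistent with this picture, Wien's law ($\lambda_{max}\simeq 2.9\times10^3\,\mu m\,{\rm K}/T$) places the peak of a 1500-2000 K blackbody at $\sim$1.4-1.9 $\mu m$, so the inner-rim emission naturally produces correlated excesses in $K$ and in the IRAC bands at shorter wavelengths.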
\section{Comparison among the used disk diagnostics}
\label{compat}
Using the $Q_{VIJ[sp]}$ indices, we select 30 new stars with excesses (including 5 X-ray sources) not found with the $Q_{2MASS}$ indices in GPM07. Among these stars, as explained above, 16 have excesses only at [8.0]; the other 14 have large errors in the $K$ band or fall in the {\it UNC} loci of the $Q_{2MASS}$ diagrams in GPM07. \par
Similarly, 186 stars with excesses in the 2MASS bands (mostly in $K$) selected in GPM07 are not classified as stars with excesses by the $Q_{VIJ[sp]}$ indices. This number may seem very large, since usually more stars with excesses are observed at longer wavelengths. However, it must be noted that 94 of these stars have poor IRAC photometry, consistent with the different sensitivities of the 2MASS and GLIMPSE catalogs, and therefore they are not included in the IRAC sample. All the other 92 fall in the {\it UNC} loci of the diagrams in Fig. \ref{Qdiag}. \par
We conclude that the set of indices defined in this work is consistent with that defined in GPM07 (with 144 sources having at least one excess in the 2MASS bands and at least one in the IRAC bands). This consistency means that if the $Q$ indices detect excesses in some particular 2MASS or IRAC band, the lack of excesses in the other bands is almost always due to poor photometry or to the ambiguity between excesses and reddening, as explained above. This is compatible with a scenario in which the excesses in $K$ (and $H$) are due to the same physical region of the disk as the excesses in the IRAC bands. It is important to note, however, that this consistency holds only between the $Q$ indices, which are an efficient diagnostic for the selection of Class~II YSOs with moderate disk inclination (with respect to the line of sight), because of the use of the optical and $J$ bands. The IRAC color-color diagram is instead an unrivaled tool for the selection of embedded YSOs and Class~II stars with highly inclined disks. Therefore the two methods may be considered complementary. \par
\begin{figure}[]
\centering
\includegraphics[width=8cm]{Q1_noKec.ps}
\caption{Diagram of $Q_{VIJ[3.6]}$ vs. $J-[3.6]$, similar to the diagrams in Fig. \ref{Qdiag}; the stars with excesses detected in the 2MASS bands but not in the IRAC bands are marked with triangles.}
\label{Q1}
\end{figure}
Figs. \ref{CCspX} and \ref{Q1ccsp} show how the excesses detected with the IRAC color-color diagram translate into those detected with the $Q$ indices, and vice versa. In Fig. \ref{CCspX}, diamonds mark the 70 candidate stars with disk selected in GPM07 with the $Q_{2MASS}$ indices that have good photometry in all the IRAC bands, while dots mark the candidate stars with disk selected with the IRAC color-color diagram. Only 8 sources with excesses detected with the $Q_{2MASS}$ indices lie close to the origin of the diagram (in the locus of the normal stars), while the others have red IRAC colors. The characteristics of these 8 sources will be studied by SED analysis in our subsequent paper. Fig. \ref{Q1ccsp} shows the $Q_{VIJ[3.6]}$ indices of the candidate cluster members with disk selected with the IRAC color-color diagram (marked with triangles). As expected, almost none of these stars falls in the {\it NS locus}. \par
As a further test, Fig. \ref{K112} shows the $K-[3.6]$ vs. $[3.6]-[4.5]$ diagram for the stars in the ACIS FOV, with the X-ray sources and the stars with excesses in 2MASS bands selected in GPM07. All the stars with excesses in 2MASS bands have $K-[3.6] \geq 0.7^m$ and/or $[3.6]-[4.5] \geq 0.2^m$, with the exception of a couple of stars that also have both IRAC colors $\sim0$. \par
In the diagram in Fig. \ref{K112} it is also clear that the distribution of the stars with normal colors (marked with points) shows a gap at $K-[3.6]\sim0.5^m$, likely due to a rapid increase of the interstellar extinction at the distance of the Eagle Nebula: the foreground contaminating sources are clustered around the origin of the diagram and they are clearly separated from the sample of the more extincted stars, dominated by background sources. In fact, the gap between these two samples is populated by a large number of X-ray sources, mostly associated with the nebula.
\begin{figure}[!h]
\centering
\includegraphics[width=6cm]{CCsp_ecK.ps}
\caption{IRAC color-color diagrams of the stars with disk selected with the IRAC diagram of Fig. \ref{CCsp} (points) and of the stars with excesses in 2MASS bands selected in GPM07 (diamonds). The Class~II locus and the reddening vectors are analogous to those in Fig. \ref{CCsp}.}
\label{CCspX}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[width=8cm]{QVIJ1CCsp.ps}
\caption{Diagram $Q_{VIJ[3.6]}$ vs. $J-[3.6]$ for the stars in the WFI FOV, with overplotted the stars with excesses selected with the IRAC color-color diagram (triangles).}
\label{Q1ccsp}
\end{figure}
\begin{figure}[]
\centering
\includegraphics[width=8cm]{K1_12_acis.ps}
\caption{Color-color diagram with $K$ and IRAC bands for the stars in the ACIS FOV (points). Diamonds are stars with excesses in 2MASS bands detected with the $Q_{2MASS}$ indices defined in GPM07, while crosses mark X-ray sources. The reddening vector, corresponding to $A_{V} = 10^m$, has been obtained from the extinction law of \citet{Mege04}.}
\label{K112}
\end{figure}
\begin{table}[]
\centering
\caption {Number of stars with excesses detected with the different diagnostics used in this paper and in GPM07.}
\vspace{0.5cm}
\begin{tabular}{cc}
\hline
\hline
Disk diagnostics& Number of stars with excesses \\
\hline
IRAC color-color diagram &182\\
$Q_{2MASS}$ indices &330\\
$Q_{VIJ[sp]}$ indices &174\\
\hline
\hline
\multicolumn{2}{l} {A total of 458 different candidate members with disk are selected.}
\end{tabular}
\label{dd}
\end{table}
Table \ref{dd} summarizes the number of stars with excesses selected with the diagnostics used here and in GPM07.
\section{Catalog of the candidate cluster members}
\label{memcat}
Combining all the diagnostics used in this paper and in GPM07, we select a total of 1264 candidate members of NGC~6611: 790 candidate Class~III members (inside the ACIS FOV) and 474 candidate Class~II and Class~I PMS members (inside the WFI FOV). \par
The catalog, available in electronic format, comprises:
\begin{itemize}
\item star IDs and coordinates;
\item the magnitudes and the errors in BVIJHK and IRAC bands; if a value is not available, it is set to ``NAN'';
\item a tag ({\it tagx}) equal to $1$ if the star is also an X-ray source, and $0$ otherwise;
\item a tag ({\it tagM}) equal to $A01$ if the star is classified as a candidate member with disk only by its position in the IRAC color-color diagram, $A10$ if the infrared excesses are detected only by the $Q$ indices, and $A11$ if both diagnostics detect the emission from the disk; $B$ if the star is a disk-less X-ray source; $C$ if it is an X-ray source with optical colors compatible with a foreground main sequence star (see Sect. \ref{catal}).
\end{itemize}
\section{Spatial variation of the disk frequency}
\label{final}
In order to confirm the results obtained in GPM07, we take advantage of the new list of cluster members obtained here. Inside the central $17^{\prime}\times17^{\prime}$ ACIS FOV, where X-ray data are available, we select 790 candidate PMS members without disk, thanks to their X-ray emission and the presence of an optical/IR counterpart; among these, 31 are likely foreground contaminants. In addition, in the same FOV we find 257 stars with NIR excesses, 54 more than in GPM07, that are candidates for hosting a circumstellar disk (118 are also X-ray sources). \par
The average disk frequency inside the ACIS FOV is $24\% \pm 2\%$, larger than the value obtained in GPM07 ($19\% \pm 1\%$). \citet{Oli05} found a disk frequency of $\sim 58\%$ in a small region inside the ACIS FOV. The different fractions are likely due to the different sensitivities of the two surveys. In fact, the $L$-band survey used in \citet{Oli05} is deeper than the GLIMPSE catalog (by about 1 magnitude with respect to the [3.6] source catalog). Moreover, \citet{Oli05} showed that the disk frequency increases with decreasing mass of the central star, likely because the erosion of the circumstellar disk is more rapid for more massive central stars, as found in other star forming regions (for example, the study of \citealp{Carpe06} on the Scorpius OB association). This may be the origin of the discrepancy with the \citet{Oli05} results.\par
However, more than the absolute value of the disk frequency, we are interested in its spatial variation with respect to the positions of the massive members, which allows us to understand whether the UV radiation emitted by the latter may alter the disk lifetimes of nearby T-Tauri YSOs. To reach this goal we only need to be sure to use a consistent criterion at different sky positions. GPM07 already used this approach, and we verify their results here using the new list of cluster members. \par
For every cluster member, with and without disk, we compute the incident flux emitted by the 52 massive members of the cluster with spectral class earlier than B5 \citep{Hil93}. For this calculation, we use the projected distances from the massive stars, as already explained (Sect. \ref{intro}). We then compute the disk frequencies for various bins of incident flux, as shown in Fig. \ref{isto}. It is evident that the main GPM07 result, i.e. that members with disk are more frequent at larger distances from massive stars, where they are irradiated by lower UV fluxes, is confirmed. The disk frequency, in fact, increases with decreasing incident flux: it goes from $16\% \pm 3\%$ in the highest-flux bin, through $21\% \pm 3\%$ and $27\% \pm 4\%$, to $31\% \pm 4\%$ in the lowest-flux bin. In the histogram in Fig. \ref{isto}, the bin sizes are defined so as to have the same number of disk-less members in each bin. These disk frequencies are obviously more reliable than those computed in GPM07, thanks to the new and more complete list of cluster members. This paper, then, reinforces the GPM07 result about the influence of massive stars on the evolution of nearby members with disk in NGC~6611.
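A minimal Python sketch of this incident-flux estimate follows, assuming a simple inverse-square sum over the projected distances; the positions and UV luminosities are placeholders, not the actual values for the \citet{Hil93} stars.
\begin{verbatim}
import numpy as np

# Sketch of the incident-flux estimate: inverse-square sum over the
# projected distances to the massive members. Positions (pc) and UV
# luminosities (arbitrary units) are illustrative placeholders.
def incident_flux(star_xy, massive_xy, massive_luv):
    d2 = np.sum((massive_xy - star_xy) ** 2, axis=1)  # projected d^2
    return np.sum(massive_luv / (4.0 * np.pi * d2))

massive_xy = np.array([[0.0, 0.0], [0.4, -0.2], [-0.3, 0.5]])
massive_luv = np.array([3.0e4, 1.2e4, 8.0e3])
print(incident_flux(np.array([1.5, 0.8]), massive_xy, massive_luv))
\end{verbatim}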
\begin{figure}[]
\centering
\includegraphics[width=6cm]{isto.ps}
\caption{Percentage of members with disk in the ACIS FOV vs. the estimated incident flux from OB members. The four values are $31\% \pm 4\%$, $27\% \pm 4\%$, $21\% \pm 3\%$, and $16\% \pm 3\%$, from left to right.}
\label{isto}
\end{figure}
\subsection{Spatial distribution of the cluster}
\label{cluspadis}
\begin{figure*}[]
\centering
\includegraphics[width=6.7cm]{spadis.ps}
\includegraphics[width=6.7cm]{uvfield.ps}
\includegraphics[width=6.7cm]{spadis_disk.ps}
\includegraphics[width=6.7cm]{spadis_X.ps}
\caption{The upper left panel shows the spatial distribution of the candidate cluster members. The dotted square delimits the ACIS FOV. The upper right panel shows the intensity map of the radiation emitted by the massive members of the cluster. The different shades correspond to the four bins in Fig. \ref{isto}. The lower left panel shows the spatial distribution of the candidate members with disk; the lower right panel that of the candidate members without disk. In all the panels, the cross marks the center of the ACIS FOV.}
\label{maps}
\end{figure*}
The panels in Fig. \ref{maps} allow a direct comparison between the spatial distribution of the cluster members and the intensity of the radiation emitted by the massive members. The cluster members are not uniformly distributed in the ACIS FOV, with an empty region to the South-West. The intensity map shown in the upper right panel of Fig. \ref{maps} is more centrally concentrated than the member density, reflecting the central concentration of the massive members. \par
The two lower panels of Fig. \ref{maps} show the spatial distributions of the disk-bearing and disk-less members. Note that the latter stars have a more symmetric distribution, while the former are more numerous where the UV flux is low; their distribution also presents a ``hole'' in the center of the field, where the UV flux is highest.
The upper right panel of Fig. \ref{maps} also allows an easier comparison of our result with that of \citet{Bal07}. As explained, these authors studied the spatial variation of the disk frequency with respect to the positions of massive stars in the young cluster NGC~2244. They did not find a correlation between the two distributions, but observed a significant drop in disk frequency for distances smaller than 0.5 pc from O stars. The central region of the intensity map in Fig. \ref{maps}, corresponding to the highest fluxes in Fig. \ref{isto}, has a radius of about 0.6 parsec, so the decrease of the disk frequency in the last bin in Fig. \ref{isto} is in agreement with the finding of \citet{Bal07}. \par
To estimate the extent of NGC~6611, even though it does not have a symmetric spatial distribution, we use the two-parameter density profile of \citet{Ki66}:
\begin{equation}
\sigma(r)=\frac{\sigma_0}{1+\left(r/r_{core} \right)^2}
\label{kingeq}
\end{equation}
where $\sigma_0$ is the central cluster density, $r$ is the distance of the stars from the cluster center, and $r_{core}$ is the core radius. Fitting the observed radial density profile, we obtain $\sigma_0 = 149 \pm 8~N_{\rm stars}\,{\rm pc}^{-1}$ and $r_{core} = 1.39 \pm 0.08$~pc, in agreement with the results obtained in GPM07.
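A minimal Python sketch of such a two-parameter fit, with synthetic data standing in for the observed NGC~6611 radial density profile, is the following:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

# King (1966) two-parameter profile defined above; the radii and
# densities are synthetic placeholders, not the observed profile.
def king(r, sigma0, r_core):
    return sigma0 / (1.0 + (r / r_core) ** 2)

rng = np.random.default_rng(0)
r = np.linspace(0.2, 6.0, 15)                          # pc
sigma = king(r, 149.0, 1.39) * rng.normal(1.0, 0.05, r.size)

popt, pcov = curve_fit(king, r, sigma, p0=(100.0, 1.0))
perr = np.sqrt(np.diag(pcov))
print(popt, perr)  # central density and core radius, with errors
\end{verbatim}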
\section{Summary}
\label{thatsallfolks}
In this paper, we analyze Spitzer/IRAC data of the young open cluster NGC~6611, in the Eagle Nebula, in a large field approximately $30^{\prime}\times30^{\prime}$ wide. In a previous paper, we had already selected the members of this cluster using our published BVIJHK and X-ray multiband catalog. The list of cluster members obtained here, both with and without disk, is used to verify that the former are more frequent at larger distances from the massive members of the cluster. \par
In this work, in order to select new members with disk, we use and compare two different disk diagnostics: the IRAC color-color diagram, which uses all the IRAC bands simultaneously, and four suitable reddening-free color indices ($Q$ indices), each defined to detect the excesses in a specific IRAC band. \par
Using the IRAC color-color diagram, we identify 182 Young Stellar Objects (147 Class~II, 13 Class~I, and 22 with colors compatible with both classifications). This diagnostic is very efficient at selecting heavily embedded young sources, but it can be used only for stars with good photometry in all the IRAC bands. We discuss how significantly this affects the selection of faint YSOs. With the use of the $Q$ indices we partially overcome this problem, since they allow us to select stars with excesses in each IRAC band taken alone; with this diagnostic, we select 174 YSOs with excesses in IRAC bands. Among them, 66 stars cannot be detected with the IRAC color-color plane, since they are not well measured in all the IRAC bands.\par
Combining the outcome of both diagnostics, we identify 146 new cluster members with disk with respect to our previous work, which was based on 2MASS photometry alone (116 from the IRAC color-color diagram and 30 from the $Q$ indices). In total, our catalog includes 474 candidate Pre-Main Sequence members with disk and 790 without disk (the latter only in the central $17^{\prime}\times17^{\prime}$ field, where we found 118 X-ray sources with disk). \par
Comparing all the $Q$ indices defined with 2MASS and IRAC bands (with the exception of the [8.0] band), we claim that they are all sensitive to the emission from the same physical region of the disk, namely the inner rim at the dust sublimation radius. These indices can then be used efficiently to select Class~II stars that are not heavily embedded (i.e. for which it is possible to detect the photospheric emission) and whose inner disk is not evacuated. The embedded YSOs associated with the cluster can be selected only thanks to the IRAC color-color diagram, and not with the $Q$ indices.\par
We discuss evidence suggesting that star formation activity is ongoing in the outer region of the Eagle Nebula, and not only in the center of NGC~6611 as shown by previous works. For example, the diagnostic based on the IRAC color-color diagram allows us to identify a probable new star formation site, rich in Class~I and embedded Class~II objects, to the north of the cluster center. \par
In the central $17^{\prime}\times17^{\prime}$ field we find an average disk fraction of $24\% \pm 2\%$. In this field, the disk frequency also has a strong dependence on the incident radiation emitted by the massive members, varying from $31\% \pm 4\%$ to $16\% \pm 3\%$ across the whole range of incident flux values. This is evidence of the influence of the radiation from massive stars on the evolution of circumstellar disks and on the star formation process, as already shown in GPM07.
\begin{acknowledgements}
We thank the anonymous referee for the useful comments that allowed us to improve our manuscript. We acknowledge financial support from the contract PRIN-INAF. This publication makes use of data from Spitzer's GLIMPSE survey.
\end{acknowledgements}
\addcontentsline{toc}{section}{\bf Bibliography}
\bibliographystyle{apj} |
0812.1253 | \section{Introduction}
There is a new paradigm in Big Bang Nucleosynthesis (BBN) studies which promises enhanced probes of the early universe and a window into new physics. In the past,
BBN predictions have been used to place constraints on the baryon number at three minutes after the Big Bang. This was done by comparing the observationally-inferred
primordial light element abundances to abundances predicted by BBN calculations over a wide range of baryon-to-photon ratio values. With the high precision results of the Wilkinson Microwave Anisotropy Probe (WMAP), however, the baryon-to-photon ratio, $\eta$, is now independently determined
-- at 300,000 years after the Big Bang -- from observations of the cosmic microwave background (CMB) relative acoustic peak amplitudes \cite{WMAP, WMAP1, 3yrwmap}. Currently, the WMAP Three Year Mean value for the baryon-to-photon ratio is $\eta = \left(6.11\pm 0.22\right) \times 10^{-10}$. Future missions ($e.g.$, Planck\cite{bond}) promise considerably higher precision determinations of $\eta$.
Since the baryon-to-photon ratio is known independently, and to excellent precision, albeit at much later times, BBN calculations can now be used to probe or constrain new physics or heretofore poorly determined parameters. For example, we can use BBN predictions to constrain not only the lepton numbers but also the physics behind them. The existence of a nonzero electron lepton number follows from charge neutrality and the observed proton content of the universe. The contributions of neutrinos and antineutrinos to the electron, muon, and tau $(e, \mu, \tau)$ lepton numbers are not known, since we do not directly observe these relic particles. The neutrino contribution to the lepton number for a given flavor, $\alpha={\rm e},\mu,\tau$, is defined analogously to the baryon-to-photon ratio, $\eta \equiv (n_b-n_{\bar b})/n_\gamma$, as
\begin{equation}
L_{\nu_\alpha} \equiv {{n_{\nu_\alpha}-n_{\bar\nu_\alpha}}\over{n_\gamma}},
\label{lepton}
\end{equation}
where $n_\gamma = (2\zeta(3)/\pi^2) T^3_\gamma$ is the proper photon number density at temperature $T_\gamma$, and $n_{\nu_\alpha}$ and $n_{\bar\nu_\alpha}$ are the neutrino and antineutrino number densities. Observational bounds on the lepton numbers\cite{abfw, kfs, Kneller:2001cd, abb, wong, dolgov, Simha:2008mt, Cuoco:2003cu, Serpico:2005bc} remain large compared to the values that could significantly affect BBN when there is new leptonic-sector physics ($e.g.$, sterile neutrinos)\cite{abfw}.
The neutrino lepton numbers influence BBN and the resulting primordial element abundances in a number of ways\cite{wfh}. The energy density in the neutrino sector contributes to the total energy density of the universe which determines the expansion rate. The expansion rate is crucial to the outcome of BBN because it determines the weak freeze-out temperature which in turn effectively sets the neutron-to-proton ratio and, therefore, the primordial abundances of $^4$He and the other light elements.
Not only is the total number of neutrinos important to the outcome of BBN, but the neutrino distribution functions are key components of the phase space integrals in the weak reaction rates in BBN. The weak reactions of greatest interest are those that inter-convert neutrons and protons:
\begin{equation}
\nu_e+n\rightleftharpoons p+e^-,
\label{nuen}
\end{equation}
\begin{equation}
\bar\nu_e+p\rightleftharpoons n+e^+,
\label{nuebarp}
\end{equation}
\begin{equation}
n \rightleftharpoons p+e^-+\bar\nu_e. \label{ndecay}
\end{equation}
Since the rates for the weak reactions are strongly energy dependent, the energy distributions of the neutrinos and antineutrinos can figure prominently in both the forward and reverse rates in the processes in Eqs.~(\ref{nuen}), (\ref{nuebarp}), and (\ref{ndecay}). In standard BBN scenarios the neutrino distribution functions are assumed to be thermally-shaped Fermi-Dirac distributions. However, it is possible that non-thermal neutrino distribution functions arise after the neutrinos decouple from the background plasma around $T \approx 3\,{\rm MeV}$ and during times crucial to BBN.
There are many possible mechanisms that could alter the neutrino spectra. Altered neutrino energy spectra, in turn, could change the resulting primordial element abundances from what one would expect given a particular lepton number. Neutrino energy spectrum-altering scenarios include, but are not limited to, active-active neutrino oscillations\cite{dolgov, abb, abfw, wong}, active-sterile neutrino oscillations\cite{abfw, kfs, sfka, cirelli, FV, fv97}, or particle decay into the neutrino sea\cite{pastor}. Moreover, active-sterile neutrino flavor mixing and other mechanisms for creating sterile neutrino dark matter before neutrino decoupling are a focus of current research\cite{Dodelson:1993je, afp, Dolgov:2000ew, Shaposhnikov:2006xi, Kusenko:2006rh, Petraki:2007gq, Petraki:2008ef, Shi:1998fu, Chiu:1977ds, Boyanovsky:2007zz, Boyanovsky:2006it, Boyanovsky:2007ba, Abazajian:2002yz}, as is the constraint of these scenarios via x-ray observations and large-scale structure considerations\cite{Abazajian:2001vt, Abazajian:2006yn, Boyarsky:2006fg, Boyarsky:2005us, Yuksel:2007xh, Watson:2006qb, Viel:2005qj, Abazajian:2005xn, Seljak:2006qw}. Though these models may not directly affect BBN through the spectral distortion of $\nu_e$ and $\bar\nu_e$ energy distribution functions discussed here, they nevertheless may affect the overall values of lepton number, entropy, and energy density which are relevant to BBN. In the end, the existence of sterile neutrino states changes the meaning and utility of lepton number\cite{Foot:1995qk, Shi:1996ic}. To use BBN predictions to probe or constrain any such scenario requires an approach that self-consistently includes neutrino and antineutrino energy spectra of arbitrary shape.
We have performed detailed calculations of primordial nucleosynthesis in which we include neutrino and antineutrino spectral distortion. Our results are surprising. We find that even modest distortions of the neutrino and/or antineutrino spectral shapes from Fermi-Dirac black body forms can result in significant modification of the net neutron-proton interconversion rates and, hence, alteration of the light element abundances.
To study the effects of neutrino spectral distortion, we have modified the original Kawano/Wagoner BBN code described in Ref. \cite{skm} to calculate the primordial element abundances self-consistently with arbitrarily-specified non-thermal and/or time-dependent neutrino distribution functions. This paper is structured as follows: Section II describes the calculation of weak charge-changing reaction rates in the early universe and our prescription for employing non-thermal neutrino and antineutrino energy distribution functions;
Section III discusses our new BBN code;
Section IV presents example results for non-thermal neutrino distribution functions resulting from various physical scenarios; and Section V gives conclusions.
\section{BBN and the Weak Reaction Rates}
\begin{figure}
\includegraphics[width=2.5in,angle=270]{occ.ps}
\caption{Example neutrino occupation probabilities. The upper dark (black) curve is the standard Fermi-Dirac thermally-distributed neutrino occupation probability and the lower light (red) curve is an example non-thermal neutrino occupation probability which can result from active-sterile neutrino transformation.}
\label{occprob}
\end{figure}
At early times and high temperatures, $t\sim 1$ sec and $T\gtrsim 1$ MeV, the primordial element abundances are given by nuclear statistical equilibrium (NSE). In NSE the rates for the processes which create a particular nucleus are equal to the rates that destroy it, so that the abundance for each element is given by the Saha equation.
As the universe expands and cools, reaction rates slow down to the point where they will not be fast enough to maintain NSE and the
neutron and proton abundances, and subsequently the abundances of $^4$He and the other light nuclei, \textquotedblleft freeze out''. For example, the $^4$He abundance falls below its equilibrium NSE track at $T\approx 0.6$ MeV, essentially as a consequence of the small NSE deuterium abundance. BBN can be viewed crudely as a series of freeze-outs from NSE, but with considerable post-equilibrium nuclear processing.
Because the entropy per baryon is high, alpha particles form copiously during BBN. Nearly all the neutrons in the universe at the epoch where $\alpha$'s form end up in alpha particles.
A key factor in the outcome of BBN is the value of the neutron-to-proton ratio. Like the nuclear abundances in NSE, at high enough temperatures ($T > 3$~MeV) the {\it weak} neutron-proton inter-conversion rates are fast enough to maintain chemical equilibrium and the neutron-to-proton ratio can be determined from a Saha equation when the neutrinos have thermally-shaped distribution functions (as we will describe later).
For general conditions the neutron-to-proton ratio is determined by the weak reaction processes shown in Eqs.\ (\ref{nuen}-\ref{ndecay}). The rates for these weak reactions are given in Eqs.~(\ref{genep}-\ref{revrate}) below. The forward rate for the reaction in Eq.\ (\ref{nuen}) is given by $\lambda_{\nu_en}$, Eq.~(\ref{nueonrate}), and the corresponding reverse rate is given by $\lambda_{e^-p}$, Eq.~(\ref{genep}). Likewise, the forward and reverse rates for the process in Eq.\ (\ref{nuebarp}) are $\lambda_{\bar\nu_ep}$ and $\lambda_{e^+n}$ respectively. Eq.\ (\ref{ndecayrate}) gives the rate for free neutron decay, denoted by $\lambda_{\rm n-decay}$, while the reverse three-body reaction rate is denoted by $\lambda_{pe^-\bar\nu_e}$ and given in Eq.\ (\ref{revrate}). These rates are detailed below\cite{abfw, dicus, FFNI, FFNII, FFNIII, FFNIV}:
\begin{widetext}
\begin{equation}
\lambda_{e^-p} \approx {{ \ln{2}}\over{\langle ft\rangle { \left( m_ec^2 \right)}^5 }}
\int_{0}^\infty {{F \left[Z,E_\nu + Q_{np}\right] E_\nu^2 \left(E_\nu + Q_{np}\right)\left(\left(E_\nu + Q_{np}\right)^2 -\left(m_ec^2\right)^2\right)^{1/2} }\left[S_{e^-}\right] \left[ 1-{{S}}_{\nu_e} \right] dE_\nu },\label{genep}
\end{equation}
\begin{equation}
\lambda_{\bar\nu_e p} \approx {{ \ln{2}}\over{\langle ft\rangle {\left(m_ec^2 \right)}^5 }}
\int_{Q_{np} + m_ec^2}^\infty {{E_\nu ^2 \left( E_\nu - Q_{np} \right) \left(\left( E_\nu - Q_{np} \right)^2 -\left(m_ec^2\right)^2\right)^{1/2}}\left[S_{\bar\nu_e} \right] \left[1-S_{e^+}\right] dE_\nu}, \label{nuonprate}
\end{equation}
\begin{equation}
\lambda_{e^+n} \approx {{ \ln{2}}\over{\langle ft\rangle {\left(m_ec^2 \right)}^5 }}
\int_{Q_{np} + m_ec^2}^\infty{{E_\nu^2 \left(E_\nu - Q_{np}\right)\left(\left(E_\nu - Q_{np}\right)^2 -\left(m_ec^2\right)^2\right)^{1/2}} \left[S_{e^+}\right] \left[1-S_{\bar\nu_e}\right] dE_\nu}, \label{eonnrate}
\end{equation}
\begin{equation}
\lambda_{\nu_e n} \approx {{ \ln{2}}\over{\langle ft\rangle {\left(m_ec^2 \right)}^5 }}
\int_{0}^\infty{{F \left[Z,E_\nu + Q_{np}\right] E_\nu ^2 \left( E_\nu + Q_{np} \right) \left(\left( E_\nu + Q_{np} \right)^2 -\left(m_ec^2\right)^2\right)^{1/2}}\left[S_{\nu_e}\right] \left[1-S_{e^-}\right] dE_\nu}, \label{nueonrate}
\end{equation}
\begin {equation}
\lambda_{\rm n-decay} \approx {{ \ln{2}}\over{\langle ft\rangle {\left(m_ec^2 \right)}^5 }}
\int_{0}^{Q_{np}-m_ec^2} {F \left[Z,Q_{np} - E_\nu \right]E_\nu^2 \left( Q_{np}-E_\nu \right) \left( \left( Q_{np}-E_\nu \right)^2 -\left(m_ec^2\right)^2\right)^{1/2}} \left[1-S_{\bar\nu_e}\right] \left[1-S_{e^-} \right] dE_\nu , \label{ndecayrate}
\end{equation}
\begin{equation}
\lambda_{pe^-\bar\nu_e} \approx {{ \ln{2}}\over{\langle ft\rangle {\left(m_ec^2 \right)}^5 }}
\int_{0}^{Q_{np}-m_ec^2} {F \left[Z,Q_{np} - E_\nu \right] E_\nu^2 \left( Q_{np}-E_\nu \right) \left( \left( Q_{np}-E_\nu \right)^2 -\left(m_ec^2\right)^2\right)^{1/2}} \left[S_{\bar\nu_e}\right] \left[S_{e^-} \right] dE_\nu ,\label{revrate}
\end{equation}
\end{widetext}
where $E_e$ and $E_\nu$ are the appropriate electron/positron and neutrino/antineutrino energies. In these expressions the neutron-proton mass difference is $Q_{np} \approx 1.293$ MeV. Here $\ln2/ \langle ft\rangle$ is proportional to the effective weak coupling applying to free nucleons, with $\langle ft\rangle$ the effective $ft$-value defined in Ref.\cite{FFNI}. The weak matrix element is $\ln2/ \langle ft\rangle \propto G_F^2(1+3g^2_A)$, where $G_F$ is the Fermi constant and $g_A$ is the ratio of axial to vector coupling for the free nucleons. In the BBN calculation the value for $\ln2/ \langle ft\rangle$ is normalized by the free neutron decay lifetime at zero temperature. Here $F\left[Z,E_e\right]$ is the relativistic Coulomb correction factor (or Fermi factor)\cite{FFNI},
\begin{equation}
F(\pm Z,w) \approx 2(1+s)(2pR)^{2(s-1)}e^{\pi\eta}\Bigg\vert{{\Gamma(s+i\eta)}\over{\Gamma(2s+1)}}\Bigg\vert^2.
\label{coulomb}
\end{equation}
In this expression the upper signs are for electron emission and capture, the lower signs are for positron emission and capture, $s=[1-(\alpha Z)^2]^{1/2}$, $Z$ is the appropriate nuclear charge (which is $Z=1$ for the proton), $\alpha$ is the fine structure constant, $\eta = \pm \alpha Z w/p$, and $R$ is the nuclear radius in electron Compton wavelengths, $R\approx 2.908\times 10^{-3} A^{1/3} - 2.437\times 10^{-3} A^{-1/3}$, where $A$ is the nuclear mass number and $w\equiv (p^2+m_e^2)^{1/2}$ with $m_e$ the electron rest mass.
This expression appears in the phase space integrand of the weak rates which require a Coulomb factor in either the initial or final state \cite{coulfac, dicus, L&T}.
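For illustration, Eq.~(\ref{coulomb}) can be evaluated numerically with the complex log-gamma function; in the Python sketch below the test energy is arbitrary, and energies are expressed in units of $m_ec^2$.
\begin{verbatim}
import numpy as np
from scipy.special import loggamma

# Relativistic Coulomb (Fermi) factor; energies in units of m_e c^2,
# nuclear radius R in electron Compton wavelengths.
ALPHA = 1.0 / 137.035999

def fermi_factor(Z, w, R, electron=True):
    p = np.sqrt(w ** 2 - 1.0)               # lepton momentum
    s = np.sqrt(1.0 - (ALPHA * Z) ** 2)
    eta = ALPHA * Z * w / p * (1.0 if electron else -1.0)
    # |Gamma(s + i eta)|^2 / Gamma(2s + 1)^2 via the complex log-gamma
    log_ratio = 2.0 * (loggamma(complex(s, eta)).real
                       - loggamma(2.0 * s + 1.0).real)
    return (2.0 * (1.0 + s) * (2.0 * p * R) ** (2.0 * (s - 1.0))
            * np.exp(np.pi * eta + log_ratio))

R_PROTON = 2.908e-3 - 2.437e-3              # A = 1 in the fit above
print(fermi_factor(Z=1, w=2.0, R=R_PROTON)) # arbitrary test energy
\end{verbatim}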
$S_{e^-/+}$ and $S_{\nu_e/\bar\nu_e}$ are the phase space occupation probabilities for electrons/positrons and neutrinos/antineutrinos, respectively. For example, the $\left[1-S_{\nu_e}\right]$ factor in $\lambda_{e^-p}$ is the Pauli phase space blocking factor for processes which create a neutrino. In the limit that the neutrinos have {\it thermally-shaped} Fermi-Dirac distribution functions, these phase space occupation probabilities become two parameter functions:
\begin{equation}
S_{\nu_e} = {1\over { e^{E_{\nu_e}/{T_\nu} - \eta_{\nu_e}} +1}},
\label{nuocc}
\end{equation}
\begin{equation}
S_{\bar\nu_e} = {1 \over{ e^{E_{\nu_e}/{T_\nu} - \eta_{\bar\nu_e}} +1}}.
\label{nubarocc}
\end{equation}
The two parameters, $T_\nu$ and $\eta_{\nu_e}$, correspond to neutrino temperature and degeneracy parameter (the ratio of chemical potential to temperature), respectively. For example, a thermally-shaped neutrino phase space occupation probability function is graphed in Fig.~\ref{occprob} as the upper black curve.
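For reference, a short Python sketch of these occupation probabilities follows; the energy grid and the nonzero degeneracy parameter (the value $\eta_{\nu_e}\approx 0.073$ used later in this paper) are illustrative.
\begin{verbatim}
import numpy as np

# Thermally-shaped Fermi-Dirac occupation probability of the neutrinos.
def s_nu(e_nu, t_nu, eta):
    return 1.0 / (np.exp(e_nu / t_nu - eta) + 1.0)

e = np.linspace(0.0, 10.0, 6)       # E_nu in units of T_nu
print(s_nu(e, 1.0, 0.0))            # zero degeneracy parameter
print(s_nu(e, 1.0, 0.073))          # eta_nu_e ~ 0.073 (L_nu ~ 0.05)
\end{verbatim}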
The total
weak neutron destruction rate is $\lambda_n = \lambda_{\nu_e n} + \lambda_{e^+ n} + \lambda_{\rm n-decay}$ and the corresponding total weak proton destruction rate is $\lambda_p = \lambda_{\bar\nu_ep} + \lambda_{e^- p} + \lambda_{pe^-\bar\nu_e}$. It is convenient to define
\begin{equation}
\Lambda_{\rm tot}=\lambda_n +\lambda_p.
\label{total}
\end{equation}
With this definition, the rate of change of the net electron number per baryon, $Y_e$, with Friedmann-Lema\^{\i}tre-Robertson-Walker (FLRW) time-like coordinate $t$ in the early universe will be
\begin{equation}
{{dY_e}\over{dt}}=\lambda_{n}-Y_e\, \Lambda_{\rm tot}.
\label{dyedt}
\end{equation}
At early times where temperatures are high, the forward and reverse rates of these reactions are fast compared to the expansion rate of the universe. In this regime the neutron-to-proton ratio is just
\begin{equation}
{{n}\over{p}} = {{\lambda_{\bar\nu_ep}+\lambda_{e^-p}+\lambda_{pe^-\bar\nu_e}}\over{\lambda_{\nu_en}+\lambda_{e^+n}+\lambda_{\rm n-decay}}}.
\label{ntopp}
\end{equation}
This can be approximated as
\begin{equation}
\label{ntoppp}
{{n}\over{p}} \approx {{\lambda_{\bar\nu_ep}+\lambda_{e^-p}}\over {\lambda_{\nu_en}+\lambda_{e^+n}}}
\end{equation}
because neutron decay and the reverse three-body reaction are negligible by comparison at high temperatures. When the neutrino distribution functions have thermally-shaped Fermi-Dirac forms, the neutron-to-proton ratio is given by
\begin{equation}
\label{thernalnp}
{{n}\over{p}} \approx {{\left(\lambda_{e^-p}/\lambda_{e^+n}\right)+e^{-\eta_{\nu_e}+\eta_e-\xi}}\over{\left(\lambda_{e^-p}/\lambda_{e^+n}\right) e^{\eta_{\nu_e}-\eta_e+\xi}+1}},
\end{equation}
where $\eta_{\nu_e}=\mu_{\nu_e}/T$ is the electron neutrino degeneracy parameter, $\eta_e=\mu_e/T$ is the electron degeneracy parameter, and $\xi$ is the neutron-proton mass difference divided by temperature, $\xi= (m_n-m_p)/T$\cite{abfw}. This equation is generally true whenever the lepton distribution functions have Fermi-Dirac forms and identical temperature parameters and whenever we can neglect neutron decay and its reverse process. Of course, at lower temperatures the neutrino and electron-photon plasma temperatures will differ and free neutron decay will be important.
\begin{figure}
\includegraphics[width=2.5in,angle=270]{ntopfig.ps}
\caption{The neutron to proton ratio, $n/p$, as a function of temperature for three nucleosynthesis scenarios. The lower solid curve is for BBN with degenerate neutrinos and no neutrino transformation, where $L_{\nu_e} = L_{\nu_\tau} = L_{\nu_\mu} = 0.05$. The upper solid curve is the $n/p$ ratio with the same lepton numbers as above but now including a particular active-sterile neutrino transformation scenario. The dotted curve is the $n/p$ ratio for standard BBN (no lepton numbers or neutrino oscillation). The dashed line is the $n/p$ equilibrium prediction for standard BBN (no lepton numbers or sterile neutrinos) with enforced weak chemical equilibrium.}
\label{ntopfig}
\end{figure}
If the weak reactions occur rapidly enough to maintain chemical equilibrium, then the Saha equation, $\mu_{\nu_e} + \mu_n = \mu_{e^-} + \mu_p$, can be used to predict the neutron-to-proton ratio. Interestingly, both the Saha equation and the steady state rate equilibrium condition in Eq.\ (\ref{thernalnp}), with the full lepton capture rates of Eqs.~(\ref{genep}-\ref{revrate}), can be written as\cite{abfw}
\begin{equation}
\label{chemnp}
{{n}\over{p}} \approx e^{\left({\mu_e-\mu_{\nu_e}-\delta m_{np}}\right)/{T}}.
\end{equation}
This equilibrium neutron-to-proton ratio is shown in Fig.~\ref{ntopfig} as the dashed (green) line for zero electron and neutrino chemical potentials, $\mu_e=\mu_{\nu_e} =0$.
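As a quick numerical illustration of Eq.~(\ref{chemnp}) in this zero-potential limit, where $n/p = e^{-Q_{np}/T}$:
\begin{verbatim}
import numpy as np

# Equilibrium n/p for mu_e = mu_nu_e = 0: n/p = exp(-Q_np / T).
Q_NP = 1.293  # MeV

for T in (3.0, 1.0, 0.7):
    print(f"T = {T:.1f} MeV:  n/p = {np.exp(-Q_NP / T):.3f}")
# yields ~0.65 at 3 MeV, falling to ~0.16 near weak freeze-out
\end{verbatim}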
As the universe cools, the weak reaction rates become slow compared to the expansion of the universe and the neutron-to-proton ratio falls out of equilibrium. This is called \textquotedblleft weak freeze-out'' and occurs over a range of
temperatures. Fig.~\ref{ntopfig} shows the actual neutron-to-proton ratio evolving as a function of temperature for the standard BBN scenario (thermal neutrino distribution functions and zero chemical potentials $\mu_e=\mu_{\nu_e} =0$). At high temperatures, the actual neutron-to-proton ratio follows the equilibrium value and then around 1 MeV, the weak freeze-out commences. This happens because the weak rates have a stronger dependence on temperature than does the expansion rate of the universe. The lepton capture/decay rates given in Eqs.~(\ref{genep}-\ref{revrate}) scale very roughly as $T^5$ (see Ref.\cite{FFNIV} for the detailed temperature dependence), while the expansion rate of the universe is $\propto T^2$. As a result, the neutron-proton weak interconversion rates eventually will fall below the expansion rate.
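A rough order-of-magnitude version of this competition can be coded directly; the unit prefactor adopted for the weak rate below is an assumption of the estimate, not a result of the full calculation.
\begin{verbatim}
import numpy as np

# Order-of-magnitude freeze-out estimate: set lambda ~ G_F^2 T^5 equal
# to H ~ 1.66 sqrt(g*) T^2 / m_Pl (natural units, energies in GeV).
# The O(1) prefactor of the weak rate is an assumption of the sketch.
G_F = 1.166e-5        # GeV^-2
M_PLANCK = 1.221e19   # GeV
G_STAR = 10.75        # photons, e+-, and three neutrino flavors

T_FO = (1.66 * np.sqrt(G_STAR) / (G_F ** 2 * M_PLANCK)) ** (1.0 / 3.0)
print(T_FO * 1.0e3, "MeV")   # ~1-2 MeV, consistent with the text
\end{verbatim}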
Although the weak rates become relatively slow, they still have a significant effect on the neutron-to-proton ratio, even for temperatures well below $T = 0.8$ MeV. In fact, free neutron decay continues to lower the $n/p$ ratio until there are virtually no more free neutrons or until the neutrons are sequestered in alpha particles, where they are effectively shielded from the weak interaction. This is illustrated in Fig.~\ref{ntopfig}, where the dotted (blue) line continues to decrease until $T \approx 0.08$ MeV (when the neutrons have been captured during rapid alpha particle formation). It is important to calculate the weak reactions correctly in order to track the $n/p$ ratio appropriately. This ratio sets the scale, in varying degrees, for all the primordial element abundances\cite{skm, wfh}.
\section{New BBN Code}
A nucleosynthesis code was written by Robert V. Wagoner in 1969\cite{wag73, wag69} to track and time evolve the nuclear abundances and the neutron-to-proton ratio in an expanding cooling universe. It was later updated and revised by Lawrence Kawano in 1988\cite{kawano1}.
This code time-evolves three main quantities, the electron fraction $Y_e$, the baryon-to-photon ratio $\eta$, and the temperature, along with the primordial element abundances. It follows 48 nuclides using a reaction network composed of 168 nuclear reactions, whose rates have primarily been based on, and in some cases extrapolated from, laboratory cross sections. The main numerical technique is a second-order Runge-Kutta routine.
The code also tracks the neutron-to-proton ratio by calculating the weak reaction rates using the standard thermally-shaped Fermi-Dirac neutrino distribution functions, setting $S_{\nu_e}$ and $S_{\bar\nu_e}$ as given in Eq.~(\ref{nuocc}) and Eq.~(\ref{nubarocc}).
In the Kawano/Wagoner approach, electron energy is used as the integration variable, instead of neutrino energy as in Eqs.~(\ref{genep}-\ref{revrate}) above. To save computational time, the code calculates only the sum of the forward $n\rightarrow p$ rates and the sum of the reverse $p\rightarrow n$ rates:
\begin{equation}
\lambda_{n} = \lambda_{\nu_e+n\rightarrow p+e^-} + \lambda_{n+e^+\rightarrow p+\bar\nu_e} + \lambda_{n\rightarrow p+e^-+\bar\nu_e}
\label{n-rates}
\end {equation}
\begin{equation}
\lambda_p = \lambda_{p+e^-\rightarrow \nu_e+n} + \lambda_{\bar\nu_e+p\rightarrow n+e^+} + \lambda_{p+e^-+\bar\nu_e\rightarrow n}.
\label{p-rates}
\end{equation}
With an algebraic trick, this simplifies the calculation by condensing the six phase space integrals (for each weak reaction rate) into two integrals:
\begin{widetext}
\begin{eqnarray}
\label{ntotwag}
\lambda_n & \approx & {{\ln{2}}\over {\langle ft \rangle {\left(m_ec^2 \right)}^5 }}
\\
& \times & \int_{m_ec^2}^{\infty} E_e\left( E_e^2 -\left(m_ec^2\right)^2\right)^{1/2} \left[ {{\left(E_e+Q_{np}\right)^2}\over{\left(e^{E_e/T} +1\right) \left( e^{-\left(E_e+Q_{np}\right)/T_\nu -\eta_{\nu_e}}+1\right)}} +{{\left(E_e-Q_{np}\right)^2}\over{\left(e^{-E_e/T} +1\right) \left(e^{\left(E_e-Q_{np}\right)/T_{\nu} -\eta_{\nu_e}} +1\right)}}\right]dE_e
\nonumber
\end{eqnarray}
\begin{eqnarray}
\label{ptotwag}
\lambda_p & \approx & {{\ln{2}}\over {\langle ft \rangle {\left(m_ec^2 \right)}^5 }}
\\
& \times & \int_{m_ec^2}^{\infty} E_e\left( E_e^2 -\left(m_ec^2\right)^2\right)^{1/2} \left[ {{\left(E_e+Q_{np}\right)^2} \over {\left ( e^{E_e/T} +1\right) \left(e^{\left (E_e+ Q_{np} \right)/{T_\nu} + \eta_{\nu_e}} +1 \right)}} + {{\left(Q_{np} - E_e\right)^2} \over{ \left( e^{E_e/T} +1\right) \left( e^{\left (Q_{np} - E_e \right) /T_\nu +\eta_{\nu_e} }+1\right)}} \right] dE_e.
\nonumber
\end{eqnarray}
\end{widetext}
This algebraic trick requires the approximation of thermally-shaped Fermi-Dirac neutrino and antineutrino distribution functions. Moreover, the summed rates cannot properly treat the Coulomb correction, $F[Z, E_e]$, which should be included in the phase space integrals of the reaction rates that have an electron and a proton in either the initial or final state.
\begin{figure}
\includegraphics[width=2.5in,angle=270]{fulldist.ps}
\caption{Two example electron neutrino distribution functions, where the upper black line is the standard thermal spectrum and the lower red line is a spectrum resulting from a particular scenario for active-sterile neutrino mixing. The vertical dashed lines show where a weak rate calculation employing the lower distribution function would be broken up to be integrated piece-wise in our new version of the code.}
\label{nudist}
\end{figure}
We have modified the Kawano/Wagoner BBN code so that it can accommodate and integrate any arbitrary neutrino and/or antineutrino distribution function with any specified time dependence. The
majority of our changes lie in the weak reaction rate calculation.
We first separated the summed neutron destruction and production rates, $\lambda_n$ and $\lambda_p$. This
enabled us to use non-thermal distribution functions and to change the neutrino and antineutrino distribution functions independently. Then, we removed
a series approximation for $\lambda_n$ and $\lambda_p$ which is applied when the lepton numbers are zero. This approximation results in an erroneous $\approx 0.5\%$ increase in the neutron-to-proton ratio\cite{kawano,kawano1}. Furthermore, we added the capability to separate a weak rate calculation into an arbitrary number of neutrino energy bins. This is useful for calculating a reaction rate where the neutrino energy spectrum is comprised of different functions over different energy ranges.
For example, in Fig.~\ref{nudist}, we have shown two electron neutrino distribution functions. The upper curve is just the standard thermally-shaped Fermi-Dirac distribution function,
\begin{equation}
\label{nudistributionfunction}
f_{\nu_\alpha}(E_\nu) = {{1}\over{T_{\nu_\alpha}^3 F_2\left(\eta_{\nu_\alpha}\right)}}{{{E_{\nu}}^2}\over{e^{E_\nu/T_{\nu_\alpha}-\eta_{\nu_\alpha}}+1}},
\end{equation}
which is consistent with the occupation probability derived from Eq.\ (\ref{nuocc}). The lower curve is a distribution function resulting from a particular active-sterile neutrino oscillation scheme described in Refs.~\cite{kfs, sfka}. In this scheme, electron neutrinos have been completely converted into steriles at low and high energies (regions 1 and 3 in Fig.~\ref{nudist}), leaving active neutrinos only in the central energy band (region 2). To calculate a rate using this non-thermal distribution function, we break the rate up into three parts. The first part integrates from zero to $\epsilon_1$ using the neutrino distribution function $f(E_\nu /T) = 0$. The second part integrates from $\epsilon_1$ to $\epsilon_2$ using the modified function of region 2. The third part integrates from $\epsilon_2$ to $\infty$ and again uses $f(E_\nu /T) = 0$. Finally, the total rate is calculated by summing all three pieces.
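The following Python sketch illustrates this piece-wise integration; the kernel and the cut energies are simplified stand-ins for the full phase space integrands of Eqs.~(\ref{genep}-\ref{revrate}) and for the actual values of $\epsilon_1$ and $\epsilon_2$.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Piece-wise rate integration over three energy bins. The kernel and
# the bin edges eps1, eps2 are simplified stand-ins.
def s_thermal(e_nu):
    return 1.0 / (np.exp(e_nu) + 1.0)   # E_nu in units of T_nu, eta = 0

def zero(e_nu):
    return 0.0                          # fully converted to steriles

def integrand(e_nu, occupation):
    return e_nu ** 2 * occupation(e_nu) # stand-in for the full kernel

eps1, eps2 = 0.8, 3.5                   # assumed bin edges
pieces = [(0.0, eps1, zero),
          (eps1, eps2, s_thermal),
          (eps2, np.inf, zero)]
rate = sum(quad(integrand, lo, hi, args=(occ,))[0]
           for lo, hi, occ in pieces)
print(rate)  # compare with quad over (0, inf) using s_thermal alone
\end{verbatim}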
\begin{figure}
\includegraphics[width=3.5in,angle=0]{chart.ps}
\caption{Flow chart for our modified BBN calculation.}
\label{flowchart}
\end{figure}
To perform these non-thermal piece-wise calculations in the BBN code, we completely replaced the original weak rate calculation with a series of four modules. These modules allow the user to define the distribution functions, break up the integration into specifiable pieces and define the energy ranges for each piece, and set any desired time/temperature dependence of the distribution functions. A flow chart of the weak rate calculations is shown in Fig.~\ref{flowchart}. At each time step, the BBN code calls the weak rate calculation subroutine, Module 1 in Fig.~\ref{flowchart}, to time-evolve the neutron to proton ratio and, subsequently, all the nuclear abundances.
Module 1 acts as the central line of communication in that it calls the other modules and reports back the value of the weak rates at every time step in the BBN code. In this module, the user can first define how many pieces to split the rate integration into for reactions involving either neutrinos or antineutrinos or both. For example, if the user wanted to use the lower non-thermal neutrino distribution function in Fig.~\ref{nudist} and a thermal antineutrino distribution function, the user can specify that the rate integrations involving neutrinos should be integrated in three parts and that rates involving antineutrinos should be integrated with one energy bin.
Next, Module 1 calls Module 2 to retrieve the integration limits for each piece, $i.e.$, where the user wants each energy bin to begin and end. In Module 2, the user can define these integration limits and couple them to any time dependences desired. Module 1 makes an array with these limits so they can be accessed later in the integration. This procedure can be extended to an arbitrary number of energy bins for any neutrino type.
The first module calculates all six weak reaction rates by utilizing two main loops. These loop over the number of energy bins. One loop calculates the two reaction rates that include neutrinos and the other loop calculates the four remaining weak reaction rates that include antineutrinos. The number of iterations for each loop is determined by the number of energy bins. Each loop iteration integrates the weak reaction rates over the range of energy and neutrino distribution function specified for that energy bin. At the end of the iteration, each rate is summed.
For every loop cycle, the first module calls the integrator which inputs the function to be integrated and the limits of the energy bins (from Module 2). The matrix elements and integrands for the six weak reaction rates, as shown in Eqs.~(\ref{genep}-\ref{revrate}), are retrieved from Module 3. Here, the electron occupation probability is set as $S_e=1/(e^{E_e/T} +1)$ and the neutrino and antineutrino occupation probabilities are called from Module 4.
The sole purpose of Module 4 is to house the neutrino and antineutrino occupation probabilities. This makes it easy for a user to modify the neutrino distribution functions
-- by inputting analytic functions for $S_{\nu_e}$ and $S_{\bar\nu_e}$ -- without having to modify any other portion of the weak rate calculation. The user can also define different functions or populations for each integration energy bin. After each energy bin is integrated, the total rate is summed and the values for the six weak reaction rates are returned to the main BBN code driver.
Our modified Kawano/Wagoner BBN code -- which can now accommodate and integrate any arbitrary neutrino and/or antineutrino distribution function with any specified time dependence -- will be available to the community at bigbangonline.org\cite{bigbangonline}.
\section{Example Code Results}
\begin{figure}
\includegraphics[width=2.5in,angle=270]{nu-on-n.ps}
\caption{The rate of electron neutrino capture on a neutron as a function of temperature. The upper curve is $\lambda_{\nu_e {\rm n}}$ in the lepton-number-only case, for lepton numbers of $L_{\nu_e} = L_{\nu_\tau} = L_{\nu_\mu} = 0.05$. The lower curve is the rate when there is active-sterile neutrino transformation along with the same lepton numbers as above.}
\label{nuonnfig}
\end{figure}
We have utilized this code to study nucleosynthesis abundance yields in the presence of a light-mass sterile neutrino over a range of lepton numbers\cite{kfs, sfka}. The lower red line in Fig.~\ref{occprob} shows a final non-thermal neutrino occupation probability function that can result from active-sterile neutrino transformation. In this particular scenario, we started with normal thermal electron neutrino and antineutrino distribution functions and an assumed initial lepton number. The lepton numbers that we have taken are within the range allowed by conventional BBN (primordial $^4{\rm He}$) considerations. But, of course, the point is that a sterile neutrino which mixes with an active neutrino can result in non-thermal neutrino and/or antineutrino energy spectra which produce BBN abundance yields quite different from those of the standard scenario. This, in turn, could provide new, more appropriate constraints on lepton numbers, on active-sterile neutrino mass and mixing parameter space, or on both.
The presence of a significant net lepton number can delay significant sterile neutrino production until after the weak decoupling temperature. With a positive net lepton number, a Mikheyev-Smirnov-Wolfenstein (MSW) resonance occurs first for low neutrino energies. This resonance subsequently sweeps to higher neutrino energies as the universe expands and cools. At first, this resonance sweep process occurs adiabatically, efficiently converting all active neutrinos into sterile neutrinos. This continues until the rate of active-sterile conversion becomes too fast to maintain adiabaticity. At this point, production becomes inefficient. However, at high enough resonance energies transformations can occur adiabatically again.
Accurately following such a scenario requires all the modifications in our new code. Without the ability to include a dynamically changing neutrino distribution function, for example, we could not correctly calculate the neutron-to-proton inter-conversion rates. In fact, in the example scenario presented here, not only are there non-thermal neutrino distribution functions to handle, but these change on time scales which are important to BBN. In Fig.~\ref{nuonnfig}, we show the rate for electron neutrino capture on a neutron, the forward process in Eq.~(\ref{nuen}), as a function of temperature. The top curve is the rate when there is no active-sterile neutrino oscillation. The lower curve shows the decreased rate when there is active-sterile mixing and the final neutrino distribution function is that of Fig.~\ref{occprob}. The depletion of electron neutrinos available for capture on neutrons decreases the capture rate. Additionally, the altered neutrino distribution function results in a modestly increased reverse rate (electron capture on protons), because of the smaller neutrino phase space blocking factor.
The final integrated effect in this scenario can be gauged by the changes in the light element abundances. For example, with adopted lepton numbers of $L_{\nu_e}=L_{\nu_\mu}=L_{\nu_\tau} = 0.05$, which correspond to electron, mu, and tau neutrino degeneracy parameters of $\eta_{\nu_e}=\eta_{\nu_\mu}=\eta_{\nu_\tau} \approx 0.073$ ({\it i.e.,} near the conventional BBN upper limits on these quantities), we see a $4.9\%$ increase of $^4$He over the standard (no neutrino mixing and no lepton numbers) BBN value and a $12.7\%$ increase over the $^4$He calculation with only lepton numbers included but no active-sterile neutrino oscillation effects. With this example scenario we find an increase in D/H (the deuterium abundance relative to hydrogen) of $2.8\%$ over the standard BBN calculation and an increase of $6.9\%$ over the lepton-number-only calculation.
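As a consistency check of the quoted pairing of $L_{\nu_\alpha}=0.05$ with $\eta_{\nu_\alpha}\approx 0.073$, the sketch below evaluates the standard Fermi-Dirac relation $L_{\nu}=(\pi^2\eta_\nu+\eta_\nu^3)/(12\zeta(3))$, assuming $T_\nu=T_\gamma$ (the comoving convention implied here):
\begin{verbatim}
import numpy as np
from scipy.special import zeta

# L_nu = (pi^2 eta + eta^3) / (12 zeta(3)) for Fermi-Dirac spectra,
# assuming T_nu = T_gamma.
def lepton_number(eta):
    return (np.pi ** 2 * eta + eta ** 3) / (12.0 * zeta(3))

print(lepton_number(0.073))   # ~0.050, matching the quoted pairing
\end{verbatim}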
The increase in helium for these adopted parameters is likely unacceptable, exceeding observational bounds\cite{OS, Olive, IT}. Likewise, if the precision of the observationally-determined value of D/H can be improved sufficiently (to better than $\pm 5 \%$ \cite{sfka}), D/H could compete with helium as an avenue for constraining new neutrino physics. Ultimately, allowing for dynamically-altered neutrino and antineutrino distribution functions could add a new dimension to the way in which BBN and light element abundances might constrain new physics in the weak sector.
We have also used our new code to include a relativistic version of the Coulomb correction in the appropriate weak rate integrands\cite{coulfac}. This had never been done before in the Wagoner/Kawano BBN code.
\section{Conclusion}
We have developed an approach to Big Bang Nucleosynthesis (BBN) calculations where we can treat arbitrarily-specified energy distributions for all neutrino types, including $\nu_e$ and $\bar\nu_e$. We can also allow these distribution functions to be altered dynamically and follow all nuclear and weak reactions self-consistently with these alterations. This new approach can extend the usefulness of BBN predictions for exploring and constraining new physics in the neutrino and weak interaction sectors.
Examples of such new physics include active-sterile neutrino mixing and particle decays that have neutrinos in the final state. We have given an explicit example of the former scenario. In this example we have demonstrated how active-sterile neutrino oscillation physics can alter neutrino or antineutrino distribution functions on short time scales, alter the neutron-proton inter-conversion rates, and so modify BBN abundance yields over those of the standard scenario.
Our calculations hold out the promise that light element abundances could place the best constraints on primordial lepton numbers and active-sterile neutrino mixing parameters when the sterile neutrino mass is in the $\sim 1\,{\rm eV}$ range. Present laboratory experiments, like MiniBooNE, are sensitive to neutrino flavor mixing in the active-sterile channel at the $\sim 1\,{\rm eV}$ mass scale only when the appropriate effective $2\times 2$ vacuum mixing angle satisfies $\sin^22\theta \gg {10}^{-4}$. By contrast, in the presence of a net lepton number, BBN abundance yields might be significantly altered for active-sterile neutrino mixing parameters with $\sin^22\theta > {10}^{-8}$. The greater reach in vacuum mixing angle afforded by BBN considerations stems from: (1) the long (gravitational) expansion time scale of the early universe, which dictates the MSW resonance sweep rate and sets the minimum mixing angle required for adiabatic and efficient conversion of the active neutrinos into sterile species; and (2) the significant sensitivity of the neutron-proton weak inter-conversion rates to alterations of the neutrino or antineutrino energy distribution functions. Our new calculations allow us to follow simultaneously and self-consistently both of these effects along with all relevant weak, electromagnetic, and strong nuclear reaction rates.
This new approach is incorporated into an update of the Kawano/Wagoner BBN code -- which can now accommodate and integrate any arbitrary neutrino and/or antineutrino distribution function with any specified time dependence.
We will soon make this code available to the community at bigbangonline.org.
\begin{acknowledgments}
We would like to acknowledge discussions with Chad Kishimoto and Kevork Abazajian. ORNL is managed by UT-Battelle, LLC, for the U.S. DOE under contract DE-AC05-00OR22725. The work of G.M.F and C.J.S. was supported in part by a NSF grant and a UC/LANL CARE grant at UCSD.
\end{acknowledgments} |
0812.1112 | \section{Introduction}
In the Standard Model, SM, transition rates of semileptonic processes such as
$d^i \to u^j \ell \nu$, with $d^i$ ($u^j$) being a generic
down (up) quark, can be computed with high accuracy in terms
of the Fermi coupling $G_F$ and the elements $V_{ji}$ of the
Cabibbo-Kobayashi-Maskawa (CKM) matrix.
Measurements of the transition rates therefore provide
precise determinations of the fundamental SM couplings.
A detailed analysis of semileptonic decays also offers
the possibility to set stringent constraints on new-physics scenarios.
While within the SM all $d^i \to u^j \ell \nu$ transitions
are governed by the same CKM coupling $V_{ji}$ (satisfying
the unitarity condition $\sum_k |V_{ik}|^2 =1$) and
$G_F$ is the same coupling appearing in the muon decay,
this is not necessarily true beyond the SM.
Setting bounds on violations of CKM unitarity,
violations of lepton universality, and deviations from
the $V-A$ structure allows us to put significant
constraints on various new-physics scenarios
(or possibly to find evidence of new physics).
In the case of leptonic and semileptonic $K$ decays these tests
are particularly significant given the large amount of
data recently collected by several experiments:
BNL-E865, KLOE, KTeV, ISTRA+, and NA48.
The analysis of these data provides precise determinations of fundamental SM couplings,
sets stringent SM tests almost free from hadronic uncertainties, and
can discriminate between new-physics scenarios.
The high statistical precision of the measurements and the detailed information
on kinematical distributions have driven substantial progress on the theory side;
in particular, the theoretical error on the hadronic form factors has been reduced
to the 1\% level.
The paper is organized as follows. First, in Sec.~\ref{BRfits}
we present fits to the world data on the leading branching ratios and lifetimes
of the $K_L$, $K_S$, and $K^\pm$ mesons. Sec.~\ref{slopes} summarizes
the status of the knowledge of the form factor slopes from $K_{\ell 3}$ decays.
The physics results obtained are described in Sec.~\ref{resulta}, in particular
the measurement of $|V_{us}f_+(0)|$.
Finally, Sec.~\ref{ke2} is devoted to the special role
of the $\Gamma(K_{e2}^\pm)/\Gamma(K_{\mu 2}^\pm)$ ratio.
\section{Experimental data: BRs and lifetime}
\label{BRfits}
Numerous measurements of the principal kaon BRs, or of various ratios
of these BRs, have been published recently. For the purposes of evaluating
$|V_{us}f_+(0)|$, these data can be used in a PDG-like fit to the BRs and lifetime,
so all such measurements are interesting.
A detailed description of the fit procedure and the references for all the experimental input used
can be found in Ref.~\cite{Flavia2008}.
For $K_L$ the results are given in table~\ref{tab:KLBR}, while
table~\ref{tab:KpmBR} gives the results for $K^\pm$.
\begin{table}
\begin{center}
\begin{tabular}{l|c|r}
Parameter & Value & $S$ \\
\hline
\BR{K_{e3}} & 0.4056(7) & 1.1 \\
\BR{K_{\mu3}} & 0.2705(7) & 1.1 \\
\BR{3\pi^0} & 0.1951(9) & 1.2 \\
\BR{\pi^+\pi^-\pi^0} & 0.1254(6) & 1.1 \\
\BR{\pi^+\pi^-} & \SN{1.997(7)}{-3} & 1.1 \\
\BR{2\pi^0} & \SN{8.64(4)}{-4} & 1.3 \\
\BR{\gamma\gamma} & \SN{5.47(4)}{-4} & 1.1 \\
$\tau_L$ & 51.17(20)~ns & 1.1 \\
\end{tabular}
\end{center}
\vskip 0.3cm
\caption{\label{tab:KLBR}
Results of fit to $K_L$ BRs and lifetime.}
\end{table}
For the $K_S$, the fit is dominated by the KLOE measurements of $BR(K_S\to\pi e\nu)$ and
of $BR(\pi^+\pi^-)/BR(\pi^0\pi^0)$. These, together with
the constraint that the $K_S$ BRs must add to unity, and the assumption of
universal lepton couplings, completely determine the $K_S$ leading BRs.
In particular, $\BR{K_S\to\pi e\nu} = \SN{7.046(91)}{-4}$.
For $\tau_{K_S}$ we use \SN{0.8958}{-10}~s, the non-$CPT$-constrained
fit value from the PDG.
\begin{table}
\begin{center}
\begin{tabular}{l|c|r}
Parameter & Value & $S$ \\
\hline
\BR{K_{\mu2}} & 63.57(11)\% & 1.1 \\
\BR{\pi\pi^0} & 20.64(8)\% & 1.1 \\
\BR{\pi\pi\pi} & 5.595(31)\% & 1.0 \\
\BR{K_{e3}} & 5.078(26)\% & 1.2 \\
\BR{K_{\mu3}} & 3.365(27)\% & 1.7 \\
\BR{\pi\pi^0\pi^0} & 1.750(26)\% & 1.1 \\
$\tau_\pm$ & 12.384(19)~ns & 1.7 \\
\end{tabular}
\end{center}
\vskip 0.3cm
\caption{\label{tab:KpmBR}
Results of fit to $K^\pm$ BRs and lifetime.}
\end{table}
\section{Experimental data: $K_{\ell 3 }$ form factors}
\label{slopes}
The hadronic $K \to \pi$ matrix element of the vector current
is described by two form factors (FFs), $f_+(t)$ and $f_0(t)$.
By construction, $f_0(0)=f_+(0)$.
In order to compute the phase space integrals
we need experimental or theoretical input on the $t$-dependence of the FFs.
In principle, Chiral Perturbation Theory (ChPT)
and Lattice QCD are useful tools to set theoretical constraints.
However, in practice the $t$-dependence of the FFs at present
is better determined by measurements and by combining measurements
and dispersion relations.
Many approaches have been used, and all have been described in detail in~\cite{Flavia2008}.
Here we list only the averages of quadratic fit results for $K_{e3}$ and $K_{\mu3}$
slopes (\Tab{tab:l3ff}) used to determine $|V_{us}|f_+(0)$.
\begin{table}
\begin{center}
\begin{tabular}{l|c}
\hline\hline
& $K_L$ and $K^-$ \\
\hline
Measurements & 16 \\
$\chi^2/{\rm ndf}$ & 54/13 $(7\times 10^{-7})$ \\
$\lambda_+'\times 10^3 $ & $24.9\pm1.1$ ($S=1.4$) \\
$\lambda_+'' \times 10^3 $ & $1.6\pm0.5$ ($S=1.3$) \\
$\lambda_0\times 10^3 $ & $13.4\pm1.2$ ($S=1.9$) \\
$\rho(\lambda_+',\lambda_+'')$ & $-0.94$ \\
$\rho(\lambda_+',\lambda_0)$ & $+0.33$ \\
$\rho(\lambda_+'',\lambda_0)$ & $-0.44$ \\
$I(K^0_{e3})$ & 0.15457(29) \\
$I(K^\pm_{e3})$ & 0.15892(30) \\
$I(K^0_{\mu3})$ & 0.10212(31) \\
$I(K^\pm_{\mu3})$ & 0.10507(32) \\
$\rho(I_{e3},I_{\mu3})$ & $+0.63$ \\
\hline\hline
\end{tabular}
\end{center}
\caption{Averages of quadratic fit results for $K_{e3}$ and $K_{\mu3}$ slopes.}
\label{tab:l3ff}
\end{table}
\section{Physics results}
\label{resulta}
\subsection{Determination of $|V_{us}|f_{+}(0)$ and
$|V_{us}|/|V_{ud}|f_K/f_\pi$}
The value of $|V_{us}|f_{+}(0)$ has been determined from
the decay rates of kaon semileptonic decays (see~\cite{Flavia2008} for the detailed
decomposition), using the world-average values reported in the previous sections
for lifetimes, branching ratios, and phase-space integrals.
\begin{figure}[t]
\centering
\includegraphics[width=0.7\linewidth]{f0vus.eps}
\caption{ Display of $|V_{us}|f_{+}(0) $ for all channels.
\label{fig:Vusf0} }
\end{figure}
The results are shown in figure~\ref{fig:Vusf0}
for $K_L\to\pi e\nu$, $K_L\to\pi\mu\nu$,
$K_S\to\pi e\nu$, $K^\pm\to\pi e\nu$, $K^\pm\to\pi\mu\nu$,
and for the combination.
The average,
$|V_{us}|f_+(0)=0.21664(48)$, has an uncertainty of about $0.2\%$.
The results from the five modes are in good agreement; the
fit probability is 58\%.
In particular, comparing the values of $|V_{us}|f_{+}(0)$
obtained from $K^0_{\ell3}$ and $K^\pm_{\ell3}$, we obtain
a value of the $SU(2)$-breaking correction
$\delta^K_{SU(2),\,\rm exp}=2.9(4)\%$,
in agreement with the ChPT calculation $\delta^K_{SU(2)}= 2.36(22)\%$.
Moreover, recent analyses
of the so-called violations of Dashen's theorem in the kaon
electromagnetic mass splitting point to $\delta^{K}_{SU(2)}$
values of about $3\%$.
The test of Lepton Flavor Universality (LFU) between
$K_{e3}$ and $K_{\mu3}$ modes constrains a possible
anomalous lepton-flavor dependence in the leading
weak vector current. It can therefore be compared
to similar tests in $\tau$ decays, but is different from the
LFU tests in the helicity-suppressed modes $\pi_{l2}$ and $K_{l2}$.
The result for the parameter
$r_{\mu e} = R_{K_{\mu3}/K_{e3}}^{\rm{Exp}}/R_{K_{\mu3}/K_{e3}}^{\rm{SM}}$ is
$r_{\mu e} = 1.0043 \pm 0.0052$,
in excellent agreement with lepton universality.
With a precision of $0.5\%$ the test in $K_{l3}$ decays
has now reached the sensitivity of other determinations:
$r_{\mu e}(\tau) = 1.0005 \pm 0.0041$ and
$r_{\mu e}(\pi) = 1.0042 \pm 0.0033$~\cite{PDG06}.
An independent determination of $V_{us}$ is obtained from $K_{\ell2}$ decays.
The most important mode is $K^+\to\mu^+\nu$, which has been recently
updated by KLOE reaching a relative uncertainty of about $0.3\%$.
Hadronic uncertainties are minimized considering the ratio
$\Gamma(K^+\to\mu^+\nu)/\Gamma(\pi^+\to\mu^+\nu)$.
Using the world average values
of BR($K^\pm\to\mu^\pm\nu$) and of $\tau^\pm$ given in Section~\ref{BRfits}
and the value of $\Gamma(\pi^\pm\to\mu^\pm\nu)=38.408(7)~\mu s^{-1}$
from~\cite{PDG06} we obtain:
$|V_{us}|/|V_{ud}|f_K/f_\pi = 0.2760 \pm 0.0006$.
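As a numerical cross-check, this value can be reproduced from the standard master formula for the ratio of the two helicity-suppressed rates. The minimal Python sketch below takes the PDG meson and lepton masses and a long-distance electromagnetic correction $\delta_{\rm EM}\simeq-0.0070$ as assumed external inputs (neither is a result of this paper):
\begin{verbatim}
import math

# External inputs (not from this paper): PDG masses in MeV
m_K, m_pi, m_mu = 493.677, 139.570, 105.658
delta_EM = -0.0070       # assumed long-distance EM correction

# Experimental rates Gamma = BR / tau, in s^-1
gamma_K  = 0.6357 / 12.384e-9   # K -> mu nu, inputs from Sec. 2
gamma_pi = 38.408e6             # pi -> mu nu, 38.408 mus^-1

# Helicity / phase-space factor of the ratio of rates
kin = (m_K * (1 - m_mu**2 / m_K**2)**2) \
    / (m_pi * (1 - m_mu**2 / m_pi**2)**2)

ratio2 = gamma_K / gamma_pi / (kin * (1 + delta_EM))
print(math.sqrt(ratio2))   # ~0.276, cf. 0.2760 +- 0.0006
\end{verbatim}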
\subsection{Theoretical estimates of $f_+(0)$ and $f_K/f_\pi$ }
The main obstacle to transforming these highly precise determinations of
$|V_{us}|f_{+}(0)$ and
$|V_{us}|/|V_{ud}|f_K/f_\pi$ into a determination of
$|V_{us}|$ at the per-mil level is the theoretical
uncertainty on the hadronic parameters $f_+(0)$ and $f_K/f_\pi$.
These hadronic quantities cannot be computed in perturbative QCD, but
they are highly constrained by $SU(3)$ and chiral symmetry.
In the chiral limit and, more generally, in the $SU(3)$ limit
($m_u=m_d=m_s$) the conservation of the vector current
implies $f_+(0)$=1. Expanding around the chiral limit in powers
of light quark masses we can write
$f_+(0)= 1 + f_2 + f_4 + \ldots$
where $f_2$ and $f_4$ are the NLO and
NNLO corrections in ChPT. The Ademollo--Gatto theorem implies that
$(f_+(0)-1)$ is at least of second order in the breaking of $SU(3)$.
This in turn implies
that $f_2$ is free from the uncertainties of the $\mathcal{O}(p^4)$ counterterms in ChPT,
and it can be computed with high accuracy: $f_2=-0.023$.
The difficulties in
estimating $f_+(0)$ begin with $f_4$ or at $\mathcal{O}(p^6)$ in the chiral expansion.
Several analytical approaches to determine $f_4$
have been attempted over the years,
essentially confirming the original estimate by Leutwyler and Roos.
The benefit of these new results, obtained using
more sophisticated techniques, lies in the fact that a
better control over the systematic uncertainties of the calculation
has been obtained. However, the size of the error is still around or
above $1\%$, which is not comparable to the $0.2\%$
accuracy which has been reached for $|V_{us}|f_+(0)$.
Recent progress in lattice QCD makes us more optimistic about reducing
the error on $f_+(0)$ below the $1\%$ level.
Most of the currently available
lattice QCD results have been obtained with relatively heavy pions and
the chiral extrapolation represents the dominant source of uncertainty.
There is a general trend for
lattice QCD results to be slightly lower than those of analytical approaches.
An important step in the reduction of the error associated to the
chiral extrapolation has been recently made by
the UKQCD-RBC collaboration.
Their preliminary result $f_+(0)=0.964(5)$
is obtained from the unquenched study with
$N_F=2+1$ flavors, with an action that has good chiral properties
on the lattice even at finite lattice
spacing (domain-wall quarks). They also reached pion masses ($\geq 330$~MeV)
much lighter than those used in previous studies of $f_+(0)$. The
overall error is estimated to be ${\sim}0.5\%$, which is very encouraging.
In contrast to the semileptonic vector form factor, the pseudoscalar
decay constants are not protected by the Ademollo--Gatto theorem and
receive corrections linear in the quark masses. Expanding
$f_K/f_\pi$ in powers of the quark masses, in analogy to $f_+(0)$,
$f_K/f_\pi= 1 + r_2 + \ldots$
one finds that the $\mathcal{O}(p^4)$ contribution $r_2$ is
already affected by local contributions and cannot be unambiguously
predicted in ChPT. As a result, in the determination of $f_K/f_\pi$
lattice QCD
has essentially no competition from purely analytical approaches.
The present overall accuracy is about $1\%$.
A novelty is the new lattice results with
$N_F=2+1$ dynamical quarks and pions as light as $280$~MeV,
obtained using so-called staggered quarks.
These analyses cover a broad range of lattice spacings ($a=0.06$--$0.15$~fm) and
are performed on sufficiently large physical volumes ($m_\pi L\geq 5.0$).
It should be stressed, however, that the sensitivity of
$f_K/f_\pi$ to lighter pions is larger
than in the computation of $f_+(0)$ and that chiral extrapolations are far more
demanding in this case.
In the following analysis we will use as reference value the MILC-HPQCD
result $f_K/f_\pi=1.189(7)$.
\subsection{Test of CKM unitarity}
To determine $|V_{us}|$ and $|V_{ud}|$
we use the value $|V_{us}| f_{+}(0)=0.2166(5)$,
the result $|V_{us}|/|V_{ud}|f_K/f_\pi = 0.2760(6)$,
$f_+(0) = 0.964(5)$, and $f_K/f_\pi = 1.189(7)$.
From the above we find:
$|V_{us}|= 0.2246\pm 0.0012$ from $K_{\ell 3}$ only, and
$|V_{us}|/|V_{ud}|= 0.2321\pm 0.0015$ from $K_{\ell 2}$ only.
These determinations can be used in a fit together with the
recent evaluation of $V_{ud}$ from
$0^+\to0^+$ nuclear beta decays: $|V_{ud}|$=0.97418$\pm$0.00026.
This global fit gives $V_{ud} = 0.97417(26)$ and $V_{us} = 0.2253(9)$,
with $\chi^2/{\rm ndf} = 0.65/1$ (42\%). This result does not make use
of CKM unitarity. If the unitarity constraint is included,
the fit gives $V_{us}=0.2255(7)$ and $\chi^2/{\rm ndf}=0.80/2$ (67\%).
Both results are illustrated in \Fig{fig:vusuni}.
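As an illustration of the arithmetic behind these numbers, the short sketch below reproduces the central values with naive, uncorrelated error propagation; the quoted results come from a proper $\chi^2$ fit, so small differences are expected:
\begin{verbatim}
from math import sqrt

Vus_f0, dVus_f0 = 0.2166, 0.0005   # |Vus| f+(0) from Kl3
f0,     df0     = 0.964,  0.005    # f+(0), UKQCD-RBC
ratio           = 0.2760           # |Vus|/|Vud| fK/fpi from Kl2
fKfpi           = 1.189            # fK/fpi, MILC-HPQCD
Vud             = 0.97418          # 0+ -> 0+ beta decays

Vus  = Vus_f0 / f0
dVus = Vus * sqrt((dVus_f0 / Vus_f0)**2 + (df0 / f0)**2)
print(Vus, dVus)        # ~0.2247 +- 0.0013, cf. 0.2246(12)
print(ratio / fKfpi)    # ~0.2321, cf. 0.2321(15)

# Unitarity sum with the Kl3-only |Vus| (|Vub|^2 is negligible)
print(Vud**2 + Vus**2)  # ~0.9995
\end{verbatim}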
\begin{figure}[t]
\centering
\includegraphics[width=0.7\linewidth]{allfit.eps}
\caption{\label{fig:vusuni} Results of fits to $|V_{ud}|$, $|V_{us}|$, and $|V_{us}|/|V_{ud}|$.}
\end{figure}
The test of CKM unitarity can be also interpreted as a test of universality of
the lepton and quark gauge couplings.
Using the results of the fit (without imposing unitarity) we obtain:
$G_{\rm CKM} \equiv G_\mu \left[ |V_{ud}|^2+|V_{us}|^2+|V_{ub}|^2 \right]^{1/2}
= (1.1662 \pm 0.0004)\times 10^{-5}\ {\rm GeV}^{-2}$,
in perfect agreement with the value obtained from the measurement
of the muon lifetime:
$G_{\mu} = (1.166371 \pm 0.000007)\times 10^{-5}\ {\rm GeV}^{-2}.$
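Numerically, the agreement follows directly from the fitted matrix elements. In the short check below, the value of $|V_{ub}|$ is an external PDG-like input assumed only for completeness; it is numerically negligible:
\begin{verbatim}
G_mu = 1.166371e-5                  # GeV^-2, from the muon lifetime
Vud, Vus = 0.97417, 0.2253          # fit values (no unitarity imposed)
Vub = 3.9e-3                        # assumed external input
print(G_mu * (Vud**2 + Vus**2 + Vub**2)**0.5)  # ~1.1662e-5 GeV^-2
\end{verbatim}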
The current accuracy of the lepton-quark universality
sets important constraints on model building beyond the SM.
For example, the presence of a $Z^\prime$ would affect the relation between
$G_{\rm CKM}$ and $G_{\mu}$. In case of a $Z^\prime$ from $SO(10)$ grand unification theories
we obtain $m_{Z^\prime}>700$~GeV at 95\% CL, to be compared with the $m_{Z^\prime}>720$~GeV
bound set through the direct collider searches~\cite{PDG06}.
In a similar way, the unitarity constraint also provides useful bounds in various
supersymmetry-breaking scenarios.
\subsection{$K_{\ell 2}$ sensitivity to new physics}
A particularly interesting test is the comparison of the $|V_{us}|$
value extracted from the helicity-suppressed $K_{\ell 2}$ decays
with respect to the value extracted from the helicity-allowed $K_{\ell 3}$ modes.
To reduce theoretical uncertainties from $f_K$ and electromagnetic
corrections in $K_{\ell 2}$, we exploit the ratio $BR(K_{\ell2})/BR(\pi_{\ell2})$ and
we study the quantity
$$
R_{l23}=\left|\frac{V_{us}(K_{\ell 2})}{V_{us}(K_{\ell 3})}
\frac{V_{ud}(0^+\to 0^+)}{V_{ud}(\pi_{\ell 2})}\right|\,.
$$
Within the SM, $R_{l23}=1$, while deviations from unity can be induced by
non-vanishing scalar or right-handed currents.
Notice that in $R_{l23}$ the hadronic uncertainties enter through $(f_K/f_\pi)/f_+(0)$.
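A naive combination of the central values quoted in the previous subsections already illustrates how $R_{l23}$ is built; the quoted result below, $1.004\pm0.007$, instead comes from the constrained global fit described next, so the following sketch is only indicative:
\begin{verbatim}
# Naive central-value illustration of R_l23 (no correlations)
Vus_Kl3          = 0.2246    # |Vus| from Kl3
Vus_over_Vud_Kl2 = 0.2321    # |Vus|/|Vud| from Kl2
Vud_0plus        = 0.97418   # |Vud| from 0+ -> 0+ beta decays

print(Vus_over_Vud_Kl2 * Vud_0plus / Vus_Kl3)  # ~1.007
\end{verbatim}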
In the case of scalar currents induced by a charged Higgs,
the unitarity relation between
$|V_{ud}|$ extracted from $0^+\to0^+$ nuclear beta decays and $|V_{us}|$ extracted from
$K_{\ell3}$ remains valid as long as the form factors are experimentally determined.
This constraint, together with the experimental information on $\log C^{MSSM}$,
can be used in the global fit to improve the accuracy of the determination
of $R_{l23}$, which in this scenario turns out to be
$\left. R_{l23} \right|^{\rm exp}_{\rm scalar} = 1.004 \pm 0.007$.
Here $(f_K/f_\pi)/f_+(0)$ has been fixed from the lattice. This ratio
is the key quantity to be improved in order to reduce the
present uncertainty on $R_{l23}$.
This measurement of $R_{l23}$ can be used to set bounds
on the charged Higgs mass and $\tan\beta$.
Figure \ref{fig:higgskmunu} shows the excluded region at 95\%
CL in the $M_H$--$\tan\beta$ plane.
The measurement of BR($B \to \tau \nu$)
can be also used to set a similar bound in the $M_H$--$\tan\beta$ plane.
While $B\to\tau \nu$ can exclude quite an extensive region of this plane,
there is an uncovered region in the exclusion corresponding
to a destructive interference between the charged-Higgs
and the SM amplitude. This region is fully covered by the $K\to \mu \nu$ result.
\begin{figure}[t]
\centering
\includegraphics[width=0.7\linewidth]{higgskmnu.eps}
\caption{\label{fig:higgskmunu}
Excluded region in the charged Higgs mass-$\tan\beta$ plane.
The region excluded by $B\to \tau \nu $ is also indicated.}
\end{figure}
\subsection{A test of lattice calculation}
\label{sec:CTtest}
The vector and scalar form factors $f_{+,0}(t)$ are analytic functions in the complex
$t$--plane, except for a cut along the positive real axis, starting at the
first physical threshold $t_{\rm th} = (m_K+m_\pi)^2$,
where they develop discontinuities. They are real for $t<t_{\rm th}$.
Cauchy's theorem implies that $f_{+,0}(t)$ can be written as
a dispersive integral along the physical cut where all possible on-shell
intermediate states contribute to its imaginary part.
A number of subtractions is needed to make the integral convergent.
Particularly appealing is an improved dispersion
relation recently proposed
where two subtractions are performed at $t=0$
(where by definition, $\tilde f_0(0)\equiv 1$) and at
the so-called Callan-Treiman point $t_{CT} \equiv (m_K^2-m_\pi^2)$.
Since the Callan-Treiman relation fixes the value of scalar form factor at $t_{CT}$
to the ratio $(f_K/f_\pi)/f_+(0)$,
the dispersive parametrization for the scalar form factor
allows to transform the available measurements of the scalar form factor
into a precise information on $(f_K/f_\pi)/f_+(0)$, completely independent of
the lattice estimates.
Figure \ref{fig:CTtest} shows the values of $f_+(0)$
determined from the scalar form factor slope measurements,
obtained using the dispersive parametrization and the Callan-Treiman relation
together with $f_K/f_\pi=1.189(7)$.
The value $f_+(0)=0.964(5)$ from UKQCD/RBC is also shown.
\begin{figure}[t]
\centering
\includegraphics[width=0.7\linewidth]{cttest.eps}
\caption{Values for $f_+(0)$ determined from the scalar form factor slope using
the Callan-Treiman relation and $f_K/f_\pi=1.189(7)$. \label{fig:CTtest} }
\end{figure}
\section{The special role of $\Gamma(K_{e2})/\Gamma(K_{\mu2})$}
\label{ke2}
The ratio $R_K = \Gamma({K_{e2}})/\Gamma({K_{\mu2}})$ can be precisely calculated
within the Standard Model.
Neglecting radiative corrections, it is given by
$
R_K^{(0)} = \frac{m_e^2}{m_\mu^2} \: \frac{(m_K^2 - m_e^2)^2}{(m_K^2 - m_\mu^2)^2}
= 2.569 \times 10^{-5},
$
and reflects the strong helicity suppression of the electron channel.
Radiative corrections have been computed with effective theories,
yielding the final SM prediction
$
R^{\rm SM}_K = R_K^{(0)} ( 1 + \delta R_K^{\rm{rad.corr.}})
= 2.569 \times 10^{-5} \times ( 0.9622 \pm 0.0004 ) =(2.477 \pm 0.001) \times 10^{-5}.
$
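The tree-level number quoted above follows from the lepton and kaon masses alone, as the minimal sketch below verifies (with PDG masses as external inputs):
\begin{verbatim}
# Tree-level R_K^(0) from the masses (PDG values, external inputs)
m_e, m_mu, m_K = 0.5109989, 105.6584, 493.677   # MeV

RK0 = (m_e**2 / m_mu**2) \
    * ((m_K**2 - m_e**2) / (m_K**2 - m_mu**2))**2
print(RK0)   # ~2.569e-5
\end{verbatim}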
Because of the helicity suppression within the SM, the
$K_{e2}$ amplitude is a prominent candidate
for possible sizable contributions from physics beyond the SM. Moreover,
when normalizing to the $K_{\mu2}$ rate, we obtain an extremely precise
prediction of the $K_{e2}$ width within the SM. In order to be visible
in the $K_{e2}/K_{\mu2}$ ratio, the new physics must violate lepton
flavor universality.
Recently it has been pointed out that in a supersymmetric framework
sizable violations of lepton universality can be expected
in $K_{l2}$ decays.
At the tree level, lepton flavor violating terms are forbidden in the MSSM.
However, these appear at the one-loop level, where an effective
$H^+ l \nu_\tau$ Yukawa interaction is generated.
The non-SM contribution to $R_K$ can be written as
$
R_K^{\rm{LFV}} \approx R_K^{\rm{SM}} \left[ 1 + \left( \frac{m_K^4}{M_{H^\pm}^4} \right) \left( \frac{m_\tau^2}{m_e^2} \right) |\Delta_{13}|^2 \tan^6 \beta \right],
$
where the lepton-flavor-violating coupling $\Delta_{13}$, being generated at the
loop level, could reach values of $\mathcal{O}(10^{-3})$.
For moderately large $\tan \beta$ values, this contribution may therefore
enhance $R_K$ by up to a few percent.
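To get a feeling for the size of the effect, the correction term can be evaluated for representative parameter values; the benchmark numbers below are illustrative assumptions, not fit results:
\begin{verbatim}
# Illustrative size of the LFV correction to R_K
m_K, m_tau, m_e = 0.493677, 1.77684, 0.5109989e-3  # GeV
M_H, tan_beta, Delta13 = 500.0, 40.0, 5e-4         # assumed benchmark

shift = (m_K**4 / M_H**4) * (m_tau**2 / m_e**2) \
        * Delta13**2 * tan_beta**6
print(shift)   # ~1e-2, i.e. a percent-level enhancement
\end{verbatim}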
Experimental knowledge of $K_{e2}/K_{\mu2}$ has been poor so far.
The current world average
of $R_K = \BR{K_{e2}}/\BR{K_{\mu2}}= (2.45 \pm 0.11) \times 10^{-5}$ dates back to three
experiments
of the 1970s~\cite{PDG06} and has a precision of about 5\%.
Three new preliminary measurements were reported by NA48/2 and KLOE
(see~\cite{Flavia2008} for details).
Both the KLOE and NA48/2 measurements are inclusive with respect to the
final-state radiation contribution due to bremsstrahlung.
Combining these new results with the current PDG value yields a world average of
$R_K = ( 2.457 \pm 0.032 ) \times 10^{-5}$, with a relative error of $1.3\%$,
a factor of three more precise than the previous world average.
This value is in very good agreement with the SM expectation
and gives strong constraints
for $\tan \beta$ and $M_{H^\pm}$, as shown in Fig.~\ref{fig:susylimit}.
For values of $\Delta_{13} \approx 5 \times 10^{-4}$
and $\tan \beta > 50$, the charged-Higgs mass is pushed
above 1000~GeV/$c^2$ at 95\% CL.
\begin{figure}[t]
\centering
\includegraphics[width=0.7\linewidth]{rklim.eps}
\caption{Exclusion limits at $95\%$ CL on $\tan \beta$ and the charged
Higgs mass $M_{H^\pm}$
from $|V_{us}|_{K\ell2}/|V_{us}|_{K\ell3}$ for different
values of $\Delta_{13}$. }
\label{fig:susylimit}
\end{figure} |
0812.0216 | \section{Introduction}
In an earlier communication \cite{fecr} we had introduced
the augmented space recursion (ASR) \cite{asr} based orbital peeling method (OP) \cite{burke} as a useful and numerically accurate method for the
calculation of the `pair energies'. This allowed us to map the binary alloy problem onto an
effective Ising-like model and study the stability or otherwise of different ordered phases that might
arise if the disordered alloy is cooled below some critical temperature. The aim of this paper is to extend the use of this
method to study MnCr alloys.
These pair
energies are small differences of relatively large electronic energies and a brute force calculation
is likely to yield these small numbers with errors which are usually of the order of or larger than the numbers
themselves. Direct estimations like the OP method are therefore appropriate in these situations.
Our earlier analysis of FeCr, FePd and PdV alloys illustrated the success of the ASR-OP method in
predicting the low temperature phases of these binary alloys and their stability.
The analysis involved ordering and mixing energies obtained from a generalized perturbation framework (GPM) \cite{duc,turchi} and the Fourier transform $V(\vec{k}\/)$ of the pair energies which are related to diffuse scattering
intensities as described by Krivoglaz, Clapp and Moss (KCM) \cite{cm}. The basic ideas behind these
methods have been described in detail in the references quoted above.
For the sake of completeness, we shall briefly dwell upon the main points used in our analysis.
\begin{figure}
\centering
\resizebox{4in}{3.5in}{\includegraphics{fig0.eps}}
\caption{\label{fig1}(Color Online) Densities of states for a series of alloy compositions
for 50-50 MnCr alloy.}
\end{figure}
The basis of our subsequent analysis is the electronic structure calculation on the MnCr alloy system. We
have chosen the tight-binding linear muffin-tin orbitals (TB-LMTO) approach \cite{tblmto} which
provides a first-principles, density functional based
tight-binding sparse representation of the Hamiltonian. Such a Hamiltonian is appropriate for
describing the electronic structure of random substitutional
alloys in which the configurational fluctuations due to disorder are {\sl local}.
The sparseness of the TB-LMTO Hamiltonian is a suitable input for the recursion method.
Realistic models for disordered alloys require us to go beyond the single-site mean-field approximations and treat effects of configuration fluctuations more accurately. This we shall attempt through the ASR \cite{tf, sasnet}. As mentioned earlier, the pair-energies have to be accurately determined. We shall adopt the recursion
based OP for such calculations.
The approach followed here will provide
a unified recursion based methodology to address such problems.
In this communication our choice of system is the MnCr alloy. We shall first obtain the configuration averaged density of states for a series of alloy compositions using the TB-LMTO-ASR. The ASR has been discussed in great detail earlier. We shall only stress here that it generalizes the single-site mean-field approximations to include the effect of the configuration fluctuations of the
near neighbourhood of a site and yields configuration averaged Green functions which retain their analytic and lattice translational symmetries even after approximation. The results for the 50-50 alloy are shown in
Fig. \ref{fig1}. The converged potential parameters are input into the OP calculations that follow.
We shall organize the paper as follows : in section 2 we shall describe the OP-ASR method for obtaining the effective pair energies, the expression for the ordering and mixing energies and
the analysis for the Fourier transform of the pair energies. In the section 3 we shall analyze
the results for the equi-atomic MnCr alloy. Concluding remarks will be given in the final section.
\section{Methodology}
\subsection{Total energy and pair energies}
The simplest model which analyzes the emergence of long ranged order from a disordered
phase is the Ising model. Our approach will attempt to map the energetics of the binary alloy
problem onto an equivalent `spin-half' Ising model.
We need a derivation of the lowest configurational
energy for the alloy system in terms of effective
multi-site interactions, in particular ``effective pair
energies" (EPE) \cite{epi}. We need to accurately
and reliably determine the EPE.
Our approach will be to start with the
disordered phase, set up a perturbation in the form of
concentration fluctuations associated with an ordered phase and, from its
response,
study whether the alloy can sustain such a perturbation. This
approach includes the generalized perturbation method (GPM) \cite{kn:gpm}, the
embedded cluster method (ECM) \cite{kn:ecm} and the concentration wave approach
\cite{kn:cwm}.
We shall begin with a homogeneously disordered alloy A$_x$B$_{1-x}$, where every site is occupied by either an A or a B type of atom with probabilities proportional to their concentrations. We define the `occupation' variable $n_{\vec{R}}$ to be a random variable which takes on the values 1 and 0 according to
whether the site labeled $\vec{R}$ is occupied by an A or a B atom. Its average
$\ll\!\! n_{\vec{R}}\!\!\gg = x$. This perturbative approach expands the total internal energy of a particular atomic configuration as follows :
\begin{equation}
E\ =\ V^{(0)} + \sum_{\vec{R}} V^{(1)}_{\vec{R}} \ \delta n_{\vec{R}} + \frac{1}{2}\sum_{\vec{R}}\sum_{\vec{R'} \ne \vec{R}} V^{(2)}_{\vec{R}\vec{R'}} \ \delta n_{\vec{R}}\ \delta n_{\vec{R'}} + \ldots
\label{pair}
\end{equation}
\noindent here $\delta n_{\vec{R}} = n_{\vec{R}} - x$ and $\ll\!\! \delta n_{\vec{R}}\!\!\gg = 0$.
If the configuration is homogeneously disordered then it immediately follows that $\ll\!\! E\!\!\gg = E_{dis} = V^{(0)}$.
From the above definition we can interpret the other two expansion terms as follows : if $E^I$ is the configuration averaged total energy of a configuration in which any arbitrary site labeled $\vec{R}$ is
occupied by an atom of type $I$ and the other sites are randomly occupied, and
$E^{IJ}$ is the averaged total energy of another configuration in which the sites $\vec{R}$ and $\vec{R'}$ are occupied by atoms of the types I and J respectively and all other sites
are randomly occupied, then from equation (\ref{pair}) it follows that :
\begin{equation} V^{(1)}_{\vec{R}} \ =\ E^A - E^B \qquad V^{(2)}_{\vec{R}\vec{R'}} \ =\ E^{BB}+E^{AA}-E^{AB}-E^{BA}\label{eq2}\end{equation}
The one-site energy $V^{(1)}_{\vec{R}}$ is unimportant for bulk ordered structures emerging from disorder. It is
important for emergence of inhomogeneous disorder at surfaces and interfaces \cite{indra}. The pair energies
$V^{(2)}_{\vec{R}\vec{R'}}$ are the most important factors governing emergence of bulk ordering.
The interpretation of equation (\ref{eq2}) immediately allows us to introduce a method to obtain the
pair potentials directly rather than calculate the total energies and then subtract them. Since they are small differences (of the order of mRyd) of large energies (of the order of $10^3$ Ryd), a direct calculation will produce errors larger than the differences themselves. The orbital peeling method (OP)\cite{burke} based on recursion \cite{hhk} was introduced by Burke precisely to calculate such small differences, albeit in a different situation.
The total energy of a solid may be separated into two terms : a
one-electron band contribution E$_{BS}$ and an electrostatic
term E$_{ES}$ which includes several contributions : the Coulomb
repulsion of the ion cores, the correction for double counting
terms due to electron-electron interaction in E$_{BS}$ and a
Madelung energy in case the model of the alloy has atomic spheres which are not
charge neutral. The
renormalized cluster interactions defined in equation (\ref{pair})
should, in principle, include both E$_{BS}$ and E$_{ES}$
contributions. Since the renormalized cluster interactions
involve the difference of cluster energies, it is usually assumed
that the electrostatic terms cancel out and only the band
contribution is important. Obviously, such an
assumption is not rigorously true, but it has been shown to be
approximately valid in a number of alloy systems \cite{turchi}.
We shall accept such an assumption and our stability arguments
starting from the disordered side, will be based on the band
structure contribution alone.
The effective pair interactions can be related to
the change in the configuration averaged local density of states :
\begin{equation} V_{\vec{R}\vec{R'}}^{(2)}\enskip = \enskip \int_{-\infty}^{E_{F}} dE\ (E-E_{F})\ \Delta n(E)
\label{eq3}\end{equation}
where $\Delta n(E)$ is given by :
\[
\Delta n(E) = -\frac{1}{\pi}\ \Im m\ \sum_{IJ}^{AB} \mbox{Tr}
\ll\!\! (EI-H^{(IJ)})^{-1}\!\!\gg \xi_{IJ}
\]
$\xi_{IJ}= 2\delta_{IJ}-1$, i.e.\ it is $+1$ when $I=J$ and $-1$ when $I\not= J$.
There are four possible pairs $IJ$~: AA, AB, BA and BB. H$^{(IJ)}$
is the Hamiltonian of a system where all sites except $\vec{R}$ and $\vec{R'}$ are
randomly occupied. The sites labeled $\vec{R}$ and $\vec{R'}$ are occupied by
atoms of the type $I$ and $J$. This change in the averaged local density of
states can be related to the generalized phase shift $\eta$(E) through the
equation :
\[
\Delta n(E) \ =\ {{d\eta (E)} \over {dE}} = \frac{d}{dE}\ \left\{\log {{\det \ll G^{AA}(E)\gg \det\ll G^{BB}(E)\gg} \over {\det
\ll G^{AB}(E)\gg \det\ll G^{BA}(E)\gg}}\right\}
\]
G$^{IJ}(E)$ is the resolvent of the Hamiltonian H$^{(IJ)}$. The generalized phase shift $\eta(E)$ can be calculated
following the orbital peeling method of Burke \cite{burke}.
We shall quote only the final result : The pair energy function is defined as :
\begin{eqnarray}
f_{\vec{R}\vec{R'}}(E) & = & f(\vec{R}-\vec{R'}) = \sum_{IJ}^{AB} \sum_{\alpha = 1}^{L_{\rm max}}
\xi_{IJ} \int_{-\infty}^{E}dE'\ (E'-E) \log \ll G_{\alpha}^{IJ}(E')\gg\nonumber\\
& = & \sum_{IJ} \sum_{\alpha = 1}^{L_{\rm max}}
\left[ \sum_{k=1}^{p-1} Z^{\alpha,IJ}_{k} - \sum_{k=1}^{p}
P^{\alpha,IJ}_{k} + \left( N^{\alpha,IJ}_{P} -
N^{\alpha,IJ}_{Z} \right) E \right]
\end{eqnarray}
$\ll G_{\alpha}^{IJ}\gg (E)$ denote the configuration averaged resolvents in
which the orbitals from $L$ = 1 to $(\alpha-1)$ are deleted.
$Z^{\alpha,IJ}_{k}$ and $P^{\alpha,IJ}_{k}$ are its zeros and
poles and
$N^{\alpha,IJ}_{Z}$ and $N^{\alpha,IJ}_{P}$ are the number of
such zeros and poles of $\ll G_{\alpha}^{IJ}(z)\gg $ below $E$. The zeros and poles are
obtained directly from the recursion coefficients for the averaged resolvents and these
are obtained from the TB-LMTO-ASR.
This method of zeros and poles enables one to
carry out the integration in equation (\ref{eq3}) easily, avoiding the multi-valuedness of
the integrand that arises when the integral is evaluated by
parts.
The pair energy is then given by : $V_{\vec{R}\vec{R'}}^{(2)} = f_{\vec{R}\vec{R'}}(E_F) =
V^{(2)}(\vec{R}-\vec{R'})$. The last expression follows if the background system, in which
the A and B type atoms are immersed at $\vec{R}$ and $\vec{R'}$, is homogeneously
disordered.
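As a toy illustration of this bookkeeping, the contribution of a single $(\alpha,IJ)$ channel can be evaluated from its zeros and poles below a given energy; the numbers below are placeholders, not recursion output for MnCr:
\begin{verbatim}
# Toy orbital-peeling bookkeeping:
#   f(E) = sum(zeros < E) - sum(poles < E) + (N_P - N_Z) * E
def op_channel(zeros, poles, E):
    z = [x for x in zeros if x < E]
    p = [x for x in poles if x < E]
    return sum(z) - sum(p) + (len(p) - len(z)) * E

# Hypothetical zeros/poles (in Ryd) for one (alpha, IJ) channel
print(op_channel([-0.30, -0.05], [-0.35, -0.12, -0.01], E=0.0))
\end{verbatim}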
\subsection{The Krivoglaz-Clapp-Moss analysis}
Philhours and Hall \cite{ph} have suggested, and Clapp and Moss \cite{cm} have formally shown that a sufficient (but not necessary) condition for a stable ground state is that the
wave vector of the concentration waves corresponding to an ordered phase lie at the positions of the minima of the Fourier transform of the pair energy function
\[ V(\vec{k}) = \sum_{\vec{R}-\vec{R'}} \exp\left\{i\vec{k}\cdot(\vec{R}-\vec{R'})\right\}\ V^{(2)}(\vec{R}-\vec{R'})\]
The above statement follows from the expression for the inverse susceptibility which measures the
response of the disordered system to the concentration fluctuation perturbation described above.
\[
\chi^{-1}(\vec{k}) \propto 1+x(1-x)\beta V_{\rm eff}(\vec{k})
\]
In a zeroth approximation $V_{\rm eff}(\vec{k}\/) = V(\vec{k}\/)$. Corrections to the effective
pair function have been described in detail by Chepulskii and Bugaev \cite{cb}.
We have used here the Ring Approximation suggested by the authors as the one most
suitable for our analysis :
\begin{equation}
V_{\rm eff}(\vec{k}) =V(\vec{k}) - (\beta/2)(1-2x)^2 \ \int\frac{d^3\vec{q}}{8\pi^3}\ F(\vec{q})F(\vec{k}-\vec{q})
\end{equation}
\noindent where
\[ F(\vec{q}) = \frac{V(\vec{q})}{1+x(1-x)\beta V(\vec{q})}
\]
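A minimal sketch of the lattice Fourier transform entering this analysis, evaluated over the first few bcc shells with placeholder shell pair energies (not the calculated MnCr values), reads:
\begin{verbatim}
import itertools, cmath, math

V2 = {1: 1.0, 2: 0.4, 3: 0.15, 4: 0.05}  # placeholder shell energies

# bcc vectors (corner + body-centre sites), units of the lattice
# parameter a
sites = []
for i, j, k in itertools.product(range(-2, 3), repeat=3):
    for off in ((0.0, 0.0, 0.0), (0.5, 0.5, 0.5)):
        R = (i + off[0], j + off[1], k + off[2])
        r = math.sqrt(sum(c * c for c in R))
        if 0 < r < 1.8:                  # covers the first few shells
            sites.append((R, r))

# Assign vectors to shells by distance
dists = sorted({round(r, 6) for _, r in sites})
shell = {d: n + 1 for n, d in enumerate(dists)}

def V_of_k(kvec):                        # k in units of 2*pi/a
    s = 0j
    for R, r in sites:
        n = shell[round(r, 6)]
        if n in V2:
            phase = 2 * math.pi * sum(a * b for a, b in zip(kvec, R))
            s += V2[n] * cmath.exp(1j * phase)
    return s.real

# With an ordering-type (positive) nn pair energy, the minimum sits at
# the B2 wave vector (0,0,1) rather than at the zone centre:
print(V_of_k((0, 0, 1)), V_of_k((0, 0, 0)))   # -5.0  13.4
\end{verbatim}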
\subsection{Ordering and mixing energies}
Finally, the GPM expression also gives the ordering energy~:
\[ \Delta E_{ord} = \frac{1}{2} \sum_n V_{0n}^{(2)} Q_n \]
where the sum runs over the nearest-neighbour shells $n$ of an arbitrarily chosen site (which we label 0) and
$Q_n = (x/2)(N_n^{BB} - xN_n)$; here $N_n^{BB}$ is the number of BB pairs and
$N_n$ the total number of pairs in the
$n$-th nearest-neighbour shell of 0.
With reference to the total energies of the pure constituents, in the approximation where we
only restrict ourselves to pair energies and ignore all three body energies and higher, the so-called mixing energy is given by :
\[ \Delta E_{mix} = - \frac{1}{2} x(1-x) \sum_n\ N_n V_{0n}^{(2)} \]
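The sketch below combines the weights $Q_n$ listed in Table \ref{tab1} with placeholder shell pair energies to evaluate the ordering energies; under these assumed inputs (an ordering-type nearest-neighbour pair energy) B2 comes out lowest, with ST1 next, mirroring the discussion in the results section:
\begin{verbatim}
# Delta_E_ord = 1/2 * sum_n V2_n * Q_n, weights Q_n from Table 1;
# the shell pair energies V2 are placeholders, not the MnCr values.
V2 = [1.0, 0.4, 0.15, 0.05]            # shells 1..4, mRyd (assumed)

Q = {"Segregated": [ 1.00,  0.75,  1.50,  3.00],
     "B2":         [-1.00,  0.75,  1.50, -3.00],
     "B32":        [ 0.00, -0.75,  1.50,  0.00],
     "B11":        [ 0.00,  0.25, -0.50,  0.00],
     "ST1":        [-0.50,  0.25,  0.00,  0.50],
     "ST2":        [ 0.00, -0.25, -0.50,  0.00],
     "ST3":        [ 0.50,  0.25,  0.00, -0.50]}

for name, q in Q.items():
    dE = 0.5 * sum(v * w for v, w in zip(V2, q))
    print(f"{name:10s} {dE:+.4f}")     # B2 is the most negative
\end{verbatim}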
The averaging procedure ASR has been described in great detail in many earlier papers and we refer the reader to
the review \cite{tf} in which the method and its relation to the CPA and its generalizations has been discussed
extensively.
\section{Results and Discussion}
We have calculated the composition dependent pair energy functions using the TB-LMTO-ASR coupled
with the orbital peeling technique, and the result for the equi-atomic composition is shown in Fig. \ref{fig2}.
The nearest neighbour pair energy function $f_1(\vec{R}-\vec{R'},E)$ shows the characteristic shape of a positive lobe, indicating
ordering, near the position of half filling, flanked by negative lobes indicating segregation near
empty and complete filling fractions.
\begin{figure}[t]
\centering
\vskip 1cm
\resizebox{3in}{2.5in}{\includegraphics{fig1.eps}}
\caption{\label{fig2}(Color Online) Pair functions calculated for Mn$_{50}$Cr$_{50}$, variation shown against E-E$_F$, calculated from TB-LMTO-ASR-OP.
(top) Nearest neighbour pair function at a distance $\sqrt{3}a/2$, $a$ being the equilibrium lattice parameter (bottom) second, third and fourth nearest neighbour pair functions at distances $a$, $\sqrt{2}a$ and $\sqrt{3}a$}
\end{figure}
We should note that in our approach, both the pair energy function itself and the position of the Fermi energy depend upon the composition of the alloy and its band filling.
This is in contrast to some analyses (like the Connolly--Williams approach) which depend on similar, but composition-independent, pair energy functions.
Fig. \ref{fig3} shows the pair energies $V^{(2)}(\vert \vec{R}-\vec{R'}\vert)$ for the equi-atomic composition.
\begin{figure}[t]
\centering
\vskip 1cm
\resizebox{3in}{2in}{\includegraphics{fig2.eps}}
\caption{\label{fig3} Pair energies calculated for MnCr at a 50-50 composition.}
\end{figure}
\begin{table}
\centering
\caption{\label{tab1}Weights for different neighbouring shells for seven different bcc based equi-atomic superstructures.}
\begin{tabular}{ccccc}\hline\hline
& \multicolumn{4}{c}{Neighbouring shells} \\ \hline
Structure \phantom{X}&\phantom{X} 1 \phantom{X}&\phantom{X} 2 \phantom{X}&\phantom{X} 3 \phantom{X}&\phantom{X} 4 \phantom{X} \\
\hline
Segregated & 1.00 & 0.75 & 1.50 & 3.00 \\
B2 & -1.00 & 0.75 & 1.50& -3.00 \\
B32 & 0.00 & -0.75 & 1.50 & 0.00 \\
B11 & 0.00 & 0.25 & -0.50 & 0.00 \\
ST1 & -0.50 & 0.25 & 0.00 & 0.50 \\
ST2 & 0.00 & -0.25 & -0.50 & 0.00 \\
ST3 & 0.50 & 0.25 & 0.00 & -0.50 \\ \hline
\phantom{X}&&&& \\
\end{tabular}
\end{table}
\begin{figure}[b!]
\centering
\vskip 2cm
\rotatebox{0}{\resizebox{3.5in}{2.5in}{\includegraphics{fig3.eps}}}
\caption{(Color Online) \label{str} Ordering energies for seven different structures for MnCr based on Table 1.}
\end{figure}
The behaviour of the pair energies for MnCr indicates ordering tendencies up to the fourth nearest-neighbour
shell.
The pair energies rapidly converge to zero with distance. In fact, although
we had calculated the pair energies up to the seventh nearest-neighbour shell, their values beyond the fourth shell were smaller than the error bars of our
calculational method; these numbers were therefore not reliable and were not used in our analysis. The same is true for the ordering energies of the seven different
structures and superstructures based on the body-centered cubic lattice. We have, therefore, calculated
the ordering energies with contributions only up to the fourth neighbouring shell. Table \ref{tab1} gives the weights $Q_n$ for the seven bcc based structures
required to obtain the ordering energies in this alloy system. These superstructures are described in detail by Finel and Ducastelle \cite{fd}.
Based on the Table \ref{tab1} we present the ordering energies for the same seven bcc based structures for MnCr in Fig. \ref{str}.
Unlike our earlier study of FeCr, which indicated segregation and possible ordering in the ST3 superstructure, for MnCr ordering
in the B2 structure, with possible competition from the ST1 superstructure, is energetically most probable. The contrast between
the two alloys FeCr and MnCr is interesting : Fe segregates with Cr while Mn orders. Stainless-steel alloys, a class of
which are the ternary FeCrMn alloys, should then exhibit competition between segregation and ordering. We intend to study the ternary compositions
subsequently.
\begin{figure}
\centering
\rotatebox{270}{\resizebox{2.5in}{2.5in}{\includegraphics{fig5a.ps}}}
\rotatebox{270}{\resizebox{2.5in}{2.5in}{\includegraphics{fig5b.ps}}}\\
\rotatebox{270}{\resizebox{2.5in}{2.5in}{\includegraphics{fig4a.ps}}}
\rotatebox{270}{\resizebox{2.5in}{2.5in}{\includegraphics{fig4b.ps}}}
\caption{(Color Online) \label{fig4} $V_{\rm eff}(\vec{k})$ and contours on the plane bounded by (top) (001) and (100)
and (bottom) (001) and (110)}
\end{figure}
Finally we shall examine the Fourier transform $V(\vec{k})$ of the pair energies and carry out a Clapp-Moss type of analysis. The contour diagrams
of the Fourier transform $V(\vec{k})$
are plotted in Fig. \ref{fig4} for the (001) and (1$\bar{1}$0) planes. The minimum occurs at $\vec{k} = (001)$. This is indicative
of a possible B2 type of ordering, as indicated by the ordering energy analysis.
In order to ascertain whether the ordered state associated with the minimum is stable compared to the segregated species, we have to carry out
a much more detailed analysis including the contribution of the energy of mixing.
The mixing energy for the B2 structure is 0.16412 mRyd/atom.
This indicates that ordering is stable against segregation.
We should comment here that the above discussion is based only on the electronic contribution
to the diffuse scattering intensity. At the temperatures
that we are interested in there are contributions from the vibrational excitations in the system.
Our aim here was to indicate the possibility of ordering tendencies
in MnCr rather than accurate estimation of energetics and transition temperatures. In any
detailed and accurate statistical mechanical calculations we must include the contribution of vibrational
excitations to the free energy of the alloy.
\section{Conclusion}
In a previous work \cite{fecr} we had introduced and examined the suitability and accuracy of the
Augmented Space Recursion (ASR) based Orbital Peeling (OP) method for the generation of pair energies.
In this communication we have extended these ideas to the body-centered cubic MnCr alloy.
We have looked at the phase stability of the equi-atomic alloy. Unlike our previous work on
FeCr, FePd and PtV alloys, for MnCr we had no earlier theoretical work to compare with.
However, since our applications to FeCr, FePd and PtV gave us
satisfactory results, in good agreement with
experimental evidence, we have confidence in our present results. With FeCr, this work will
form the background of our extension of this work to ternary FeMnCr stainless steel alloys.
The recursion and the ASR are now available with both relativistic
corrections (including spin-orbit terms)\cite{huda} and non-collinear magnetism \cite{bergman,
tarafder}. Since we have earlier proposed the ASR as an
analyticity-preserving generalization of the single-site mean-field coherent potential
approaches, this work will provide further incentive to extend the use of the ASR to problems
beyond simple density-of-states calculations and to problems where relativistic corrections
and non-collinear magnetism play significant roles. |
2005.03095 | \section{Introduction}
Facility location games lie in the intersection of AI, game theory, and social choice theory. The
basic version of the problem has been widely studied in the literature \cite{Mo80, barbera1994characterization, schummer2002strategy}.
In this setting, a central planner has to locate a facility on a real line based
on the \emph{reported} locations of selfish agents who want to be as close as possible
to the facility. The goal of the planner is to locate the facility in a way that the
sum of the utilities of the agents is maximized.~\footnote{In~\cite{PT09} the objective
was to minimize the social cost.}
However, the agents can \emph{misreport} their locations in order to manipulate
the planner and increase their utility.
One main objective of the planner is to design procedures to locate the facility,
called \emph{mechanisms}, that incentivize the agents to report their true
locations, i.e., the mechanisms are \emph{strategy-proof}.
When monetary payments are not allowed, that is, when the planner cannot pay the agents
or demand payments from them, it is not always possible to design mechanisms
that implement an optimal solution and remain strategy-proof.
Thus, the goal is to design mechanisms that \emph{approximately} maximize an
objective function under the constraint that they are strategy-proof.
The term \emph{approximate mechanism design without money}, introduced
by~\citeauthor{PT09}, is usually deployed for problems like the one described above.
\citeauthor{PT09} studied \emph{homogeneous} facility location games, where
one, or two, \emph{identical} facilities had to be placed on a real line and every
agent wanted to be as close as possible to any of them. In this setting, the
agents were reporting to the planner a point on the line and the objectives
studied were the maximization of the \emph{social welfare} or the
\emph{minimum utility} among the agents.
In many real-life scenarios, though, both facilities and the preferences of the agents
are \emph{heterogeneous}; every facility serves a different need and every agent
has potentially different needs from the others.
Consider, for example, the case where the government is planning to build a school
and a factory. Citizens' preferences for these facilities might significantly
differentiate. Those who work at the factory and also have children that go to school
wish both facilities to be built close to their homes. Citizens without children might
want the school to be built far away because of the noise. Finally, those who do not
work at the factory prefer its location to be far from their home to avoid
emitted pollution.
The example above shows that an agent might want to be \emph{close} to a facility,
be \emph{away} from a facility, or be \emph{indifferent} about its presence.
\citeauthor{FJ15}~\cite{FJ15} studied 1-facility heterogeneous
games where each agent reported his preferred location on the line, while it was
known to the planner whether he wanted to be close to, or away from, the facility.
\citeauthor{ZL15} \cite{ZL15} extended the model of~\cite{FJ15}
for heterogeneous 2-facility games and studied the social utility objective for
several different scenarios of the information the planner knows.
\citeauthor{SV16}~\cite{SV16} studied heterogeneous 2-facility games
on discrete networks. In their setting, each agent is located on a node of a
graph and either is indifferent or wants to be close to each facility and the
planner knows the location of every agent but not their preferences for the
facilities.
In this paper, we extend the aforementioned models and study heterogeneous
$k$-facility location games, which we simply call $k$-facility games.
Our main focus is to maximize the minimum utility among all the agents, termed
\textsc{Egalitarian}\xspace. As a byproduct, we derive results for the social welfare, termed
\textsc{Utilitarian}\xspace, and the recently proposed minimum \emph{happiness} objective,
termed \textsc{Happiness}\xspace.
\textsc{Happiness}\xspace, which is reminiscent of the proportionality notion in resource allocation problems,
is a fairness criterion for facility location problems introduced in~\cite{MLYZ}.
The happiness of an agent is the ratio of the utility he gets under the chosen
locations of the facilities to the maximum utility he could get under any locations.
To the best of our knowledge, there is no prior work on this model. We note that
while our model is a natural extension of the aforementioned models
almost none of those results apply in our case.
\subsection{Our contributions}
We study several questions regarding heterogeneous $k$-facility games; our results are summarized in Table \ref{tab:table-of-results}.
Firstly, we focus on the case where there is only one facility to be located.
Feigenbaum and Sethuraman~\cite{FJ15} have proven that there is no deterministic
strategy-proof mechanism with bounded approximation for \textsc{Egalitarian}\xspace for this case where
the preferences of the agents are known, and their locations are unknown. We study
the complementary case where the locations are known and the preferences are not
known to the planner. We prove that in this case, the mechanism that places the
facility on an optimal location for the reported preferences of the agents is
strategy-proof. In fact, our result is much stronger since it holds for any
combination of the following relaxations.
\begin{itemize}
\item The utility function of every agent can be any function that is monotone
with respect to the distance between the location of the agent and the location of the
facility. Thus, if an agent wants to be close to the facility, his utility decreases
with the distance, and if he wants to be away from the facility, his utility increases.
\item The domain $D_i$ of every agent's possible locations and the domain $S$ of allowed
locations for the facility can be any subset of $\ensuremath{\mathbb{R}}\xspace^d$. In addition, it can be the case
that $S \cap D_i = \emptyset$ for every agent $i$.
\end{itemize}
Next, we focus on the \textsc{Egalitarian}\xspace objective. We prove that there is no optimal deterministic
strategy-proof or strategy-proof in expectation mechanism for $k$-facility games
even for instances with $k=2$, two agents, and known locations for the agents.
We complement these results by deriving inapproximability bounds for deterministic
and randomized strategy-proof mechanisms.
The techniques we use are fundamentally different from~\cite{SV16},
since in our model, the facilities can be located anywhere on the segment without
any constraint, making the analysis more complex.
Then, we focus on $2$-facility games and we propose strategy-proof mechanisms
that achieve constant approximation ratios for the \textsc{Egalitarian}\xspace objective even when both locations and preferences are private information.
All of our mechanisms are \emph{simple} and require \emph{limited communication}. By limited communication, we mean that our mechanisms require only a constant number of bits of information from every agent.
To the best of our knowledge, this is the first paper to study the communication complexity
on facility location problems and how communication affects approximation.
We propose two deterministic and two randomized mechanisms. The first deterministic
mechanism, called \texttt{Fixed}\xspace, requires zero communication between the planner and the
agents. On any instance, \texttt{Fixed}\xspace locates the facilities symmetrically away from the middle
of the segment without requiring any information from the agents. \hl{Although this mechanism
might seem naive and probably not useful in practice, it achieves a constant approximation ratio; hence it can be seen as the absolute benchmark for any mechanism.} Furthermore, we prove that \texttt{Fixed}\xspace
is \emph{optimal} when no communication is allowed. No communication means that the agents do not transmit any bits to the planner before the locations for the facilities are decided, or equivalently that the facilities have to be located without getting any information from the agents. The second mechanism, termed
\ensuremath{\texttt{Fixed}^+}\xspace, utilizes the intuition gained from \texttt{Fixed}\xspace and chooses between five different
location-combinations for the facilities and locates the facilities in one of them by using
the information it got from the agents. Furthermore, every agent has to communicate
only 5 bits of information to the planner.
Our first randomized mechanism, termed \texttt{Random}\xspace, places with half probability both facilities
at the beginning of the segment and with half probability both facilities at the end of the segment.
\texttt{Random}\xspace seems naive, but it achieves a $\frac{1}{2}$-approximation, is universally
strategy-proof, and requires
zero communication. \hl{Again, this result can be seen as the benchmark for any randomized mechanism.} The second randomized mechanism, \ensuremath{\texttt{Random}^+}\xspace, combines the ideas of
\texttt{Random}\xspace and \ensuremath{\texttt{Fixed}^+}\xspace, it is strategy-proof in expectation and improves upon \texttt{Random}\xspace by requiring again only 5 bits of information per agent.
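As a sketch of how \texttt{Random}\xspace operates (a minimal illustration using the segment utilities formally defined in Eq.~(\ref{eq:util}) of the Model section; the instance is an arbitrary example):
\begin{verbatim}
# Random: both facilities at 0 w.p. 1/2, both at ell w.p. 1/2.
def u_i(x, t, y, ell):                    # Eq. (1) utilities
    total = 0.0
    for t_j, y_j in zip(t, y):
        d = abs(x - y_j)
        total += d if t_j == -1 else (ell if t_j == 0 else ell - d)
    return total

def random_expected_min(xs, ts, ell):
    lo, hi = (0.0, 0.0), (ell, ell)
    exp_u = [0.5 * u_i(x, t, lo, ell) + 0.5 * u_i(x, t, hi, ell)
             for x, t in zip(xs, ts)]
    return min(exp_u)

# For any location and any preference in {-1, 1}, the expected utility
# per facility is exactly ell/2, i.e. half the maximum -- the intuition
# behind the 1/2 guarantee.
print(random_expected_min([0.0, 1.0], [(1, 1), (1, 1)], ell=1.0))  # 1.0
\end{verbatim}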
For the special case where agents' locations are known to the mechanism and all the
agents are indifferent or want to be close to the facilities, we show how we can
utilize the optimal mechanism for the 1-facility game and get a
$\frac{3}{4}$-approximate strategy-proof mechanism for \textsc{Egalitarian}\xspace when $k=2$.
As a byproduct, we show that \texttt{Fixed}\xspace and \texttt{Random}\xspace achieve the same approximation
guarantee for \textsc{Happiness}\xspace and \textsc{Utilitarian}\xspace. Thus, we establish lower bounds
that were not known before and complement the results of~\cite{ZL15}.
\begin{table}[h!]
\centering
\begin{tabular}{|c||l|l|c|c|c||c|c|}
\hline
\textbf{\#Facilities} & \textbf{Bound} & \textbf{Mechanism} & \textbf{Bits} & \textbf{Preferences} & \textbf{Theorem} & {\bf Loc.} & {\bf Prefs.} \\ \hline \hline
1 & 1 & OPT-1 & - & $\{-1,0,1\}$ & \ref{thm:one-true} & \cmark & \xmark \\ \hline \hline
2 & $0.851^{\bf{*}}$ & - & - & $\{-1,0,1\}$ & \ref{alg-rand-inapprox} & \xmark & \cmark \\ \hline \hline
2 & $0.292$ & \texttt{Fixed}\xspace & 0 & $\{-1,0,1\}$ & \ref{thm:mech2} & \xmark & \xmark \\ \hline
2 & $0.366$ & \ensuremath{\texttt{Fixed}^+}\xspace & 5 & $\{-1,0,1\}$ & \ref{thm:fixedp-appx}, \ref{thm:fixedp-cc} & \xmark & \xmark \\ \hline
2 & $0.5$ & \texttt{Random}\xspace & 0 & $\{-1,0,1\}$ & \ref{thm:rand} & \xmark & \xmark \\ \hline
2 & $0.538$ & \ensuremath{\texttt{Random}^+}\xspace & 5 & $\{-1,0,1\}$ & \ref{thm:randp-apx}, \ref{thm:rand-cc} & \xmark & \xmark \\ \hline
2 & $0.75$ & $OPT^2$ & 0 & $\{0,1\}$ & \ref{thm:opt2} & \cmark & \xmark \\ \hline \hline
k & $0.5$ & \ensuremath{\texttt{Fixed}^{\{0,1\}}}\xspace & 0 & $\{0,1\}$ & \ref{thm:fzo} & \xmark & \xmark \\ \hline
k & $\frac{\lfloor \frac{k}{2}\rfloor}{k}$ & \ensuremath{\texttt{Fixed}^{\{0,-1\}}}\xspace & 0 & $\{-1,0\}$ & \ref{thm:fzm} & \xmark & \xmark \\ \hline
\end{tabular}
\caption{Snapshot of our results. The bound $0.851^*$ is an inapproximability result. The column ``bits'' corresponds to the bits per agent each of our mechanism needs. The preferences show the allowed preferences of the agents. An agent has preference $-1$ if he wants to be away from a facility; 1 if he wants to be close to a facility; and 0 if he is indifferent about the facility. The last two columns correspond to the information that is publicly available: ``Loc.'' corresponds to the locations of the agents while ``Prefs.'' corresponds to the preferences of the agents. Signs \cmark and \xmark~indicate whether this information is public or private respectively. \label{tab:table-of-results}}
\end{table}
\subsection{Further related work}
There is a long line of work on homogeneous facility location
games~\cite{AFPT10,DFMN12,FT10,FT14,LM+19,Lu10,Lu09,M19,ZL14}.
Different objectives and different utility functions have been
studied as well. In~\cite{FSY} the objective was the sum of
$L_p$ norms of agent's utilities, while in~\cite{FW} it was the sum of least
squares. \cite{FLZZ} introduced double-peaked utility functions.
The obnoxious facility game on the line, where every agent wants to be
away from the facilities, was introduced in~\cite{CWZ11} and later the model was
extended for trees and cycles in~\cite{CWZ13}. In~\cite{YMZ}, the least-squares
objective for obnoxious agents was studied. The maximum envy was recently introduced
as an objective for facility location games in \cite{CFT16}. In
that paper as well as in \cite{GT17}, the authors studied the approximation of
mechanisms according to additive errors. False-name proof mechanisms
for the location of two identical facilities were studied in \cite{STY16} while
\cite{Th10} gave a characterization of strategy-proof and group strategy-proof
mechanisms in metric networks for 1-facility games with private locations of the agents.
Since the conference version of this paper~\cite{AD18}, other papers on
heterogeneous facility location games have appeared. In~\cite{DLLX}, the authors
studied heterogeneous 2-facility games on a line segment, under the extra constraint
where the locations between the two facilities have to be at least a certain distance.
In~\cite{KVZ}, the authors studied heterogeneous facility location games where the agents
were located on a line but the facility could be placed in a region on the plane.
\hl{Finally,} \cite{li2019strategyproof} \hl{studies a closely related model for 2-facility
heterogeneous games under the social welfare objective. In their model, there are two
facilities, $f_1$ and $f_2$, to be located on the line. Every agent has as private information
his location and a subset of ``acceptable'' facilities. The} {\em cost} \hl{of an agent is the minimum distance between his location and the closest acceptable facility. The objective there is to choose locations for the facilities such that the sum of the costs of the agents is minimized.}
Simple mechanisms received a lot of attention lately; see \cite{GN17} for example and
the references therein for simple auctions. Informally, a simple mechanism is
easy to implement and allows the agents to ``easily'' deduce the strategy-proofness of
the mechanism. One way to capture simplicity is to use \emph{verifiably truthful}
mechanisms \cite{BraP15}, where agents can check whether a mechanism is strategy-proof
by using some, possibly exponential, algorithm. Simple mechanisms were formalized in
\cite{LiOSP} by introducing \emph{obviously} strategy-proof mechanisms.
\cite{FV17} analysed this type of mechanism for homogeneous 1-facility games.
After a long history in theoretical computer science~\cite{KN}, communication complexity problems
have been studied in auction settings~\cite{BNS07} and in facility location games~\cite{feldman2016voting} but with ordinal preferences of the agents as input to the mechanisms.
Communication complexity has also been studied in other more general mechanism design problems
~\cite{MT14,Zandt}. To the best of our knowledge, no one studied the communication complexity
of facility location games on the line with cardinal utilities.
\section{Model}
In a \emph{$k$-facility game}, there is a set $N = \{1, \ldots, n\}$
of agents located in $\ensuremath{\mathbb{R}}\xspace^d$ and a set of $k$ distinct facilities
$F = \{1, \ldots, k\}$ that need to be placed in $S \subseteq \ensuremath{\mathbb{R}}\xspace^d$.
Each agent $i$ is associated with a location $x_i \in \ensuremath{\mathbb{R}}\xspace^d$ and a vector
$t_i \in \{-1,0,1\}^k$ that represents his preferences for the facilities.
If agent $i$ wants to be \emph{far} from facility $j$, then $\ensuremath{t_{ij}}\xspace=-1$; if he
is \emph{indifferent}, then $\ensuremath{t_{ij}}\xspace = 0$; if he wants to be \emph{close} to
$j$, then $\ensuremath{t_{ij}}\xspace = 1$.
We will use $\ensuremath{\mathbf{y}}\xspace = (\ensuremath{\mathbf{y}}\xspace_1, \ldots, \ensuremath{\mathbf{y}}\xspace_k)$ to denote the locations of
the facilities and $s=(s_1,\ldots, s_n)$ to denote the profile of the agents,
i.e. their declared tuples $s_i=(x_i,t_i), \forall i \in N$.
A vector $s_{-i}=(s_1, \ldots, s_{i-1}, s_{i+1}, \ldots, s_n)$ is the vector of
tuples excluding $s_i$, thus we can denote a profile as $(s_i,s_{-i})$.
The utility that agent $i$ gets from facility $j$, denoted as $u_{ij}(x_i,t_i,\ensuremath{\mathbf{y}}\xspace)$, depends
on the distance $\ensuremath{\texttt{dist}}\xspace(x_i, \ensuremath{\mathbf{y}}\xspace_j)$ between the location of the agent and the
location of the facility $j$, and on the agent's preference $t_{ij}$ for that facility.
We assume that $u_{ij}$ follows the rules below:
\begin{itemize}
\item If $t_{ij} = -1$, then $u_{ij}(x_i,t_i,\ensuremath{\mathbf{y}}\xspace)$ is strictly increasing with $\ensuremath{\texttt{dist}}\xspace(x_i,\ensuremath{\mathbf{y}}\xspace_j)$.
\item If $t_{ij} = 0$, then $u_{ij}(x_i,t_i,\ensuremath{\mathbf{y}}\xspace)$ is a constant independent of $\ensuremath{\texttt{dist}}\xspace(x_i,\ensuremath{\mathbf{y}}\xspace_j)$.
\item If $t_{ij} = 1$, then $u_{ij}(x_i,t_i,\ensuremath{\mathbf{y}}\xspace)$ is strictly decreasing with $\ensuremath{\texttt{dist}}\xspace(x_i,\ensuremath{\mathbf{y}}\xspace_j)$.
\end{itemize}
The total utility agent $i$ gets under \ensuremath{\mathbf{y}}\xspace is defined as the sum of the utilities
he gets for each of the facilities, i.e.
$u_i(x_i, t_i,\ensuremath{\mathbf{y}}\xspace)=\sum_{j \in [k]} u_{ij}(x_i,t_i,\ensuremath{\mathbf{y}}\xspace_j)$.
We consider three different objective functions:
\textsc{Egalitarian}\xspace, defined as $\max_\ensuremath{\mathbf{y}}\xspace \min_i u_i(x_i, \ensuremath{t_{i}}\xspace, \ensuremath{\mathbf{y}}\xspace)$; \textsc{Utilitarian}\xspace defined
as $\max_\ensuremath{\mathbf{y}}\xspace \sum_i u_i(x_i,t_i,\ensuremath{\mathbf{y}}\xspace)$; and \textsc{Happiness}\xspace defined as
$\max_\ensuremath{\mathbf{y}}\xspace \min_i\frac{u_i(x_i,t_i,\ensuremath{\mathbf{y}}\xspace)}{u_i^*(x_i,t_i)}$
where $u_i^*(x_i,t_i)=\max_{\ensuremath{\mathbf{y}}\xspace}u_i(x_i,t_i,\ensuremath{\mathbf{y}}\xspace)$.
A \emph{mechanism} $M$ is an algorithm that takes as input a profile $s$ and
outputs the locations of the facilities, $\ensuremath{\mathbf{y}}\xspace$.
A mechanism is \emph{deterministic} if it chooses \ensuremath{\mathbf{y}}\xspace deterministically and
it is \emph{randomized} if \ensuremath{\mathbf{y}}\xspace is chosen according to a probability distribution.
Let $\textsc{OPT}\xspace(s)$ and $M(s)$ denote the optimal value and the value of mechanism $M$
for an objective function under the profile $s$ respectively. A mechanism $M$ achieves
an approximation ratio $\alpha\leq 1$, or it is $\alpha$-approximate, if for any type
profile $s$, it holds that $M(s) \geq \alpha \cdot \textsc{OPT}\xspace(s)$.
A mechanism is called strategy-proof if no agent can benefit by misreporting \hl{his
location} or his preferences. Formally, a mechanism $M$ is strategy-proof if, for any true profile
$(s_i,s_{-i})$ for which it returns locations \ensuremath{\mathbf{y}}\xspace and any misreported profile
$(s_i',s_{-i})$ for which it returns $\ensuremath{\mathbf{y}'}\xspace$, we have that
$u_i(x_i, \ensuremath{t_{i}}\xspace, \ensuremath{\mathbf{y}}\xspace) \geq u_i(x_i, \ensuremath{t_{i}}\xspace, \ensuremath{\mathbf{y}'}\xspace)$. A randomized mechanism is
universally strategy-proof if it is a probability distribution over deterministic
strategy-proof mechanisms and strategy-proof in expectation if no agent can increase
his \emph{expected} utility by misreporting his type. Furthermore, a mechanism is
called {\em false-name proof} if no agent can benefit by using multiple and different
identities in the game.
The strongest notion of strategy-proofness for a mechanism is to be {\em group
strategy-proof}. \hl{For any subset $Z \subseteq N$ of agents, let $(s_Z, s_{-Z})$ denote
a profile of the agents' declarations. Furthermore, let $\ensuremath{\mathbf{y}}\xspace$ be the output of
the mechanism under the true types $(s_Z, s_{-Z})$ and let $\ensuremath{\mathbf{y}'}\xspace$ be the output of
the mechanism under $(s'_Z, s_{-Z})$, where the agents in $Z$ have coordinated their
declarations.
A mechanism $M$ is group strategy-proof, if for any $Z \subseteq N$, any $i \in Z$, and any $s'_Z \neq s_Z$ it holds that $u_i(x_i, \ensuremath{t_{i}}\xspace, \ensuremath{\mathbf{y}}\xspace) \geq u_i(x_i, \ensuremath{t_{i}}\xspace, \ensuremath{\mathbf{y}'}\xspace)$.}
\paragraph{\bf Communication Complexity.}\hl{ The communication complexity of a mechanism is the number of bits each agent has to send to the mechanism in order to compute the output. We say that a mechanism has zero communication complexity if it requires 0 bits from every agent.}
\subsection{Facility location on a line segment}
A special case of $k$-facility games is when all the agents are located on the line
segment $[0, \ell]$, where $\ell > 0$.
This case has been studied in the literature~\cite{PT09,SV16}, since the definitions
above simplify considerably. For normalization purposes, we assume that the
maximum utility an agent can get from any facility is $\ell$, and we define
the utility function of agent $i$ as follows.
\noindent
\begin{align}
\label{eq:util}
u_{ij}(x_i,t_i ,\ensuremath{\ybf_j}\xspace)=\left\{ \begin{array}{rl}
|x_i - \ensuremath{\mathbf{y}}\xspace_j|, & \quad \text{if $\ensuremath{t_{ij}}\xspace=-1$}\\
\ell, & \quad \text{if $\ensuremath{t_{ij}}\xspace=0$}\\
\ell-|x_i - \ensuremath{\mathbf{y}}\xspace_j|, & \quad \text{if $\ensuremath{t_{ij}}\xspace=1$.} \end{array}\right.
\end{align}
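To make the model concrete, the following short Python sketch implements Equation~\eqref{eq:util} and the total utility $u_i$; it is our own illustration, and the function names \texttt{u\_ij} and \texttt{u\_i} are not part of the formal model.
\begin{verbatim}
# Sketch of the line-segment utilities defined above.
# ell: segment length; x: agent location in [0, ell];
# t: preference vector in {-1, 0, 1}^k; y: facility locations.
def u_ij(x, t_ij, y_j, ell):
    if t_ij == -1:                 # wants to be far from facility j
        return abs(x - y_j)
    if t_ij == 0:                  # indifferent to facility j
        return ell
    return ell - abs(x - y_j)      # t_ij == 1: wants to be close

def u_i(x, t, y, ell):
    # total utility: sum of the per-facility utilities
    return sum(u_ij(x, t_ij, y_j, ell) for t_ij, y_j in zip(t, y))

# Example: ell = 1, agent at 0.25 with preferences (-1, 1),
# facilities at (1.0, 0.5): 0.75 + (1 - 0.25) = 1.5
print(u_i(0.25, (-1, 1), (1.0, 0.5), 1.0))
\end{verbatim}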
\section{1-facility games with known locations}
\label{sec:onef}
We first study the case where the locations of the agents are publicly
known and only one facility has to be placed.
We will show that the mechanism which places the facility on an optimal location
using any {\em declaration-independent} tie-breaking rule is strategy-proof
for the \textsc{Egalitarian}\xspace, \textsc{Utilitarian}\xspace, and \textsc{Happiness}\xspace objectives.
\begin{definition}
A mechanism $M$ has a declaration-independent tie-breaking rule if it outputs
the same $\ensuremath{\mathbf{y}}\xspace$ for any two profiles $s \neq s'$ with $M(s) = M(s')$.
\end{definition}
Hence, a mechanism has a declaration-independent tie-breaking rule if it outputs
the same location for the facility for all profiles that yield the same value
for the objective we are trying to optimise. An example of such a rule is the
lexicographic minimum.
\alg{alg:onef}
\begin{tcolorbox}[title=OPT-1 Mechanism]
\begin{itemize}
\item[{\bf In:}] For every agent $i$: public location $x_i \in \ensuremath{\mathbb{R}}\xspace^d$,
private preference $t_i \in \{-1,0,1\}$; region $S \subseteq \ensuremath{\mathbb{R}}\xspace^d$;
objective \ensuremath{\mathcal{O}}\xspace; declaration-independent tie-breaking rule $T$.
\item[{\bf Out:}] Location $\ensuremath{\mathbf{y}^*}\xspace \in S$ for the facility.
\end{itemize}
\begin{enumerate}
\item Let $Y \subseteq S$ be the set of all points that optimize \ensuremath{\mathcal{O}}\xspace for the given locations
and preferences, excluding the agents with preference 0.
\item Choose $\ensuremath{\mathbf{y}^*}\xspace \in Y$ according to the tie-breaking rule $T$.
\end{enumerate}
\end{tcolorbox}
Mechanism~\ref{alg:onef} does not make any assumptions about the dimensions of the
agents' locations and the region $S$. So, the actual locations of the agents can be
in $\ensuremath{\mathbb{R}}\xspace^{d_1}$ and the region $S \subseteq \ensuremath{\mathbb{R}}\xspace^{d_2}$, where $d_1 \neq d_2$.
In addition, $S$ can be of an arbitrary form, i.e. it can be the union of several
disjoint regions of $\ensuremath{\mathbb{R}}\xspace^d$.
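As an illustration, the following Python sketch (ours, not part of the formal mechanism) runs OPT-1 for the \textsc{Egalitarian}\xspace objective on a finite candidate set $S$, using the lexicographic minimum as the declaration-independent tie-breaking rule; it reuses \texttt{u\_ij} from the sketch of Equation~\eqref{eq:util}, so it is restricted to the line segment, and the discretization of $S$ is our simplification.
\begin{verbatim}
# Sketch of OPT-1 for the Egalitarian objective on a finite set S.
# agents: list of (x_i, t_i) with t_i in {-1, 0, 1}.
def opt1_egalitarian(agents, S, ell):
    active = [(x, t) for (x, t) in agents if t != 0]  # drop indifferent agents
    if not active:
        return min(S)                                 # any fixed choice works
    def value(y):
        return min(u_ij(x, t, y, ell) for (x, t) in active)
    best = max(value(y) for y in S)
    Y = [y for y in S if value(y) == best]            # the optimal set
    return min(Y)                                     # lexicographic minimum

agents = [(0.0, 1), (1.0, 1), (0.5, -1)]
S = [i / 100 for i in range(101)]
print(opt1_egalitarian(agents, S, 1.0))               # prints 0.25
\end{verbatim}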
\subsection{Analysis for \textsc{Egalitarian}\xspace objective}
\label{sec:onef-util}
In this section we focus on the \textsc{Egalitarian}\xspace objective, i.e. $\ensuremath{\mathcal{O}}\xspace = \max_y \min_i u_i(x_i,t_i,y)$.
In order to prove that Mechanism~\ref{alg:onef} is strategy-proof for \textsc{Egalitarian}\xspace,
we partition the agents into two sets \ensuremath{\mathcal{T}_l}\xspace and \ensuremath{\mathcal{T}_h}\xspace.
\ensuremath{\mathcal{T}_l}\xspace contains the agents with the minimum utility when the facility is placed
on \ensuremath{\mathbf{y}^*}\xspace and $\ensuremath{\mathcal{T}_h}\xspace = N \setminus \ensuremath{\mathcal{T}_l}\xspace$.
Since agents with preference type 0 have constant utility independently of $\ensuremath{\mathbf{y}}\xspace$ and
they are excluded from the computation of $\ensuremath{\mathbf{y}}\xspace$, in our analysis we will assume that
there is no agent $i$ with $t_i=0$. We first prove that no agent from the set \ensuremath{\mathcal{T}_h}\xspace has an incentive to lie.
\begin{lemma}
\label{lem:one-high}
No agent from \ensuremath{\mathcal{T}_h}\xspace can increase his utility by lying.
\end{lemma}
\begin{proof}
For the sake of contradiction suppose that an agent $i \in \ensuremath{\mathcal{T}_h}\xspace$ with
preference $\ensuremath{t_{i}}\xspace$ declares preference $\ensuremath{t_{i}}\xspace'$ and increases his utility.
Let \ensuremath{\mathbf{y}'}\xspace be the optimal location of the facility in this case.
Since we have assumed that agent $i$ increases his payoff, we have that
$u_i(x_i, \ensuremath{t_{i}}\xspace, \ensuremath{\mathbf{y}^*}\xspace) < u_i(x_i, \ensuremath{t_{i}}\xspace, \ensuremath{\mathbf{y}'}\xspace)$.
We will consider two cases depending on the declaration $\ensuremath{t_{i}}\xspace'$.
\begin{itemize}
\item $\ensuremath{t_{i}}\xspace' = 0$. Recall, in this case, Mechanism~\ref{alg:onef} excludes
agent $i$ from the computation of $\ensuremath{\mathbf{y}'}\xspace$.
Since $u_i(x_i, \ensuremath{t_{i}}\xspace, \ensuremath{\mathbf{y}^*}\xspace) < u_i(x_i, \ensuremath{t_{i}}\xspace, \ensuremath{\mathbf{y}'}\xspace)$, we get that $\ensuremath{\mathbf{y}^*}\xspace \neq \ensuremath{\mathbf{y}'}\xspace$.
In addition, we get that
$\min_{j \neq i} u_j(x_j, t_j, \ensuremath{\mathbf{y}'}\xspace) > \min_{j \neq i} u_j(x_j, t_j, \ensuremath{\mathbf{y}^*}\xspace)$; if
this was not the case, the mechanism could return \ensuremath{\mathbf{y}^*}\xspace and increase the value of
the objective. Hence, we get that
$\min_{j} u_j(x_j, t_j, \ensuremath{\mathbf{y}'}\xspace) > \min_{j} u_j(x_j, t_j, \ensuremath{\mathbf{y}^*}\xspace)$.
This means that $\ensuremath{\mathbf{y}'}\xspace$ is a better solution than $\ensuremath{\mathbf{y}^*}\xspace$ for the \textsc{Egalitarian}\xspace objective, which
contradicts the assumption that \ensuremath{\mathbf{y}^*}\xspace is an optimal solution.
\item $t_i' \neq 0$. The utility of agent $i$ will change only if the location of
the facility changes; this is due to the declaration-independent tie-breaking rule $T$.
This will happen only if $u_i(x_i, \ensuremath{t_{i}}\xspace', \ensuremath{\mathbf{y}^*}\xspace) < \min_{j \neq i} u_j(x_j, t_j, \ensuremath{\mathbf{y}^*}\xspace)$.
This means that $u_i(x_i, \ensuremath{t_{i}}\xspace', \ensuremath{\mathbf{y}^*}\xspace) < u_i(x_i, \ensuremath{t_{i}}\xspace', \ensuremath{\mathbf{y}'}\xspace)$.
Without loss of generality let $\ensuremath{t_{i}}\xspace' = 1$; then $\ensuremath{t_{i}}\xspace = -1$, since we have assumed
that no agent has preference 0.
In this case $\ensuremath{\texttt{dist}}\xspace(x_i,\ensuremath{\mathbf{y}^*}\xspace) > \ensuremath{\texttt{dist}}\xspace(x_i,\ensuremath{\mathbf{y}'}\xspace)$, i.e. the new optimal location is
closer to $x_i$. But then, since $\ensuremath{t_{i}}\xspace = -1$, we get that
$u_i(x_i, \ensuremath{t_{i}}\xspace, \ensuremath{\mathbf{y}'}\xspace) < u_i(x_i, \ensuremath{t_{i}}\xspace, \ensuremath{\mathbf{y}^*}\xspace)$, which contradicts the assumption
that agent $i$ can increase his utility by misreporting his preference.
\end{itemize}
\end{proof}
Next, we prove that no agent from \ensuremath{\mathcal{T}_l}\xspace has an incentive to lie about his
preferences.
\begin{lemma}
\label{lem:one-low}
No agent from \ensuremath{\mathcal{T}_l}\xspace can increase his utility by lying.
\end{lemma}
\begin{proof}
We will prove the claim by contradiction. Suppose
that an agent $i \in \ensuremath{\mathcal{T}_l}\xspace$ with preference $\ensuremath{t_{i}}\xspace$ can increase his utility by
declaring $\ensuremath{t_{i}}\xspace'$.
Using exactly the same arguments as in Lemma~\ref{lem:one-high} we can see that
$\ensuremath{t_{i}}\xspace' \neq 0$.
Let $\ensuremath{\mathbf{y}'}\xspace$ be the optimal location for the facility when agent $i$
declares $\ensuremath{t_{i}}\xspace'$. Clearly, if $\ensuremath{\mathbf{y}'}\xspace = \ensuremath{\mathbf{y}^*}\xspace$ agent $i$ has no reason to lie, so we assume that $\ensuremath{\mathbf{y}'}\xspace \neq \ensuremath{\mathbf{y}^*}\xspace$.
We now consider the following two cases:
\begin{itemize}
\item $u_i(x_i, \ensuremath{t_{i}}\xspace', \ensuremath{\mathbf{y}^*}\xspace) \geq u_i(x_i, \ensuremath{t_{i}}\xspace, \ensuremath{\mathbf{y}^*}\xspace)$.
Since we have assumed that agent $i$ increases his utility by declaring $\ensuremath{t_{i}}\xspace'$,
we have that $u_i(x_i, \ensuremath{t_{i}}\xspace, \ensuremath{\mathbf{y}'}\xspace) > u_i(x_i, \ensuremath{t_{i}}\xspace, \ensuremath{\mathbf{y}^*}\xspace)$. Hence,
$\min_{j \neq i} u_j(x_j, t_j, \ensuremath{\mathbf{y}'}\xspace) > \min_{j \neq i} u_j(x_j, t_j, \ensuremath{\mathbf{y}^*}\xspace)$
since an agent $j \ne i$ who now has the minimum utility is the one who determines the
new outcome $\ensuremath{\mathbf{y}'}\xspace$. We note that $\min_{j \in N} u_j(x_j, t_j, \ensuremath{\mathbf{y}'}\xspace)$ should
be strictly larger than $\min_{j \in N} u_j(x_j, t_j, \ensuremath{\mathbf{y}^*}\xspace)$ since
Mechanism~\ref{alg:onef} uses a declaration-independent tie-breaking rule.
So the location should not change if the value of the objective remains the same.
But then we have that $\min_{j \in N} u_j(x_j, t_j, \ensuremath{\mathbf{y}'}\xspace) >
\min_{j \in N} u_j(x_j, t_j, \ensuremath{\mathbf{y}^*}\xspace)$
which contradicts the fact that \ensuremath{\mathbf{y}^*}\xspace is an optimal location for the facility.
\item $u_i(x_i, \ensuremath{t_{i}}\xspace', \ensuremath{\mathbf{y}^*}\xspace) < u_i(x_i, \ensuremath{t_{i}}\xspace, \ensuremath{\mathbf{y}^*}\xspace)$.
This means that agent $i$ under the declaration $\ensuremath{t_{i}}\xspace'$ has the smallest
utility over all the agents.
Hence, since we have established that $\ensuremath{t_{i}}\xspace' \neq 0$, one of the following two cases must hold.
The first one is when $\ensuremath{t_{i}}\xspace = -1$ and $\ensuremath{t_{i}}\xspace' = 1$. Since agent $i$ now determines the
minimum, the new optimum must strictly increase his reported utility, so
$\ensuremath{\texttt{dist}}\xspace(x_i,\ensuremath{\mathbf{y}'}\xspace) < \ensuremath{\texttt{dist}}\xspace(x_i,\ensuremath{\mathbf{y}^*}\xspace)$,
i.e. the facility must be placed \emph{closer} to his location $x_i$. But this means that
his utility under the true preference $\ensuremath{t_{i}}\xspace$ decreased, because the agent wants
to be away from the facility.
Similarly when $\ensuremath{t_{i}}\xspace = 1$ and $\ensuremath{t_{i}}\xspace' = -1$ the facility must be placed further away from the position of the
agent, while the agent wants to be close to the facility. Hence, in both cases the utility
of agent $i$ decreases.
\end{itemize}
As a result, in every case agent $i$ cannot increase his
utility by lying, which contradicts our assumption.
\end{proof}
Notice that Mechanism~\ref{alg:onef} places the facility on the location that
maximizes our objective, i.e. it is optimal. Furthermore, the combination of
Lemmas~\ref{lem:one-high} and~\ref{lem:one-low} shows that no agent can increase
his utility by lying. The next theorem follows:
\begin{theorem}
\label{thm:one-true}
OPT-1 is an optimal strategy-proof mechanism for the \textsc{Egalitarian}\xspace objective.
\end{theorem}
Theorem~\ref{thm:one-true} complements in a sense the result of~\cite{FJ15}, where it was
proven that there is no deterministic strategy-proof mechanism with
bounded approximation for the \textsc{Egalitarian}\xspace objective for 1-facility games even on a line segment with
known preferences but unknown locations.
\subsection{Analysis for the \textsc{Utilitarian}\xspace objective}
\label{sec:onef-welfare}
In this section we focus on the \textsc{Utilitarian}\xspace objective, i.e. $\ensuremath{\mathcal{O}}\xspace = \max_y \sum_i u_i(x_i,t_i,y)$.
Again, since agents with preference type 0 have constant utility independently of $y$,
we will assume that there is no agent $i$ with $t_i=0$.
\begin{theorem}
\label{thm:one-true-welfare}
Mechanism~\ref{alg:onef} is an optimal strategy-proof mechanism for the \textsc{Utilitarian}\xspace objective.
\end{theorem}
\begin{proof}
We will prove the theorem by contradiction. So, assume that
there exists an agent $i$ who can increase his utility by declaring $t_i' \neq t_i$.
Let $\ensuremath{\mathbf{y}^*}\xspace$ be the optimal location of the facility when $i$ declares $t_i$ and
$\ensuremath{\mathbf{y}'}\xspace \neq \ensuremath{\mathbf{y}^*}\xspace$ be the location of the facility when he declares $t'_i$.
So, by assumption, we have that $u_i(x_i,t_i,\ensuremath{\mathbf{y}^*}\xspace) < u_i(x_i,t_i,\ensuremath{\mathbf{y}'}\xspace)$.
Firstly, assume that $t_i' = 0$. Then, Mechanism~\ref{alg:onef} excludes agent $i$ from
the computation of $\ensuremath{\mathbf{y}'}\xspace$. In addition, since the mechanism uses a
declaration-independent tie-breaking rule and $\ensuremath{\mathbf{y}'}\xspace \neq \ensuremath{\mathbf{y}^*}\xspace$, it must be true that
$$\sum_{j \neq i}u_j(x_j,t_j,\ensuremath{\mathbf{y}^*}\xspace) < \sum_{j \neq i}u_j(x_j,t_j,\ensuremath{\mathbf{y}'}\xspace).$$
If this was not the case, we could increase the value of the objective by choosing
$\ensuremath{\mathbf{y}^*}\xspace$ instead.
Thus, since we assumed that $u_i(x_i,t_i,\ensuremath{\mathbf{y}^*}\xspace) < u_i(x_i,t_i,\ensuremath{\mathbf{y}'}\xspace)$, we get that
$\sum_{j}u_j(x_j,t_j,\ensuremath{\mathbf{y}^*}\xspace) < \sum_{j}u_j(x_j,t_j,\ensuremath{\mathbf{y}'}\xspace)$ which contradicts the
assumption that \ensuremath{\mathbf{y}^*}\xspace maximizes the social welfare.
Having established that $t_i' \neq 0$, we consider the following two cases depending on the
utilities of the rest of the agents under \ensuremath{\mathbf{y}^*}\xspace and \ensuremath{\mathbf{y}'}\xspace. In what follows we will assume
that $t_i = 1$ and $t_i'=-1$; the arguments for $t_i = -1$ and $t_i'=1$ are similar.
\begin{itemize}
\item $\sum_{j \neq i}u_j(x_j,t_j,\ensuremath{\mathbf{y}^*}\xspace) < \sum_{j \neq i}u_j(x_j,t_j,\ensuremath{\mathbf{y}'}\xspace)$. Then,
as above, we get that \ensuremath{\mathbf{y}^*}\xspace does not maximize the welfare objective since we have
assumed that $u_i(x_i,t_i,\ensuremath{\mathbf{y}^*}\xspace) < u_i(x_i,t_i,\ensuremath{\mathbf{y}'}\xspace)$.
\item $\sum_{j \neq i}u_j(x_j,t_j,\ensuremath{\mathbf{y}^*}\xspace) \geq \sum_{j \neq i}u_j(x_j,t_j,\ensuremath{\mathbf{y}'}\xspace)$.
Since $\ensuremath{\mathbf{y}^*}\xspace \neq \ensuremath{\mathbf{y}'}\xspace$ and since Mechanism~\ref{alg:onef} has a declaration-independent
tie-breaking rule, it should be true that
$$\sum_{j \neq i}u_j(x_j,t_j,\ensuremath{\mathbf{y}^*}\xspace) + u_i(x_i,t'_i,\ensuremath{\mathbf{y}^*}\xspace) < \sum_{j \neq i}u_j(x_j,t_j,\ensuremath{\mathbf{y}'}\xspace)
+ u_i(x_i,t'_i,\ensuremath{\mathbf{y}'}\xspace).$$
So, we get that $u_i(x_i,t'_i,\ensuremath{\mathbf{y}^*}\xspace) < u_i(x_i,t'_i,\ensuremath{\mathbf{y}'}\xspace)$ and since we have assumed that
$t_i' = -1$ we get that $\ensuremath{\texttt{dist}}\xspace(x_i, \ensuremath{\mathbf{y}^*}\xspace) < \ensuremath{\texttt{dist}}\xspace(x_i, \ensuremath{\mathbf{y}'}\xspace)$. This, in turn, means that
$u_i(x_i,t_i,\ensuremath{\mathbf{y}^*}\xspace) > u_i(x_i,t_i,\ensuremath{\mathbf{y}'}\xspace)$, which is a contradiction.
\end{itemize}
Hence, we have shown that for any declaration $t_i'$ the utility of the agent cannot increase.
Thus, the theorem follows.
\end{proof}
\subsection{Analysis for the other objectives}
Observe that in the analyses in Sections~\ref{sec:onef-util} and~\ref{sec:onef-welfare},
the only assumption about the utility functions of the agents is that they are monotone
with respect to the distance between the location of the agent and the location of the
facility. Thus, every agent can have his own type of utility function, completely different
from the types of the other agents.
Recall, for \textsc{Happiness}\xspace we have that
$\ensuremath{\mathcal{O}}\xspace = \max_\ensuremath{\mathbf{y}}\xspace \min_i\frac{u_i(x_i,t_i,\ensuremath{\mathbf{y}}\xspace)}{u_i^*(x_i,t_i)}$ where
$u_i^*(x_i,t_i)=\max_{\ensuremath{\mathbf{y}}\xspace}u_i(x_i,t_i,\ensuremath{\mathbf{y}}\xspace)$. Observe, though, that $u_i^*(x_i,t_i)$ is
a constant; hence the analysis of Section~\ref{sec:onef-util} applies here as well.
So, we can get the following as a corollary of Theorem~\ref{thm:one-true}.
\begin{corollary}
\label{cor:one-true-happy}
Mechanism~\ref{alg:onef} is an optimal strategy-proof mechanism for the \textsc{Happiness}\xspace objective.
\end{corollary}
\section{Inapproximability results}
\label{sec:inapprox}
In the remainder of the paper, unless specified otherwise, we study the \textsc{Egalitarian}\xspace objective.
In this section, we provide inapproximability results for strategy-proof
mechanisms for 2-facility games. We show that the second facility dramatically changes
the landscape of strategy-proofness. We prove that the extension of the optimal mechanism
for two facilities, i.e. placing the facilities on the locations that maximize the objective
under the declared preferences of the agents, is not strategy-proof even in the setting
of a line segment with two agents and known locations.
We first prove that there is no 0.851-approximate deterministic strategy-proof mechanism
and then extend this result to strategy-proof in expectation mechanisms.
\begin{theorem}
\label{thm:2fub}
There is no $\alpha$-approximate deterministic strategy-proof mechanism for the
2-facility game with $\alpha \geq 0.851$.
\end{theorem}
\begin{proof}
Let us consider the instances $I$ and $I'$ depicted in Figure~\ref{fig:fig1}.
Each white circle corresponds to an agent. Agent $a_1$ is located on 0 and agent $a_2$ on
$x > \frac{2\ell}{3}$, where the exact value of $x$ will be specified later in the proof.
Without loss of generality, we assume that $\ell = 1$. Firstly, we will prove that the
mechanism that places the facilities on their optimal locations is not strategy-proof
even when the locations of the agents are known. Then, we will use these instances to
derive our inapproximability result.
On instance $I$ agents $a_1$ and $a_2$ have preferences $t_1=(-1,1)$ and
$t_2=(0, 1)$ respectively.
It is not hard to see that the optimal locations for the facilities are
$\ensuremath{\mathbf{y}}\xspace_1 = 1$ and $\ensuremath{\mathbf{y}}\xspace_2=\frac{x}{2}$ where each agent gets utility
$2-\frac{x}{2}$. The optimal locations of the facilities are depicted by black circles
in the figure.
On instance $I'$ agent $a_1$ has the same preferences as on instance $I$
while the preferences of agent $a_2$ are $t'_2=(-1,1)$. The optimal locations
for the facilities in this instance are $\ensuremath{\mathbf{y}}\xspace_1 = 1$ and $\ensuremath{\mathbf{y}}\xspace_2 = x$ where
each agent gets utility $2 - x$.
\noindent
\begin{figure}[h!]
\begin{center}
\subfigure[Instance I]{
\begin{tikzpicture}[thick, scale=0.5]
\tikzstyle{every node}=[fill=white,minimum size=4pt,inner sep=0pt]
\draw (-4.4,-0.7) node(v)[label=below:$-1 1$]{};
\draw (-4,0.7) node(v1)[label=above:$0$]{};
\draw (-0.5,0.7) node(v2)[label=above:$\frac{x}{2}$]{};
\draw (3,0) node(v3)[draw, fill=white, circle]{};
\draw (-0.5,0) node(v3)[draw, fill=black, circle]{};
\draw (6,0) node(v3)[draw, fill=black, circle]{};
\draw (-4,0) node(v7)[draw, fill=white, circle]{};
\draw (-0.5,-1.9) node(u)[label=below:$\ensuremath{\mathbf{y}}\xspace_2$]{};
\draw (3,-0.7) node(u1)[label=below:$0 1$]{};
\draw (5.2,-1.9) node(l1)[label=below right:$\ensuremath{\mathbf{y}}\xspace_1$]{};
\draw (6,0.7) node(u1)[label=above :$\ell$]{};
\draw (3,0.7) node(v6)[label=above:$x$]{};
\draw (-3.88,0) -- (2.9,0);
\draw (3.15,0) -- (6,0);
\node[] at (7,-2) {};
\end{tikzpicture}
}
\subfigure[Instance $I'$]{
\begin{tikzpicture}[thick, scale=0.5]
\tikzstyle{every node}=[fill=white,minimum size=4pt,inner sep=0pt]
\draw (-4.4,-0.7) node(v)[label=below:$-1 1$]{};
\draw (-4,0.7) node(v1)[label=above:$0$]{};
\draw (3,0) node(v3)[draw, fill=black, circle]{};
\draw (-4,0) node(v7)[draw, fill=white, circle]{};
\draw (6,0) node(v7)[draw, fill=black, circle]{};
\draw (3,-1.9) node(u)[label=below:$\ensuremath{\mathbf{y}}\xspace'_2$]{};
\draw (2.6,-0.7) node(u1)[label=below:$-1 1$]{};
\draw (5.6,-1.9) node(l1)[label=below right:$\ensuremath{\mathbf{y}}\xspace'_1$]{};
\draw (5.85,0.7) node(u1)[label=above :$\ell$]{};
\draw (3,0.7) node(v6)[label=above:$x$]{};
\draw (-3.83,0) -- (5.85,0);
\node[] at (7,-2) {};
\end{tikzpicture}
}
\caption{Example for preferences in $\{-1,0,1\}^2$.}
\label{fig:fig1}
\end{center}
\end{figure}
\noindent
Instances $I$ and $I'$ show that the mechanism which places the
facilities on the optimal locations is not strategy-proof. On instance $I$
agent $a_2$ can declare $t'_2=(-1,1)$ and increase his utility from
$2-\frac{x}{2}$ to $2$.
Next, we focus on the inapproximability result for any deterministic mechanism.
The high-level idea of the proof is as follows. We assume that we know a strategy-proof
mechanism $M$ that achieves the best possible approximation ratio for the problem. Firstly,
we focus on instance $I$ where we show that $M$ always places the first facility on 1 and we
derive the approximation guarantee of $M$ on $I$ as a function of the location $\ensuremath{\mathbf{y}}\xspace_2$
of the second facility. Then, we turn our attention to instance $I'$, and we observe that
for the location $\ensuremath{\mathbf{y}}\xspace'_2$ of the second facility in this case, it should be true that
$\ensuremath{\mathbf{y}}\xspace'_2 \leq \ensuremath{\mathbf{y}}\xspace_2$. Using this, we consider the two possible cases for the location
$\ensuremath{\mathbf{y}}\xspace_1$ of facility $f_1$ with respect to $x$ and we derive bounds on the approximation
ratio of $M$, in each case as a function of $x$. Then, we optimize the value of $x$ and
derive the claimed bound.
So, let $M$ be a strategy-proof mechanism that achieves the best possible approximation
for the \textsc{Egalitarian}\xspace objective.
We first argue that on instance $I$ mechanism $M$ should place facility $f_1$ on 1.
If this was not the case, the utility of $a_1$ would strictly increase by the movement
of $f_1$ to $1$, while the utility of $a_2$ would remain the same.
Hence, the approximation ratio of $M$ would strictly improve by placing $f_1$ on 1,
contradicting the assumption that $M$ achieves the best approximation guarantee.
Next, suppose that $M$ places facility $f_2$ on $\ensuremath{\mathbf{y}}\xspace_2 \leq x$ on instance $I$.
Since $M$ is strategy-proof, $f_2$ cannot be placed on any
$\ensuremath{\mathbf{y}}\xspace'_2 > \ensuremath{\mathbf{y}}\xspace_2$ on instance $I'$. If $\ensuremath{\mathbf{y}}\xspace'_2 > \ensuremath{\mathbf{y}}\xspace_2$, then agent $a_2$
from $I$ could declare preferences $t'_2 = (-1,1)$ and increase his utility
(assuming that $ x > \frac{2\ell}{3}$).
We consider the following two cases regarding the location $\ensuremath{\mathbf{y}}\xspace'_1$ in which
$M$ places $f_1$ on $I'$:
\begin{itemize}
\item $\ensuremath{\mathbf{y}}\xspace'_1 \geq x$. Then, obviously $\ensuremath{\mathbf{y}}\xspace'_1 = 1$, since otherwise the
utility of both agents in $I'$ would decrease and thus $M$ would not achieve the
best possible approximation. So, under $M$ agent $a_2$ on instance $I'$ gets
utility at most $u_2' = 2-2x + \ensuremath{\mathbf{y}}\xspace_2$, while $a_1$ gets utility $u_1' \geq 2 - \ensuremath{\mathbf{y}}\xspace_2 \geq u_2'$
(since $x \geq \ensuremath{\mathbf{y}}\xspace_2$).
Thus $M$ achieves an approximation of at most $\frac{2-2x + \ensuremath{\mathbf{y}}\xspace_2}{2-x}$ on $I'$.
Furthermore, on instance $I$ agent $a_1$ gets utility $u_1 = 2 - \ensuremath{\mathbf{y}}\xspace_2$,
since as explained earlier, $M$ places $f_1$ on 1, while $a_2$
gets utility $u_2 = 2 -x + \ensuremath{\mathbf{y}}\xspace_2 \geq u_1$ when $\ensuremath{\mathbf{y}}\xspace_2 \geq \frac{x}{2}$.
Clearly, if $\ensuremath{\mathbf{y}}\xspace_2 < \frac{x}{2}$, the minimum utility only gets worse. Thus, the
approximation of $M$ on instance $I$ is at most $\frac{4 - 2\ensuremath{\mathbf{y}}\xspace_2}{4-x}$.
Observe that the approximation guarantee of $M$ on $I$ is decreasing with
$\ensuremath{\mathbf{y}}\xspace_2$ while on $I'$ it is increasing with $\ensuremath{\mathbf{y}}\xspace_2$.
So, if we optimize the approximation guarantee and solve for $\ensuremath{\mathbf{y}}\xspace_2$ we get
that $\ensuremath{\mathbf{y}}\xspace_2 =\frac{6x-2x^2}{8-3x}$. Thus, if $\ensuremath{\mathbf{y}}\xspace'_1 \geq x$, the approximation
of $M$ is at most
\begin{align}
\label{eq:case1}
\frac{4-2\cdot \frac{6x-2x^2}{8-3x}}{4 -x} = \frac{4x^2-24x+32}{3x^2-20x+32}
\end{align}
\item
If $M$ on instance $I'$ places $f_1$ on $\ensuremath{\mathbf{y}}\xspace'_1 < x$, then observe that there
is no location $\ensuremath{\mathbf{y}}\xspace'_2$ for $f_2$ such that both agents get utility strictly
larger than 1.
Thus, in this case $M$ achieves approximation at most
\begin{align}
\label{eq:case2}
\frac{1}{2 -x}
\end{align}
\end{itemize}
Observe that the approximation guarantee in~\eqref{eq:case1} increases with $x$
while in~\eqref{eq:case2} it decreases with $x$. So if we optimize on the approximation guarantee of $M$,
we have to solve for $x$ the equation $-4x^3+29x^2-60x+32=0$. The unique
solution in $[0,1]$ is $x=\frac{13-\sqrt{41}}{8}$.
Using this value in~\eqref{eq:case1} and~\eqref{eq:case2}
we get that any deterministic strategy-proof mechanism on instances $I$ and $I'$ achieves approximation less than 0.851.
\end{proof}
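The numeric values in the proof are easy to verify; the following snippet is our own check, not part of the argument, and confirms that $x=\frac{13-\sqrt{41}}{8}$ solves the cubic and that both bounds evaluate to roughly $0.8508 < 0.851$.
\begin{verbatim}
# Verifying the choice of x and the 0.851 bound in the theorem above.
import math

x = (13 - math.sqrt(41)) / 8                      # ~0.8246, the root in [0,1]
print(-4*x**3 + 29*x**2 - 60*x + 32)              # ~0: x solves the cubic
r1 = (4*x**2 - 24*x + 32) / (3*x**2 - 20*x + 32)  # bound from the first case
r2 = 1 / (2 - x)                                  # bound from the second case
print(r1, r2)                                     # both ~0.8508 < 0.851
\end{verbatim}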
The inapproximability bound
can be extended to strategy-proof in expectation mechanisms.
\begin{theorem}\label{alg-rand-inapprox}
There is no $\alpha$-approximate strategy-proof in expectation mechanism for the
2-facility game with $\alpha \geq 0.851$.
\end{theorem}
\begin{proof}
We will again use the instances from Figure~\ref{fig:fig1} to prove the claim, setting $x = \frac{13-\sqrt{41}}{8}$. Recall that the optimal utility on instance $I$ is $\frac{4-x}{2}$ and on $I'$ it is $2-x$.
So, let $M$ be a strategy-proof in expectation mechanism.
Observe that on instance $I$ the mechanism should place the facility $f_1$ on 1
for the same reason as the one mentioned in the proof of Theorem~\ref{thm:2fub}; every
other location for $f_1$ decreases the approximation guarantee of $M$. Suppose now that $M$ places $f_2$ on $y \in [0,1]$
according to the probability distribution $p(y)$.
Without loss of generality we can assume that $p(y) = 0$ for every
$y > x$; this is because the approximation guarantee of $M$ can only improve if
we place the facility on $x$ instead of some $y>x$.
Hence, on instance $I$ under $M$ agent $a_1$ gets utility 1 from $f_1$ and
utility $\int_0^x p(y)(1-y)dy = 1 - \int_0^x p(y)y dy$ from facility $f_2$, so $u_1 = 2-\int_0^x p(y)ydy$ in total.
Similarly, agent $a_2$ gets utility 1 from $f_1$ and utility
$1-x + \int_0^x p(y)ydy$ from facility $f_2$, so $u_2 = 2 - x + \int_0^x p(y)ydy$ in total.
Then, since the minimum utility is at most $u_1$, the approximation guarantee of $M$ on $I$ is at most
\begin{align}
\label{eq:ra1}
\frac{2}{4-x}\cdot\left(2-\int_0^x p(y)ydy\right)
\end{align}
We now consider two cases according to the location in which $M$ places facility $f_1$ on instance $I'$.
If $M$ places $f_1$ on $y'_1 \geq x$, then without loss of generality we can
assume that $f_1$ is placed on 1 since every other location decreases the utility
of both agents.
So suppose that $M$ places $f_1$ on 1 with some probability.
Furthermore, suppose that $M$ places $f_2$ on $y$ according to the probability
distribution $\pi(y)$ when $f_1$ is placed on 1. Observe that we can assume
that $M$ does not place $f_2$ on $y > x$, since the utility of both
agents could increase by placing it on $x$ instead. Thus, on instance $I'$, agent $a_2$ gets utility $1-x$ from facility $f_1$
and utility $1-x + \int_0^x \pi(y)ydy$ from facility $f_2$, so $u_2' = 2 - 2x + \int_0^x \pi(y)ydy$
in total. Similarly, agent $a_1$ gets total utility $u_1' = 2 -\int_0^x \pi(y)ydy > u_2'$. Since $M$ is
strategy-proof it must hold that $\int_0^x \pi(y)ydy \leq \int_0^x p(y)ydy$.
If this was not the case, then agent $a_2$ from instance $I$ could declare
preferences $(-1,1)$ and increase his utility. As a result, the approximation guarantee of
$M$ on $I'$ is at most
\begin{align}
\label{eq:ra2}
\frac{1}{2-x}\cdot\left(2-2x+\int_0^x p(y)ydy\right)
\end{align}
$M$ achieves the best approximation on both instances when
the quantities from~\eqref{eq:ra1} and~\eqref{eq:ra2} are equal. Hence, if we
equalize them and solve for the integral we get that $\int_0^x p(y)ydy =
\frac{6x-2x^2}{8-3x}$ and the approximation guarantee is less than 0.851 on both
instances for the chosen $x$.
If the mechanism places $f_1$ on $y'_1 < x$, then on any location for $f_2$
there will be an agent with utility at most 1 and the approximation guarantee
of the mechanism will be at most $\frac{1}{2-x} < 0.851$.
Thus, in all possible cases the approximation of $M$ is upper bounded by 0.851.
\end{proof}
\section{Deterministic Mechanisms}
In this section, we propose deterministic strategy-proof mechanisms.
An initial approach would be to consider each facility independently and place
it at its optimal location. As we have already proved, this mechanism is strategy-proof
when the locations of the agents are public information.
However, it achieves poor approximation if the agents
want to be away from the facilities. Consider the case with
$n$ agents located on $0, \frac{2\ell}{n}, \frac{3\ell}{n},
\ldots, \frac{(n-1)\ell}{n}, \ell$ and each having preferences $(-1,-1)$.
Observe that the optimal location for one facility is to be placed on
$\frac{\ell}{n}$ since this location maximizes the minimum distance between any
agent and the facility. Thus, both facilities will be placed on the same
location $\frac{\ell}{n}$.
Then the agent located at 0 has utility $\frac{2\ell}{n}$, the minimum over all
the agents. It is not hard to see that an optimal solution is to place
facility $f_1$ on 0 and facility $f_2$ on $\ell$, resulting in a utility of $\ell$
for each agent. Hence, the mechanism that places the facilities independently
at their optimal locations is $\frac{2}{n}$-approximate.
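This calculation can be replayed numerically; the following snippet (ours) checks it for $n=10$ and $\ell=1$.
\begin{verbatim}
# Checking the 2/n example: n agents with preferences (-1,-1),
# both facilities at ell/n versus the optimum (one at 0, one at ell).
n, ell = 10, 1.0
xs = [0.0] + [j * ell / n for j in range(2, n)] + [ell]  # the n locations
mech = min(abs(x - ell/n) + abs(x - ell/n) for x in xs)  # both at ell/n
opt  = min(abs(x - 0.0) + abs(x - ell) for x in xs)      # one at 0, one at ell
print(mech, opt, mech / opt)                             # 0.2, 1.0, 0.2 = 2/n
\end{verbatim}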
The example above provides evidence that a mechanism with good approximation
ratio should not put both facilities on the same location if
there are agents who have preference -1 for both facilities; in the worst case
there is an instance with an agent located at the exact location where the
facilities are placed and with preferences $(-1,-1)$, resulting in a zero approximation.
On the other hand, the facilities should not be placed far away from each other. This
is because, in the worst-case again, an agent might have preference -1 for the
facility that is close to his location and preference 1 for the facility that is
far from him.
Using the intuition gained from the discussion above, we propose a mechanism for
the 2-facility game that combines these ideas: it places the
facilities symmetrically away from the endpoints of the segment, and it is strategy-proof even if the locations of the agents are private information.
Mechanism \texttt{Fixed}\xspace implements this approach. It does not use any information from the
agents; thus, it is de facto strategy-proof.
\begin{definition}[\texttt{Fixed}\xspace Mechanism]
Let $z_f = 1-\frac{\sqrt{2}}{2}$. \texttt{Fixed}\xspace mechanism sets $\ensuremath{\mathbf{y}}\xspace_1=z_f \cdot \ell$
and $\ensuremath{\mathbf{y}}\xspace_2=(1-z_f)\cdot \ell$.
\end{definition}
\begin{theorem}
\label{thm:mech2}
\texttt{Fixed}\xspace is $z_f \simeq 0.293$-approximate.
\end{theorem}
\begin{proof}
Tables~\ref{tab:main-low} and~\ref{tab:main-high} show the utility the agent located on $x_i$ gets under
$\ensuremath{\mathbf{y}}\xspace=(z\cdot\ell, (1-z)\cdot\ell)$ and the corresponding ratio.
Our goal is to find a $z \in [0,1]$ that maximizes the minimum ratio.
Thus, the optimal guarantee for \texttt{Fixed}\xspace is achieved
when $z = \frac{1 - 2z}{2 - 2z}$. If we solve for
$z$, the feasible solution is $z_f = 1 - \frac{\sqrt{2}}{2}$ and the approximation
guarantee follows.
Finally, observe that if the number of facilities to be placed is at least two, then
$\max_\ensuremath{\mathbf{y}}\xspace \min_i u_i(x_i, \ensuremath{t_{i}}\xspace, \ensuremath{\mathbf{y}}\xspace) \geq
\max_\ensuremath{\mathbf{y}}\xspace \min_i \frac{u_i(x_i, \ensuremath{t_{i}}\xspace, \ensuremath{\mathbf{y}}\xspace)}{u^*_i(x_i, \ensuremath{t_{i}}\xspace)}$, since
$u^*_i(x_i, \ensuremath{t_{i}}\xspace) \geq \ell$. Thus, \texttt{Fixed}\xspace can be used for both \textsc{Egalitarian}\xspace and \textsc{Happiness}\xspace
objectives and since it does not use any information from
the agents, it possesses all the desirable properties like group strategy
proofness and false name proofness.
\begin{table}[h!]
\begin{minipage}{0.45\textwidth}
\centering
\begin{tabular}{|c|c|c|l|}
\hline
$t_i$ & $u_i(x_i,\ensuremath{t_{i}}\xspace,\ensuremath{\mathbf{y}}\xspace)$ & $u^*_i(x_i,\ensuremath{t_{i}}\xspace)$ & Ratio \\ \hline
1, 1 & $\ell + 2x_i$ & $2\ell$ & $\geq 1/2$ \\ \hline
-1, 1 & $2z\cdot\ell$ & $2\ell - x_i$ & $\geq z$ \\ \hline
1, -1 & $(2 - 2z)\cdot\ell$ & $2\ell - x_i$ & $\geq 1/2$ \\ \hline
-1, -1 & $\ell - 2x_i$ & $2\ell -2x_i$ & $\geq \frac{1 - 2z}{2 - 2z}$\\
\hline
\end{tabular}
\caption{Case analysis when $x_i \leq z\cdot\ell$ or $x_i \geq (1-z)\cdot\ell$.}
\label{tab:main-low}
\end{minipage}
\hfill
\begin{minipage}{0.45\textwidth}
\centering
\begin{tabular}{|c|c|c|l|}
\hline
$t_i$ & $u_i(x_i,\ensuremath{t_{i}}\xspace,\ensuremath{\mathbf{y}}\xspace)$ & $u^*_i(x_i,\ensuremath{t_{i}}\xspace)$ & Ratio \\ \hline
1, 1 & $(1 + 2z)\cdot\ell$ & $2\ell$ & $\geq 1/2$ \\ \hline
-1, 1 & $2x_i$ & $2\ell - x_i$ & $\geq \frac{2z}{2-z}$ \\ \hline
1, -1 & $2\ell - 2x_i$ & $2\ell - x_i$ & $\geq 2/3$ \\ \hline
-1, -1 & $(1 - 2z)\cdot\ell$ & $2\ell - 2x_i$ & $\geq \frac{1 - 2z}{2 - 2z}$ \\ \hline
\end{tabular}
\caption{Case analysis when $z\cdot\ell<x_i<(1-z)\cdot\ell$.}
\label{tab:main-high}
\end{minipage}
\end{table}
\end{proof}
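The defining equation for $z_f$ can be verified directly; the snippet below (our own check, using the normalized ratios from the tables) confirms that $z_f = 1-\frac{\sqrt{2}}{2}$ equalizes the two binding ratios.
\begin{verbatim}
# z_f solves z = (1 - 2z)/(2 - 2z), i.e. 2z^2 - 4z + 1 = 0 in [0, 1/2].
import math
z = 1 - math.sqrt(2) / 2
print(z)                           # ~0.2929, the approximation guarantee
print(z - (1 - 2*z) / (2 - 2*z))   # ~0: the two binding ratios coincide
\end{verbatim}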
Theorem~\ref{thm:mech2} shows the sharp contrast between 1-facility and 2-facility
games where both locations and preferences are private. Recall
that~\cite{FJ15} proved that for 1-facility games there is no deterministic
strategy-proof mechanism with bounded approximation guarantee.
Observe furthermore that \texttt{Fixed}\xspace does not require any information from the agents.
Next, we prove that it is optimal when no communication is allowed.
\begin{theorem}
\label{thm:cc-lb}
\texttt{Fixed}\xspace is the optimal deterministic mechanism when no communication is allowed.
\end{theorem}
\begin{proof}
Let $M$ be any deterministic mechanism that places the facilities with no communication.
Since $M$ is deterministic, it places them on the same locations for any instance.
So, let $\ensuremath{\mathbf{y}}\xspace_1\cdot \ell$ and $\ensuremath{\mathbf{y}}\xspace_2\cdot \ell$ be the locations of the first and the second
facility respectively. Without loss of generality assume that $0\leq \ensuremath{\mathbf{y}}\xspace_1\leq \ensuremath{\mathbf{y}}\xspace_2\leq 1$.
We will prove our claim by contradiction. So, for the sake of contradiction assume that the
approximation ratio of $M$ is strictly better than $z=(1-\frac{\sqrt{2}}{2})$.
Without loss of generality we assume that $\ensuremath{\mathbf{y}}\xspace_1 \leq \frac{1}{2}$. Consider the following two instances.
On the first instance there is only one agent on $\ensuremath{\mathbf{y}}\xspace_1\cdot\ell$ with preferences $(-1,-1)$.
The utility of the agent under $M$ is $(\ensuremath{\mathbf{y}}\xspace_2-\ensuremath{\mathbf{y}}\xspace_1)\cdot \ell$. The optimal solution places both
facilities on $\ell$ and the agent gets utility $(2-2\ensuremath{\mathbf{y}}\xspace_1)\cdot \ell$. So, the approximation ratio
of $M$ is $\frac{\ensuremath{\mathbf{y}}\xspace_2-\ensuremath{\mathbf{y}}\xspace_1}{2-2\ensuremath{\mathbf{y}}\xspace_1}$. Since the approximation of $M$ is strictly greater than
$z$, we get that
\begin{equation}
\label{eq:cc-lb1}
\ensuremath{\mathbf{y}}\xspace_1 < \frac{\ensuremath{\mathbf{y}}\xspace_2-2z}{1-2z}
\end{equation}
Now, consider the instance where there is only one agent on 0 with preferences $(-1,1)$.
Under $M$, the agent gets utility $(1+\ensuremath{\mathbf{y}}\xspace_1-\ensuremath{\mathbf{y}}\xspace_2)\cdot\ell$. The optimal solution for this instance
places the first facility on $\ell$, the second one on 0, and the agent gets utility $2\ell$.
Hence, the approximation guarantee of $M$ on this instance is $\frac{1+\ensuremath{\mathbf{y}}\xspace_1-\ensuremath{\mathbf{y}}\xspace_2}{2}$.
Again, since we assume that the approximation is strictly greater than $z$, we get that
\begin{equation}
\label{eq:cc-lb2}
\ensuremath{\mathbf{y}}\xspace_1 > 2z+\ensuremath{\mathbf{y}}\xspace_2-1
\end{equation}
The combination of Equations~\eqref{eq:cc-lb1} and~\eqref{eq:cc-lb2} dictates that
$\ensuremath{\mathbf{y}}\xspace_2 > 3-\frac{1}{2z}-2z = 1-z$.
Similarly using another two instances, we can prove that $\ensuremath{\mathbf{y}}\xspace_1< z$. More
specifically, we use the instance where there is only one agent on $\ensuremath{\mathbf{y}}\xspace_2\cdot \ell$ with preferences $(-1,-1)$ and the instance where there is only one agent on $\ell$ with preferences $(1,-1)$. Finally, consider again the instance where there is only one agent on 0 with preferences $(-1,1)$. Recall that the approximation guarantee of the mechanism on this instance is $\frac{1+\ensuremath{\mathbf{y}}\xspace_1-\ensuremath{\mathbf{y}}\xspace_2}{2}$.
So, since $\ensuremath{\mathbf{y}}\xspace_1<z$ and $\ensuremath{\mathbf{y}}\xspace_2>1-z$, we get that the approximation guarantee is strictly smaller than $z$
which is a contradiction. Our claim follows.
\end{proof}
\subsection{\ensuremath{\texttt{Fixed}^+}\xspace mechanism}
In order to describe \ensuremath{\texttt{Fixed}^+}\xspace, we need to introduce the following events:
\begin{itemize}
\item $L_j$: Every agent wants facility $j$ below $\ell/2$. Formally, for every
agent $i$ with $x_i \leq \frac{\ell}{2}$ it holds that $t_{ij} \in \{0,1\}$ and
for every agent $i$ with $x_i > \frac{\ell}{2}$ it holds that $t_{ij} \in \{0,-1\}$.
\item $H_j$: Every agent wants facility $j$ above $\ell/2$. Formally, for every
agent $i$ with $x_i \leq \frac{\ell}{2}$ it holds that $t_{ij} \in \{0,-1\}$ and
for every agent $i$ with $x_i > \frac{\ell}{2}$ it holds that $t_{ij} \in \{0,1\}$.
\end{itemize}
\begin{tcolorbox}[title=\ensuremath{\texttt{Fixed}^+}\xspace mechanism]
\textbf{Input:} Locations $x_1, \ldots, x_n$ and preferences $p_1, \ldots, p_n$.\\
\textbf{Output:} Locations $\ensuremath{\mathbf{y}}\xspace_1$ and $\ensuremath{\mathbf{y}}\xspace_2$.\\
Set $z_d= \frac{7}{22} \approx 0.318$.
\begin{enumerate}
\item \label{step:one}
If events $L_1$ and $L_2$ occur, then set $\ensuremath{\mathbf{y}}\xspace_1=\ensuremath{\mathbf{y}}\xspace_2=z_d\cdot \ell$.
\item \label{step:two}
Else if events $L_1$ and $H_2$ occur, then set $\ensuremath{\mathbf{y}}\xspace_1=z_d \cdot \ell$ and $\ensuremath{\mathbf{y}}\xspace_2=(1-z_d)\cdot \ell$.
\item \label{step:three}
Else if events $H_1$ and $H_2$ occur, then set $\ensuremath{\mathbf{y}}\xspace_1=\ensuremath{\mathbf{y}}\xspace_2=(1-z_d)\cdot \ell$.
\item \label{step:four}
Else if events $H_1$ and $L_2$ occur, then set $\ensuremath{\mathbf{y}}\xspace_1=(1-z_d)\cdot \ell$ and $\ensuremath{\mathbf{y}}\xspace_2=z_d\cdot \ell$.
\item \label{step:five}
Else set $\ensuremath{\mathbf{y}}\xspace_1=z_d \cdot \ell$ and $\ensuremath{\mathbf{y}}\xspace_2=(1-z_d)\cdot \ell$.
\end{enumerate}
\end{tcolorbox}
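For concreteness, the following Python sketch (ours, under the assumption that agents are given as pairs of a location and a preference vector) implements the event tests for $L_j$ and $H_j$ and the five steps of \ensuremath{\texttt{Fixed}^+}\xspace.
\begin{verbatim}
# Sketch of the Fixed+ mechanism. agents: list of (x_i, (t_i1, t_i2)).
Z_D = 7 / 22

def wants_below(x, t_ij, ell):   # agent consistent with event L_j
    return t_ij in (0, 1) if x <= ell / 2 else t_ij in (0, -1)

def wants_above(x, t_ij, ell):   # agent consistent with event H_j
    return t_ij in (0, -1) if x <= ell / 2 else t_ij in (0, 1)

def fixed_plus(agents, ell):
    L = [all(wants_below(x, t[j], ell) for x, t in agents) for j in (0, 1)]
    H = [all(wants_above(x, t[j], ell) for x, t in agents) for j in (0, 1)]
    lo, hi = Z_D * ell, (1 - Z_D) * ell
    if L[0] and L[1]:
        return lo, lo            # Step 1
    if L[0] and H[1]:
        return lo, hi            # Step 2
    if H[0] and H[1]:
        return hi, hi            # Step 3
    if H[0] and L[1]:
        return hi, lo            # Step 4
    return lo, hi                # Step 5

print(fixed_plus([(0.1, (1, 1)), (0.9, (-1, -1))], 1.0))  # Step 1 applies
\end{verbatim}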
\begin{lemma}
\label{lem:fixedp-sp}
\ensuremath{\texttt{Fixed}^+}\xspace is strategy-proof even when both locations and preferences are private.
\end{lemma}
\begin{proof}
We will prove that there is no deviation that can yield strictly higher utility for any agent $i \in N$.
Fix an arbitrary declaration for all the agents except agent $i$. For every $j \in \{1,2\}$, let
$\ensuremath{\mathbf{y}}\xspace_j$, respectively $\ensuremath{\mathbf{y}}\xspace'_j$, denote the location where \ensuremath{\texttt{Fixed}^+}\xspace places facility $j$ when
agent $i$ declares truthfully, respectively non-truthfully, his preference for facility $j$.
Let us define $w_{ij}: = |\ensuremath{\mathbf{y}}\xspace'_j - x_i| - |\ensuremath{\mathbf{y}}\xspace_j - x_i|$. Then, observe that the difference
$\Delta$ between the utility that agent $i$ gets by reporting truthfully and misreporting, can be written
as $\Delta = t_{i1}\cdot w_{i1} + t_{i2}\cdot w_{i2}$. Hence, there exists a profitable deviation for
agent $i$ if and only if there is a declaration such that $\Delta < 0$. To prove that such declaration
does not exist, we will use Tables~\ref{tab:mech-cases} and~\ref{tab:sign-of-w-cases}, assuming first that agent $i$ is located at $x_i \leq \frac{\ell}{2}$. The tables give a concise
representation of all cases that bypasses a repetitive case analysis.
Table~\ref{tab:mech-cases} presents possible preferences of agent $i$ when the mechanism places the
facilities through Step $k$ and it is interpreted as follows. If $t_{ij}$ at Step $k$ can be 0 or 1, then
we write ``+'' on the corresponding cell of the table; if $t_{ij}$ at Step $k$ can be either 0 or $-1$,
then we write ``-'' on the corresponding cell. For example, if the mechanism places the facilities through
Step 3, then for agent $i$ it {\em must} hold that $t_{i1} \in \{-1,0\}$ {\em and} $t_{i2} \in \{-1,0\}$.
In Table~\ref{tab:sign-of-w-cases}, the $(k,l)$th cell shows the signs of $w_{i1}$ and $w_{i2}$ when the
outcome of the mechanism changes from Step $k$ to Step $l$, where Step $k$ corresponds to the outcome when
agent $i$ truthfully declares his preferences and Step $l$ corresponds to the outcome of the mechanism when
agent $i$ lies.
\hl{Observe that we do {\em not} care how the agent manipulates the outcome. Hence, the agent can misreport his location, his preference, or both.}
So, the $(2,3)$th cell of Table~\ref{tab:sign-of-w-cases} corresponds to the case where
\ensuremath{\texttt{Fixed}^+}\xspace under the true declaration would place the facilities through Step 2, but under the misreport of
agent $i$ would place them through Step 3. In addition, the signs $(+,0)$ mean that
$w_{i1} > 0$ and $w_{i2}=0$. So, using this information
alongside the information from the agent's preferences from the third row of Table~\ref{tab:mech-cases}, we
can deduce that $ \Delta \geq 0$ under this change. If we apply the same reasoning, we will see that
$\Delta \geq 0$ for all possible cases when $x_i \leq \frac{\ell}{2}$. Hence, there is no profitable deviation
for agent $i$, when $x_i \leq \frac{\ell}{2}$.
\begin{table}[h!]
\begin{minipage}{0.35\textwidth}
\centering
\begin{tabular}{|c||c|c|}
\hline
& $t_{i1}$ & $t_{i2}$\\ \hline \hline
Step 1 & + & + \\ \hline
Step 2 & + & - \\ \hline
Step 3 & - & - \\ \hline
Step 4 & - & + \\ \hline
Step 5 & + & - \\ \hline
\end{tabular}
\caption{Preferences of agent $i$ for every step of \ensuremath{\texttt{Fixed}^+}\xspace, when $x_i \leq \frac{\ell}{2}$. \label{tab:mech-cases}}
\end{minipage}
\hfill
\begin{minipage}{0.6\textwidth}
\centering
\begin{tabular}{|c||c|c|c|c|c|}
\hline
\backslashbox{True}{Lie} & Step 1 & Step 2 & Step 3 & Step 4 & Step 5\\ \hline \hline
Step 1 & (0,0) & (0,+) & (+,+) & (+,0) & (0,+)\\ \hline
Step 2 & (0,-) & (0,0) & (+,0) & (+,-) & (0,0)\\ \hline
Step 3 & (-,-) & (-,0) & (0,0) & (0,-) & (-,0)\\ \hline
Step 4 & (-,0) & (-,+) & (0,+) & (0,0) & (-,+)\\ \hline
Step 5 & (0,-) & (0,0) & (+,0) & (+,-) & (0,0)\\ \hline
\end{tabular}
\caption{Signs for $(w_{i1}, w_{i2})$ when $x_i \leq \frac{\ell}{2}$. \label{tab:sign-of-w-cases}}
\end{minipage}
\end{table}
Similarly when $x_i > \frac{\ell}{2}$, we can use Tables~\ref{tab:sign-of-preference-larger}
and~\ref{tab:sign-of-w-larger} and see again that $\Delta\geq0$ in every case, thus the lemma follows.
Again, for Table~\ref{tab:sign-of-w-larger} \hl{we do not make any assumptions on how the agent manipulated the mechanism, hence we allow him to misreport both his location and his preferences.}
\begin{table}[h!]
\begin{minipage}{0.35 \textwidth}
\centering
\begin{tabular}{|c||c|c|}
\hline
& $t_{i1}$ & $t_{i2}$\\ \hline \hline
Step 1 & - & - \\ \hline
Step 2 & - & + \\ \hline
Step 3 & + & + \\ \hline
Step 4 & + & - \\ \hline
Step 5 & - & + \\ \hline
\end{tabular}
\caption{Preferences of agent $i$ for every step of \ensuremath{\texttt{Fixed}^+}\xspace, when $x_i > \frac{\ell}{2}$. \label{tab:sign-of-preference-larger}}
\end{minipage}
\hfill
\begin{minipage}{0.6\textwidth}
\centering
\begin{tabular}{|c||c|c|c|c|c|}
\hline
\backslashbox{True}{Lie} & Step 1 & Step 2 & Step 3 & Step 4 & Step 5\\ \hline \hline
Step 1 & (0,0) & (0,+) & (+,+) & (+,0) & (0,+)\\ \hline
Step 2 & (0,-) & (0,0) & (+,0) & (+,-) & (0,0)\\ \hline
Step 3 & (-,-) & (-,0) & (0,0) & (0,-) & (-,0)\\ \hline
Step 4 & (-,0) & (-,+) & (0,+) & (0,0) & (-,+)\\ \hline
Step 5 & (0,-) & (0,0) & (+,0) & (+,-) & (0,0)\\ \hline
\end{tabular}
\caption{Signs for $(w_{i1}, w_{i2})$ when $x_i > \frac{\ell}{2}$. \label{tab:sign-of-w-larger}}
\end{minipage}
\end{table}
\end{proof}
\begin{theorem}
\label{thm:fixedp-appx}
\ensuremath{\texttt{Fixed}^+}\xspace is $1 - 2z_d = \frac{4}{11} \approx 0.364$-approximate.
\end{theorem}
\begin{proof}
In order to prove our claim, we will focus on the agent that gets the minimum utility
under \ensuremath{\texttt{Fixed}^+}\xspace. We will prove that for every possible
combination of his preferences and his location the agent gets at least a $1-2z_d$
fraction of the utility he would get under an optimal solution. So, let $i$ be an agent
that gets minimum utility under \ensuremath{\texttt{Fixed}^+}\xspace.
Without loss of generality, we will assume that he is located below $\frac{\ell}{2}$.
Observe that for the preference combinations $(0,1), (1,0), (0,-1), (-1,0)$ the agent
gets utility at least $\ell$, while the maximum utility he can get is trivially bounded by $2\ell$.
Hence, if the agent's preferences are any of these combinations, then under any location for the
facilities the agent gets at least half of his maximum utility and the mechanism is at least
$\frac{1}{2}$-approximate.
\begin{itemize}
\item $p_i = (1,1)$. Observe that if there exists an agent with preferences $(1,1)$, then
\ensuremath{\texttt{Fixed}^+}\xspace will locate the facilities either through Step~\ref{step:one},
or through Step~\ref{step:five}.
Observe that under any of these steps, agent $i$ gets utility at least $\ell$,
while the maximum utility he can get is bounded by $2\ell$. So, the mechanism is
$\frac{1}{2}$-approximate in any of these steps.
\item $p_i = (1,-1)$. When there exists an agent below $\frac{\ell}{2}$ with preferences $(1,-1)$,
\ensuremath{\texttt{Fixed}^+}\xspace will place the facilities either through Step~\ref{step:two},
or through Step~\ref{step:five}. Observe that both steps place the facilities in the same way.
If we check Tables~\ref{tab:main-low} and~\ref{tab:main-high}
we can see that in any case the ratio of the mechanism is greater than $\frac{1}{2}$.
\item $p_i = (-1,1)$. When there exists an agent below $\frac{\ell}{2}$ with preferences
$(-1,1)$, then \ensuremath{\texttt{Fixed}^+}\xspace will place the facilities either through Step~\ref{step:four}, or
through Step~\ref{step:five}. In the worst case scenario when Step~\ref{step:four} is used,
agent $i$ is located at $x_i = \frac{\ell}{2} - \epsilon$, for some $\epsilon > 0$. In this
case his utility is $u_i = \ell + 2 \epsilon$, so the ratio of the mechanism is greater than
$\frac{1}{2}$.
The utility of $i$ from \ensuremath{\texttt{Fixed}^+}\xspace in this step is $u_i = 2z_d \ell$, when $x_i \leq z_d\cdot\ell$, and $u_i = 2x_i$, when $z_d\cdot\ell < x_i \leq \frac{\ell}{2}$. If Step~\ref{step:five} is chosen, there should exist an agent $i_0$ with $x_{i_0} \geq \frac{\ell}{2}$ and $t_{i_02} = 1$ or with $x_{i_0} < \frac{\ell}{2}$ and $t_{i_02} = -1$.
The utility of agent $i$ in the optimal solution is maximized when agent $i_0$ is located at $x_{i_0} = \frac{\ell}{2}$ with preferences $t_{i_0} = (0,1)$. In this case the optimal solution places $f_1$ on $\ensuremath{\mathbf{y}}\xspace_1 = \ell$ and $f_2$ on $\ensuremath{\mathbf{y}}\xspace_2 = \frac{\ell}{4}$. The utility of $i$ under the optimal solution is then $u_i = \frac{7 \ell}{4}$, and the approximation in this step is
\begin{equation}\label{approximation_ratio1}
\frac{8z_d}{7}
\end{equation}
\item $p_i = (-1,-1)$. When there exists an agent below $\frac{\ell}{2}$ with preferences $(-1,-1)$, then
\ensuremath{\texttt{Fixed}^+}\xspace will place the facilities either through Step~\ref{step:three}, or
through Step~\ref{step:five}. When Step~\ref{step:three} is used by the mechanism, agent $i$
gets utility $2\left((1-z_d)\ell - x_i\right)$ while the optimal value is trivially bounded by $2(\ell - x_i)$. Hence the approximation guarantee from this step for $x_i \leq \frac{\ell}{2}$ is
\begin{equation}\label{approximation_ratio2}
\frac{(1-z_d)\ell - x_i}{\ell - x_i} \geq 1-2z_d
\end{equation}
When Step~\ref{step:five} is used, the worst case instance for the
mechanism is when agent $i$ is located on $z_d\cdot\ell$ and there is another agent on $\frac{\ell}{2}-\epsilon$ with preferences $(1,0)$. The optimal solution places $f_1$ on $\ensuremath{\mathbf{y}}\xspace_1 = \frac{(3+2z_d)\ell-2\epsilon}{4}$ and $f_2$ on $\ensuremath{\mathbf{y}}\xspace_2 = z_d\cdot\ell$.
Then, the utility of agent $i$ in the optimal solution is $\frac{(7+2z_d)\ell-2\epsilon}{4}$ while
the utility he gets under \ensuremath{\texttt{Fixed}^+}\xspace is $(1-2z_d)\cdot\ell$. Hence, the
approximation ratio of \ensuremath{\texttt{Fixed}^+}\xspace is
\begin{equation}\label{approximation_ratio3}
\frac{4 - 8z_d}{7 + 2z_d}
\end{equation}
\end{itemize}
We first observe from \eqref{approximation_ratio2} and \eqref{approximation_ratio3} that $1-2z_d \leq \frac{4-8z_d}{7+2z_d}$. Hence, the value of $z_d$ for which the approximation guarantee is maximized can be found if we equalize \eqref{approximation_ratio1} and \eqref{approximation_ratio2}: $\frac{8z_d}{7} = 1-2z_d \Rightarrow z_d = \frac{7}{22} \approx 0.318$. Then the approximation ratio of the mechanism is $\frac{4}{11} \approx 0.364$.
\end{proof}
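The choice of $z_d$ can be checked numerically (our own verification, not part of the proof):
\begin{verbatim}
# z_d equalizes the two binding bounds: 8z/7 = 1 - 2z  =>  z = 7/22.
z = 7 / 22
print(8 * z / 7)    # ~0.3636
print(1 - 2 * z)    # ~0.3636 = 4/11, the guarantee of Fixed+
\end{verbatim}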
Observe that since \ensuremath{\texttt{Fixed}^+}\xspace asks for the exact location of every agent, it requires
arbitrarily large communication; this happens for example when the location $x_i$ of an agent $i$
is irrational. However, a closer look shows that this is not necessary. An interesting question is whether there exists
a deterministic mechanism that achieves better approximation when every agent communicates $O(1)$ bits.
\begin{theorem}
\label{thm:fixedp-cc}
\hl{The communication complexity of} \ensuremath{\texttt{Fixed}^+}\xspace \hl{is 5 bits per agent.}
\end{theorem}
\begin{proof}
\hl{Observe that} \ensuremath{\texttt{Fixed}^+}\xspace \hl{can compute its output by only computing which of the events $L_1, L_2, H_1, H_2$ occur. This can be done with only 5 bits per agent. So, agent $i$ will send the bit string $(l,f_{11},f_{12}, f_{21}, f_{22})$, where $l$ will contain information about the location $x_i$; $f_{11}$ and $f_{12}$ will contain information about $t_{i1}$; and $f_{21}$ and $f_{22}$ will contain information about $t_{i2}$. The agent sends the bits to the mechanism under the following rules.}
\begin{itemize}
\item \hl{If $x_i \leq \frac{\ell}{2}$, then $l=0$; else $l=1$.}
\item \hl{If for some $j \in \{1,2\}$ it holds $t_{ij} = 0$, then $f_{j1} = f_{j2} = 0$.}
\item \hl{If for some $j \in \{1,2\}$ it holds $t_{ij} = 1$, then $f_{j1} = 0$ and $f_{j2} = 1$.}
\item \hl{If for some $j \in \{1,2\}$ it holds $t_{ij} = -1$, then $f_{j1} = 1$ and $f_{j2} = 1$.}
\end{itemize}
\hl{It is not hard to see that this information suffices for the mechanism to correctly compute which events occur and thus output the correct locations for the facilities.}
\end{proof}
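The encoding from the proof is easy to write down; the following sketch (ours) encodes and decodes the 5-bit report of an agent.
\begin{verbatim}
# Sketch of the 5-bit report used in the proof above.
PREF_BITS = {0: (0, 0), 1: (0, 1), -1: (1, 1)}  # t_ij -> (f_j1, f_j2)
BITS_PREF = {v: k for k, v in PREF_BITS.items()}

def encode(x, t, ell):
    l = 0 if x <= ell / 2 else 1                # which half of [0, ell]
    return (l,) + PREF_BITS[t[0]] + PREF_BITS[t[1]]

def decode(bits):
    l, f11, f12, f21, f22 = bits
    side = "low" if l == 0 else "high"
    return side, BITS_PREF[(f11, f12)], BITS_PREF[(f21, f22)]

b = encode(0.3, (-1, 1), 1.0)
print(b, decode(b))   # (0, 1, 1, 0, 1) -> ('low', -1, 1)
\end{verbatim}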
\section{Randomized mechanisms}
In this section, we propose two randomized mechanisms, \texttt{Random}\xspace and \ensuremath{\texttt{Random}^+}\xspace that
achieve constant approximation ratio and are universally strategy-proof and strategy-proof in expectation, respectively, even when both locations and preferences are private information. \texttt{Random}\xspace requires zero communication and
\ensuremath{\texttt{Random}^+}\xspace can be implemented using five bits per agent.
\begin{definition}[\texttt{Random}\xspace mechanism]
\texttt{Random}\xspace sets $\ensuremath{\mathbf{y}}\xspace_1=\ensuremath{\mathbf{y}}\xspace_2=0$ with probability $\frac{1}{2}$ and $\ensuremath{\mathbf{y}}\xspace_1=\ensuremath{\mathbf{y}}\xspace_2=\ell$
with probability $\frac{1}{2}$.
\end{definition}
\begin{theorem}
\label{thm:rand}
\texttt{Random}\xspace is universally strategy-proof and $\frac{1}{2}$-approximate.
\end{theorem}
\begin{proof}
Firstly, it is easy to see that the mechanism is universally strategy-proof
since in each case, the mechanism chooses a fixed location, which is
strategy-proof.
We will prove that every agent gets utility at least $\frac{\ell}{2}$ in
expectation from every facility.
Suppose that agent $i \in N$ is located on $x_i$ and has preferences \ensuremath{t_{i}}\xspace.
Let us study the expected utility that the agent gets from facility $j$.
If $t_{ij}=1$, then the agent's utility is $\ell - x_i$ when $\ensuremath{\mathbf{y}}\xspace_j=0$ and
$x_i$ when $\ensuremath{\mathbf{y}}\xspace_j=\ell$.
If $t_{ij}=-1$, then the agent gets utility $x_i$ if $\ensuremath{\mathbf{y}}\xspace_j=0$ and $\ell-x_i$
if $\ensuremath{\mathbf{y}}\xspace_j=\ell$. If $t_{ij}=0$, then the agent gets utility $\ell$ irrespective
of $\ensuremath{\mathbf{y}}\xspace_j$. As a result, the agent gets utility at least
$\frac{\ell}{2}$ in expectation from each facility, so in total the agent gets expected
utility at least $\ell$. Since the maximum utility is trivially bounded by $2 \ell$,
the theorem follows.
\end{proof}
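The per-facility bound in the proof is easy to check numerically; the following Python snippet (ours, with $\ell$ normalized to 1) evaluates the expected utility for every preference value over a grid of locations.
\begin{verbatim}
# Numerical spot-check (ours) of the lower bound in the proof: under Random,
# each agent's expected utility from a single facility is at least ell/2.
def expected_utility(x_i: float, t_ij: int, ell: float = 1.0) -> float:
    """Expected utility from one facility placed at 0 or ell, each w.p. 1/2."""
    if t_ij == 0:
        return ell                               # indifferent agents get ell
    u_at_0 = (ell - x_i) if t_ij == 1 else x_i   # 'close' vs. 'away' preference
    u_at_l = x_i if t_ij == 1 else (ell - x_i)
    return 0.5 * u_at_0 + 0.5 * u_at_l

for t in (-1, 0, 1):
    assert all(expected_utility(x / 100, t) >= 0.5 - 1e-9 for x in range(101))
\end{verbatim}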
Although \texttt{Random}\xspace seems naive, it achieves the best approximation guarantee so far,
while using zero communication.
\begin{theorem}
\texttt{Random}\xspace is the optimal mechanism when no communication is allowed.
\end{theorem}
\begin{proof}
For the sake of contradiction, suppose there is a mechanism $M$ achieving an approximation ratio strictly greater than $\frac{1}{2}$, and let $p(y_1,y_2)$ be the joint probability density over the facility locations $\ensuremath{\mathbf{y}}\xspace_1$ and $\ensuremath{\mathbf{y}}\xspace_2$ (we normalize $\ell=1$).
Consider an instance $I_1$ with one agent $a_1$ located at 0 with preferences $(1,1)$. His utility is then
\begin{align*}
u_1 &= \int_0^1 \int_0^1 p(\ensuremath{\mathbf{y}}\xspace_1, \ensuremath{\mathbf{y}}\xspace_2)\left((1-\ensuremath{\mathbf{y}}\xspace_1) + (1-\ensuremath{\mathbf{y}}\xspace_2)\right)d\ensuremath{\mathbf{y}}\xspace_1\,d\ensuremath{\mathbf{y}}\xspace_2\\
&= 2 \int_0^1 \int_0^1 p(\ensuremath{\mathbf{y}}\xspace_1, \ensuremath{\mathbf{y}}\xspace_2)\,d\ensuremath{\mathbf{y}}\xspace_1\,d\ensuremath{\mathbf{y}}\xspace_2 - \int_0^1 \int_0^1 p(\ensuremath{\mathbf{y}}\xspace_1, \ensuremath{\mathbf{y}}\xspace_2)(\ensuremath{\mathbf{y}}\xspace_1+\ensuremath{\mathbf{y}}\xspace_2)\,d\ensuremath{\mathbf{y}}\xspace_1\,d\ensuremath{\mathbf{y}}\xspace_2 = 2-\omega,
\end{align*}
where $\omega=\int_0^1 \int_0^1 p(\ensuremath{\mathbf{y}}\xspace_1, \ensuremath{\mathbf{y}}\xspace_2)(\ensuremath{\mathbf{y}}\xspace_1+\ensuremath{\mathbf{y}}\xspace_2)\,d\ensuremath{\mathbf{y}}\xspace_1\,d\ensuremath{\mathbf{y}}\xspace_2$. In $I_1$ the optimal solution places both facilities at 0, yielding utility $u_1^*=2$. The approximation ratio of $M$ on $I_1$ is then
\begin{equation}
\label{I1_approx}
1-\frac{1}{2}\cdot \omega
\end{equation}
Similarly, consider another instance $I_2$ with one agent $a_2$ located at 0 with preferences $(-1,-1)$. The utility of $a_2$ is then $u_2=\int_0^1 \int_0^1 p(\ensuremath{\mathbf{y}}\xspace_1, \ensuremath{\mathbf{y}}\xspace_2)(\ensuremath{\mathbf{y}}\xspace_1+\ensuremath{\mathbf{y}}\xspace_2)\,d\ensuremath{\mathbf{y}}\xspace_1\,d\ensuremath{\mathbf{y}}\xspace_2=\omega$. In $I_2$ the optimal solution places both facilities at 1, yielding utility $u_2^*=2$. Thus the approximation ratio of $M$ on $I_2$ is
\begin{equation}
\label{I2_approx}
\frac{1}{2}\cdot \omega
\end{equation}
Since the approximation ratio of $M$ is strictly greater than $\frac{1}{2}$, \eqref{I1_approx} implies $\omega<1$, while \eqref{I2_approx} implies $\omega>1$, a contradiction.
\end{proof}
We note that \texttt{Random}\xspace can be extended to
$k$-facility games, for any $k$, and still achieve a $\frac{1}{2}$ approximation.
Furthermore, we use the intuition obtained from it to construct \ensuremath{\texttt{Random}^+}\xspace. The first four steps of \ensuremath{\texttt{Random}^+}\xspace are the same as in \ensuremath{\texttt{Fixed}^+}\xspace,
so again we will use the events $L_j$ and $H_j$ introduced in the previous section.
\begin{tcolorbox}[title= \ensuremath{\texttt{Random}^+}\xspace mechanism]
\textbf{Input:} Locations $x_1, \ldots, x_n$ and preferences $p_1, \ldots, p_n$.\\
\textbf{Output:} Locations $y_1$ and $y_2$.\\
Set $z_r= \frac{13-\sqrt{161}}{8}$.
\begin{enumerate}
\item \label{step:one}
If events $L_1$ and $L_2$ occur, then set $y_1=y_2=z_r \cdot \ell$.
\item \label{step:two}
Else if events $L_1$ and $H_2$ occur, then set $y_1=z_r \cdot \ell$ and $y_2=(1-z_r) \cdot \ell$.
\item \label{step:three}
Else if events $H_1$ and $H_2$ occur, then set $y_1=y_2=(1-z_r) \cdot \ell$.
\item \label{step:four}
Else if events $H_1$ and $L_2$ occur, then set $y_1=(1-z_r) \cdot \ell$ and $y_2=z_r \cdot \ell$.
\item \label{step:five}
Else with probability $\frac{1}{2}$ set $y_1=y_2=z_r \cdot \ell$ and with probability $\frac{1}{2}$ set $y_1=y_2=(1-z_r) \cdot \ell$.
\end{enumerate}
\end{tcolorbox}
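To make the dispatch logic concrete, the following Python sketch (ours) implements the five steps, treating the event indicators $L_1, L_2, H_1, H_2$ as given booleans; how they are derived from the agents' reports follows the definitions of the previous section and is abstracted away here.
\begin{verbatim}
import random

Z_R = (13 - 161 ** 0.5) / 8  # ~0.0389

def random_plus(L1: bool, L2: bool, H1: bool, H2: bool, ell: float = 1.0):
    """Sketch of the Random+ dispatch over the events L_1, L_2, H_1, H_2."""
    lo, hi = Z_R * ell, (1 - Z_R) * ell
    if L1 and L2:                    # Step 1
        return lo, lo
    if L1 and H2:                    # Step 2
        return lo, hi
    if H1 and H2:                    # Step 3
        return hi, hi
    if H1 and L2:                    # Step 4
        return hi, lo
    # Step 5: both facilities together, at lo or hi with probability 1/2 each
    return (lo, lo) if random.random() < 0.5 else (hi, hi)
\end{verbatim}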
\begin{lemma}
\label{lem:randp-sp}
\ensuremath{\texttt{Random}^+}\xspace is strategy-proof in expectation.
\end{lemma}
\begin{proof}
Steps 1-4 of \ensuremath{\texttt{Random}^+}\xspace are similar to the steps of \ensuremath{\texttt{Fixed}^+}\xspace, so any deviation of agent $i$ that changes the outcome among Steps 1-4 does not result in a better outcome for the agent, as shown in Lemma \ref{lem:fixedp-sp}. Therefore, we only need to examine deviations to, or from, Step 5.
\noindent
\textbf{Deviation to Step 5:} Similarly to Lemma~\ref{lem:fixedp-sp}, let $\Delta = t_{i1}\cdot w_{i1} + t_{i2}\cdot w_{i2}$ denote the difference between the expected utility of agent $i$ when he reports truthfully and when he misreports so that \ensuremath{\texttt{Random}^+}\xspace implements Step 5; here $w_{ij} := \frac{1}{2}|z_r \ell - x_i| + \frac{1}{2}|(1 - z_r) \ell - x_i| - |\ensuremath{\mathbf{y}}\xspace_j - x_i|$.
Tables \ref{tab:sign-of-preference-smaller-rand-a} and \ref{tab:sign-of-preference-larger-rand-a} present the signs of the preferences of agent $i$ when $x_i \leq \frac{\ell}{2}$ and $x_i > \frac{\ell}{2}$, respectively;
recall that ``+'' corresponds to preferences in $\{0,1\}$ and ``-'' to preferences in $\{-1,0\}$. Each cell of Tables \ref{tab:sign-of-w-smaller-rand-a} and \ref{tab:sign-of-w-larger-rand-a} (for $x_i \leq \frac{\ell}{2}$ and $x_i > \frac{\ell}{2}$, respectively) presents the signs of $(w_{i1}, w_{i2})$ when agent $i$ misreports so that \ensuremath{\texttt{Random}^+}\xspace follows Step 5. It can easily be verified that in all possible combinations
$\Delta \geq 0$. Thus, any misrepresentation of the preferences of agent $i$ that changes the outcome of \ensuremath{\texttt{Random}^+}\xspace
from Steps 1-4 to Step 5 does not increase the utility of the agent.
\begin{table}[h!]
\begin{minipage}{0.45 \textwidth}
\centering
\begin{tabular}{|c||c|c|}
\hline
& $t_{i1}$ & $t_{i2}$\\ \hline \hline
Step 1 & + & + \\ \hline
Step 2 & + & - \\ \hline
Step 3 & - & - \\ \hline
Step 4 & - & + \\ \hline
\end{tabular}
\caption{Preferences of agent $i$ for Steps 1-4 of \ensuremath{\texttt{Random}^+}\xspace, when $x_i \leq \frac{\ell}{2}$. \label{tab:sign-of-preference-smaller-rand-a}}
\end{minipage}
\hfill
\begin{minipage}{0.5\textwidth}
\centering
\begin{tabular}{|c||c|}
\hline
\backslashbox{True}{Lie} & Step 5 \\ \hline \hline
Step 1 & (+,+) \\ \hline
Step 2 & (+,-) \\ \hline
Step 3 & (-,-) \\ \hline
Step 4 & (-,+) \\ \hline
\end{tabular}
\caption{Signs for $(w_{i1}, w_{i2})$ when $x_i \leq \frac{\ell}{2}$. \label{tab:sign-of-w-smaller-rand-a}}
\end{minipage}
\end{table}
\begin{table}[h!]
\begin{minipage}{0.45 \textwidth}
\centering
\begin{tabular}{|c||c|c|}
\hline
& $t_{i1}$ & $t_{i2}$\\ \hline \hline
Step 1 & - & - \\ \hline
Step 2 & - & + \\ \hline
Step 3 & + & + \\ \hline
Step 4 & + & - \\ \hline
\end{tabular}
\caption{Preferences of agent $i$ for Step 1-4 of \ensuremath{\texttt{Random}^+}\xspace, when $x_i > \frac{\ell}{2}$. \label{tab:sign-of-preference-larger-rand-a}}
\end{minipage}
\hfill
\begin{minipage}{0.45\textwidth}
\centering
\begin{tabular}{|c||c|}
\hline
\backslashbox{True}{Lie} & Step 5 \\ \hline \hline
Step 1 & (-,-) \\ \hline
Step 2 & (-,+) \\ \hline
Step 3 & (+,+) \\ \hline
Step 4 & (+,-) \\ \hline
\end{tabular}
\caption{Signs for $(w_{i1}, w_{i2})$ when $x_i > \frac{\ell}{2}$. \label{tab:sign-of-w-larger-rand-a}}
\end{minipage}
\end{table}
\noindent
\textbf{Deviation from Step 5:} For the outcome of the mechanism to change, a preference of $i$ must change sign from ``+'' to ``-'' or vice versa. As above,
$\Delta = t_{i1}\cdot w_{i1} + t_{i2}\cdot w_{i2}$ denotes the difference between the expected utility of agent $i$ when he reports truthfully (and Step 5 is employed) and when he misreports. Now, we have
$w_{ij} := |\ensuremath{\mathbf{y}}\xspace_j - x_i| - \frac{1}{2}|z_r \ell - x_i| - \frac{1}{2}|(1 - z_r) \ell - x_i|$. Each cell of column $c$ of Tables \ref{tab:sign-of-t-smaller-rand-b} and \ref{tab:sign-of-t-larger-rand-b} presents the only possible signs of $(t_{i1}, t_{i2})$ that agent $i$ can truthfully hold in Step 5 such that a misreport can trigger the step of column $c$. As an example, consider the case where $t_{i1} \leq 0$ and $x_i \leq \frac{\ell}{2}$. If $i$ changes his declaration to $t_{i1}' \geq 0$, Step 1 cannot be followed.
Again, it can be easily verified that $\Delta \geq 0$ in every case, hence there is no profitable deviation for
agent $i$.
\begin{table}[h!]
\begin{minipage}{0.45 \textwidth}
\centering
\begin{tabular}{|c||c|c|c|c|}
\hline
\backslashbox{True}{Lie} & Step 1 & Step 2 & Step 3 & Step 4 \\ \hline \hline
Step 5 & (-,-) & (-,+) & (+,+) & (+,-) \\ \hline
\end{tabular}
\caption{Signs of $(t_{i1}, t_{i2})$ when $x_i \leq \frac{\ell}{2}$. \label{tab:sign-of-t-smaller-rand-b}}
\end{minipage}
\hfill
\begin{minipage}{0.45\textwidth}
\centering
\begin{tabular}{|c||c|c|c|c|}
\hline
\backslashbox{True}{Lie} & Step 1 & Step 2 & Step 3 & Step 4 \\ \hline \hline
Step 5 & (-,-) & (-,+) & (+,+) & (+,-) \\ \hline
\end{tabular}
\caption{Signs of $(w_{i1}, w_{i2})$ when $x_i \leq \frac{\ell}{2}$. \label{tab:sign-of-w-smaller-rand-b}}
\end{minipage}
\end{table}
\begin{table}[h!]
\begin{minipage}{0.45 \textwidth}
\centering
\begin{tabular}{|c||c|c|c|c|}
\hline
\backslashbox{True}{Lie} & Step 1 & Step 2 & Step 3 & Step 4 \\ \hline \hline
Step 5 & (+,+) & (+,-) & (-,-) & (-,+) \\ \hline
\end{tabular}
\caption{Signs of $(t_{i1}, t_{i2})$ when $x_i > \frac{\ell}{2}$. \label{tab:sign-of-t-larger-rand-b}}
\end{minipage}
\hfill
\begin{minipage}{0.45 \textwidth}
\centering
\begin{tabular}{|c||c|c|c|c|}
\hline
\backslashbox{True}{Lie} & Step 1 & Step 2 & Step 3 & Step 4 \\ \hline \hline
Step 5 & (+,+) & (+,-) & (-,-) & (-,+) \\ \hline
\end{tabular}
\caption{Signs of $(w_{i1}, w_{i2})$ when $x_i > \frac{\ell}{2}$. \label{tab:sign-of-w-larger-rand-b}}
\end{minipage}
\end{table}
\end{proof}
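The sign claims underlying the lemma can be spot-checked numerically; the snippet below (ours, with $\ell$ normalized to 1) verifies the $w_{ij}$ signs of Table~\ref{tab:sign-of-w-smaller-rand-a} on a grid of locations $x_i \leq \frac{1}{2}$.
\begin{verbatim}
# Numerical spot-check (ours) of Table 2: w_ij >= 0 where the table lists "+"
# and w_ij <= 0 where it lists "-", for all x_i in [0, 1/2] (ell = 1).
ZR = (13 - 161 ** 0.5) / 8
STEP_Y = {1: (ZR, ZR), 2: (ZR, 1 - ZR), 3: (1 - ZR, 1 - ZR), 4: (1 - ZR, ZR)}
STEP_SIGNS = {1: (1, 1), 2: (1, -1), 3: (-1, -1), 4: (-1, 1)}

def w(x, y):
    return 0.5 * abs(ZR - x) + 0.5 * abs(1 - ZR - x) - abs(y - x)

for step, (s1, s2) in STEP_SIGNS.items():
    y1, y2 = STEP_Y[step]
    for k in range(51):                   # grid over x_i <= 1/2
        x = k / 100
        assert s1 * w(x, y1) >= -1e-12 and s2 * w(x, y2) >= -1e-12
\end{verbatim}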
\begin{theorem}
\label{thm:randp-apx}
\ensuremath{\texttt{Random}^+}\xspace is $(\frac{1}{2}+z_r) \simeq 0.538$-approximate.
\end{theorem}
\begin{proof}
To prove our claim, we focus on the agent that gets the minimum utility
under \ensuremath{\texttt{Random}^+}\xspace. We prove that for every possible
combination of his preferences and his location, this agent gets at least a
$(\frac{1}{2}+z_r)$ fraction of the utility he would get under an optimal solution. So, let $i$ be an agent
with minimum utility under \ensuremath{\texttt{Random}^+}\xspace. Without loss of
generality, we assume that he is located below $\frac{\ell}{2}$.
\begin{itemize}
\item $p_i = (1,1)$. If there exists an agent below $\frac{\ell}{2}$ with
preferences $(1,1)$, then \ensuremath{\texttt{Random}^+}\xspace will place the facilities through
Step~\ref{step:one}, or through Step~\ref{step:five}.
We will consider each case separately.
If the facilities are placed due to Step~\ref{step:one}, then the utility of the
agent is at least $(1+2z_r)\ell$, while the maximum utility the agent can get is $2\ell$.
Hence, the approximation guarantee of the mechanism, in this case, is $\frac{1}{2} +z_r$.
For the case where the facilities are placed due to Step~\ref{step:five}, we have to consider
the following subcases. Firstly, if $x_i\geq z_r \ell$, then the expected utility of the agent
is $(1+2z_r)\ell$ while the optimum is bounded by $2\ell$, hence the mechanism achieves a
$(\frac{1}{2}+z_r)$ approximation.
If $x_i<z_r \ell$, we have to further consider two cases depending on the reason Step~\ref{step:five}
was triggered. The first one is that there exists an agent $i'$ with $x_{i'}<\frac{\ell}{2}$
that has preference $-1$ for one of the two facilities. Then, the optimal value for the objective
is upper bounded by $\frac{3\ell}{2}$. Hence, since we assumed that agent $i$ has the minimum utility
under \ensuremath{\texttt{Random}^+}\xspace, we get that it achieves $\frac{2}{3}$ approximation in this case.
The second subcase is when there exists an agent $i'$ with $x_{i'}>\frac{\ell}{2}$ that has preference $1$
for one of the two facilities. Then, the optimum is again upper bounded by $\frac{3\ell}{2}$ and the mechanism achieves the claimed approximation ratio.
\item $p_i = (1,0)$. The analysis for the case $p_i=(0,1)$ is symmetric and hence
omitted. When there exists an agent below $\frac{\ell}{2}$ with preferences $(1,0)$, then
\ensuremath{\texttt{Random}^+}\xspace will place the facilities either through Step~\ref{step:one}, or through
Step~\ref{step:two}, or through
Step~\ref{step:five}. As in the previous case, it is not hard to see that under any of these steps the
utility of agent $i$ is at least $(\frac{3}{2}+z_r)\cdot \ell$, while the maximum utility he can get is bounded
by $2\ell$. So, the approximation guarantee follows.
\item $p_i = (1,-1)$. The analysis for the case $p_i=(-1,1)$ is symmetric and hence
omitted. When there exists an agent below $\frac{\ell}{2}$ with preferences $(1,-1)$,
then \ensuremath{\texttt{Random}^+}\xspace will locate the facilities either through
Step~\ref{step:two}, or through Step~\ref{step:five}. When the mechanism locates the facilities
through Step~\ref{step:two}, the worst-case instance occurs when there is only one agent $i$ at $\frac{\ell}{2}$. Then,
the agent gets utility $\ell$, while an optimal solution locates the first facility at $\frac{\ell}{2}$
and the second facility at 0, yielding utility $\frac{3\ell}{2}$. Thus, the mechanism is
$\frac{2}{3}$-approximate. So, for the chosen value of $z_r$, the mechanism is $(\frac{1}{2}+z_r)$-approximate.
If, on the other hand, the mechanism locates the facilities through Step~\ref{step:five}, then the expected utility of agent $i$ is $\ell$ irrespective of his location $x_i$. To construct a worst-case
instance, it suffices to consider instances with only two agents, since additional agents can only
further restrict the set of optimal solutions, which implies that the optimal value can only decrease. The ``loosest''
constraint on the optimum that also triggers Step~\ref{step:five} is an agent at
$\frac{\ell}{2} + \epsilon$, for an arbitrarily small positive $\epsilon$, with preferences $(1,0)$.
Then, the worst-case instance for the mechanism, in terms of approximation guarantee, is when
$x_i=0$, and the optimal utility agent $i$ can get is bounded by $\frac{7\ell}{4}$; the first facility is
located at $\frac{\ell}{4}$ and the second one at $\ell$. Hence, the mechanism is
$\frac{4}{7}$-approximate. So, for the chosen value of $z_r$, the mechanism is
$(\frac{1}{2}+z_r)$-approximate.
\item $p_i = (-1,0)$. The analysis for the case $p_i=(0,-1)$ is symmetric and hence
omitted. When there exists an agent below $\frac{\ell}{2}$ with preferences $(-1,0)$, then
\ensuremath{\texttt{Random}^+}\xspace will place the facilities either through Step~\ref{step:three}, or
through Step~\ref{step:four}, or through Step~\ref{step:five}. When Step~\ref{step:three}, or
Step~\ref{step:four}, is used,
agent $i$ gets utility $2\ell-z_r \ell -x_i$, while the optimal utility is bounded by $2\ell-x_i$ by locating both
facilities at $\ell$. So, since $x_i \leq \frac{\ell}{2}$, the approximation ratio of \ensuremath{\texttt{Random}^+}\xspace
is at least $1-\frac{2z_r}{3} > \frac{1}{2}+ z_r$, for the chosen value of $z_r$.
If Step~\ref{step:five} is used, then agent $i$ gets utility at least $(\frac{3}{2}-z_r)\cdot \ell$. Furthermore,
we can trivially bound the optimal utility by $2\ell$, so, for the chosen value of $z_r$ we get that
\ensuremath{\texttt{Random}^+}\xspace is $(\frac{1}{2}+z_r)$-approximate.
\item $p_i = (-1,-1)$. When there exists an agent below $\frac{\ell}{2}$ with preferences $(-1,-1)$, then
\ensuremath{\texttt{Random}^+}\xspace will locate the facilities either through Step~\ref{step:three}, or
through Step~\ref{step:five}. When Step~\ref{step:three} is used by the mechanism, agent $i$
gets utility $2\left((1-z_r)\ell-x_i\right)$, while the optimal value is trivially bounded by $2(\ell-x_i)$.
Hence, since $x_i \leq \frac{\ell}{2}$, the approximation guarantee of the mechanism is at least
$1-2z_r > \frac{1}{2}+z_r$.
If Step~\ref{step:five} is used, then using similar arguments as in the case $p_i=(1,-1)$, we
can construct the worst-case instance for the mechanism by locating an agent with preferences $(1,0)$
at $\frac{\ell}{2}-\epsilon$ and by setting $x_i=z_r \ell$. Then, under Step~\ref{step:five} agent $i$ gets
utility $(1-2z_r)\cdot \ell$, and his utility under the optimal location for the facilities is bounded by
$(\frac{7}{4}-z_r)\cdot\ell$; the first facility is located at $(\frac{3}{4}+z_r)\cdot \ell$ and the second
one at $\ell$. Hence, \ensuremath{\texttt{Random}^+}\xspace is $(1-2z_r)/(\frac{7}{4}-z_r)$-approximate.
It is not hard to verify that for the chosen value of $z_r$ we get that $(1-2z_r)/(\frac{7}{4}-z_r)=
\frac{1}{2}+z_r$.
\end{itemize}
\end{proof}
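The choice of $z_r$ is not arbitrary: it is exactly the value equating the binding bound $(1-2z_r)/(\frac{7}{4}-z_r)$ from the last case with the target ratio $\frac{1}{2}+z_r$. A quick numerical check (ours) of this identity:
\begin{verbatim}
# Verify (ours) that z_r = (13 - sqrt(161))/8 solves
# (1 - 2z) / (7/4 - z) = 1/2 + z, the balancing condition in the proof.
z = (13 - 161 ** 0.5) / 8
lhs = (1 - 2 * z) / (7 / 4 - z)
rhs = 0.5 + z
assert abs(lhs - rhs) < 1e-12
print(f"{rhs:.4f}")  # ~0.5389, the approximation ratio of Random+
\end{verbatim}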
Using exactly the same arguments as in Theorem \ref{thm:fixedp-cc}, we get that \ensuremath{\texttt{Random}^+}\xspace can be implemented in a communication-efficient way, where each agent sends only five bits to the planner.
\begin{theorem}
\label{thm:rand-cc}
The communication complexity of \ensuremath{\texttt{Random}^+}\xspace is 5 bits per agent.
\end{theorem}
\section{Two-preference instances}
In this section, we study $k$-facility games where all the agents have preferences in $\{0,1\}^k$,
$\{1,-1\}^k$, or in $\{0,-1\}^k$, which we call two-preference instances.
The non-existence of optimal deterministic strategy-proof mechanisms
extends even to two-preference instances with three agents.
\begin{theorem}
\label{thm:utility}
For any $k \geq 2$, there is no optimal deterministic strategy-proof mechanism
for $k$-facility games even on two-preference instances with three
agents and known locations.
\end{theorem}
The proof of the theorem follows from the instances of Figures \ref{fig2}, \ref{fig3}, and \ref{fig4}. As in Theorem \ref{thm:2fub}, white circles correspond to agents and black circles to the optimal locations.
\begin{figure}[h!]
\begin{center}
\subfigure[Instance I]{
\begin{tikzpicture}[thick, scale=0.6]
\tikzstyle{every node}=[fill=white,minimum size=4pt,inner sep=0pt]
\draw (-4,-0.5) node(v)[label=below:$0 1$]{};
\draw (-4,0.5) node(v1)[label=above:$0$]{};
\draw (-1.5,0.5) node(v2)[label=above:$\frac{\ell}{3}$]{};
\draw (1,-0.5) node(v2)[label=below:$1 1$]{};
\draw (6,0) node(v3)[draw, fill=white, circle]{};
\draw (-4,0) node(v7)[draw, fill=white, circle]{};
\draw (1,0) node(v7)[draw, fill=white, circle]{};
\draw (-1.5,0) node(v8)[draw, fill=black, circle]{};
\draw (3.8,0) node(v9)[draw, fill=black, circle]{};
\draw (1,0.5) node(v2)[label=above:$\frac{\ell}{2}$]{};
\draw (-1.5,-1.7) node(u)[label=below:$\ensuremath{\mathbf{y}}\xspace_2$]{};
\draw (3.8,-1.7) node(u1)[label=below:$\ensuremath{\mathbf{y}}\xspace_1$]{};
\draw (6,-0.5) node(l1)[label=below :$1 0$]{};
\draw (6,0.5) node(u1)[label=above :$\ell$]{};
\draw (3.8,0.5) node(v6)[label=above:$\frac{2 \ell}{3}$]{};
\draw (-3.87,0) -- (0.87,0);
\draw (1.15,0) -- (5.85,0);
\node[] at (7,-2) {};
\end{tikzpicture}
}
\subfigure[Instance $I'$]{
\begin{tikzpicture}[thick, scale=0.6]
\tikzstyle{every node}=[fill=white,minimum size=4pt,inner sep=0pt]
\draw (-4,-0.5) node(v)[label=below:$1 1$]{};
\draw (-4,-1.7) node(v)[label=below:$\ensuremath{\mathbf{y}}\xspace_2$]{};
\draw (-4,0.5) node(v1)[label=above:$0$]{};
\draw (1,-0.5) node(v2)[label=below:$1 1$]{};
\draw (1,-1.7) node(v)[label=below:$\ensuremath{\mathbf{y}}\xspace_1$]{};
\draw (6,0) node(v3)[draw, fill=white, circle]{};
\draw (-4,0) node(v7)[draw, fill=black, circle]{};
\draw (1,0) node(v7)[draw, fill=black, circle]{};
\draw (1,0.5) node(v2)[label=above:$\frac{\ell}{2}$]{};
\draw (6,-0.5) node(l1)[label=below :$1 0$]{};
\draw (6,0.5) node(u1)[label=above :$\ell$]{};
\draw (-4,0) -- (5.85,0);
\node[] at (7,-2) {};
\end{tikzpicture}
}
\caption{Example for preferences in $\{0,1\}^2$. The agent located on $0$ in the
instance $I$ can declare preferences $(1,1)$ and increase his utility by moving
the facility $f_2$ closer to 0.}
\label{fig2}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
\subfigure[Instance I]{
\begin{tikzpicture}[thick, scale=0.6]
\tikzstyle{every node}=[fill=white,minimum size=4pt,inner sep=0pt]
\draw (-4.2,-0.2) node(v)[label=below:-1 1]{};
\draw (-4,0.2) node(v1)[label=above:$0$]{};
\draw (5,-0.2) node(v2)[label=below:1 1]{};
\draw (5,0) node(v3)[draw, fill=white, circle]{};
\draw (-4,0) node(v7)[draw, fill=white, circle]{};
\draw (1,0) node(v7)[draw, fill=black, circle]{};
\draw (6,0) node(v3)[draw, fill=black, circle]{};
\draw (1,0.5) node(v2)[label=above:$\frac{\ell}{2}$]{};
\draw (5,0.5) node(v4)[label=above:$\ell-\epsilon$]{};
\draw (1,-1.7) node(u)[label=below:$\ensuremath{\mathbf{y}}\xspace_2$]{};
\draw (6,-1.7) node(l1)[label=below :$\ensuremath{\mathbf{y}}\xspace_1$]{};
\draw (6,0.5) node(u1)[label=above :$\ell$]{};
\draw (-3.85,0) -- (4.85,0);
\draw (5.15,0) -- (5.85,0);
\node[] at (7,-2) {};
\end{tikzpicture}
}
\subfigure[Instance $I'$]{
\begin{tikzpicture}[thick, scale=0.6]
\tikzstyle{every node}=[fill=white,minimum size=4pt,inner sep=0pt]
\draw (-4.2,-0.2) node(v)[label=below:-1 1]{};
\draw (-4,0.2) node(v1)[label=above:$0$]{};
\draw (4.8,-0.2) node(v2)[label=below:-1 1]{};
\draw (5,0) node(v3)[draw, fill=black, circle]{};
\draw (-4,0) node(v7)[draw, fill=white, circle]{};
\draw (6,0) node(v3)[draw, fill=black, circle]{};
\draw (5,0.5) node(v4)[label=above:$\ell-\epsilon$]{};
\draw (4.8,-1.7) node(u)[label=below:$\ensuremath{\mathbf{y}}\xspace_2$]{};
\draw (6.2,-1.7) node(l1)[label=below :$\ensuremath{\mathbf{y}}\xspace_1$]{};
\draw (6,0.5) node(u1)[label=above :$\ell$]{};
\draw (-3.85,0) -- (5.85,0);
\node[] at (7,-2) {};
\end{tikzpicture}
}
\caption{Example for preferences in $\{-1,1\}^2$. The agent located on
$\ell-\epsilon$ in the instance $I$ can declare preferences $(-1,1)$ and
increase his utility by moving the facility $f_2$ closer to $\ell-\epsilon$.
}
\label{fig3}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
\subfigure[Instance I]{
\begin{tikzpicture}[thick, scale=0.6]
\tikzstyle{every node}=[fill=white,minimum size=4pt,inner sep=0pt]
\draw (-4.2,-0.2) node(v)[label=below:0 -1]{};
\draw (1,-0.2) node(v)[label=below:-1 0]{};
\draw (-4,0.2) node(v1)[label=above:$0$]{};
\draw (5.8,-0.2) node(v2)[label=below:-1 -1]{};
\draw (-4,0) node(v7)[draw, fill=black, circle]{};
\draw (1,0) node(v7)[draw, fill=black, circle]{};
\draw (6,0) node(v3)[draw, fill=white, circle]{};
\draw (1,0.5) node(v2)[label=above:$\frac{\ell}{2}$]{};
\draw (1,-1.7) node(u)[label=below:$\ensuremath{\mathbf{y}}\xspace_2$]{};
\draw (-4,-1.7) node(l1)[label=below :$\ensuremath{\mathbf{y}}\xspace_1$]{};
\draw (6,0.5) node(u1)[label=above :$\ell$]{};
\draw (-3.85,0) -- (0.85,0);
\draw (1.15,0) -- (5.85,0);
\node[] at (7,-2) {};
\end{tikzpicture}
}
\subfigure[Instance $I'$]{
\begin{tikzpicture}[thick, scale=0.6]
\tikzstyle{every node}=[fill=white,minimum size=4pt,inner sep=0pt]
\draw (-4.2,-0.2) node(v)[label=below:-1 -1]{};
\draw (1,-0.2) node(v)[label=below:-1 0]{};
\draw (-4,0.2) node(v1)[label=above:$0$]{};
\draw (5.8,-0.2) node(v2)[label=below:-1 -1]{};
\draw (-4,0) node(v7)[draw, fill=black, circle]{};
\draw (1,0) node(v7)[draw, fill=white, circle]{};
\draw (6,0) node(v3)[draw, fill=black, circle]{};
\draw (1,0.5) node(v2)[label=above:$\frac{\ell}{2}$]{};
\draw (6,-1.7) node(u)[label=below:$\ensuremath{\mathbf{y}}\xspace_2$]{};
\draw (-4,-1.7) node(l1)[label=below :$\ensuremath{\mathbf{y}}\xspace_1$]{};
\draw (6,0.5) node(u1)[label=above :$\ell$]{};
\draw (-3.85,0) -- (0.85,0);
\draw (1.15,0) -- (5.85,0);
\node[] at (7,-2) {};
\end{tikzpicture}
}
\caption{Example for preferences in $\{-1,0\}^2$. The agent located on $0$ in
the instance $I$ can declare preferences $(-1,-1)$ and increase his utility by
moving the facility $f_2$ away from 0. Observe that for the Instance $I'$ there
are two optimal solutions ($\ensuremath{\mathbf{y}}\xspace_1=0, \ensuremath{\mathbf{y}}\xspace_2=\ell$ and $\ensuremath{\mathbf{y}}\xspace_1=\ell, \ensuremath{\mathbf{y}}\xspace_2=0$).
However, this does not affect the correctness of our example assuming that the
mechanism chooses a solution \emph{deterministically}.}
\label{fig4}
\end{center}
\end{figure}
We now show how to modify \texttt{Fixed}\xspace, by changing the value of $z_f$, in order to achieve
better approximation guarantees. We denote the resulting mechanisms by \ensuremath{\texttt{Fixed}^{\{0,1\}}}\xspace, for preferences in $\{0,1\}^k$,
and \ensuremath{\texttt{Fixed}^{\{0,-1\}}}\xspace, for preferences in $\{-1,0\}^k$.
Furthermore, for $k=2$ we derive a new deterministic mechanism
termed $\textsc{OPT}\xspace^2$, for the case where all agents have preferences in $\{0,1\}^2$ and their locations
are known.
\begin{definition}
\ensuremath{\texttt{Fixed}^{\{0,1\}}}\xspace sets $\ensuremath{\mathbf{y}}\xspace_1=\ldots=\ensuremath{\mathbf{y}}\xspace_k=\frac{\ell}{2}$.
\end{definition}
\begin{theorem}
\label{thm:fzo}
\ensuremath{\texttt{Fixed}^{\{0,1\}}}\xspace is $\frac{1}{2}$-approximate.
\end{theorem}
\begin{proof}
Observe that for every agent $i$ and any facility $j$ it holds that
$u_{ij}(x_i,t_{ij},\ensuremath{\mathbf{y}}\xspace_j) \geq \ell - |x_i-\frac{\ell}{2}| \geq \frac{\ell}{2}$. Hence,
$u_i(x_i,t_i,\ensuremath{\mathbf{y}}\xspace) \geq \frac{k\cdot\ell}{2}$. Observe, however, that
$\max_\ensuremath{\mathbf{y}}\xspace u_i(x_i,t_i,\ensuremath{\mathbf{y}}\xspace) \leq k\cdot \ell$. Hence, agent $i$ under \ensuremath{\mathbf{y}}\xspace gets
at least half of his maximum utility.
\end{proof}
\begin{definition}
\ensuremath{\texttt{Fixed}^{\{0,-1\}}}\xspace sets $\ensuremath{\mathbf{y}}\xspace_1=\ldots=\ensuremath{\mathbf{y}}\xspace_{\lceil \frac{k}{2}\rceil}=0$ and
$\ensuremath{\mathbf{y}}\xspace_{\lceil \frac{k}{2}\rceil+1}=\ldots = \ensuremath{\mathbf{y}}\xspace_k = \ell$.
\end{definition}
\begin{theorem}
\label{thm:fzm}
\ensuremath{\texttt{Fixed}^{\{0,-1\}}}\xspace is $\frac{\lfloor \frac{k}{2}\rfloor}{k}$-approximate.
\end{theorem}
\begin{proof}
Observe that since $t_i \in \{0,-1\}^k$, it holds that
$u_i(x_i,t_i,\ensuremath{\mathbf{y}}\xspace) \geq \lceil \frac{k}{2}\rceil \cdot x_i + \lfloor \frac{k}{2}\rfloor
\cdot (\ell -x_i) \geq \lfloor \frac{k}{2}\rfloor \cdot \ell$. Observe though that
$\max_\ensuremath{\mathbf{y}}\xspace u_i(x_i,t_i,\ensuremath{\mathbf{y}}\xspace) \leq k\cdot \ell$. Hence, \ensuremath{\texttt{Fixed}^{\{0,-1\}}}\xspace is at least
$\frac{\lfloor \frac{k}{2}\rfloor}{k}$-approximate.
\end{proof}
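For concreteness, a small Python sketch (function names ours) of the two fixed placements, together with the worst case that makes the $\frac{\lfloor k/2\rfloor}{k}$ bound tight:
\begin{verbatim}
import math

# Sketches (ours) of the two fixed mechanisms for two-preference instances.
def fixed_01(k: int, ell: float = 1.0):
    """Fixed^{0,1}: every facility at ell/2."""
    return [ell / 2] * k

def fixed_0m1(k: int, ell: float = 1.0):
    """Fixed^{0,-1}: ceil(k/2) facilities at 0 and floor(k/2) at ell."""
    return [0.0] * math.ceil(k / 2) + [ell] * (k // 2)

# Worst case for Fixed^{0,-1}: an agent at 0 with t_i = (-1, ..., -1) collects
# distance only from the floor(k/2) facilities at ell, i.e. floor(k/2) * ell,
# against a maximum utility of k * ell, matching the floor(k/2)/k guarantee.
\end{verbatim}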
\begin{definition}
$\textsc{OPT}\xspace^2$ places each of the two facilities independently at its optimal location.
\end{definition}
It is not hard to see that $\textsc{OPT}\xspace^2$ is strategy-proof. This is because, when the
agents' locations are known, the mechanism that places one facility at the
leftmost optimal location is strategy-proof. Since the mechanism places each
facility independently, no agent can increase his utility by lying.
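Under our reading of the analysis below, $\textsc{OPT}\xspace^2$ can be sketched as follows in Python (names ours): each facility is placed at the midpoint of the leftmost and rightmost agents with preference 1 for it.
\begin{verbatim}
# Sketch (ours) of OPT^2 for preferences in {0,1}^2 and known locations.
def opt2(xs, prefs):
    """xs: agent locations; prefs: list of pairs in {0,1}^2."""
    ys = []
    for j in (0, 1):
        # agents that want facility j close (preference 1)
        interested = [x for x, t in zip(xs, prefs) if t[j] == 1]
        ys.append((min(interested) + max(interested)) / 2 if interested else 0.0)
    return tuple(ys)

# Example mirroring the proof: a_1 at 0 with (1,1), a_i at 0.6 with (1,1),
# a_r1 at ell = 1 with (0,1) gives y_1 = x_i/2 and y_2 = 1/2.
print(opt2([0.0, 0.6, 1.0], [(1, 1), (1, 1), (0, 1)]))  # -> (0.3, 0.5)
\end{verbatim}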
\begin{theorem}
\label{thm:opt2}
$\textsc{OPT}\xspace^2$ is $\frac{3}{4}$-approximate.
\end{theorem}
\begin{proof}
Before we analyze the approximation guarantee of the mechanism, let us first
study the locations in which the mechanism places the facilities. Since the
preferences of each agent are in $\{0,1\}^2$, it is not hard to see that
the optimal location for each facility is the midpoint between the locations
of the leftmost and the rightmost agents that want to be close to that facility.
Without loss of generality, we can assume that the agent with the minimum utility
under $\textsc{OPT}\xspace^2$, denoted by $a_1$, has preferences $(1,1)$:
if instead $t_1 =(1,0)$ (or, symmetrically, $(0,1)$), then the agent would have utility at least $\frac{3}{2}\ell$,
since any other agent who wants to be close to the first facility is located
at distance at most $\ell$ from $a_1$'s location.
The maximum utility the agent can get is $2\ell$, so the mechanism is then
$\frac{3}{4}$-approximate.
Assume that $a_1$ is located at $x \leq \frac{\ell}{2}$.
Then, without loss of generality, we can assume that he is located at 0, since for
any other location the agent would be closer to the facilities and thus his
utility would be higher.
Then, observe that agent $a_1$, together with the rightmost agents,
defines the locations of the facilities. Observe that if the
rightmost agent has preferences $(1,1)$, then $\textsc{OPT}\xspace^2$ is optimal. So, we can
assume that the rightmost agent, denoted by $a_{r1}$, has preferences $(0,1)$.
In the worst case, $a_{r1}$ is located at $\ell$, since for
every other location the utility of agent $a_1$ would be higher. We have
to consider the two possible preferences for the second rightmost agent with
preference 1 for the first facility and prove that $\textsc{OPT}\xspace^2$ achieves the desired
approximation.
We will use $a_i$ to denote this agent and $x_i$ to denote his location.
Firstly, we consider the case where agent $a_i$ has preferences $(1,1)$ and
$x_i \geq \frac{\ell}{2}$.
The utilities of the agents for the facilities under the locations $(\ensuremath{\mathbf{y}}\xspace_1,\ensuremath{\mathbf{y}}\xspace_2)$, where
$\ensuremath{\mathbf{y}}\xspace_2 \leq x_i$, are $u_1 = 2 \ell -\ensuremath{\mathbf{y}}\xspace_1-\ensuremath{\mathbf{y}}\xspace_2$,
$u_i = 2 \ell -2x_i+\ensuremath{\mathbf{y}}\xspace_1+\ensuremath{\mathbf{y}}\xspace_2$ and $u_{r_1}=\ell +\ensuremath{\mathbf{y}}\xspace_2$.
$\textsc{OPT}\xspace^2$ will place the facilities at $\ensuremath{\mathbf{y}}\xspace_1=\frac{x_i}{2}$ and
$\ensuremath{\mathbf{y}}\xspace_2=\frac{\ell}{2}$, and the utility of agent $a_1$ will be
$u_1=\frac{3 \ell -x_i}{2}$. Observe that the locations of the facilities that make
the utilities of these three agents equal provide an upper bound on the utility
that agent $a_1$ gets under the optimal solution, since any other solution would
yield lower utility for at least one of these agents. If we find the locations
of the facilities that equalize the utilities for the agents we get
$\ensuremath{\mathbf{y}}\xspace_1=2x_i-\ell$ and $\ensuremath{\mathbf{y}}\xspace_2=\ell-x_i$ and thus the optimal utility for
agent $a_1$ is bounded by $2\ell-x_i$.
Hence, $\textsc{OPT}\xspace^2$ is $\alpha=\frac{3\ell-x_i}{4\ell-2x_i} \geq \frac{3}{4}$-approximate.
In the case where $x_i<\frac{\ell}{2}$, it is not difficult to see that
agent $a_1$ gets utility at least $\frac{5}{4}\ell$ under $\textsc{OPT}\xspace^2$. Observe that
under the optimal solution the utility of the agents is bounded by
$\frac{3}{2}\ell$, since there are no locations for the facilities where both
$a_1$ and $a_{r1}$ get more than $\frac{3}{2}\ell$. Thus, in this case the
mechanism is $\frac{5}{6}$-approximate.
If the preferences of $a_i$ are $(1,0)$, then a similar analysis applies.
\end{proof}
\section{\textsc{Utilitarian}\xspace and \textsc{Happiness}\xspace}
In this section we show that \texttt{Fixed}\xspace, \ensuremath{\texttt{Fixed}^{\{0,1\}}}\xspace, \ensuremath{\texttt{Fixed}^{\{0,-1\}}}\xspace, and \texttt{Random}\xspace achieve the same approximation guarantees
for \textsc{Utilitarian}\xspace and \textsc{Happiness}\xspace objectives as \textsc{Egalitarian}\xspace. All mechanisms remain strategy-proof since they do not
require any information from the agents. Recall that \textsc{Utilitarian}\xspace is the sum of the utilities of the agents, formally
$\sum_i u_i(x_i, t_i, \ensuremath{\mathbf{y}}\xspace)$, and \textsc{Happiness}\xspace is $\min_i \frac{u_i(x_i, t_i, \ensuremath{\mathbf{y}}\xspace)}{u^*_i(x_i, t_i)}$, where
$u^*_i(x_i, t_i) = \max_\ensuremath{\mathbf{y}}\xspace u_i(x_i, t_i,\ensuremath{\mathbf{y}}\xspace)$.
\begin{theorem}
\label{thm:wel-hap}
For \textsc{Utilitarian}\xspace and \textsc{Happiness}\xspace objectives the following hold.
\begin{itemize}
\item \texttt{Fixed}\xspace is $z_f$-approximate.
\item \ensuremath{\texttt{Fixed}^{\{0,1\}}}\xspace is $\frac{1}{2}$-approximate.
\item \ensuremath{\texttt{Fixed}^{\{0,-1\}}}\xspace is $\frac{\lfloor \frac{k}{2}\rfloor}{k}$-approximate.
\item \texttt{Random}\xspace is $\frac{1}{2}$-approximate.
\end{itemize}
\end{theorem}
\begin{proof}
In the proofs of Theorems~\ref{thm:mech2},~\ref{thm:fzo},~\ref{thm:fzm}, and~\ref{thm:rand},
it is shown that for every agent $i$ it holds that $\frac{u_i(x_i, t_i, \ensuremath{\mathbf{y}}\xspace)}{u^*_i(x_i, t_i)} \geq \alpha$,
where $\alpha$ is the approximation ratio of the corresponding mechanism. Hence, the claim for \textsc{Happiness}\xspace
follows immediately from those proofs, since they capture the definition of \textsc{Happiness}\xspace. For
\textsc{Utilitarian}\xspace, observe that $OPT_w=\max_\ensuremath{\mathbf{y}}\xspace \sum_i u_i(x_i, t_i, \ensuremath{\mathbf{y}}\xspace) \leq \sum_i u^*_i(x_i, t_i)$.
So, from the proofs of the aforementioned theorems we get that
$u_i(x_i, t_i, \ensuremath{\mathbf{y}}\xspace) \geq u^*_i(x_i, t_i)\cdot \alpha$
for every $i$. So, if we sum over $i$ we get that
$\sum_i u_i(x_i, t_i, \ensuremath{\mathbf{y}}\xspace) \geq \alpha \cdot \sum_i u^*_i(x_i, t_i) \geq \alpha \cdot OPT_w$
and the theorem follows.
\end{proof}
The observant reader may wonder whether the approximation guarantee of \texttt{Fixed}\xspace for \textsc{Utilitarian}\xspace
contradicts the result of~\cite{ZL15}. Recall that \cite{ZL15} proved that there is no deterministic
strategy-proof mechanism for \textsc{Utilitarian}\xspace with approximation ratio better than $\frac{2}{n}$.
However, a closer look reveals that their result relies on the following assumptions.
Firstly, every agent wants to be close to the first facility and away from the second facility.
Furthermore, they defined the utility of an agent located at $x_i$ to be
$u_i(x_i, \ensuremath{\mathbf{y}}\xspace)= |x_i-\ensuremath{\mathbf{y}}\xspace_1| - |x_i-\ensuremath{\mathbf{y}}\xspace_2|$. This different definition of utility is crucial
for deriving their negative results, and this is why our results do not contradict theirs.
\section{Discussion}
In this paper, we studied heterogeneous facility locations on the line segment. To the best of our knowledge, this
is the first systematic study of this model for the \textsc{Egalitarian}\xspace objective. We derived inapproximability results for
strategy-proof mechanisms for \textsc{Egalitarian}\xspace even for instances with known locations and two agents. Furthermore,
we derived strategy-proof mechanisms that achieve constant approximation for \textsc{Egalitarian}\xspace, some of which also achieve the same guarantee for \textsc{Utilitarian}\xspace and \textsc{Happiness}\xspace objectives.
All of our mechanisms are simple and can be implemented in a communication-efficient way; specifically,
every mechanism needs zero or five bits of information from every agent. Communication efficiency is
crucial for real-life scenarios. Consider the example of the factory and the school discussed in the
introduction. If thousands of citizens live on this street, then our mechanisms require only their preferences
and whether they live on the western or the eastern part of the street, rather than their full addresses,
saving the planner a considerable amount of effort. To the best
of our knowledge, this is the first time that communication complexity is studied for facility location problems.
We strongly believe that there is much to be said about facility location mechanisms and communication
complexity. Firstly, it would be really interesting to understand how limited communication affects the approximation
guarantee of mechanisms. Is there a better randomized mechanism than \texttt{Random}\xspace when no communication is allowed?
Are there better mechanisms than \ensuremath{\texttt{Fixed}^+}\xspace and \ensuremath{\texttt{Random}^+}\xspace when every agent is allowed to communicate $O(1)$ bits?
Can \ensuremath{\texttt{Fixed}^+}\xspace and \ensuremath{\texttt{Random}^+}\xspace be extended to $k \geq 3$ facilities?
Another intriguing avenue of research is to use communication complexity to define ``simple''
mechanisms. Recently~\citeauthor{LiOSP}~\cite{LiOSP} defined the \emph{obviously strategy-proof} (OSP)
mechanisms to capture the simplicity of mechanisms. Intuitively, a mechanism is obviously
strategy-proof if it remains incentive-compatible even when some of the agents are not fully rational.
The formal definition of OSP is quite technical, and thus we decided not to include it, as it
would deviate from the main theme of the paper. However, we strongly believe that some of our mechanisms, if not all of
them, are \emph{obviously strategy-proof}~\cite{LiOSP}. \texttt{Fixed}\xspace and \texttt{Random}\xspace do not use any information
from the agents. In both \ensuremath{\texttt{Fixed}^+}\xspace and \ensuremath{\texttt{Random}^+}\xspace, if an agent knows the declarations of the rest of the agents, then
he can verify, using $O(1)$ space, that he cannot increase his utility by misreporting his type. We believe that
mechanisms of this kind are de facto simple and deserve further study.
\newpage
\bibliographystyle{plainnat}
\section{Introduction} \label{sec:intro}
Active galactic nuclei (AGN) constitute a small fraction of super-massive black holes at the centers of galaxies. These objects are powered by accretion and a further fraction ($\sim$10\%) of AGN have highly collimated jets of fully ionized plasma that can reach scales of several Mpc. Numerous AGN are known to emit high-energy (HE; MeV$-$GeV)
and very-high-energy (VHE; E $>100$ GeV) $\gamma$-rays, presumably via inverse-Compton (IC) emission of leptonic particles within the jet. All but four of the 78 (jetted) AGN currently detected at VHE are blazars, where the jet is viewed nearly along its axis; the other four are radio galaxies where the jet associated with the AGN is viewed at somewhat larger angles.\footnote{TeVCat online source catalog: \citet{wakely2008}}
It has been suggested that radio galaxies form the parent population of blazars with core dominated objects (FR-Is, after the \cite{Fanaroff1974} classification) corresponding to BL-Lac objects observed at larger jet viewing angles, while lobe-dominated FR-II radio galaxies are, instead, associated with flat-spectrum radio quasars (FSRQ) \citep{Urry1995}.
The jet emission in blazars and radio galaxies is characterized by a double-peaked, non-thermal spectral energy distribution (SED). The lower frequency peak, which in blazars has a peak frequency ($\nu_{\mathrm{peak}}$) ranging from $10^{13}$ to $10^{18}$~Hz, is well-described as synchrotron emission from relativistic electrons spiraling in the magnetic field of the jet. The higher frequency peak, located in the $\gamma$-ray band, is generally attributed to inverse-Compton (IC) emission.
Typically, the sources with higher-frequency synchrotron peaks have higher-frequency IC peaks (i.e.\ beyond the 10$-$100 GeV range). Correspondingly, high-synchrotron-peaked (HSP) blazars ($\nu_{\mathrm{peak}}$ $>10^{15}$~Hz) are the brightest and best-studied AGN at VHE (51 of the current VHE AGN), even though they are the least luminous / powerful. In contrast, only 9 blazars (BL\,Lac objects and quasars) in the current TeV catalog are low-synchrotron-peaked (LSP; $\nu_{\mathrm{peak}}$ $\lesssim 10^{14}$~Hz) objects. Although there is no strict division between the classes, radio galaxies are believed to have their jets oriented at larger angles to the line-of-sight ($\gtrsim$10$^\circ$) than blazars. This larger misalignment means that radio galaxies are much less Doppler boosted than their blazar counterparts, and that they tend to have lower-frequency synchrotron peaks, similar to LSP blazars \citep{meyer2011}.
The IC emission in blazars and radio galaxies can arise from synchrotron self-Compton (SSC) or external Compton processes, or a combination of the two. It is generally thought that most low-power blazars (i.e.\ HSPs) have IC peaks dominated by SSC emission \cite[e.g.][]{boettcher2007,paggi2009} while the more powerful blazars (i.e.\ LSPs) are likely to require external-Compton processes \citep{sikora2009,Meyer2012-EC}. In the latter case, it is unclear which external photon field provides the seed photons for scattering, due to the uncertainty in the actual location of the high-energy emitting region in the jet \citep[e.g.][]{Arsioli2018}. The possibilities for the dominant seed-photon source include the molecular torus region, the much smaller broad-line emitting region, and even the accretion disk \citep{dermer1992,sikora1994,Blazejowski2000,sikora2009}. In addition to these purely leptonic scenarios, there are also models for jet emission which include a significant population of relativistic protons (i.e.\ hadronic models) that produce
HE and VHE $\gamma$-ray emission via several different processes \citep[e.g.][]{aharonian2000}. In particular, cloud-jet interaction models could explain the observed TeV flaring emission in sources like M87 \citep[e.g.][]{barkov2012}.
This paper describes the discovery by VERITAS in VHE $\gamma$-rays of the FR-I radio galaxy 3C\,264. It is the fourth radio galaxy detected at VHE, and the most distant, at a comoving distance of 93 Mpc. All four VHE radio galaxies are low-power, with FR-I type jets. The other VHE detections are Centaurus\,A, M\,87, and NGC\,1275 \citep[at a distance of 3.8, 16.7, and 62.5 Mpc, respectively;][]{harris2010_cena,blakeslee2009,ngcdist}. Two of the four VHE radio galaxies show superluminal motions on kpc scales (3C\,264 and M\,87). It is plausible that some very nearby radio galaxies are designated such because their
proximity makes the identification of their host galaxy easier. At much larger distances, the same objects would likely be classified as (slightly misaligned) blazars. Indeed, the VHE source IC\,310, detected by
both VERITAS and MAGIC \citep{Aleksic2014a}, is considered by some to be a fifth VHE radio galaxy \citep{IC310_RG1,IC310_RG2}. However,
there are convincing arguments that it is a borderline BL\,Lac object \citep{kadler2012}. A similar case is the VHE source PKS\,0625$-$354, for which there is also some ambiguity, although the balance of evidence is in favor of a BL Lac classification \citep{hess2018}.
Previous VHE detections of radio galaxies reveal high-energy Compton components similar to blazars in terms of spectral shape and origin, though at lower luminosity due to the decreased Doppler boosting. As is sometimes the case for blazars, single-zone SSC models are usually inadequate to explain the observed emission. In the blazar/radio galaxy IC\,310, a rising TeV component led to suggestions for a hadronic origin, or a leptonic origin with multiple electron distributions \citep{fraija2017}. A similar spectral hardening at VHE is seen in Cen\,A \citep{Aharonian2009_cenA} and possibly M\,87 \citep{rieger2018_review}. In contrast, a single-zone SSC model is compatible with the high-energy emission and variability of NGC\,1275 \citep{Aleksic2014a}.
Comparing jet structure within VHE radio galaxies, 3C\,264 closely resembles M\,87. Both have one-sided FR-I type jets with the same kinetic luminosity ($10^{43.8}$ erg s$^{-1}$; \citealp{meyer2011}). Their
jets also have similar morphological traits (i.e.\ multiple knots) and they share
similar qualitative kinematic characteristics within the jet substructure \citep{meyer2015_nature}.
In contrast, NGC\,1275 and Cen\,A both have misaligned two-sided radio jets.
M\,87 has famously shown an outburst in the optical and X-ray bands from HST-1, a knot $\sim$100 pc downstream of the core (sky-projected) \citep{harris2006_m87}. It has also exhibited extreme (day-scale) VHE variability on multiple occasions \citep{aharonian2006,harris2009_m87, aliu2012}, which some attribute to HST-1, rather than the core. In light of the similarities between M\,87 and 3C\,264, and due to the ongoing collision between two knots in the 3C\,264 jet \citep{meyer2015_nature},
a suite of contemporaneous multi-wavelength observations was assembled to complement the detection of an increased VHE flux from 3C\,264 in early 2018.
The goal was to observe a change in brightness or structure within the jet or core during the same period. Therefore, in addition to the VERITAS VHE discovery of 3C 264, this paper describes the results from this multi-wavelength observation campaign,
as well as the similarities and differences between
3C\,264 and M\,87, particularly in light of their variability on 100$-$1000 parsec scales and at VHE.
In this paper a standard $\Lambda$CDM cosmology is assumed with $H_0$ = 67.8 km~s$^{-1}$~Mpc$^{-1}$, $\Omega_M$ = 0.308, and $\Omega_\Lambda$ = 0.692. The luminosity distances to 3C\,264 and M\,87 are 95.4 and 22.2 Mpc, respectively.
\section{VERITAS Data \& Results } \label{sec:data}
\label{DataVHE}
The Very Energetic Radiation Imaging Telescope Array System (VERITAS) is an array of four imaging atmospheric Cherenkov telescopes located at Fred Lawrence Whipple Observatory near Amado, Arizona (31$^\circ$ 40'N, 110$^\circ$ 57'W). The 12-m diameter telescopes are of the Davies-Cotton design, and each is instrumented with a 499 photomultiplier tube (PMT) camera providing a field-of-view of 3.5$^\circ$. The observatory is sensitive to $\gamma$-rays between $\sim$85 GeV and $\sim$30 TeV. The angular resolution of
the facility is $\sim$0.08$^\circ$ at 1 TeV, and its energy resolution is approximately 15\% \citep{Holder2006, Christiansen2017}.
\begin{figure}[t]
\centering
\includegraphics[width=3.5in]{figures/VERITAS_3C264_SigMap_RBM_LiMa_2018.png}
\caption{\label{fig:VERITAS_skymap} VERITAS sky map of the significance observed from the direction of 3C\,264 during 2018. The centroid of the excess observed by VERITAS is within $2\sigma$ of the SIMBAD position of 3C\,264 (black cross). The extent of the VHE source is consistent with the VERITAS point-spread function (PSF). The PSF in this analysis is reduced from prior publications due to the use of the ITM $\gamma$-ray reconstruction \citep{Christiansen2017}.}
\end{figure}
\begin{table*}[t]
\caption{Results from VERITAS observations of 3C\,264 in 2017 $-$ 2019. The quality-selected live time, number of $\gamma$-ray-like events in the on- and off-source regions, the normalization for the larger off-source region, the observed excess of $\gamma$-rays and the corresponding statistical significance are shown. For each observation epoch, the integral flux corresponding to the observed excess is given. For the 2017 and 2019 observations, an upper limit may be more appropriate and this information is given in the text. The flux is reported above the observation threshold of 315 GeV, and is also given in percentage of Crab Nebula flux above the same threshold. Some quantities may not appear to sum precisely due to rounding.}
\centering
\begin{tabular}{lccccccccc}
\hline \hline
Epoch & MJD & T & On & Off & Norm. & Excess & Significance & Flux ($>$315 GeV) & Crab \\
& & [ hr ] & & & & & [ $\sigma$ ] & [ $10^{-13}$ cm$^{-2}$ s$^{-1}$ ] & [ \% ] \\
\hline
Total (2017 $-$ 19) & 57811$-$58633 & 57.0 & 225 & 1856 & 0.0666 & 101.4 & 7.8 & $ 5.8 \pm 0.9 $ & $0.54 \pm 0.08$ \\
\hline
Feb. $-$ May 2017 & 57811$-$57893 & 9.2 & 26 & 306 & 0.0663 & 5.7 & 1.2 & $1.9 \pm 1.7 $ & $0.18 \pm 0.16$ \\
Feb. $-$ April 2018 & 58158$-$58229 & 37.9 & 172 & 1279 & 0.0665 & 87.0 & 7.9 & $7.6 \pm 1.2 $ & $0.71 \pm 0.11$ \\
Jan. $-$ May 2019 & 58487$-$58633 & 10.0 & 27 & 271 & 0.0674 & 8.8 & 1.8 & $2.9 \pm 1.8 $ & $0.27 \pm 0.17$\\
\hline
February 2018 & 58158$-$58170 & 3.0 & 20 & 102 & 0.0667 & 13.2 & 3.9 & $13.1 \pm 4.5$ & $1.20 \pm 0.41$ \\
March 2018 & 58186$-$58198 & 17.7 & 93 & 599 & 0.0667 & 53.0 & 6.8 & $10.2 \pm 1.9$ & $0.95 \pm 0.18$ \\
April 2018 & 58212$-$58229 & 17.2 & 59 & 578 & 0.0662 & 20.8 & 3.0 & $4.0 \pm 1.5$ & $0.37 \pm 0.14$ \\
\hline
\end{tabular}
\label{table:VERITAS_exposure}
\end{table*}
The VERITAS observations of 3C\,264 were taken from February through May 2017, from February through April 2018, and from January through May 2019. The AGN was observed in 30-minute runs in `wobble' mode, where the source position was offset from the center of the camera field of view by $0.5^{\circ}$ in each of the cardinal directions in successive runs \citep{Fomin1994}. Generally, several runs were taken on each observing night during the approximately monthly `dark periods' in the three seasons of data taking. However, the observed signal from 3C\,264 is relatively weak, and therefore results are only reported for coarse temporal bins. A total of 11.0, 47.7 and 12.8 hours of data were taken in weather conditions classified as good quality by VERITAS observers in 2017, 2018 and 2019, respectively. These data are further quality selected based on information from atmospheric-monitoring instruments and the functionality of various subsystems.
The data are reduced using the Image-Template Method (ITM) \citep{Christiansen2017}. The PSF in this analysis is reduced from prior publications due to the improved angular resolution of the ITM $\gamma$-ray reconstruction. The event-selection criteria for identifying $\gamma$-ray images and removing background cosmic-ray images are optimized for hard-spectrum sources using Crab Nebula data scaled to 1\% of its nominal strength.
The signal is extracted from a circular region of $0.0707^{\circ}$ radius centered on the International Celestial Reference Frame (ICRF) radio position of 3C~264, and the background is typically determined from 15 off-source regions with the same offset from the center of the VERITAS camera (Reflected Region Method; \cite{Berge}). The significance of any excess is calculated following Equation 17 of \cite{LiMa}. The $\gamma$-ray selection requirements result in an average energy threshold of about 315 GeV for the conditions under which 3C\,264 was observed.
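For reference, Equation 17 of \cite{LiMa} can be evaluated in a few lines of Python; the inputs below are the 2017$-$2019 totals from Table~\ref{table:VERITAS_exposure} and reproduce the quoted significance up to rounding. This sketch is purely illustrative and is not part of the VERITAS analysis chain.
\begin{verbatim}
import numpy as np

# Li & Ma (1983), Eq. 17: significance of a counting excess given N_on,
# N_off, and the on/off normalization alpha.
def li_ma_significance(n_on: float, n_off: float, alpha: float) -> float:
    n_tot = n_on + n_off
    term_on = n_on * np.log((1 + alpha) / alpha * n_on / n_tot)
    term_off = n_off * np.log((1 + alpha) * n_off / n_tot)
    return np.sqrt(2.0 * (term_on + term_off))

# Totals from Table 1: N_on = 225, N_off = 1856, alpha = 0.0666.
print(li_ma_significance(225, 1856, 0.0666))  # ~7.85 (7.8 sigma, to rounding)
\end{verbatim}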
Table~\ref{table:VERITAS_exposure} shows the results from the VERITAS observations. Overall, an excess of 101 $\gamma$-ray-like events is observed from the direction of 3C\,264, corresponding to a statistical significance of 7.8 standard deviations ($\sigma$) above background. While some excess is observed in both the 2017 and 2019 data sets, it is clear that a majority of the signal comes from the 2018 observations, and from February to March 2018 in particular.
The 2018 observations yield an excess of 87 events ($7.9 \sigma$) in 37.9 hours of live time, and the VERITAS results from these data are emphasized in this paper. Figure~\ref{fig:VERITAS_skymap} shows the significance map for the 2018 data. A clear point-source is seen at the position of 3C\,264.
The VHE light curve from 3C\,264 is shown in Figure~\ref{fig:VERITAS_lightcurve}, and all the plotted integral flux values above the observation threshold of 315 GeV are given in Table~\ref{table:VERITAS_exposure}. The systematic error on the flux measured by VERITAS is 30\%. The flux for the total 2017$-$19 measurement is shown as a line (short-dashed) in Figure~\ref{fig:VERITAS_lightcurve}. There is evidence for variability in the annual measurements. A fit of a constant to the annual flux values is poor ($\chi^2 = 9.7$, 2 degrees of freedom, P($\chi^2$) = 0.0079). This is driven by the elevated flux seen in 2018, F($>$315 GeV) = $(7.6 \pm 1.2) \times 10^{-13}$ cm$^{-2}$ s$^{-1}$, which corresponds to 0.7\% of the Crab Nebula flux \citep{Albert2008}. Although an elevated flux is seen from 3C\,264 in 2018, the observed value places it among the dimmest sources detected in the VHE band. The monthly fluxes observed in 2018 also show evidence for VHE variability, as a similar fit of a constant is poor ($\chi^2 = 8.8$, 2 degrees of freedom, P($\chi^2$) = 0.012). The poor $\chi^2$ comes from the factor of $2-3$ decrease in April 2018 from the elevated flux-state observed during the February to March 2018 time period. The significance of the excess observed from 3C\,264 in 2017 and 2019 is low during each of those seasons. Correspondingly, 99\% confidence level upper limits of F($>$315 GeV) $< 7.0 \times 10^{-13}$ cm$^{-2}$ s$^{-1}$ for 2017, and F($>$315 GeV) $< 8.2 \times 10^{-13}$ cm$^{-2}$ s$^{-1}$ for 2019, are also reported.
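The variability test quoted above is a standard constant-flux (weighted least-squares) fit; the sketch below (ours) approximately reproduces the annual-flux $\chi^2$ from the rounded values in Table~\ref{table:VERITAS_exposure}, with small differences expected because the published fluxes are rounded.
\begin{verbatim}
import numpy as np

# Constant-flux fit to the annual fluxes in Table 1 (10^-13 cm^-2 s^-1):
# F = 1.9 +- 1.7 (2017), 7.6 +- 1.2 (2018), 2.9 +- 1.8 (2019).
flux = np.array([1.9, 7.6, 2.9])
err = np.array([1.7, 1.2, 1.8])
mean = np.sum(flux / err**2) / np.sum(1 / err**2)   # inverse-variance mean
chi2 = np.sum(((flux - mean) / err) ** 2)           # ~9.4 vs. 9.7 in the text
print(mean, chi2)
\end{verbatim}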
The photon spectrum from the 2018 VERITAS observations of 3C\,264 is shown in Figure~\ref{fig:VERITAS_spectrum}. The data are well fit by a power law of the form $dN/dE \propto E^{-\Gamma}$ ($\chi^2$ = 3.0, 4 degrees of freedom) with a hard photon index of $2.20 \pm 0.27_{\mathrm{stat}} \pm 0.20_{\mathrm{syst}}$ and differential flux normalization of $(1.94 \pm 0.35_{\mathrm{stat}} \pm 0.58_{\mathrm{syst}}) \times 10^{-13}$ cm$^{-2}$ s$^{-1}$ TeV$^{-1}$ at 1 TeV.
The location of the VERITAS excess is determined using a two-dimensional Gaussian fit to a map of the excess of events. The centroid of the point-like excess is located (J2000) at Right Ascension (RA) $11^h 45^m 8.4^s \pm 0.7^s_{\mathrm stat}$ and declination ($\delta$) $+19^\circ 36' 29'' \pm 17''_{\mathrm stat}$. The source is accordingly named VER\,J1145+196, and it is located $0.017^\circ$ from the (ICRF) radio position of 3C\,264 of RA = $11^h 45^m 05.00903^s$ and $\delta$ = $+19^\circ 36' 22.7414''$ \citep{fey2004}. The VERITAS measurement has a systematic uncertainty of $0.007^\circ$ (25$''$), in addition to the statistical uncertainty of $0.006^\circ$. The systematic uncertainty comes largely from the accuracy of the calibration of the VERITAS pointing system, which corrects for the flexing of each telescope's optical support structure \citep{Griffiths2015}. The reconstructed source position is therefore consistent with the ICRF location at the $2\sigma$ level.
\begin{figure}[bth]
\centering
\includegraphics[width=3.5in]{figures/VERITAS_3C264_LightCurve_stack.png}
\caption{\label{fig:VERITAS_lightcurve} \textit{Bottom: }VHE light curve measured by VERITAS. The flux was elevated in February and March of 2018. Upper limits at the 99\% CL are also shown for the 2017 and 2019 observations due to the low significance of the observed excesses. The flux observed in 2017$-$19 is indicated by the line (short-dashed). The 2018 flux is indicated by the dashed line segment. \textit{Top:} X-ray flux light curve measured by \textit{Swift}-XRT. The average flux from 2018-2019 is indicated by a dashed line. }
\end{figure}
\begin{figure}[tbh]
\centering
\includegraphics[width=3.5in]{figures/VERITAS_3C264_Spectrum_Wobb_2018_v257_nogrid.png}
\caption{\label{fig:VERITAS_spectrum} VHE spectrum observed by VERITAS in 2018.}
\end{figure}
\section{Multi-Wavelength Data Sets}
\subsection{HE $\gamma$-ray Observations}
\label{DataHE}
The \emph{Fermi} Large Area Telescope (LAT) is sensitive to $\gamma$-rays from $\sim$20 MeV to $\sim$300 GeV, and it operates primarily in a survey mode that covers virtually the entire sky every few hours. 3C\,264 appears in several \emph{Fermi}-LAT catalogs and is associated with the source 4FGL\,J1144.9+1937. It is not classified as variable in the 8-year \emph{Fermi} catalog \citep{4FGL2019}, where its spectrum is characterized by a power law with a photon index of $1.94 \pm 0.10$. Although the object is not listed as a variable source, archival \emph{Fermi}-LAT observations in the region of 3C\,264 were analyzed for the time period coincident with the VERITAS observations in 2018 (09 Feb 2018 to 21 Apr 2018), corresponding to mission-elapsed time (MET) 539827205 to 545961605, in order to probe the HE emission. A standard `unbinned' likelihood analysis was performed using the Python-based \emph{Fermi} tools (version 1.0.1). In particular, a region of interest (ROI) of $15^{\circ}$ around the position of 3C\,264 was analyzed, including photons with energies from 100 MeV to 100 GeV. The initial maximum-likelihood optimization used a model file populated from the 3FGL catalog \citep[3FGL;][]{acero2015}. A converged fit was found after some iteration, which required both adding new sources to the model file based on residual significance in the test-statistic (TS) maps and removing some low-significance sources from the 3FGL list. After the final XML source model file was generated, we used maximum likelihood to measure the flux and spectral index of 3C\,264 during the 2018 VERITAS observations. The spectral index has a large error, so we also ran a fit with the spectral index fixed to the 4FGL catalog value, and this source flux is reported in Section~\ref{sec:FermiObservations}. We also generated a 95\% upper limit for the time frame of the 2017 observations using the default likelihood-profile method of the \emph{Fermi} \texttt{UpperLimit} tool.
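A skeleton of such an unbinned likelihood fit with the Python interface of the \emph{Fermi} tools is given below; all file names and the source name are placeholders, and the standard pre-processing (event selection, livetime cube, exposure map) is assumed to have been run already.
\begin{verbatim}
# Unbinned likelihood fit with the Fermi Science Tools python interface.
# File names are placeholders; gtselect/gtmktime/gtltcube/gtexpmap are
# assumed to have been run beforehand.
from UnbinnedAnalysis import UnbinnedObs, UnbinnedAnalysis

obs = UnbinnedObs(eventFile='3c264_events.fits', scFile='spacecraft.fits',
                  expMap='expmap.fits', expCube='ltcube.fits',
                  irfs='P8R3_SOURCE_V2')
like = UnbinnedAnalysis(obs, srcModel='roi_model.xml', optimizer='NewMinuit')
like.fit(verbosity=0)

src = '4FGL J1144.9+1937'  # placeholder source name from the XML model
print('flux  =', like.flux(src, emin=100, emax=100000), 'ph cm^-2 s^-1')
print('error =', like.fluxError(src, emin=100, emax=100000))
\end{verbatim}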
\subsection{X-ray Observations}
\label{DataXray}
\paragraph{Swift.}
\label{xrt}
The X-Ray Telescope (XRT) aboard the \textit{Neil Gehrels Swift} Observatory observed 3C\,264 (Target ID: 10512) in 2018 and 2019 for a total exposure of 54 ks. The data are almost evenly split between the two years, and these are all known XRT exposures of 3C\,264 prior to the end of the VERITAS observations in 2019. These observations were made under several target of opportunity requests, and the observation IDs are listed in Table~\ref{tab:xrtfitting} next to the appropriate spectral fits. Since 3C\,264 has a relatively low XRT count rate, all exposures were taken in photon counting (PC) mode. The average count rate is 0.09 counts/s, well below the suggested threshold of 0.5 counts/s for performing any pile-up correction. All XRT exposures were processed and analyzed with \texttt{HEASoft V6.26.1}\footnote{https://heasarc.gsfc.nasa.gov/docs/software/heasoft/}. All level-2 data products were created locally using \texttt{xrtpipeline} V0.13.5. The spectra were extracted using \texttt{XSELECT V2.4g} and model fitting was performed using \texttt{XSPEC 12.10.1f} (via \texttt{PyXspec}). The source region is a circle of radius $r = 20$ pixels (about $45''$), while the background region has the same shape and size and was placed nearby to avoid the single other point source in the region.
Each XRT observation was analyzed individually, as well as grouped into the several epochs shown in Table~\ref{tab:xrtfitting}. A similar analysis was performed on each sample. Events were extracted between 0.3 and 10 keV. However, because the source is relatively faint, the energy bins above 7 keV are often consistent with zero flux, so the model fit was used to extrapolate the flux to 10 keV. Each spectrum was fit to a simple absorbed power law using the \texttt{XSPEC} model \texttt{phabs*powerlaw}. The hydrogen column density was fixed at $n_H = 1.96\times10^{20}\rm\ cm^{-2}$ and the remaining fit (power-law) parameters were left free. This value of $n_H$ was determined using the \texttt{nhtot} webtool\footnote{https://www.swift.ac.uk/analysis/nhtot/index.php} which facilitates the use of the column density measurements described in \citet{2013MNRAS.431..394W}. Since the source is a point source, the weighted $n_H$ value was used. Past analyses with \emph{Chandra} and XMM (e.g.\ \citealp{perlman2010_3c264, Evans2006}) found that power-law models provided better fits for 3C\,264 than those including some sort of thermal component (e.g., adding the \texttt{XSPEC} models \texttt{apec} or \texttt{bbody} to the absorbed power law). This is also supported by the X-ray emission being possibly dominated by the large-scale jet, and not the core or galactic dust (see Section~\ref{Chandra-res}). Therefore no thermal model was included.
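A minimal \texttt{PyXspec} version of this fit, with a placeholder spectrum file name, is:
\begin{verbatim}
# Absorbed power-law fit in PyXspec, as described above.
# The grouped spectrum file name is a placeholder.
from xspec import AllModels, Fit, Model, Spectrum

spec = Spectrum("3c264_xrt_grp.pha")
spec.ignore("**-0.3 10.0-**")        # restrict to 0.3-10 keV

m = Model("phabs*powerlaw")
m.phabs.nH = 0.0196                  # 1.96e20 cm^-2, in units of 1e22
m.phabs.nH.frozen = True             # column density held fixed
Fit.perform()

AllModels.calcFlux("0.3 10.0")       # model flux over 0.3-10 keV
print("flux (erg cm^-2 s^-1):", spec.flux[0])
\end{verbatim}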
\paragraph{Chandra.}
3C\,264 was observed on 04 Apr 2018 with the High Resolution Camera (HRC) on board \emph{Chandra} under Director's Discretionary Time proposal 21058. The exposure time is 14.58 ks, and all data analysis is conducted with CIAO version 4.11. The data are reprocessed in the standard way using the \texttt{chandra\_repro} script. The total flux from 3C\,264 is estimated using the \texttt{srcflux} script with the `wide' band appropriate for HRC. Previous \emph{Chandra} observations of 3C\,264 were made using the Advanced CCD Imaging Spectrometer (ACIS), where elongation of the source along the radio/optical jet direction was noted \citep{perlman2010_3c264}. To evaluate the extended emission in the 2018 HRC observations, the \emph{Chandra}/HRC PSF at the location of 3C\,264 was calculated using the
online \texttt{CHaRT} tool and simulations from the CIAO task \texttt{psf\_project\_ray}. To reduce the statistical uncertainty (noise) in the PSF, which is matched to the data in total counts and exposure, 50 realizations of the instrument PSF from \texttt{CHaRT}
were requested and the resulting detector-plane PSFs produced by \texttt{psf\_project\_ray} were averaged.
This PSF is used to deconvolve the image using the Lucy-Richardson algorithm as implemented in the CIAO task \texttt{arestore}.
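For readers unfamiliar with \texttt{arestore}, the underlying iteration is straightforward; a bare-bones Richardson-Lucy implementation (with placeholder arrays, not the CIAO task itself) looks like:
\begin{verbatim}
# Bare-bones Richardson-Lucy deconvolution, illustrating the algorithm
# implemented by the CIAO task arestore. `image` and `psf` are
# placeholder float arrays.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=100):
    psf = psf / psf.sum()                    # normalize PSF to unit sum
    psf_flip = psf[::-1, ::-1]               # mirrored PSF for the adjoint
    estimate = np.full(image.shape, image.mean(), dtype=float)
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode='same')
        ratio = image / np.maximum(blurred, 1e-12)
        estimate *= fftconvolve(ratio, psf_flip, mode='same')
    return estimate
\end{verbatim}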
\subsection{Optical Observations}
\label{DataOptical}
\paragraph{Ground-based.}
3C\,264 was observed by two ground-based optical observatories as part of a target of opportunity campaign in 2018. Individual Johnson R-band exposures were taken on eight nights between 22 Mar 2018 and 10 Apr 2018 (MJD 58199$-$58218) at the 1.3 meter Robotically Controlled Telescope (RCT) at Kitt Peak National Observatory. In addition, Johnson V-band exposures were acquired on 14 nights between 21 Mar 2018 and 20 May 2018 (MJD 58198$-$58258), with the data split nearly evenly between two nodes of iTelescope.net: the T21 413 mm reflector of the New Mexico Observatory and the T27 770 mm reflector of the Siding Spring Observatory; an additional data point was taken with the T32 413 mm reflector at the Siding Spring Observatory.
The data were bias-subtracted and flat-field corrected using standard IRAF routines for the RCT observations and MIRA PRO UE for the iTelescope.net observations. V and R magnitudes were determined using differential aperture photometry with a comparison star in the same field of view as 3C\,264 and a photometric radius of $10.15''$.
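A sketch of this differential photometry with \texttt{photutils} follows; the image, pixel positions, aperture radius, and comparison-star magnitude are all hypothetical.
\begin{verbatim}
# Differential aperture photometry sketch (photutils). Positions, radius,
# and the comparison-star magnitude are hypothetical.
import numpy as np
from photutils.aperture import CircularAperture, aperture_photometry

data = np.random.rand(1024, 1024)              # placeholder reduced frame
positions = [(512.3, 498.7), (340.1, 610.5)]   # target, comparison star
apertures = CircularAperture(positions, r=15.0)

phot = aperture_photometry(data, apertures)
counts_target, counts_comp = phot['aperture_sum']

m_comp = 11.80                                 # catalog magnitude (hypothetical)
m_target = m_comp - 2.5 * np.log10(counts_target / counts_comp)
\end{verbatim}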
\paragraph{UVOT.}
The Ultraviolet/Optical Telescope (UVOT) aboard \textit{Swift} observed 3C\,264 simultaneously during the XRT exposures described in Section~\ref{DataXray} (see Table~\ref{tab:xrtfitting}). Observations were made using all six available filters (v, b, u, uvw1, uvm2, uvw2), and the UVOT exposures were processed using \texttt{uvotproduct} version 2.4 to calculate the flux and generate light curves. A circle of radius $r = 5''$ was used for the source. The background was extracted from a nearby, source-free circle of radius $r = 20''$.
\paragraph{HST.}
The kpc-scale jet of 3C\,264 is visible in the optical as well as the radio, and has been extensively observed by HST. The recent discovery of optical superluminal proper motions and colliding knots in the jet \citep{meyer2015_nature} was enabled by comparing a moderately deep ACS/WFC F606W image\footnote{HST filter information is provided here: \url{http://svo2.cab.inta-csic.es/svo/theory/fps3/index.php?mode=browse&gname=HST}} from May 2014 against earlier short WFPC2 observations from 1994, 1996, and 2002. Based on this result, a long-term monitoring campaign with HST began in 2015. This campaign has an approximately two-year cadence following the 2014 observation; new observations were made in 2015/2016 and 2018/2019. These include polarization imaging with ACS/WFC in F606W and multi-band imaging with WFC3/UVIS for diagnostics on possible changes in the knot spectrum as the collision of knots B and C continues. Further details will appear in Meyer et al.\ (in prep.).
\subsection{Radio Observations}
\label{DataRadio}
\paragraph{VLA.}
The jet of 3C\,264 was observed by the VLA in K-band, A-configuration on 13 Aug 2015 and 02 Apr 2018. The 2015 observation (Project 15A-507) was taken in order to provide an updated image of the jet after the discovery of the fast proper motion of two of the four knots in the optical. The April 2018 observation was obtained from Director's Discretionary Time (Project 18A-464) in response to the increased VERITAS flux observed in early 2018. The setup and length of these observations were identical. Therefore the 2015 epoch is an excellent reference to determine if any changes in the core or jet knots occurred during the VERITAS flare. The data can also be compared to deep K-band imaging acquired in 1983 and 2003.
Both recent data sets were calibrated using the \texttt{CASA} pipeline (version 4.7.2), and the scans on 3C\,264 were split off for imaging using \texttt{clean}. Due to the wide-band observing mode (18$-$25 GHz), \texttt{nterms}=2 was used in \texttt{clean}. Full polarization products were obtained after several initial rounds of self-calibration. Briggs weighting with a robust parameter of 0.5 was used for all imaging. The pixel scale was set to $0.025''$ to match the HST imaging scale. The final synthesized beam has a size of $0.12'' \times 0.08''$ and $0.18'' \times 0.08''$ in the 2015 and 2018 images, respectively. The fractional polarization and electric-vector position angle (EVPA) were calculated according to the standard formulae from the Stokes images.
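For completeness, the standard formulae referred to here are $p = \sqrt{Q^2+U^2}/I$ and ${\rm EVPA} = \frac{1}{2}\arctan(U/Q)$, e.g.:
\begin{verbatim}
# Fractional linear polarization and EVPA from Stokes I, Q, U images
# (numpy arrays of equal shape).
import numpy as np

def linear_polarization(I, Q, U):
    p_frac = np.sqrt(Q**2 + U**2) / I    # fractional polarization
    evpa = 0.5 * np.arctan2(U, Q)        # E-vector position angle, radians
    return p_frac, evpa
\end{verbatim}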
\paragraph{VLBI.}
Observations with the Very Long Baseline Array (VLBA) were made on 30 Mar 2018 under a Director's Discretionary Time request related to the VHE flaring activity (Project Code BM\,450). Simultaneous multi-frequency VLBA observations were performed for a total of 4 hours at 5.0 GHz, 8.4 GHz, 12.1 GHz, and 15.3 GHz. Both circular and cross-hand polarizations were recorded with 2-bit sampling at 2048 Mbps, with 8 intermediate frequencies, each of 32 MHz bandwidth. The Los Alamos antenna did not participate due to a telecommunications problem, but useful data were obtained with the other 9 VLBA antennas.
The frequency scans were interleaved and interspersed with scans on the bright fringe calibrator source OM\,280. The total integration times were adjusted to yield an rms image noise of $\sim 0.1$ mJy/beam at each frequency.
Each frequency band was processed following standard procedures in \texttt{AIPS} and \texttt{DIFMAP}, and produced naturally weighted images with a pixel size
of 0.05 mas. The antenna polarization leakage terms were corrected using the \texttt{AIPS} task \texttt{LPCAL}. It is not possible to calibrate the instrumental
polarization EVPA offset due to a lack of a simultaneous single-dish observation of either 3C\,264 or OM\,280. Neither source has any jet features with stable EVPA that could be used for calibration purposes.
3C\,264 is also monitored as a part of the MOJAVE\footnote{\url{http://www.physics.purdue.edu/astro/MOJAVE/sourcepages/1142+198.shtml}} program.
In addition to the new data obtained in March 2018, MOJAVE monitoring data exist from 2016 onward, and the MOJAVE sample also includes an archival 15 GHz observation from 2005. For the analysis methods of the MOJAVE program, see \cite{MOJAVE_XV}.
\section{Multi-Wavelength Results} \label{sec:Results}
\subsection{Fermi Observations} \label{sec:FermiObservations}
3C\,264 (4FGL\,J1144.9+1937) is not a particularly strong \emph{Fermi}-LAT source, with an 11.4$\sigma$ detection significance in the 8-year catalog. Correspondingly, it should only be weakly detected ($\sim$2$\sigma$) in a few-month integration, and it is not classified as variable in the 4FGL catalog. The \emph{Fermi}-LAT data taken contemporaneously (MJD 58158$-$58229) with the VERITAS sample in 2018 indicate a flux of F(1$-$100 GeV) = $(7.1\pm3.7) \times 10^{-9}$ ph cm$^{-2}$ s$^{-1}$, higher than the 4FGL catalog value of $(2.85\pm0.40) \times 10^{-10}$ ph cm$^{-2}$ s$^{-1}$. The \emph{Fermi}-LAT data taken during the main VERITAS observing period in 2018 also indicate a
flat MeV-GeV spectrum ($\Gamma=2.1\pm0.6$), consistent with the 4FGL value ($\Gamma=1.94\pm0.10$).
Both the concurrent sample and 8-year catalog indicate the peak of the inverse-Compton component
of the spectral energy distribution is in the GeV band.
\begin{figure}[ht]
\centering
\includegraphics[width=0.7\columnwidth, angle=270]{figures/xrtallspectrum.pdf}
\caption{\label{fig:swiftsed} \textit{Swift}-XRT spectrum and fit for 2018 and 2019. }
\end{figure}
\begin{table*}
\caption{\textit{Swift}-XRT spectral fit results for 3C\,264. The normalization and photon index ($\Gamma$) are the best-fit results for a power law from \texttt{phabs*powerlaw} with $n_H = 1.96\times10^{20}\rm\ cm^{-2}$. All observation IDs share the prefix 000105120.}
\centering
\begin{tabular}{cccccc}
\hline \hline
Data set & Normalization & $\Gamma$ & $\chi^2 / dof$ & 0.3-10 keV Model Flux & obsid\\
& [ $\rm 10^{-4}\ \rm ph\ cm^{-2}\ s^{-1}\ keV^{-1}$ ] & & &[ $10^{-12}\ \rm erg\ cm^{-2}\ s^{-1}$ ] & 000105120nn \\
\hline
Jan 2018 & $6.54\pm0.25$ & $2.17\pm0.06$ & 317 / 486 & $3.14\pm0.12$ & 01-04 \\
Mar 2018 & $7.09\pm0.26$ & $2.06\pm0.06$ & 232 / 408 & $3.62\pm0.13$ & 06-09,11-12 \\
Apr 2018 & $5.42\pm0.27$ & $2.19\pm0.08$ & 236 / 326 & $2.58\pm0.13$ & 14,16-21 \\
all 2018 & $7.15\pm 0.16$ & $2.10 \pm 0.04$ & 461 / 617 & $3.55\pm 0.08$ & all of the above\\
\hline
all 2019 & $4.68\pm 0.13$ & $2.20 \pm 0.05$ & 327 / 598 & $2.22\pm 0.06$ & 22-27,29-33 \\
\hline
2018+2019 & $6.34\pm 0.12$ & $2.12 \pm 0.03$ & 487 / 549 & $3.13\pm 0.06$ & all of the above\\
\hline
\end{tabular}
\label{tab:xrtfitting}
\end{table*}
\subsection{Swift Observations}
The spectral-fit results for selected monthly and yearly epochs are shown in Table~\ref{tab:xrtfitting}. The $\chi^2$ for each fit is reasonable (i.e., $\chi^2/dof < 1$), and the 2018+2019 X-ray spectrum along with the corresponding fit is shown in Figure~\ref{fig:swiftsed} as an example. The monthly-binned X-ray flux in 2018 is significantly variable when compared to the average, with $\chi^2/dof = 32.1/2$ (P$(\chi^2) \approx 10^{-7}$). \textit{Swift} did not observe 3C\,264 in Feb 2018, when VERITAS observed its highest flux, but the general trend is still apparent with the available observations from January, March, and April 2018. The XRT light curve is shown in Figure~\ref{fig:VERITAS_lightcurve}.
Light curves of the \textit{Swift}-UVOT exposures were inspected and no time variability was found. This reinforces the expectation that the emission in this band should be dominated by stars in the galaxy and stable on this time scale. For each UVOT filter, the results were time-averaged to find a mean magnitude and energy flux. The results are shown in Table~\ref{tab:uvotflux}.
\begin{table}[t]
\caption{\textit{Swift}-UVOT spectral information, time averaged from all 2018$-$19 \textit{Swift} observations of 3C\,264. These measurements cover a significant fraction of the entire galaxy, and not only the core or jet structure.}
\centering
\begin{tabular}{ccc}
\hline \hline
Filter & Energy Flux & Magnitude \\
& [ $\rm erg\ cm^{-2}\ s^{-1}$ ] & \\
\hline
v & $(3.77 \pm 0.03)\times 10^{-11}$ & $14.336 \pm 0.008$ \\
b & $(2.15 \pm 0.02)\times 10^{-11}$ & $15.305 \pm 0.008$ \\
u & $(6.87 \pm 0.06)\times 10^{-12}$ & $15.626 \pm 0.009$ \\
uvw1 &$(3.47 \pm 0.04)\times 10^{-12}$ & $16.192 \pm 0.012$ \\
uvm2 & $(2.61 \pm 0.04)\times 10^{-12}$ & $16.519 \pm 0.015$\\
uvw2 & $(2.49 \pm 0.02)\times 10^{-12}$ & $16.548 \pm 0.010$ \\
\hline
\end{tabular}
\label{tab:uvotflux}
\end{table}
\subsection{Ground-based Optical Observations}
No significant variability is found in the light curves from the RCT and iTelescope.net observatories. The largest difference between any two RCT points is 0.06 magnitude, and no single iTelescope.net point differed by more than 0.16 magnitude.
The mean R-band measurement with the RCT is $R = 13.09$, and the mean V-band measurement with iTelescope.net is $V = 13.49$, which correspond to fluxes of 17.9 mJy at 640 nm and 15.4 mJy at 550 nm, respectively. No attempt is made to subtract the host-galaxy flux, and it is important to note that the integration radius for these optical data is $\sim$2 times larger than that used for the UVOT results.
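These magnitude-to-flux conversions can be checked with $F = F_0\,10^{-m/2.5}$, assuming Bessell-style zero points of roughly 3060 Jy (R) and 3640 Jy (V); the exact values depend on the adopted calibration.
\begin{verbatim}
# Magnitude-to-flux conversion check, assuming zero points of ~3060 Jy (R)
# and ~3640 Jy (V); exact zero points depend on the calibration adopted.
f_R = 3060e3 * 10 ** (-13.09 / 2.5)   # mJy; ~17.8 mJy at 640 nm
f_V = 3640e3 * 10 ** (-13.49 / 2.5)   # mJy; ~14.6 mJy at 550 nm
print(f_R, f_V)
\end{verbatim}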
\subsection{Chandra Observations}
\label{Chandra-res}
The deconvolved HRC-I image of 3C\,264 is shown in Figure~\ref{fig:chandra}, with the HST contours of the jet overlaid. The source is clearly extended in the image. This is also apparent from two-dimensional (2D) fitting of the (non-deconvolved) image with \texttt{sherpa}:\ a double-Sersic model fits the image best (reduced $\chi^2$ statistic of 0.042), though it leaves a residual extended flux distributed around the source. This fit model is unlikely to be physically meaningful, but the relatively large radii of the Sersic components (2.2 and 14 pixels) indicate that the bulk of the X-ray emission is extended. A point-source model provides a worse fit (reduced $\chi^2$ statistic of 0.054).
The currently presented observations are the highest-resolution X-ray observations of this system to date. Previous imaging with ACIS-S and XMM suggests an extended thermal component around the AGN arising from the host galaxy on scales of $1.5-6''$ \citep[0.7-2.6 kpc,][]{sun2007}, which is considerably larger than the scale of the extended emission shown in Figure~\ref{fig:chandra}. Indeed, outside of 1.5$''$ the 2018 observation shows very little emission. As the HRC effectively provides no spectral information, it is difficult to directly assess whether the observed extended emission could be thermal. However, \cite{perlman2010_3c264} took the `core' emission to be everything within 1.23$''$ of the peak, which essentially covers the entire region of interest in Figure~\ref{fig:chandra}. Their fit to the extracted spectrum showed that a thermal component could contribute no more than 5\% of the total flux, with the rest attributed to a non-thermal power-law spectrum with a spectral index $\alpha_x$=1.24.
Identifying the `core' location in the deconvolved \emph{Chandra} image of 3C\,264 is ambiguous. This is due to the absolute pointing accuracy of \emph{Chandra} (90\% uncertainty radius of 0.8$''$) and HST (typical error of $\sim$0.9$''$) and the lack of any other source in the HRC field of view. If the brightest pixel is assumed to be the location of the AGN core, then the brightest part of the extended and presumably non-thermal emission would be located to the south and west of the core, which seems unlikely given that the jet extends to the northwest. Instead, if the bright component shown centered on the HST core in Figure~\ref{fig:chandra} is chosen, then the bulk of the extended emission coincides with the extended optical/radio jet rather than the core. This is more plausible, though the brightest pixel is offset somewhat to the north side of the jet. If this is indeed the correct identification, then the jet appears to be brighter than the core, which is unusual; the only other case where this has been observed was during the brightest outburst of HST-1 in M\,87.
\begin{figure}[t]
\centering
\includegraphics[width=3.25in]{figures/paperfig_chandra_rev1.pdf}
\caption{\label{fig:chandra} Deconvolved \emph{Chandra} HRC-I image of 3C\,264 observed on 04 Apr 2018.
The HST image was aligned assuming the brighter component to the south is the core, and the resulting overlay of the HST contours of the jet is shown. The thermal emission associated with the host galaxy previously reported by \cite{sun2007} is on scales of 1.5$-$6$''$. This image shows very little emission on the same scales (the green circle has a radius of 1.5$''$). The color scale is in units of counts.}
\end{figure}
Taking the total unabsorbed $0.1-10$ keV flux from the 2018 observation of $(6.91 \pm 0.2) \times 10^{-12}$ erg~cm$^{-2}~$s$^{-1}$, and adopting the spectral index $\alpha_x$=1.24 from the previous \cite{perlman2010_3c264} analysis of \emph{Chandra}/ACIS-S observations taken in 2004, gives a 1~keV monochromatic flux of 0.59$\pm$0.02\,$\mu$Jy for the entire region. This
is approximately twice as large as the flux (0.28$\pm$0.1\,$\mu$Jy) assigned to what was referred to as the core (i.e.\ a similar region) in the 2004 observation. Crudely separating the extended jet region from the area tentatively identified as the core, we can assign approximately 80\% of the flux (470 nJy) to the extended jet. We note that the previously reported flux of 4.6$\pm$1.1 nJy for the extended jet in \cite{perlman2010_3c264} was taken for a region outside 0.8$''$ from the core, and thus from a region corresponding to the much fainter/diffuse part of the optical jet, which is not detected in our observations here. The two fluxes are from different regions and should not be compared. It must be emphasized that it is not certain that the core/jet regions are correctly identified in the observations presented here, due to the lack of an absolute astrometric reference. Therefore only the total X-ray flux is reported in the SEDs presented in Section~\ref{sec:Discussion}.
\begin{figure*}[ht]
\centering
\includegraphics[width=0.95\linewidth]{radio_optical_pol.png}
\caption{High-resolution radio and optical polarization images of 3C\,264 from the VLA and HST. These images all show the fractional linear polarization, on a scale from 0\% to 40\% in the radio, and from 0\% to 60\% in the optical. The VLA K-band polarization is shown in August 2015 (top left) and March 2018 (top right). There is no striking difference in the level of polarization in the jet, both showing a peak of about 22-23\% polarization just downstream of the knot B/C collision region, indicated by the black line \citep{meyer2015_nature}. The peak of HST-optical polarization is in the same location and reaches $\sim$15-17\%. It also shows similar levels in April 2016 (bottom left) and March 2018 (bottom right). The polarization values are uncorrected for the effect of dilution from the light from the galaxy/dust disk. In all images, the vector lines show the direction of the magnetic field (90$^\circ$ rotation from the EVPA), and the contour lines show the flux in the corresponding Stokes-I image for each epoch/band. In the radio images, contours are drawn at 1, 2, 4, 8 and 16 times the base level of 5$\times$10$^{-4}$ Jy. In the optical images, the contour lines are drawn at 2, 4, 8, 16, and 32 times the base level of 1$\times$10$^{-8}$ Jy. }
\label{fig:polarization}
\end{figure*}
\subsection{HST Observations}
Figure~\ref{fig:polarization} shows images of the radio (VLA) and optical (HST) fractional linear polarization for the large-scale jet in 3C\,264 (the VLA radio polarization images are described below in Section~\ref{VLA_pol}). In the optical, the fractional polarization images were obtained by taking the ratio of the total linear polarization to the galaxy-subtracted Stokes I image; because this subtraction removes the core, the fractional polarization shown in the core region is not meaningful. Indeed, because of the inner dust disk in 3C\,264, it is difficult to disentangle the contribution of the galaxy and the synchrotron jet in the core region in the total flux images. However, if all the central flux in the Stokes I image is assumed to come from the synchrotron core, the implied optical fractional polarization at the core is 16\% and 13\% in 2016 and 2018, respectively. The integrated optical core luminosity in Stokes I (under a Gaussian fit) rose slightly between 2016 and 2018, from 185$\pm$4 $\mu$Jy to 238$\pm$5 $\mu$Jy.
The large-scale jet shows a much higher level of polarization just downstream of the knot B/C collision zone \citep{meyer2015_nature}, as shown in Figure~\ref{fig:polarization} (the collision region is indicated with a black line). The linear polarization fraction of this feature does not appear to change significantly between April 2016 and March 2018, with a value of $\sim$25-35\% in both epochs when accounting for the contribution to the Stokes I flux from the galaxy. The size and location of this region in the two epochs are found to be consistent, at approximately 0.5$''$ (220 pc) from the core and 90 pc in extent (based on a Gaussian fit). The position angle of the magnetic field (shown in Figure~\ref{fig:polarization}, 90$^\circ$ rotated from the EVPA) also appears largely consistent between the two epochs. It shows a smooth `flow' pattern aligned with the jet direction, with only a hint of some periodic transverse component. Interestingly, there does appear to be a small region in the center of the knot B/C collision zone where the B-field direction becomes perpendicular to the flow. This is consistent with the scenario outlined in \cite{meyer2015_nature}, which suggests the collision is in the incipient stages. There is a possible enhancement of the linear polarization fraction which appears in the 2018 image just downstream of stationary knot A, where the polarization fraction reaches 15\% (uncorrected). However, this region is very close to the bright core of the jet, and differences in the orientation of HST during the two observations could change the shape and distribution of features in the Stokes I image near the core, making any features less certain.
\begin{figure*}[ht]
\centering
\includegraphics[width=6.4in]{figures/hst_2014_2019.pdf}
\caption{\label{fig:HSTjet} HST images of the kpc-scale jet in 3C\,264. In all cases, the light from the galaxy and inner dust disk is modeled and subtracted. The images from May 2014 (left; ACS/WFC F606W), November 2015 (center; WFC3/UVIS F814W) and January 2019 (right; WFC3/UVIS F814W) are shown. The red cross marks the location of the central black hole, and the green cross marks the location of a bend in the jet seen in VLBA imaging. Colorbar scale shown at left is in units of $\mu$Jy.}
\end{figure*}
Figure~\ref{fig:HSTjet} shows the ACS/WFC F606W image of the jet taken in 2014, as well as the WFC3/UVIS F814W images acquired in 2015 and 2019. These observations are also useful for comparing the state of the jet before and after the increased VHE flux. The multi-band WFC3/UVIS observations of January 2019 were obtained as replacements for the June 2018 observations, which missed the jet due to a gyroscope problem on the spacecraft. As shown, very little change can be seen between 2014 and 2019. There is a slight shift of the knot B/C centroid, which is expected based on the previously detected proper motions. The change in core brightness, at 20-30\%, is typical for blazars and moderately well-aligned sources. A further discussion of the jet kinematics will be presented in a future publication.
\subsection{VLA} \label{VLA_pol}
The VLA observations of 3C\,264 have somewhat lower resolution than the HST imaging. However, the polarization structure also shown in Figure~\ref{fig:polarization} appears very similar. Further, there is no obvious change between the observations taken in 2015 and those taken in 2018, during the period of increased VHE flux. The K-band core flux in 2015 was 167 mJy, and decreased to 121 mJy in 2018.
\subsection{VLBI}
After registering the VLBA images, a map of spectral index
values $\alpha$, where $S_\nu \propto \nu^{+\alpha}$, was produced by performing a
linear regression on the intensity values $S_\nu$ of each pixel.
Only pixels which exceeded 3 times the image noise level at
all four frequencies were considered. The resulting spectral-index map is shown in Figure~\ref{combo_spindx} (left), with contours overlaid from the 5.0-GHz total-intensity map.
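Schematically, the per-pixel regression amounts to the following (placeholder image stack and noise values):
\begin{verbatim}
# Per-pixel spectral index alpha (S_nu ~ nu^alpha) by linear regression in
# log-log space, masking pixels below 3 sigma at any frequency.
# `cube` and `noise` are placeholders for the registered images.
import numpy as np

freqs = np.array([5.0, 8.4, 12.1, 15.3])        # GHz
cube = np.random.rand(4, 64, 64) + 0.5          # placeholder stack (Jy/beam)
noise = np.full(4, 1.0e-4)                      # per-image rms (Jy/beam)

mask = np.all(cube > 3.0 * noise[:, None, None], axis=0)
alpha = np.full(cube.shape[1:], np.nan)
lognu = np.log10(freqs)

for j, i in zip(*np.nonzero(mask)):
    slope, _ = np.polyfit(lognu, np.log10(cube[:, j, i]), 1)
    alpha[j, i] = slope
\end{verbatim}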
\begin{figure*}[]
\centering
\includegraphics[width=6.5in]{figures/combo_spindx.pdf}
\caption{\label{combo_spindx} Results from VLBA images of 3C\,264 taken at 5.0 GHz, 8.4 GHz, 12.1 GHz, and 15.3 GHz on 30 Mar 2018. Maps of the radio spectral index (left) and the synchrotron spectral turnover frequency (right)
are shown. The VLBA images were restored with a common Gaussian beam having FWHM dimensions 3.4 $\times$ 1.5 mas at position angle $-9^\circ$. The 5.0 GHz total-intensity contours are drawn at successive factors of two times the base-contour level of 0.4 m\hbox{${\rm Jy \ beam}^{-1}\;$}.}
\end{figure*}
The spectral index values in Figure~\ref{combo_spindx} are only representative of
the actual spectrum in regions of the jet where the turnover frequency
does not lie between 5.0 and 15.3 GHz. To investigate this further, the
self-absorbed synchrotron spectra (see Eq.\ 4 of
\citealt{MOJAVE_XI}) were fit for each pixel and the resulting turnover
frequency values $\nu_m$ are also shown in
Figure~\ref{combo_spindx}. In this map, $\nu_m$
values below $\sim 6$ GHz and above $\sim 12$ GHz are not well
constrained by the data. However, some clear trends emerge when
comparing the two VLBA maps. The core region has a
self-absorbed spectrum peaking at $\sim 8$ GHz, and the jet becomes
optically thin roughly 4 mas (13 pc projected) downstream. At 11 mas
downstream, there is an isolated jet feature with an inverted spectrum.
The high fractional linear polarization at this location ($\sim$15\%
at 5 GHz) may be indicative of a transverse shock that is accelerating
the electrons and enhancing the magnetic field strength perpendicular
to the jet.
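As an illustration of the per-pixel turnover fitting, a sketch using one standard homogeneous self-absorbed synchrotron form is given below; this generic parameterization is not necessarily identical to Eq.\ 4 of \cite{MOJAVE_XI}, and the pixel fluxes are hypothetical.
\begin{verbatim}
# Turnover-frequency fit for a single pixel, using a standard homogeneous
# self-absorbed synchrotron spectrum (optically thick slope +2.5 below
# nu_m, thin slope alpha0 above). Not necessarily identical to Eq. 4 of
# the MOJAVE XI paper; the pixel fluxes are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def ssa(nu, s_m, nu_m, alpha0):
    tau_m = 1.5 * (np.sqrt(1.0 - 8.0 * alpha0 / 7.5) - 1.0)
    tau = tau_m * (nu / nu_m) ** (alpha0 - 2.5)
    return s_m * (nu / nu_m)**2.5 * (1 - np.exp(-tau)) / (1 - np.exp(-tau_m))

nu = np.array([5.0, 8.4, 12.1, 15.3])            # GHz
s = np.array([2.1, 2.6, 2.4, 2.1])               # mJy/beam, hypothetical
popt, _ = curve_fit(ssa, nu, s, p0=[2.5, 8.0, -0.7],
                    bounds=([0.0, 4.0, -3.0], [10.0, 20.0, -0.1]))
print("nu_m =", popt[1], "GHz")
\end{verbatim}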
The VLBA imaging of the jet is shown in Figure~\ref{vlbi_multi}. No significant change in either the core or any of the jet features was observed between the epochs taken during the VHE high state.
\begin{figure*}[]
\centering
\includegraphics[width=6.5in]{figures/VLBI_multi.pdf}
\caption{\label{vlbi_multi} VLBA naturally-weighted contour maps of 3C 264 at 5.0, 8.4, 12.1 and 15.3 GHz. The
fractional linear polarization is overlaid in false color for pixels with
total linearly polarized intensity above 0.2, 0.45, 0.45 and 0.5 m\hbox{${\rm Jy \ beam}^{-1}\;$}, respectively. The contours are drawn in successive factors of two times the base contour level of 0.2, 0.25, 0.29 and 0.3 m\hbox{${\rm Jy \ beam}^{-1}\;$}. A single negative contour equal to the base contour is also drawn using dashed lines. The peak total intensity of the map is 119, 144, 132, and 116 m\hbox{${\rm Jy \ beam}^{-1}\;$}, respectively. The dimensions and orientation of the restoring beam are indicated by a cross at
the lower left of each sub-figure. The 5.0-GHz image (top-left) is shown on a different, larger scale.}
\end{figure*}
\section{Discussion} \label{sec:Discussion}
\subsection{Multiwavelength Observations}
The VERITAS observations of 3C\,264 in 2017$-$2019, as shown in Figure~\ref{fig:VERITAS_lightcurve}, indicate a period of enhanced VHE flux lasting at least several weeks in early 2018. This elevated state enabled the relatively quick discovery of the source at VHE and motivated an intensive multi-wavelength campaign to search for the origin of the VHE enhancement. However, there is no clearly identifiable source of the event. In the high-resolution radio and optical imaging from early 2018, there is no evidence of any significant change in the larger-scale jet beyond the core, i.e., no flaring event comparable to the well-known HST-1 flare in M\,87. The X-ray flux seen in the 2018 \emph{Chandra}/HRC observation is significantly increased (by a factor of 2) over that detected by \emph{Chandra}/ACIS in 2005. However, the current \emph{Chandra} imaging is inconclusive as to the location of this increase due to both the ambiguity of the core identification and the lack of a prior epoch of similar resolution. The flux observed from the core in other bands does not show a consistent pattern. It actually decreased by 27\% in the radio band between August 2015 and April 2018, while it increased by a modest 21\% in the V-band optical (F606W filter) over a similar time frame (Apr 2016 to Mar 2018). This level of optical variability appears to be typical based on observations of the core at other epochs. For example, there
is a 22\% drop in flux between the HST F475W observations in 2015 and 2019.
\begin{figure*}[t]
\centering
\includegraphics[width=6in]{figures/sed_3c264.pdf}
\caption{\label{fig:mainSED} The broad-band SED for 3C\,264. Gray points are historical fluxes from NED, where the low-frequency radio is dominated by the isotropically-emitting radio lobes and the optical by the host galaxy. Shown in orange are the isolated flux values for the core as seen by VLA, ALMA, HST, and \emph{Chandra} (data taken from NED, this paper, and \citealt{perlman2010_3c264}). At high energies two temporal states are shown for 3C\,264. The cyan upper limits at GeV energies and VHE correspond to the upper limits in 2017 from \emph{Fermi}-LAT and VERITAS, while the dark blue data points and Fermi spectrum show the measurements from the 2018 enhanced state. We also show the contemporaneous optical and X-ray fluxes from 2018 as dark blue circles; in the X-rays this includes the \emph{Chandra} total measurement (filled point, no spectrum) and the VERITAS-concurrent Swift measurement (open point with butterfly spectrum). The model shown (dashed and solid lines) is a self-consistent synchrotron + SSC model with parameters typical of BL Lac objects.}
\end{figure*}
The broad-band SED for 3C\,264 is shown in Figure~\ref{fig:mainSED}, including historical core fluxes and the 2018 HST (F606W Stokes I), \emph{Chandra}, \emph{Swift}, \emph{Fermi}, and VERITAS fluxes, as well as the upper limits from the $\gamma$-ray observations in 2017.
What is immediately notable about the core SED is the broadness of the lower-energy synchrotron peak, compared to typical blazars or even M\,87. Given that only mild (factor of $2-3$) variations are seen in the 3-year VERITAS data set, and that the \emph{Fermi}-LAT flux in 2018 is only marginally higher than the 8-year average flux reported in the 4FGL catalog, it seems likely that the enhanced flux observed by VERITAS in 2018 was not an extreme flare (i.e., an event with $10-20\times$ higher flux than normal) but rather a modestly elevated state.
Using a self-consistent synchrotron and SSC model (dashed and solid lines in Figure~\ref{fig:mainSED}), we are able to reasonably reproduce the observed SED. The modeling code is based on \citet{Graff}. It takes an injected electron distribution and uses a kinetic equation solved forward in time to find a steady-state electron distribution, which is then used to calculate the synchrotron and inverse-Compton emission. Here we use a Doppler factor of 10, and we inject a power-law electron distribution with an index of 2.6 and electron Lorentz factors confined between $200$ and $2 \times 10^6$. The comoving injected power is $2 \times 10^{42}$ erg/s, the comoving magnetic field is $2 \times 10^{-2}$ G, and the radius of the source is $2 \times 10^{16}$ cm. With these choices the source is particle dominated and the radiative cooling takes place in the slow-cooling regime. The model parameters are typical of BL Lacs, except that the Doppler factor is somewhat lower. The synchrotron peak is notably at a relatively high frequency, which is unusual for radio galaxies \citep{meyer2011}. The straight portion of the synchrotron curve is not able to perfectly match the optical/X-ray flux points; this is a limitation of using a single-zone model with a power-law distribution of electron energies -- a more complex model (multi-zone and/or with a log-parabolic energy spectrum) may fit the data better, though at the cost of more input parameters. More complex modeling of this source is left to future work. Based on the single-zone model here, similar to some other BL Lacs and radio galaxies, the VHE part of the model is visually softer than the relatively flat slope indicated by the VHE data, suggesting the need for multiple components or more complex models to produce harder VHE emission \citep[see e.g., the case of AP Librae,][]{Hervet,zacharias2016,petropoulou2017}.
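As a rough consistency check of these parameters (an order-of-magnitude sketch only, not part of the modeling code), electrons at the top of the injected Lorentz-factor range radiate at an observed synchrotron frequency in the X-ray band:
\begin{verbatim}
# Order-of-magnitude check: observed synchrotron frequency for electrons
# at the top of the injected Lorentz-factor range,
# nu_obs ~ (3/2) gamma^2 nu_L delta, with nu_L = 2.8 MHz (B/G).
# Redshift and pitch-angle factors of order unity are neglected.
B = 2e-2           # G, comoving magnetic field
gamma_max = 2e6    # maximum electron Lorentz factor
delta = 10         # Doppler factor

nu_L = 2.8e6 * B
nu_obs = 1.5 * gamma_max**2 * nu_L * delta
print(f"nu_obs ~ {nu_obs:.1e} Hz")   # ~3e18 Hz, i.e. ~10 keV
\end{verbatim}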
\subsection{3C\,264 as an M\,87 Analog}
\begin{figure*}[t]
\centering
\includegraphics[width=6in]{figures/sed_3c264_compare_m87.pdf}
\caption{\label{fig:compSED} SED comparison for the 3C\,264 non-thermal emission in 2018 (red) and M\,87 (light blue).
The low-energy data points for both objects come from high-resolution imaging in which the core flux can be isolated from other emission, while the HE (\emph{Fermi}-LAT) and VHE data are for the total source. For M\,87, the data and fit are taken from the `average' state SED of \citet[][Figure 3]{dejong2015}. The model curve for 3C\,264 is the same as in Figure~\ref{fig:mainSED}. While both objects have remarkably consistent radio spectra, 3C\,264 clearly has a much higher synchrotron peak, at or near the X-ray band, where it is also nearly 50 times more luminous than M\,87. Similarly, the HE and VHE luminosity of 3C\,264 is also clearly higher than that of M\,87. }
\end{figure*}
As noted in the introduction, 3C\,264 bears some resemblance to M\,87. They have identical jet powers, exhibit a one-sided optical jet with multiple knots, and are the only two objects known to show optical superluminal motion on kpc scales. Each has a stationary knot feature that is the first bright optical knot in the jet, located at about 100 parsecs (projected) from the core (knot HST-1 in M\,87 and knot A in 3C\,264). Downstream of this feature, both show fast superluminal motion of up to $5-7c$, with speeds decreasing along the jet \citep{meyer2013,meyer2015_nature}. The main difference is that the optical jet of 3C\,264 is about one-quarter the length of that of M\,87. This could be due in part to increased foreshortening from a smaller orientation angle for 3C\,264, although 3C\,264 also has fewer optical knots than M\,87 (4 versus $\sim$7). Note that the observed optical proper motions set the maximum angle of each jet to similar values (16$^\circ$ and 19$^\circ$ for 3C\,264 and M\,87, respectively\footnote{These angles are derived from the maximum reported speeds of $7c$ and $6c$, respectively \citep{meyer2015_nature,biretta1999}.}), but this does not necessarily mean they actually have similar orientation angles or intrinsic (as opposed to observed) speeds.
The comparison between 3C\,264 and M\,87 is particularly interesting in light of the currently presented VHE detection because of the high-energy flaring behavior observed in M\,87 in the 2000s. These M\,87 observations consist of two distinct sets. First, \emph{Chandra} observed dramatic X-ray variability from knot HST-1 in M\,87 (100 pc from the core, projected), where the flux increased by a factor of 50 over five years \citep{harris2006_m87}, peaking in mid-2005. The dramatic increase was also seen at radio and optical wavelengths. In both the optical and X-rays, HST-1 actually outshone the core during the flare \citep{perlman2003_m87}. The knot also showed considerable shorter-timescale variability on the order of 20 days \citep{harris2009_m87}.
Second, during the same decade, three major VHE flares were observed from M\,87 in April 2005, February 2008, and April 2010 \citep{aharonian2006,acciari2009,aliu2012}, each with day-scale VHE variations. There has been considerable speculation about the location of these VHE flaring events. While it is typically assumed that only the core region would be compact enough to give rise to the day-scale VHE variability, the extreme X-ray outburst of the HST-1 knot led some to consider it as an alternative site \citep{stawarz2006_m87,cheung2007,harris2009_m87,harris2012_proc}. Confirming any VHE flare arising hundreds of parsecs or more downstream of the base of the jet would be a major discovery and would significantly challenge models of jet formation, especially given the required small emission regions. However, high-resolution imaging conducted at the time of the 2008 and 2010 flares seems to point to the core and not
the HST-1 knot as being the source of the VHE flaring in M\,87 \citep{abramowski2012}. This is based on increased core activity during these VHE flares \citep[see also][]{georganopoulos05}. While those authors make a convincing case for a `blazar-like' origin for the VHE emission in M\,87 (both during quiescent and flaring states), the VHE emission has never been conclusively shown to originate in the core or HST-1. The possibility of more than one location is also not disfavored by the data \citep{abramowski2012}.
In light of the many points of similarity already noted between 3C\,264 and M\,87, it is interesting to compare the broad-band SEDs directly. The VHE flux of M\,87 at its peak brightness during the 2010 flare reached $\sim$10\% Crab \citep{aliu2012}. At the distance of 3C\,264 this is equivalent to 0.5\% Crab, remarkably similar to the flux detected from 3C\,264 in 2018 (0.7\% Crab). A direct comparison of the SED of the M\,87 core to that of 3C\,264 is shown in Figure~\ref{fig:compSED}. The data for M\,87 are taken from \cite{dejong2015} and represent an average state for the source, while the data and models for 3C\,264 are the same as in Figure~\ref{fig:mainSED}. Here, for both sources, the isolated core measurements are used from radio to X-rays, while total-source results (presumed to be dominated by the core) are reported at HE and VHE. It is interesting that the radio portions of the SEDs of 3C\,264 and M\,87 are practically identical,
but then deviate from each other at frequencies above $\sim 10^{13}$ Hz. The 3C\,264 synchrotron spectrum peaks somewhere between the optical and the X-rays, and the M\,87 synchrotron spectrum peaks somewhere around $\sim 10^{13}-10^{14} $ Hz. The high-energy SED of 3C\,264 is about 10 times brighter than that of M\,87. This behavior can be explained in the context of models with velocity profiles, such as a decelerating flow \citep{georganopoulos03} or a fast spine-slow sheath jet \citep{ghisellini2005} where the two jets are physically similar, but have different orientations, with 3C\,264 being closer to the line of sight than M\,87.
In such a scenario, where the high-energy electrons produce the optical to X-ray synchrotron emission and the $\gamma$-ray inverse-Compton emission comes from the faster parts of the flow, misaligning the jet causes the more highly beamed emission to correspondingly drop faster as the jet moves away from the observer.
Qualitatively, this would produce something like the observed differences between the two SEDs in Figure~\ref{fig:compSED}. Detailed modeling work to test this scenario will be considered in a future publication.
\section{Summary}
VHE $\gamma$-ray emission was discovered from the radio galaxy 3C\,264 by VERITAS in the spring of 2018. This AGN
is the most distant radio galaxy detected at VHE to date, and the discovery was facilitated by a period of enhanced VHE flux lasting for several weeks. An extensive suite of contemporaneous multi-wavelength observations was acquired
to probe the underlying emission mechanism. These include high-resolution observations with the VLBA, VLA, HST, and \emph{Chandra}, as well as \emph{Swift} observations in the optical and X-ray bands, \emph{Fermi}-LAT observations in HE $\gamma$-rays, and ground-based optical observations. The mild VHE variability observed by VERITAS in 2017$-$2019 suggests that 3C\,264 did not experience a strong flare, but rather a period of modestly enhanced flux. The source of this enhanced flux is most likely the unresolved core, based on the lack of any notable change in any of the high-resolution \emph{Chandra} or HST imaging compared with previous epochs spanning the last decade; we also did not observe any large changes in the core flux at lower frequencies.
A qualitative inspection of the SED for the jet of 3C\,264 shows it is somewhat unusual for a radio galaxy, with a relatively high-frequency synchrotron peak near the X-rays. 3C\,264 could be
considered a more distant analog of the well-studied VHE source M\,87 based on both its beamed and unbeamed radio emission and its kinematic profile. If it is intrinsically similar, then 3C\,264 is likely oriented at a
smaller angle to the line-of-sight.
\acknowledgments
This research is supported by grants from the U.S. Department of Energy Office of Science, the U.S. National Science Foundation and the Smithsonian Institution, and by NSERC in Canada. This research used resources provided by the Open Science Grid, which is supported by the National Science Foundation and the U.S. Department of Energy's Office of Science, and resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility operated under Contract No. DE-AC02-05CH11231. We acknowledge the excellent work of the technical support staff at the Fred Lawrence Whipple Observatory and at the collaborating institutions in the construction and operation of the instrument. E.T. Meyer acknowledges the support of HST grant GO-14159.
\clearpage |
2212.00075 | \section{Introduction}\label{sec:intro}
Not only are primordial black holes (PBHs) a probe of very early-Universe physics, but they could also be the culprit behind several cosmological and astrophysical mysteries. For instance, even if they constitute only a small fraction of cold dark matter (CDM), intermediate-mass PBHs ($1-10^4~M_{\odot}$) could be the seed for supermassive black holes \citep{bean02a} or account for recent LIGO/Virgo gravitational wave observations \citep{bird16a}. Thus, even though the abundance of PBHs in this mass range is heavily constrained \citep{carr21a}, it remains invaluable to inspect further.
The intermediate-mass range is where PBH accretion may leave a non-negligible signature on the cosmic microwave background (CMB). The underlying physical phenomena that lead to an indirect signal are the following. PBHs accrete primordial plasma throughout cosmic time; some fraction of the in-falling material is converted into radiation; this radiation propagates and deposits energy into the background recombining plasma, heating and ionizing it; finally, this change to the ionization history perturbs the last-scattering surface, ultimately altering the observed CMB temperature and polarization anisotropy. In fact, the strongest constraints on the abundance of PBHs in this mass range come from this effect \cite{carr21a}; however, previous studies have only looked for a signal in 2-point CMB anisotropy statistics \citep{ricotti08a, yacine17a,poulin17a}.
One avenue that has not been inspected is the non-Gaussianity that is induced in the CMB by accreting PBHs. Although the PBH accretion rate and radiation power are largely uncertain, they necessarily depend on the magnitude of the local relative velocity between the accreted matter (baryons) and PBHs, which behave as CDM on large scales \citep{ricotti08a, yacine17a,poulin17a}. This dependence implies a spatial modulation of the luminosity of accreting PBHs and thus inhomogeneities in their perturbation to recombination. It is known that inhomogeneous recombination generates non-Gaussian signatures in CMB anisotropies \citep{senatore09a,khatri09a, dvorkin13a}. The goal of this paper is to quantify this \emph{qualitatively different} CMB signature of accreting PBHs, for the first time.
The effect considered here is similar in spirit to that studied in Ref.~\cite{dvorkin13a} in the context of dark matter (DM) annihilation, with, however, two major differences. First, since the PBH luminosity depends on the relative velocity squared, the lowest-order non-Gaussian statistic induced by accreting PBHs is the \emph{trispectrum}, or connected 4-point function. This is to be contrasted with the bispectrum (3-point function) sourced by energy injection from inhomogeneous annihilating DM \cite{dvorkin13a}. Second, in the case of annihilating DM, the inhomogeneity in energy injection is of order the DM density fluctuation around recombination, that is of order $\sim 10^{-3}$ on scales $k \sim 0.1$ Mpc$^{-1}$. In contrast, the PBH luminosity has \emph{order-unity} fluctuations on the same scales \cite{jensen21a}, as it is strongly modulated by supersonic relative velocities \cite{Tseliakhovich_10}. This implies that the inhomogeneities in the free-electron fraction sourced by accreting PBHs are comparable to its mean enhancement, as we demonstrated explicitly in Ref.~\cite{jensen21a}, hereafter Paper I. We therefore expect the amplitude of the non-Gaussian signature of accreting PBHs to be $\sim 10^3$ times larger than that of inhomogeneously annihilating DM, at equal amplitudes of the 2-point function perturbation.
In Paper I, we found that, for a PBH abundance saturating CMB power-spectra limits, the free-electron perturbation is of order $\delta_e \sim 10^{-3}$ around $z \sim 10^3$, both in mean and in root-mean-square (see Fig.~14 of Paper I). This relatively large effect implies that the CMB trispectrum could be significantly more sensitive to PBHs than CMB power spectra, as we now show with two simple order-of-magnitude estimates. First, without any exotic energy injection or primordial non-Gaussianity, recombination is intrinsically inhomogeneous, with perturbations $\delta_{e, \rm std} \sim 10^{-4}$ \cite{Novosyadlyj_06, senatore09a}. This leads to non-Gaussianities with an amplitude just below the detectability threshold for Planck \cite{senatore09a, Huang_13}. This suggests that an inhomogeneity of order $\delta_e \sim 10^{-3}$ would lead to a non-Gaussian signal with a signal-to-noise ratio (SNR) of order 10. Second, in the presence of a perturbation $\delta_e$ to the free-electron fraction, the CMB temperature anisotropy $\Theta = \Theta^{(0)} + \Theta^{(1)}$ is displaced from its standard value $\Theta^{(0)} \sim \zeta$, where $\zeta \sim 10^{-4.5}$ is the primordial curvature perturbation, by an amount $\Theta^{(1)} \sim \delta_e \zeta$. We therefore expect the connected 4-point function to be of order $\langle \Theta \Theta \Theta \Theta \rangle_{\rm c} =\langle \Theta^{(0)} \Theta^{(0)} \Theta^{(0)} \Theta^{(1)} \rangle \sim 10^{-3} \langle \zeta^2 \rangle^2$ for a PBH abundance saturating CMB power-spectra limits. In comparison, primordial trispectra lead to 4-point functions of order $\langle \Theta \Theta \Theta \Theta \rangle_{\rm c} \sim g_{\rm NL} \langle \zeta^2\rangle^3 \sim 10^{-9} g_{\rm NL} \langle \zeta^2 \rangle^2$. Planck's upper limit on the amplitude of local-type primordial non-Gaussianity is $|g_{\rm NL}| \lesssim 10^5$ \cite{planck20c}, implying that Planck is sensitive to a 4-point function of order $\langle \Theta \Theta \Theta \Theta \rangle_{\rm c} \sim 10^{-4} \langle \zeta^2 \rangle^2$. Here again, this estimate indicates that PBHs saturating CMB power spectra limits could lead to a trispectrum detectable with SNR $\sim 10$. Put differently, the trispectrum could be sensitive to PBH abundances an order of magnitude below current CMB power-spectra limits. As an ancillary effect, the perturbation of CMB power spectra induced by accreting PBHs ought to be modified by order unity when properly accounting for the inhomogeneities in $\delta_e$, which were neglected in past works \cite{ricotti08a, yacine17a, poulin17a}.
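The second estimate can be spelled out numerically (a sketch using only the round numbers quoted above):
\begin{verbatim}
# Order-of-magnitude comparison: PBH-induced connected 4-point function,
# ~ delta_e <zeta^2>^2, versus Planck's g_NL sensitivity, using the round
# numbers quoted in the text.
delta_e = 1e-3                  # free-electron perturbation (Paper I)
zeta2 = (10**-4.5)**2           # <zeta^2>
g_nl_limit = 1e5                # Planck |g_NL| upper bound

t_pbh = delta_e                 # in units of <zeta^2>^2
t_planck = g_nl_limit * zeta2   # g_NL <zeta^2>^3, same units
print(t_pbh / t_planck)         # ~10, i.e. SNR ~ 10
\end{verbatim}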
These promising estimates warrant a detailed calculation of the effects of inhomogeneously-accreting PBHs on CMB power spectra and trispectra. In this work, we take the first step in this program by computing the temperature-only 2-point and 4-point functions. We moreover forecast Planck's sensitivity to PBHs from the temperature trispectrum. We find that the inhomogeneity in recombination only leads to a $\lesssim 10\%$ correction to the effect of accreting PBHs on the temperature power spectrum. We also find that the temperature trispectrum is approximately as sensitive to accreting PBHs as the temperature power spectrum is, and is thus not quite as powerful a probe as our simple order-of-magnitude estimates indicated. This is likely due to the imperfect correlation between the standard temperature anisotropy $\Theta^{(0)}$ and the perturbation $\Theta^{(1)}$ sourced by inhomogeneous ionization fluctuations. Still, we find that, for $M_{\rm pbh} \lesssim 10^3 M_{\odot}$, the CMB temperature trispectrum would be a more sensitive probe of accreting PBHs than the temperature power spectrum is. This result motivates exploring the full temperature and polarization trispectrum, which we take up in future work.
The remainder of this paper is organized as follows. In Section \ref{sec:paperI} we begin by briefly reviewing accreting PBHs as a source of inhomogeneous recombination. By assuming spherical accretion and taking the luminosity prescription from Ref.~\cite{yacine17a} (hereafter AK17), we derive a quadratic transfer function for the perturbed free-electron fraction. This transfer function incorporates the radiation transport simulation and perturbed recombination calculation from Paper~I. We are able to make the transfer function factorizable with some justified approximations specific to accreting PBHs, which tremendously reduces the computational cost of calculating the high-dimensional trispectrum.
In Section \ref{sec:temp_ani} we derive general equations for the perturbed temperature anisotropy at first order in free-electron fraction perturbations, starting from the Boltzmann-Einstein system, and using the line-of-sight method \citep{seljak96a}. The results of this section are general and not limited to perturbations from accreting PBHs. As in previous works \citep{dvorkin13a,khatri10a} we neglect ``feedback'' terms in the first-order perturbation. However, for the first time we quantify the error induced by this approximation in the case of the power-spectrum perturbation induced by homogeneous free-electron perturbations.
In Section~\ref{sec:tempstats} we apply these results to recombination perturbations due to accreting PBHs. We compute the perturbation to the temperature anisotropy power spectrum sourced by the \emph{inhomogeneous part} of free-electron fraction perturbations, which we find to be more than an order of magnitude smaller than its counterpart induced by the homogeneous effect on the ionization history. We moreover compute the temperature trispectrum induced by accreting PBHs, given in Eq.~\eqref{eq:Trispec-final}, which is one of the main results of this work.
In Section \ref{sec:forecast} we extract new limits on PBH abundance from Planck upper bounds on the local-shape primordial trispectrum \cite{planck20c}, which indirectly constrains the PBH-induced trispectrum with which it partially overlaps. But due to a poor correlation between the two trispectra, the constraints are an order of magnitude weaker than the constraints from the power spectra analysis. We also forecast the sensitivity of Planck to the temperature 4-point function induced by accreting PBHs, based on the optimal trispectrum estimator of Ref.~\cite{smith15a}. We are able to make these computations efficiently by pre-computing purely geometric rotational-invariant coefficients. We find that the temperature trispectrum could probe PBH abundances lower than current temperature-only power-spectrum limits for $M_{\rm pbh} \lesssim 10^3 M_\odot$. We conclude and discuss future work in Section \ref{sec:conc}.
We discuss a few points in more detail in the Appendices. In Appendix~\ref{app:vbcsq}, we justify the approximation of general non-linear functions of $v_{\rm bc}$ by a biased tracer of $v_{\rm bc}^2$. We describe our numerical resolution and convergence tests in Appendix~\ref{app:conv}. We review a few useful properties of spin-weighted spherical harmonics in Appendix~\ref{app:spin}, which we then use in Appendix~\ref{app:Q-sums} to derive simple expressions for the rotational-invariant quantities involved in the trispectrum sensitivity forecast calculation. In Appendix \ref{app:auto} we compute the auto-power spectrum of the temperature perturbation induced by accreting PBHs and its correlation coefficient with the standard temperature anisotropy. Lastly, in Appendix~\ref{app:slope}, we inspect the redshift dependence of the signal-to-noise ratio of the PBH-induced trispectrum.
\section{Perturbed recombination from accreting PBHs}\label{sec:paperI}
In this section we briefly review the effect of accreting PBHs on the ionization history. We derive an approximate factorized form for the free-electron fraction fluctuations, quadratic in the initial perturbations, which will help simplify our trispectrum calculations later on.
\subsection{Effect of accreting PBHs on the ionization history: general expressions}
If present in the early Universe, PBHs would accrete baryons, which would power some radiation---at minimum, the heated, compressed and eventually ionized accreted gas would emit free-free radiation. The PBH luminosity $L$ is a function of the baryon sound speed $c_s$ and of the magnitude of the local relative velocity between baryons and dark matter $\boldsymbol{v}_{\rm bc}(\boldsymbol{r})$ (both evaluated far from the accretion region). The detailed dependence is estimated in AK17, accounting for Compton heating and Compton drag, and in two limiting regimes for the ionization structure of the accretion flow; throughout this paper, and unless otherwise stated, we will assume the most conservative ``collisionally-ionized'' limit. Following AK17, we approximate the effect of relative velocities by adding them in quadrature to the baryon sound speed $c_s$, i.e.~approximating $L(c_s; v_{\rm bc} \neq 0) \approx L(\sqrt{c_s^2 + v_{\rm bc}^2}; 0)$. While the baryon sound speed is very nearly homogeneous near recombination, relative velocities have large-scale fluctuations, with rms values of order five times the sound speed \cite{Tseliakhovich_10}; as a consequence, the PBH luminosity $L(\boldsymbol{r}) = \overline{L} (1 + \delta_L(z, \boldsymbol{r}))$ is strongly inhomogeneous, tracing the large-scale fluctuations of relative velocities.
Assuming, to simplify, that PBHs all have the same mass $M_{\rm pbh}$ and make a fraction $f_{\rm pbh}$ of the dark matter, their accretion-powered luminosity leads to a volumetric energy injection rate
\begin{align}
\dot{\rho}_{\rm inj}(z, \boldsymbol{r}) &= \overline{\dot{\rho}}_{\rm inj}(z)\left(1 + \delta_L(z, \boldsymbol{r})\right),\nonumber\\
\overline{\dot{\rho}}_{\rm inj}(z) &\equiv f_{\rm pbh}\frac{\overline{\rho}_c(z)}{M_{\rm pbh}} \overline{L}(z),
\end{align}
where $z$ is the redshift and $\overline{\rho}_c$ is the mean dark matter mass density. Note that this equation is trivially generalizable to an extended mass distribution. This inhomogeneously-injected energy is partially deposited at some later time, and some distance away from the injection site. Some of this energy is deposited in the form of extra ionizations, leading to a perturbation $\Delta x_e(z, \boldsymbol{r})$ to the free-electron fraction. The latter is a convolution of the volumetric energy injection rate with a dimensionless injection-to-ionization Green's function. In Fourier space, this convolution is a simple product:
\begin{equation}
\Delta x_e(z, \boldsymbol{k}) = \int_z^{\infty} \frac{d z'}{1+z'} G_{x_e}^{\rm inj}(z, z', k) \frac{\overline{\dot{\rho}}_{\rm inj}}{n_{\rm H} H E_I}\Big{|}_{z'} \delta_L(z', \boldsymbol{k}), \label{eq:Dxe(k)}
\end{equation}
where $n_{\rm H}$ is the mean number density of hydrogen, $H$ is the Hubble rate, and $E_I \equiv 13.6$ eV is hydrogen's ionization energy. The homogeneous part of the ionization-fraction perturbation is obtained from a similar time integral, involving the homogeneous part of the Green's function:
\begin{equation}
\overline{\Delta x_e}(z) = \int_z^{\infty} \frac{d z'}{1+z'} G_{x_e}^{\rm inj}(z, z', 0) \frac{\overline{\dot{\rho}}_{\rm inj}}{n_{\rm H} H E_I}\Big{|}_{z'}.
\end{equation}
In Paper I, we computed the Green's function $G_{x_e}^{\rm inj}(z, z', k)$ numerically, by convolving the injection-to-deposition Green's function obtained from a radiative transfer code with the deposition-to-ionization Green's functions computed with a modified \texttt{HYREC}-2 \cite{hyrec2, yacine11a, YAH_10}.
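For concreteness, once the Green's function and the injection history are tabulated, the redshift integrals above reduce to one-dimensional quadratures. The following minimal \texttt{Python} sketch evaluates the homogeneous perturbation $\overline{\Delta x_e}(z)$; the callables \texttt{G\_hom} and \texttt{src} are hypothetical stand-ins for interpolants of the $k = 0$ Green's function and of the dimensionless source $\overline{\dot{\rho}}_{\rm inj}/(n_{\rm H} H E_I)$ built from the tables of Paper~I.
\begin{verbatim}
import numpy as np
from scipy.integrate import trapezoid

def delta_xe_hom(z, zp_grid, G_hom, src):
    # z' integral of the k = 0 injection-to-ionization Green's function
    # against the dimensionless source; injection at z' >= z precedes
    # deposition at z
    m = zp_grid > z
    zp = zp_grid[m]
    return trapezoid(G_hom(z, zp) * src(zp) / (1.0 + zp), zp)
\end{verbatim}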
\subsection{Quadratic transfer function of ionization perturbations}
The scale dependence of the luminosity perturbations $\delta_L$ is non-trivial, as the PBH luminosity is a nonlinear function of $v_{\rm bc}^2$. However, as we will see below, at lowest order $\Delta x_e$ affects CMB anisotropy statistics only through cross-correlations with other fields. As we demonstrate in Appendix \ref{app:vbcsq}, to a good approximation these cross-correlations can be obtained by approximating the full function by a biased tracer of $v_{\rm bc}^2$, with the same first moment:
\begin{equation}\label{eq:b}
\delta_L(z,\boldsymbol{r}) \approx b(z) \left(\frac{v_{\rm bc}^2(z,\boldsymbol{r})}{\langle v_{\rm bc}^2 \rangle(z)} -1 \right), \ \ \ b \equiv \frac32 \frac{\langle v_{\rm bc}^2 \delta_L\rangle}{\langle v_{\rm bc}^2\rangle}.
\end{equation}
This approximation is most accurate in both the large-scale and small-scale regimes, and as a consequence is reasonably accurate at all scales. We show the bias parameter $b$ as a function of redshift in Fig.~\ref{fig:b} for several black hole masses, for the AK17 accretion luminosity model. It is systematically negative, reflecting the suppression of accretion rate and luminosity in regions of large relative velocity, and its absolute value is roughly of order unity across a broad range of masses and redshifts. Although the accretion model is highly uncertain, we expect that these qualitative features should be robust, and hold even for very different accretion models, such as disk-like accretion \cite{poulin17a}.
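To make the definition of $b$ concrete, the following sketch estimates it by Monte Carlo, drawing each Cartesian component of $\boldsymbol{v}_{\rm bc}$ from a Gaussian; the power-law luminosity used here is a toy Bondi-like stand-in, not the AK17 model.
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

def bias_b(L_of_ceff, c_s, sigma_1d, n=200_000):
    # b = (3/2) <v^2 delta_L> / <v^2>, Eq. (eq:b), with Gaussian v_bc
    # components (so <v_bc^2> = 3 sigma_1d^2), and the AK17-style
    # quadrature rule L(c_s; v_bc) ~ L(sqrt(c_s^2 + v_bc^2); 0)
    v2 = np.sum(rng.normal(0.0, sigma_1d, size=(n, 3))**2, axis=1)
    L = L_of_ceff(np.sqrt(c_s**2 + v2))
    dL = L / L.mean() - 1.0
    return 1.5 * np.mean(v2 * dL) / np.mean(v2)

# toy scaling L ~ c_eff^(-3), with rms(v_bc) = 5 c_s near recombination
print(bias_b(lambda c: c**-3.0, c_s=1.0, sigma_1d=5.0/np.sqrt(3.0)))
\end{verbatim}
As expected from the suppression of the luminosity in high-velocity regions, this toy estimate returns a negative bias of order unity.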
\begin{figure}[htb]
\includegraphics[width=.95\columnwidth,trim={0cm 0.5cm 0.5cm 0.25cm}]{b_mbh_CI.pdf}
\caption{\label{fig:b} Bias parameter $b(z)$ of the PBH accretion luminosity, approximated as a biased tracer of $v_{\rm bc}^2$ (see precise definition in Eq.~\ref{eq:b}). The bias is shown as a function of redshift and for several PBH masses $1M_{\odot} \leq M_{\rm pbh} \leq 10^4 M_{\odot}$.}
\end{figure}
Assuming scalar initial conditions and linear evolution, the relative velocity field is purely longitudinal, and we denote its transfer function by $\widetilde{v}_{\rm bc}(z, k)$ defined such that
\begin{equation}
\boldsymbol{v}_{\rm bc}(z, \boldsymbol{k}) = -i \hat{k} ~\widetilde{v}_{\rm bc}(z, k) \zeta(\boldsymbol{k}),
\end{equation}
where $\zeta(\boldsymbol{k})$ is the primordial curvature perturbation. We then have
\begin{align}
v_{\rm bc}^2(z, \boldsymbol{k}) =& (\boldsymbol{v}_{\rm bc} \cdot \boldsymbol{v}_{\rm bc})(z, \boldsymbol{k})\nonumber\\
=&- \int D(k_1 k_2)\cancel{\delta}(\boldsymbol{k}_1 + \boldsymbol{k}_2 - \boldsymbol{k}) (\hat{k}_1 \cdot \hat{k}_2) \nonumber\\
&\quad\quad\times \widetilde{v}_{\rm bc}(z, k_1) \widetilde{v}_{\rm bc}(z, k_2) \zeta(\boldsymbol{k}_1) \zeta(\boldsymbol{k}_2),
\end{align}
where from here on we denote $D(k_1 \cdots k_N) \equiv d^3 k_1/(2 \pi)^3 \cdots ~ d^3 k_N/(2 \pi)^3$ and $\cancel{\delta}(\boldsymbol{k}) \equiv (2 \pi)^3 \delta_{\rm D}(\boldsymbol{k})$.
We denote by $\delta_e \equiv \Delta x_e/x_e^{(0)} = \overline{\delta}_e + \delta_{e, \rm inh}$ the total fractional perturbation to the standard (and homogeneous) ionization history $x_e^{(0)}$. The first part, $\overline{\delta}_e$, is the homogeneous contribution, and the second part, $\delta_{e, \rm inh}$, is the inhomogeneity, which has zero mean, $\langle \delta_{e, \rm inh} \rangle = 0$.
Inserting Eq.~\eqref{eq:b} into Eq.~\eqref{eq:Dxe(k)}, we obtain the Fourier transform of $\delta_{e, \rm inh}$ for $\boldsymbol{k} \neq 0$:
\begin{align}
\delta_{e, \rm inh}(z, \boldsymbol{k}) \equiv \frac{\Delta x_e(z, \boldsymbol{k})}{x_e^{(0)}(z)} &\approx f_{\rm pbh} \int\!\! D(k_1 k_2)\cancel{\delta}(\boldsymbol{k}_1 + \boldsymbol{k}_2 - \boldsymbol{k}) \nonumber\\
&\quad\times T_e(z, \boldsymbol{k}_1, \boldsymbol{k}_2) \zeta(\boldsymbol{k}_1) \zeta(\boldsymbol{k}_2), \label{eq:Te-def}
\end{align}
where, for $\boldsymbol{k}_1 + \boldsymbol{k}_2 \neq \boldsymbol{0}$, the ionization-perturbation quadratic transfer function $T_e$ is defined as
\begin{align}
T_e(z, \boldsymbol{k}_1, \boldsymbol{k}_2) &\equiv - \frac{\hat{k}_1 \cdot \hat{k}_2}{x_e^{(0)}(z)} \int_z^{\infty} \frac{d z'}{1+z'} G_{x_e}^{\rm inj}(z, z', |\boldsymbol{k}_1 + \boldsymbol{k}_2|) \nonumber\\
&~~~~~~~ \times \frac{\overline{\rho}_c \overline{L}~ b}{M_{\rm pbh} n_{\rm H} H E_I}\Big{|}_{z'} \frac{\widetilde{v}_{\rm bc}(k_1)\widetilde{v}_{\rm bc}(k_2)}{ \langle v_{\rm bc}^2\rangle}\Big{|}_{z'}. \label{eq:Te-general}
\end{align}
We moreover define
\begin{align}
T_e(z, \boldsymbol{k}_1, -\boldsymbol{k}_1) = 0,
\end{align}
so that we may use Eq.~\eqref{eq:Te-def} even for $\boldsymbol{k} = \boldsymbol{0}$, in which case it gives $\delta_{e, \rm inh}(\boldsymbol{k} = \boldsymbol{0}) = 0$, as it should since $\delta_{e, \rm inh}$ is defined to have a vanishing spatial average\footnote{A more rigorous approach would be to keep track of the term proportional to $\cancel{\delta}(\boldsymbol{k})$ in $\delta_{e, \rm inh}(\boldsymbol{k})$; upon cross-correlating with other fields, this approach would give the same results as using Eq.~\eqref{eq:Te-def} for all $\boldsymbol{k}$ and imposing $T_e(\boldsymbol{k}_1, - \boldsymbol{k}_1) = 0$.}.
\subsection{Factorized approximation of the quadratic ionization transfer function}\label{sec:approx}
We now derive an approximate, factorized form for $T_e$, which will tremendously simplify our subsequent calculations of CMB power spectra and trispectra. We do so by making two approximations.
$(i)$ For $z \gtrsim 10^3$, the Green's function $G_{x_e}^{\rm inj}(z, z')$ is peaked at $z' \approx z$ (see Fig.~9 in Paper I). We may therefore approximate the last ratio in Eq.~\eqref{eq:Te-general} by its value at $z' = z$. For $z \lesssim 10^3$, the Green's function is increasingly broad; however, after kinematic decoupling at $z_{\rm dec} \approx 1020$, relative velocities redshift as $\widetilde{v}_{\rm bc}(z, k) \propto (1+z)$, independently of scale \cite{Tseliakhovich_10}. Therefore, the last ratio in Eq.~\eqref{eq:Te-general} is independent of redshift for $z' \lesssim z_{\rm dec}$. We thus make the following approximation in Eq.~\eqref{eq:Te-general}, which we expect to be accurate at all redshifts:
\begin{equation}
\frac{\widetilde{v}_{\rm bc}(k_1)\widetilde{v}_{\rm bc}(k_2)}{ \langle v_{\rm bc}^2\rangle}\Big{|}_{z'} \approx \frac{\widetilde{v}_{\rm bc}(k_1)\widetilde{v}_{\rm bc}(k_2)}{ \langle v_{\rm bc}^2\rangle}\Big{|}_{z}.
\end{equation}
This approximation implies the following simplification:
\begin{align}
T_e(z, \boldsymbol{k}_1, \boldsymbol{k}_2) &=(\hat{k}_1 \cdot \hat{k}_2) G_e(z, |\boldsymbol{k}_1 + \boldsymbol{k}_2|) \nonumber\\
&\quad\quad\quad\quad\times \frac{\widetilde{v}_{\rm bc}(z, k_1) \widetilde{v}_{\rm bc}(z, k_2)}{\langle v_{\rm bc}^2 \rangle_z},\\
G_e(z, k) &\equiv -\int_z^{\infty}\frac{d z'}{1+z'} \frac{G_{x_e}^{\rm inj}(z, z', k)}{x_e^{(0)}(z)} \frac{\overline{\rho}_c \overline{L}~ b}{M_{\rm pbh} n_{\rm H} H E_I}\Big{|}_{z'}.\label{eq:G_e}
\end{align}
\begin{figure}[htb]
\includegraphics[width=.95\columnwidth,trim={0cm 0.5cm 0.5cm 0.25cm}]{G_e_100_M.pdf}
\caption{\label{fig:G_e} Normalized injection-integrated Green's function defined in Eq.~\eqref{eq:G_e}, at various redshifts, for 100-$M_\odot$ PBHs. This function is approximately Gaussian with a characteristic cutoff $k_*(z)$, beyond which ionization inhomogeneities are suppressed due to finite propagation of injected photons.}
\end{figure}
$(ii)$ As illustrated in Fig.~\ref{fig:G_e}, we find that $G_e(z, k)$ is an approximately Gaussian function of wavenumber, with a characteristic cutoff at a redshift-dependent scale $k_*(z)$:
\begin{equation}
G_e(z, k) \approx G_e(z, 0) e^{- k^2/k_*^2(z)}.
\end{equation}
Given that $(\boldsymbol{k}_1 + \boldsymbol{k}_2)^2 \leq 2k_1^2 + 2 k_2^2$, we may therefore approximately bracket $G_e$ as follows:
\begin{equation}
\frac{G_e(z, \sqrt{2} k_1)~G_e(z, \sqrt{2} k_2) }{G_e(z, 0)} \leq G_e(z, |\boldsymbol{k}_1 + \boldsymbol{k}_2|) \leq G_e(z, 0).
\end{equation}
By default, we will conservatively approximate $G_e$ by the lower bound of this range. This approximation is accurate at large scales $k_1,k_2 \lesssim k_*(z)$, at which propagation effects are not relevant to energy deposition.
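As a practical aside, the cutoff scale $k_*(z)$ can be extracted from a tabulated $G_e(z, k)$ at fixed redshift by a zero-intercept least-squares fit of $\ln[G_e(z,k)/G_e(z,0)]$ against $k^2$; a minimal sketch, assuming a wavenumber grid starting at $k = 0$:
\begin{verbatim}
import numpy as np

def fit_kstar(k, G_e):
    # fit ln[G_e(k)/G_e(0)] = -k^2/k_*^2, a straight line through
    # the origin in the variable k^2
    y = np.log(G_e / G_e[0])
    slope = np.sum(k**2 * y) / np.sum(k**4)
    return np.sqrt(-1.0 / slope)
\end{verbatim}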
With these two approximations, the quadratic ionization transfer function takes on the factorized form
\begin{align}
T_e(z, \boldsymbol{k}_1,\boldsymbol{k}_2) &\approx (\hat{k}_1 \cdot \hat{k}_2) \Delta_e(z, k_1) \Delta_e(z, k_2), \label{eq:dele_pbh}\\
\Delta_e(z, k) &\equiv \frac{G_e(z, \sqrt{2} k)}{\sqrt{G_e(z, 0)}} \frac{\widetilde{v}_{\rm bc}(z, k)}{\langle v_{\rm bc}^2 \rangle_z^{1/2}}, \label{eq:Delta_e-def}
\end{align}
where we recall that this expression holds for $\boldsymbol{k}_1 + \boldsymbol{k}_2 \neq 0$ only, and that $T_e(z, \boldsymbol{k}_1, - \boldsymbol{k}_1) = 0$.
Our approximation for $G_e(z, |\boldsymbol{k}_1 + \boldsymbol{k}_2|)$ can significantly underestimate the true signal at small scales, in particular for $\boldsymbol{k}_1 \approx -\boldsymbol{k}_2$ or $k_1 \ll k_2$. Moreover, it modifies the geometric dependence of the signal. In order to estimate the error that this approximation induces, we will also show our results in the spatially on-the-spot approximation $G_e(z, k) \approx G_e(z, 0)$, which systematically overestimates the signal. In that case, the quadratic ionization transfer function still takes the form \eqref{eq:dele_pbh}, but with $\Delta_e(z, k) = \sqrt{G_e(z, 0)} \frac{\widetilde{v}_{\rm bc}(z, k)}{\langle v_{\rm bc}^2 \rangle_z^{1/2}}$. We show $\Delta_e(z, k)$ as a function of wavenumber and redshift in Fig.~\ref{fig:Del_e}, both for our default approximation and in the on-the-spot limit.
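Both variants of $\Delta_e(z, k)$ shown in Fig.~\ref{fig:Del_e} follow directly from Eq.~\eqref{eq:Delta_e-def}; a minimal sketch at fixed redshift, where \texttt{G\_e\_of\_k} and \texttt{vbc\_tilde} are hypothetical interpolants of $G_e(z, k)$ and $\widetilde{v}_{\rm bc}(z, k)$:
\begin{verbatim}
import numpy as np

def Delta_e(k, G_e_of_k, G_e0, vbc_tilde, vbc_rms, on_the_spot=False):
    # Eq. (eq:Delta_e-def); on_the_spot replaces G_e(z, k) by G_e(z, 0),
    # i.e. it ignores the finite propagation of injected photons
    if on_the_spot:
        geom = np.sqrt(G_e0)
    else:
        geom = G_e_of_k(np.sqrt(2.0) * k) / np.sqrt(G_e0)
    return geom * vbc_tilde(k) / vbc_rms
\end{verbatim}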
\begin{figure*}[htp]
\includegraphics[trim={0.7 0 0.4cm 0},width=\columnwidth]{Del_e_k_log.pdf}
\includegraphics[trim={0.7 0 0.4cm 0},width=\columnwidth]{Del_e_z_log.pdf}
\caption{\label{fig:Del_e} $\Delta_e(z,k)$ as defined in Eq.~\eqref{eq:Delta_e-def} for PBHs of 100 $M_\odot$ plotted for various redshifts as a function of scale (left) and for various scales as a function of redshift (right). We also show this function if we ignore photon propagation and consider energy deposition as spatially on-the-spot (dashed lines). These plots reveal the shape of the free-electron perturbations induced by $v_{\rm bc}^2$ as well as the amplitude suppression when considering nonlocal energy deposition from accreting PBHs.}
\end{figure*}
\section{Temperature anisotropy from perturbed recombination: general equations}\label{sec:temp_ani}
We turn to computing the temperature anisotropy in the presence of a general deviation from the standard free-electron fraction evolution, including spatial variations. Because observed CMB anisotropies are consistent with the standard $\Lambda$CDM prediction and canonical homogeneous recombination, this deviation is necessarily small, allowing for a perturbative treatment.
\subsection{Temperature Boltzmann equation}
The evolution of the phase-space distribution of photons is governed by the Boltzmann-Einstein differential system. CMB photons follow geodesics in an expanding universe subject to Thomson scattering off free electrons. Provided photons remain thermal, they are described entirely by their temperature fluctuations $\Theta(\eta,{\bm x},\hat{n})$ and transverse, symmetric trace-free $3 \times 3$ polarization tensor $P_{ab}(\eta, \bm x, \hat{n})$, where $\hat{n}$ is the propagation direction and $\eta$ is the conformal time. In the conformal Newtonian gauge, the Boltzmann-Einstein equation for the temperature perturbation is \citep{ma95a}
\begin{align}\label{eq:be_ODE}
\deriv{\Theta}{\eta}\equiv \dot{\Theta}+\hat{n}\cdot\nabla\Theta+\hat{n}\cdot\nabla\psi-\dot{\phi}&=\dot{\tau}~ \mathcal{C}[\Theta, P_{ab}, {\bm v}_b],
\end{align}
where overdots denote partial derivatives with respect to $\eta$.
In this equation, ${\bm v}_b$ is the baryon velocity, and $\dot{\tau}\equiv a n_e\sigma_T $ is the conformal scattering rate by free electrons with number density $n_e$, where $a$ is the scale factor, and $\sigma_{\rm T}$ is the Thomson cross section. $\mathcal{C}$ is a linear operator encapsulating the geometry of Thomson scattering:
\begin{align}
\mathcal{C}[\Theta, P_{ab}, {\bm v}_b](\hat{n})&\equiv \mathcal{L}[\Theta, P_{ab}, {\bm v}_b](\hat{n}) - \Theta(\hat{n}), \label{eq:thomS_ang_avg}\\[5pt]
\mathcal{L}[\Theta, P_{ab}, {\bm v}_b](\hat{n})&\equiv \Theta_0 + \hat{n} \cdot \boldsymbol{v}_b + \hat{n}_a \hat{n}_b ~\Pi_{ab}, \label{eq:thoms_piece}
\end{align}
where $\Theta_0$ is the photon temperature monopole,
\begin{align}
\Theta_0 \equiv \int \frac{d^2 \hat{n}}{4 \pi} \Theta(\hat{n}),
\end{align}
and the symmetric trace-free tensor $\Pi_{ab}$ is a linear combination of the photon quadrupole moment and the angle-averaged polarization tensor:
\begin{align}
\Pi_{ab} \equiv \int \frac{d^2 \hat{n}}{4 \pi} \left[\frac14 (3 \hat{n}_a \hat{n}_b - \delta_{ab}) \Theta(\hat{n}) + \frac32 P_{ab}(\hat{n}) \right].
\end{align}
To close the Boltzmann-Einstein differential system, the evolution equations for baryons, cold dark matter, neutrinos, and photon polarization are needed \citep{ma95a}, but we do not explicitly list them here.
\subsection{Standard solution}\label{sec:canon}
We now briefly review the standard solution obtained with scalar initial conditions and for linear evolution, and given the standard, homogeneous free-electron fraction $x_e^{(0)}$. The notation and expressions derived here will be useful in the following section dealing with the perturbation to CMB anisotropies induced by modified recombination. We denote all standard variables by a superscript $(0)$, e.g.~$\Theta^{(0)}, \psi^{(0)}$, etc. For short, we also denote $\mathcal{C}^{(0)} \equiv \mathcal{C}[\Theta^{(0)}, P_{ab}^{(0)}, \bm v_b^{(0)}]$, and similarly for $\mathcal{L}^{(0)}$.
The Boltzmann equation \eqref{eq:be_ODE} is most easily solved in terms of the variable $\Theta_{\rm eff} \equiv \Theta + \psi$, and in Fourier space. For the standard case, it takes the form
\begin{align}
\dot{\Theta}_{\rm eff}^{(0)} + i \boldsymbol{k} \cdot \hat{n} \Theta_{\rm eff}^{(0)} + \dot{\tau}^{(0)} \Theta_{\rm eff}^{(0)} = \dot{\tau}^{(0)} S^{(0)},\\
S^{(0)} \equiv \mathcal{L}^{(0)} + \psi^{(0)} + \frac1{\dot{\tau}^{(0)}} (\dot{\psi}^{(0)} + \dot{\phi}^{(0)}).\label{eq:S0-def}
\end{align}
The solution at an arbitrary conformal time is given by the line-of-sight solution \citep{seljak96a},
\begin{align}\label{eq:Thetaeff0_sol}
\Theta^{(0)}_{\rm eff}(\eta, \boldsymbol{k}, \hat{n}) &= \exp\left(\int_\eta^{\eta_0}d \eta''~\dot{\tau}^{(0)}(\eta'')\right) \nonumber\\
&\times \int_0^{\eta} d \eta' ~g(\eta')S^{(0)}(\eta',{\bm k},\hat{n}) e^{i \boldsymbol{k} \cdot \hat{n} (\eta' - \eta)},
\end{align}
where $\eta_0$ is the conformal time today, and $g(\eta)$ is the standard visibility function,
\begin{align}
g(\eta) \equiv \dot{\tau}^{(0)}(\eta) \exp\left(- \int_\eta^{\eta_0}d \eta'~\dot{\tau}^{(0)}(\eta')\right).
\end{align}
In particular, the line-of-sight solution today, and at the spatial origin ${\bm x}=0$, is given by \citep{seljak96a}
\begin{align}\label{eq:los}
\Theta^{(0)}_{\rm eff}(\eta_0,{\bm x}=0,\hat{n})=\int Dk \int_0^{\eta_0}\!\!d \eta\, g(\eta) \nonumber\\
S^{(0)}(\eta, \boldsymbol{k}, \hat{n}) ~ e^{- i \boldsymbol{k}\cdot\hat{n}\chi},
\end{align}
where from here on we denote $\chi \equiv \eta_0 - \eta$.
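Numerically, the visibility function follows from the scattering-rate history through a single backward cumulative integral; a minimal sketch, assuming $\dot{\tau}^{(0)}$ is tabulated on an increasing conformal-time grid ending at $\eta_0$:
\begin{verbatim}
import numpy as np
from scipy.integrate import cumulative_trapezoid

def visibility(eta, tau_dot):
    # g(eta) = tau_dot(eta) exp(-tau(eta)), with the optical depth
    # tau(eta) = int_eta^eta0 tau_dot accumulated from today backwards
    tau = -cumulative_trapezoid(tau_dot[::-1], eta[::-1],
                                initial=0.0)[::-1]
    return tau_dot * np.exp(-tau)
\end{verbatim}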
Under scalar adiabatic initial conditions, the baryon velocity is purely longitudinal, i.e.~in Fourier space, $\bm v_b^{(0)}(\boldsymbol{k}) = - i \boldsymbol{k} \theta_b^{(0)}/k^2$. Moreover, we have $\hat{n}_a \hat{n}_b \Pi_{ab}^{(0)}(\boldsymbol{k}) = - \Pi^{(0)} P_2(\hat{k} \cdot \hat{n})$, where $\Pi^{(0)}\equiv (F_{\gamma 2}+G_{\gamma 0}+G_{\gamma 2})/8$ is a combination of the photon temperature quadrupole and polarization monopole and quadrupole moments (here we used the notation of Ref.~\cite{ma95a}). We thus have
\begin{align}\label{eq:S0}
S^{(0)}(\boldsymbol{k}, \hat{n}) = \Theta_0^{(0)} + \psi^{(0)} + \frac1{\dot{\tau}^{(0)}} (\dot{\psi}^{(0)} + \dot{\phi}^{(0)})\nonumber\\
- \frac{i }{k} (\hat{k} \cdot \hat{n}) \theta_b^{(0)}- P_2(\hat{k} \cdot \hat{n}) \Pi^{(0)} ,
\end{align}
where the dependence of $\Theta_0^{(0)}, \bm v_b^{(0)}$, etc.~on $\boldsymbol{k}$ is implicit.
To simplify this expression, note that $- i \hat{k} \cdot \hat{n} ~e^{- i \boldsymbol{k} \cdot \hat{n} \chi } = \partial_{k\chi} e^{- i \boldsymbol{k} \cdot \hat{n} \chi}$, where $\partial_{k \chi} \equiv \frac1{k} \frac{\partial}{\partial \chi}$. In Eq.~\eqref{eq:los} we may then conveniently substitute $S^{(0)}$ by an angle-independent differential operator acting on the geometric exponential term,
\begin{align}\label{eq:S_part}
S^{(0)}(\boldsymbol{k}, \hat{n}) \rightarrow S_{\partial}^{(0)}(\boldsymbol{k}) \equiv \Theta_0^{(0)} + \psi^{(0)} + \frac1{\dot{\tau}^{(0)}} (\dot{\psi}^{(0)} + \dot{\phi}^{(0)})\nonumber\\
+ \frac {\theta_b^{(0)}}{k} \partial_{k\chi} + \Pi^{(0)} \left( \frac3{2} \partial_{k\chi}^2 + \frac12 \right).
\end{align}
Using this substitution and the Rayleigh formula,
\begin{align}
e^{-i \boldsymbol{k} \cdot \hat{n}\chi} &= \sum_{\ell} (-i)^\ell (2 \ell +1) j_\ell(k \chi) P_\ell(\hat{n} \cdot \hat{k}) \label{eq:plane-wave1}\\
&= 4\pi \sum_{\ell m} (-i)^{\ell} j_{\ell}(k \chi) Y_{\ell m}(\hat{n}) Y_{\ell m}^*(\hat{k}),\label{eq:plane-wave}
\end{align}
where $j_\ell$ are the spherical Bessel functions, we may directly read off the harmonic multipoles (for $\ell > 0$) of the standard temperature anisotropy from Eq.~\eqref{eq:los}:
\begin{align}\label{eq:thet0}
\Theta^{(0)}_{\ell m}&=4\pi (-i)^\ell \!\int\! Dk\, Y^*_{\ell m}({\hat k})\int_0^{\eta_0}\!\!
d \eta \,g(\eta) S^{(0)}_\partial(\boldsymbol{k}, \eta) j_{\ell}(k\chi),
\end{align}
where the operator $S^{(0)}_{\partial}$ now acts on the Bessel function.
Lastly, we denote by $\widetilde{S}^{(0)}_{\partial}(k, \eta)$ the transfer function of $S^{(0)}_{\partial}$, defined such that
\begin{align}
S^{(0)}_{\partial}(\boldsymbol{k}, \eta) = \widetilde{S}^{(0)}_{\partial}(k, \eta) \zeta(\boldsymbol{k}),
\end{align}
where $\zeta(\boldsymbol{k})$ is the primordial curvature perturbation. We thus obtain
\begin{align}
\Theta^{(0)}_{\ell m} &= 4\pi (-i)^\ell \int Dk~Y^*_{\ell m}({\hat k})\Delta_\ell(k) \zeta(\boldsymbol{k}), \label{eq:Theta_lm^0}\\
\Delta_\ell(k) &\equiv \int_0^{\eta_0} d \eta ~g(\eta) ~\widetilde{S}^{(0)}_{\partial}(k, \eta) j_{\ell}(k \chi).\label{eq:Deltal-def}
\end{align}
Assuming the primordial curvature perturbation is Gaussian, with power spectrum $P_{\zeta}(k)$, i.e.~such that
\begin{align}
\langle \zeta(\boldsymbol{k}) \zeta(\boldsymbol{k}') \rangle = (2 \pi)^3 \delta_{\rm D}(\boldsymbol{k} + \boldsymbol{k}') P_{\zeta}(k) \equiv \cancel{\delta}(\boldsymbol{k} + \boldsymbol{k}') P_{\zeta}(k),
\end{align}
the canonical temperature anisotropy angular power spectrum, $\braket{\Theta^{(0)}_{\ell m}\Theta^{*(0)}_{\ell'm'}}\equiv \delta_{\ell \ell'}\delta_{m m'}C_\ell^{(0)}$, is then given by
\begin{align}\label{eq:C_l}
C_{\ell}^{(0)}= 4 \pi \int Dk ~[\Delta_\ell(k)]^2 P_{\zeta}(k).
\end{align}
The $\Delta_\ell(k)$ are the temperature fluctuation multipole transfer functions that can be extracted from cosmological codes such as \texttt{CLASS} \citep{CLASS}. In practice, since we will need to compute similar integrals later on, we compute the conformal time integral in Eq.~\eqref{eq:Deltal-def} ourselves, using only the source-term transfer functions in Eq.~\eqref{eq:S_part} from \texttt{CLASS}. We also compute the $k$-integral in Eq.~\eqref{eq:C_l} ourselves, and have checked that our results match those of \texttt{CLASS} to high accuracy. We discuss our numerical resolution and convergence tests in Appendix~\ref{app:conv}.
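For illustration, a minimal sketch of these two quadratures is given below; for brevity it keeps only the multiplicative part of the source (the Doppler and quadrupole pieces, whose $\partial_{k\chi}$ operators act on the Bessel function, are omitted), and \texttt{S\_tilde} is a hypothetical $(k, \eta)$ table of that part of the source transfer function.
\begin{verbatim}
import numpy as np
from scipy.integrate import trapezoid
from scipy.special import spherical_jn

def Delta_ell(ell, k, eta, g, S_tilde, eta0):
    # Eq. (eq:Deltal-def), multiplicative source terms only
    jl = spherical_jn(ell, np.outer(k, eta0 - eta))  # shape (nk, neta)
    return trapezoid(g[None, :] * S_tilde * jl, eta, axis=1)

def C_ell0(k, Delta_l, P_zeta):
    # Eq. (eq:C_l); with Dk = d^3k/(2 pi)^3,
    # 4 pi Int Dk = (2/pi) Int k^2 dk
    return 2.0 / np.pi * trapezoid(k**2 * Delta_l**2 * P_zeta, k)
\end{verbatim}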
\subsection{Temperature anisotropy due to perturbed recombination }\label{sec:perturb}
We now suppose the free-electron fraction is perturbed, $x_e = x_e^{(0)} (1 + \delta_e)$. Importantly, we make no assumption about the spatial dependence of $\delta_e$, which in general has both a homogeneous and an inhomogeneous piece. As a result of the modified Thomson scattering rate $\dot{\tau} = \dot{\tau}^{(0)}(1 + \delta_e) \equiv \dot{\tau}^{(0)} + \dot{\tau}^{(1)}$, all matter and metric fields also get altered: $\Theta = \Theta^{(0)} + \Theta^{(1)}$, $\psi = \psi^{(0)} + \psi^{(1)}$, etc. For short, we again denote $\mathcal{C}^{(1)} \equiv \mathcal{C}[\Theta^{(1)}, P_{ab}^{(1)}, \bm{v}_b^{(1)}]$, and similarly for $\mathcal{L}^{(1)}$.
In general, matter and metric fields depend nonlinearly on $\delta_e$; however, in the limit of small $\delta_e$, we may solve for them with a perturbative expansion in $\delta_e \ll 1$. The zeroth-order equation is the canonical Boltzmann-Einstein system discussed in Sec.~\ref{sec:canon}. At first order in $\delta_e \ll 1$, the photon temperature Boltzmann equation is
\begin{align}\label{eq:pertthet} \dot{\Theta}^{(1)}+\hat{n}\cdot\nabla\Theta^{(1)}+\hat{n}\cdot\nabla&\psi^{(1)}-\dot{\phi}^{(1)}= \dot{\tau}^{(0)}\mathcal{C}^{(1)}+\dot{\tau}^{(1)}\mathcal{C}^{(0)}.
\end{align}
It is convenient to rewrite this equation in terms of the variable $\Theta_{\rm eff}^{(1)} \equiv \Theta^{(1)} + \psi^{(1)}$, as follows:
\begin{align}
\dot{\Theta}_{\rm eff}^{(1)} + \hat{n} \cdot \nabla \Theta_{\rm eff}^{(1)} + \dot{\tau}^{(0)} \Theta_{\rm eff}^{(1)} &= \dot{\tau}^{(0)} S^{(1)},\label{eq:Theta_eff1}
\end{align}
where the source term $S^{(1)}$ will be discussed shortly. Again, Eq.~\eqref{eq:Theta_eff1} can be easily solved in Fourier space, with the familiar line-of-sight solution. In particular, the order-one photon temperature perturbation at present time $\eta_0$, and at the spatial origin, takes the form
\begin{align}\label{eq:los_source}
\Theta^{(1)}_{\rm eff}&(\eta_0,{\bm x}=0,\hat{n})=\nonumber\\
&\int Dk\int_0^{\eta_0}\!\!d \eta\, g(\eta)S^{(1)}(\bm k, \eta, \hat{n}) e^{-i{\bm k}\cdot\hat{n}\chi}.
\end{align}
The first-order source term $S^{(1)}$ contains two pieces:
\begin{align}
S^{(1)} &= S^{(1) \rm d} + S^{(1) \rm f},\\
S^{(1) \rm d} &\equiv \delta_e*\mathcal{C}^{(0)}, \label{eq:S1direct}\\
S^{(1) \rm f}&\equiv\mathcal{L}^{(1)}+ \psi^{(1)} + \frac{1}{\dot{\tau}^{(0)}}\left(\dot{\psi}^{(1)} + \dot{\phi}^{(1)}\right).
\end{align}
We call the first piece, $S^{(1) \rm d}$, the ``direct'' term, as it depends directly on the perturbed free-electron fraction $\delta_e$, and otherwise only on zeroth-order quantities through $\mathcal{C}^{(0)}$, which can thus be extracted in a relatively straightforward fashion from \texttt{CLASS}. Note that $\delta_e * \mathcal{C}^{(0)}$ denotes a multiplication in real space, or a convolution in Fourier space. We dub the second piece, $S^{(1) \rm f}$, the ``feedback'' term, as it depends on first-order quantities; computing it requires explicitly solving an infinite Boltzmann hierarchy similar to that solved at zeroth order, but with an additional source term containing wavemode mixing due to convolutions in Fourier space \cite{khatri10a}.
As in previous studies \cite{khatri09a,dvorkin13a}, we will not solve for the feedback term in this work. However, we now quantify its magnitude for the first time, in the limit of \emph{homogeneous} perturbations to recombination.
\subsection{Magnitude of the feedback term for homogeneous \texorpdfstring{$\delta_e$}{de}}\label{sec:homo_de}
We consider the limiting case where $\delta_e(\eta, \boldsymbol{x}) = \overline{\delta}_e(\eta)$ is homogeneous. Our perturbative expansion in $\delta_e$ applies just as well in this case, as long as $\overline{\delta}_e \ll 1$. We shall only include the ``direct'' source term, and then explicitly check our results against the exact output from \texttt{CLASS}, which can handle arbitrary homogeneous perturbations to the recombination history, thus effectively accounting for both ``direct'' and ``feedback'' sources (although the calculation is not split this way in \texttt{CLASS}).
Let us rewrite the direct source term as
\begin{align}
S^{(1) \rm d}_{\rm hom} = \overline{\delta}_e \mathcal{C}^{(0)} = \overline{\delta}_e \left(\mathcal{L}^{(0)} + \psi^{(0)}\right) - \overline{\delta}_e \Theta_{\rm eff}^{(0)},
\end{align}
where the subscript ``hom'' is there to remind the reader that we are considering a homogeneous free-electron fraction in this section.
The contribution of the second term to the innermost integral of Eq.~\eqref{eq:los_source} can be rewritten in the form
\begin{align}
\int_0^{\eta_0} d \eta ~g(\eta) ~ \overline{\delta}_e(\eta) \Theta_{\rm eff}^{(0)}(\eta, \boldsymbol{k}, \hat{n}) e^{- i \boldsymbol{k} \cdot \hat{n} \chi}\nonumber\\
= \int_0^{\eta_0} d\eta ~g(\eta) ~ \overline{\mathcal{D}}_e(\eta) S^{(0)}(\eta, \boldsymbol{k}, \hat{n})e^{- i \boldsymbol{k} \cdot \hat{n} \chi},
\end{align}
where
\begin{align}
\overline{\mathcal{D}}_e(\eta)\equiv\int_{\eta}^{\eta_0} d \eta'~ \dot{\tau}^{(0)}(\eta') \overline{\delta}_e(\eta').
\end{align}
To obtain this result, we inserted the arbitrary-time line-of-sight solution \eqref{eq:Thetaeff0_sol} for $\Theta_{\rm eff}^{(0)}$, and switched the order of integration. We therefore arrive at the following expression for the direct contribution to the first-order temperature perturbation in the homogeneous case:
\begin{align}
{\Theta}^{(1) \rm d}_{\rm hom}(\eta_0,{\bm x}=0,\hat{n})=\int Dk\int_0^{\eta_0}\!\!d \eta\, g(\eta) \nonumber\\
\left[\overline{\delta}_e(\eta) (\mathcal{L}^{(0)}_{\partial} + \psi^{(0)}) - \overline{\mathcal{D}}_e(\eta) S^{(0)}_{\partial} \right] e^{-i{\bm k}\cdot\hat{n}\chi},
\end{align}
where $\mathcal{L}^{(0)}_{\partial}(\boldsymbol{k}, \eta)$ is the operator obtained from $\mathcal{L}^{(0)}(\boldsymbol{k}, \eta, \hat{n})$ in the same fashion as $S^{(0)}_{\partial}$ is obtained from $S^{(0)}$ (c.f. Eq.~\eqref{eq:S_part}).
Using the same steps as in Sec.~\ref{sec:canon}, we thus arrive at the following expression for the spherical-harmonic components of the direct-only part of $\Theta^{(1)}_{\rm hom}$:
\begin{align}
{\Theta}_{\ell m, \rm hom}^{(1) \rm d} &= 4 \pi (-i)^{\ell} \int Dk~ Y_{\ell m}^*(\hat{k}) \Delta_{\ell, \rm hom}^{(1) \rm d}(k) \zeta(\boldsymbol{k}), \label{eq:Theta1_lm_hom} \\
\Delta_{\ell, \rm hom}^{(1) \rm d}(k) &\equiv \int_0^{\eta_0} d \eta ~g(\eta)~ \Big{[}\overline{\delta}_e(\eta) (\widetilde{\mathcal{L}}^{(0)}_{\partial} + \widetilde{\psi}^{(0)}) \nonumber\\
&\quad \quad\quad \quad\quad\quad\quad- \overline{\mathcal{D}}_e(\eta) \widetilde{S}^{(0)}_{\partial} \Big{]} j_{\ell}(k \chi), \label{eq:Delta1_lm_hom}
\end{align}
where $\widetilde{\mathcal{L}}^{(0)}_{\partial}(k, \eta)$ and $\widetilde{\psi}^{(0)}(k, \eta)$ are the transfer functions of $\mathcal{L}^{(0)}_{\partial}(\boldsymbol{k}, \eta)$ and $\psi^{(0)}(\boldsymbol{k}, \eta)$.
We may now compute the perturbation to the angular power spectrum. To linear order in $\overline{\delta}_e \ll 1$, we have $C_{\ell} = C_{\ell}^{(0)} + {C}_{\ell, \rm hom}^{(1)}$, where we defined $2\braket{{\Theta}^{(1)}_{\ell m, \rm hom}\Theta^{*(0)}_{\ell'm'}}\equiv \delta_{\ell \ell'}\delta_{m m'}{{C}^{(1)}_{\ell, \rm hom}}$. We find that the direct contribution to ${C}_{\ell, \rm hom}^{(1)}$ is then
\begin{align}
{C}_{\ell, \rm hom}^{(1)\rm d} = 8 \pi \int Dk ~P_{\zeta}(k) \Delta_\ell(k) \Delta_{\ell, \rm hom}^{(1) \rm d}(k). \label{eq:Cl1_hom}
\end{align}
We computed ${C}_{\ell, \rm hom}^{(1) \rm d}$ using the homogeneous part of the free-electron perturbation sourced by accreting PBHs, as calculated in AK17. We compare this result against the exact ${C}_{\ell, \rm hom}^{(1)}$ obtained from \texttt{CLASS} in Fig.~\ref{fig:class_v_comm}. We see that neglecting the feedback term $S^{(1) \rm f}_{\rm hom}$ leads to a relative error of order $\sim 10\%$ on ${C}_{\ell,\rm hom}^{(1)}$ for relevant black hole masses, indicating that the term is subdominant. While there is no guarantee that this subdominance carries over to higher-order statistics, it still gives us some confidence that neglecting $S^{(1) \rm f}$ is a reasonable approximation, at least as a first step, and especially considering the large theoretical uncertainty in the PBH accretion model.
In what follows, we will therefore approximate $S^{(1)} \approx S^{(1) \rm d} = \delta_e * \mathcal{C}^{(0)}$, and no longer indicate that we use the direct-term only by a label ``d".
\begin{figure}[htb]
\includegraphics[trim={0cm 0.5cm 0.1cm 1cm},width=1\columnwidth,clip]{homo_cTT_mbh.pdf}
\caption{\label{fig:class_v_comm} Fractional change to the temperature anisotropy power spectrum from the homogeneous perturbation to the free-electron fraction, $\overline{\delta_e}(\eta)$, for various PBH masses and abundances. We compare the exact non-perturbative effect extracted from \texttt{CLASS} to the perturbative solution including only the ``direct'' source term discussed in Sec.~\ref{sec:homo_de}. Our approximation of neglecting the ``feedback'' term is reasonably accurate, and we assume this carries over to the inhomogeneous free-electron fraction case.}
\end{figure}
\subsection{Alternative calculation of \texorpdfstring{$C_{\ell, \rm hom}^{(1) \rm d}$}{Cl1hom}}
Before moving to the full calculation of $\Theta^{(1)}$, including ionization fraction inhomogeneities, we present an alternative calculation of $\Delta_{\ell, \rm hom}^{(1) \rm d}(k)$, required for the cross-power spectrum $C_{\ell, \rm hom}^{(1)}$. This approach relies on intermediate quantities also used for the trispectrum calculation, and provides a useful cross check of our numerical methods.
For any quantity $X(k, \hat{n} \cdot \hat{k})$, we define its Legendre multipole moments $X_\ell(k)$ as usual through
\begin{align}
X_\ell(k) \equiv \frac{i^{\ell}}2 \int_{-1}^1 d \mu ~ P_\ell(\mu) X(k, \mu),
\end{align}
such that
\begin{align}
X(k, \hat{k} \cdot \hat{n}) &= \sum_\ell (-i)^{\ell} (2 \ell +1) X_\ell(k) P_\ell(\hat{n} \cdot \hat{k}) \\
&= 4 \pi \sum_{\ell m} (-i)^{\ell} X_\ell(k) Y_{\ell m}(\hat{k}) Y_{\ell m}^*(\hat{n}). \label{eq:multipole_def}
\end{align}
We denote by $\widetilde{\mathcal{C}}^{(0)}(\eta, k, \hat{k} \cdot \hat{n})$ the transfer function of $\mathcal{C}^{(0)}(\eta, \boldsymbol{k}, \hat{n})$ (i.e.~such that $\mathcal{C}^{(0)} = \widetilde{\mathcal{C}}^{(0)} \zeta$), and define $\mathcal{J}(\eta, k, \hat{n} \cdot \hat{k}) \equiv e^{- i \chi \boldsymbol{k} \cdot \hat{n}} \widetilde{\mathcal{C}}^{(0)}(\eta, k, \hat{k} \cdot \hat{n})$.
Substituting $S^{(1)}(\boldsymbol{k}, \eta, \hat{n}) e^{- i \boldsymbol{k} \cdot \hat{n} \chi} = \overline{\delta}_e \mathcal{J}(\eta, k, \hat{n} \cdot \hat{k}) \zeta(\boldsymbol{k})$ and inserting the spherical-harmonic expansion of $\mathcal{J}$ into Eq.~\eqref{eq:los_source}, we then arrive again at Eq.~\eqref{eq:Theta1_lm_hom}, with
\begin{align}
\Delta_{\ell, \rm hom}^{(1) \rm d}(k) \equiv \int_0^{\eta_0} d \eta ~ g(\eta) ~\overline{\delta}_e(\eta) \mathcal{J}_\ell(\eta, k).\label{eq:Delta1_lm_hom_alternative}
\end{align}
Using the plane-wave expansion \eqref{eq:plane-wave} and the Legendre expansion of the product of two Legendre polynomials, we may relate the coefficients $\mathcal{J}_\ell$ to the Legendre coefficients of $\widetilde{\mathcal{C}}^{(0)}$ as follows:
\begin{equation}
\mathcal{J}_\ell(\eta, k) = \frac{4 \pi}{2 \ell +1} \sum_{\ell_1 \ell_2} i^{\ell - \ell_1 - \ell_2} (g_{\ell_1 \ell_2 \ell})^2 j_{\ell_1}(k \chi)\widetilde{\mathcal{C}}^{(0)}_{\ell_2}(\eta, k), \label{eq:mathcalJ_main}
\end{equation}
where $g_{\ell_1 \ell_2 \ell}$ is proportional to a three-J symbol, and is defined in Eq.~\eqref{eq:g_sym}. Since this coefficient is nonvanishing only if $\ell_1 + \ell_2 + \ell$ is even, we may substitute $i^{\ell - \ell_1 - \ell_2} = (-1)^{(\ell -\ell_1 - \ell_2)/2} = (-1)^{(\ell + 3 \ell_1 + 3 \ell_2)/2}$. The Legendre coefficients of the collision operator are given explicitly by
\begin{align}\label{eq:collision_ell}
\widetilde{\mathcal{C}}^{(0)}_{\ell}=&\frac13 \widetilde{v}_{b \gamma}^{(0)}\delta_{\ell 1} + \frac15 \widetilde{\Pi}^{(0)} \delta_{\ell 2} -\widetilde{\Theta}_{\ell}^{(0)}( 1- \delta_{\ell 0} - \delta_{\ell 1}).
\end{align}
The sums over $\ell_1$ and $\ell_2$ in Eq.~\eqref{eq:mathcalJ_main} are formally infinite, and must be truncated in practice. Since the higher $\ell_2$-modes from the collision term are induced after the peak of the visibility function, we choose to truncate the $\ell_2$ sum at some finite $\ell_{\rm cut}$. This automatically renders the double sum finite, since for a given $\ell_2$, $\ell_1$ is bounded by the triangle condition, $|\ell - \ell_2| \leq \ell_1 \leq \ell + \ell_2$.
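A direct transcription of the truncated sum is straightforward. In the sketch below we assume the Gaunt-type normalization $g_{\ell_1 \ell_2 \ell} = \sqrt{(2\ell_1+1)(2\ell_2+1)(2\ell+1)/4\pi}\left(\begin{smallmatrix} \ell_1 & \ell_2 & \ell \\ 0 & 0 & 0 \end{smallmatrix}\right)$; the exact definition we use is Eq.~\eqref{eq:g_sym}. The callable \texttt{C0\_of\_l2} is a hypothetical stand-in returning $\widetilde{\mathcal{C}}^{(0)}_{\ell_2}(\eta, k)$ at the given time and wavenumber.
\begin{verbatim}
import numpy as np
from scipy.special import spherical_jn
from sympy.physics.wigner import wigner_3j

def g_sym(l1, l2, l3):
    # assumed Gaunt-type coefficient; vanishes unless l1 + l2 + l3 is even
    w = float(wigner_3j(l1, l2, l3, 0, 0, 0))
    return np.sqrt((2*l1+1) * (2*l2+1) * (2*l3+1) / (4*np.pi)) * w

def J_ell(ell, k, chi, C0_of_l2, ell_cut):
    # Eq. (eq:mathcalJ_main), with the l2 sum truncated at ell_cut;
    # i^(ell-l1-l2) = (-1)^((ell-l1-l2)/2) since ell + l1 + l2 is even
    total = 0.0
    for l2 in range(ell_cut + 1):
        for l1 in range(abs(ell - l2), ell + l2 + 1):
            g = g_sym(l1, l2, ell)
            if g == 0.0:
                continue
            total += ((-1.0)**((ell - l1 - l2)//2) * g**2
                      * spherical_jn(l1, k*chi) * C0_of_l2(l2))
    return 4.0*np.pi / (2*ell + 1) * total
\end{verbatim}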
We compute $\Delta_{\ell, \rm hom}^{(1) \rm d}$ as given by Eq.~\eqref{eq:Delta1_lm_hom_alternative} and use it to obtain $C_{\ell, \rm hom}^{(1) \rm d}$ from Eq.~\eqref{eq:Cl1_hom}. We show the results in Fig.~\ref{fig:homo_CTT}, for various $\ell_{\rm cut}$, and compare them to the result obtained with the line-of-sight commutation method described in Sec.~\ref{sec:homo_de}. We see that the former converges to the latter as $\ell_{\rm cut}$ is increased, as it should, giving us confidence in the robustness of our numerical methods and results.
\begin{figure}[htb]
\includegraphics[trim={0cm 1cm 0.5cm 1cm},width=.95\columnwidth]{homo_cTT_comb1.pdf}
\caption{\label{fig:homo_CTT} Fractional change to the temperature anisotropy power spectrum from the homogeneous perturbation to the free-electron fraction, $\overline{\delta_e}(\eta)$, due to 100 $M_\odot$ accreting PBHs, computed with the ``commutation'' method, using Eq.~\eqref{eq:Delta1_lm_hom}, or with the ``direct summation'' method, using Eq.~\eqref{eq:Delta1_lm_hom_alternative}, where $\mathcal{J}_\ell$ is given by the sum \eqref{eq:mathcalJ_main}, truncated at $\ell_2 \leq \ell_{\rm cut}$. We see that the direct summation result converges to the ``commutation'' result as $\ell_{\rm cut}$ is increased, as it should.}
\end{figure}
\section{Perturbed temperature anisotropy statistics due to inhomogeneously-accreting PBHs}\label{sec:tempstats}
\subsection{Temperature anisotropy transfer functions}\label{sec:transf}
\subsubsection{Definitions}
Neglecting lensing and other nonlinearities, the standard temperature perturbation is linearly related to the primordial curvature perturbation, through (c.f. Eq.~\eqref{eq:thet0})
\begin{align}
\Theta_{\ell m}^{(0)} &= \int Dk ~ T^{(0)}_{\ell m}(\boldsymbol{k}) ~ \zeta(\boldsymbol{k}),\\
T_{\ell m}^{(0)}(\boldsymbol{k}) &\equiv 4 \pi (-i)^{\ell} \Delta_\ell(k) Y_{\ell m}^*(\hat{k}). \label{eq:T0_lm}
\end{align}
Approximating the free-electron fraction perturbation due to accreting PBHs as quadratic in the initial conditions (c.f.~Eq.~\eqref{eq:Te-def}), the corresponding temperature anisotropy perturbation due to accreting PBHs is \emph{cubic} in the initial curvature perturbation. The goal of this section is to derive an explicit expression for the cubic transfer function $T_{\ell m}^{(1)}(\boldsymbol{k}_1, \boldsymbol{k}_2, \boldsymbol{k}_3)$, defined through
\begin{align}
\Theta_{\ell m, \rm inh}^{(1)} = f_{\rm pbh} \int D(k_1 k_2 k_3) T_{\ell m}^{(1)}(\boldsymbol{k}_1, \boldsymbol{k}_2, \boldsymbol{k}_3)\nonumber\\
\times \zeta(\boldsymbol{k}_1) \zeta(\boldsymbol{k}_2) \zeta(\boldsymbol{k}_3), \label{eq:T(1)-def}
\end{align}
where the label ``inh" indicates that here we focus on the inhomogeneous-$\delta_e$ contribution to $\Theta^{(1)}$, recalling that it also has a piece $\Theta^{(1)}_{\rm hom}$ due to the homogeneous $\overline{\delta}_e$, which we computed in Sec.~\ref{sec:homo_de}, so that the total $\Theta^{(1)} = \Theta^{(1)}_{\rm hom} + \Theta^{(1)}_{\rm inh}$.
In addition, we shall derive the harmonic coefficients of this cubic transfer function, defined as
\begin{align}\label{eq:T_multipoles}
T_{\ell m}^{(1)}(\boldsymbol{k}_1, \boldsymbol{k}_2, \boldsymbol{k}_3) &= (4 \pi)^3 \sum_{\ell_1 \ell_2 \ell_3}(-i)^{\ell_1 + \ell_2 + \ell_3}\nonumber\\
&\times \sum_{m_1 m_2 m_3} T_{\ell_1 \ell_2\ell_3; \ell}^{m_1 m_2 m_3; m}(k_1, k_2, k_3)\nonumber\\
&\times Y_{\ell_1 m_1}(\hat{k}_1) Y_{\ell_2 m_2}(\hat{k}_2) Y_{\ell_3 m_3}(\hat{k}_3).
\end{align}
\subsubsection{Calculation}
Neglecting the ``feedback'' term, the source term for the line-of-sight solution of $\Theta^{(1)}_{\rm inh}$ is the convolution of the collision term with the inhomogeneous part of the free-electron fraction, $S^{(1)} = \delta_{e, \rm inh} * \mathcal{C}^{(0)}$. Using Eqs.~\eqref{eq:Te-def} and \eqref{eq:dele_pbh}, we can write explicitly
\begin{align}
S^{(1)}(\eta, \boldsymbol{k}, \hat{n})& = f_{\rm pbh} \int ~D(k_1 k_2 k_3) \cancel{\delta}(\boldsymbol{k}_1 + \boldsymbol{k}_2 +\boldsymbol{k}_3 - \boldsymbol{k}) \nonumber\\
&\quad\times (\hat{k}_1 \cdot \hat{k}_2) \Delta_e(\eta,k_1)\Delta_e(\eta, k_2)\nonumber\\
&\quad \times \widetilde{\mathcal{C}}^{(0)}(\eta, k_3, \hat{k}_3 \cdot \hat{n}) \zeta(\boldsymbol{k}_1) \zeta(\boldsymbol{k}_2) \zeta(\boldsymbol{k}_3),
\end{align}
where again $\widetilde{\mathcal{C}}^{(0)}(\eta, k, \hat{k} \cdot \hat{n})$ is the transfer function of $\mathcal{C}^{(0)}(\eta, \boldsymbol{k}, \hat{n})$. Taking the harmonic transform of the line-of-sight solution for $\Theta^{(1)}$, Eq.~\eqref{eq:los_source}, we then find
\begin{align}
T_{\ell m}^{(1)}(\boldsymbol{k}_1, \boldsymbol{k}_2, \boldsymbol{k}_3) = \int_0^{\eta_0} d \eta ~ g(\eta) \int d^2 \hat{n} ~Y_{\ell m}^*(\hat{n}) \nonumber\\
\times (\hat{k}_1 \cdot \hat{k}_2) \Delta_e(\eta,k_1)\Delta_e(\eta, k_2)\nonumber\\
\times \widetilde{\mathcal{C}}^{(0)}(\eta, k_3, \hat{k}_3 \cdot \hat{n}) e^{-i\chi\hat{n}\cdot(\boldsymbol{k}_1 + \boldsymbol{k}_2 + \boldsymbol{k}_3)} . \label{eq:T1_lm}
\end{align}
Note that this function is symmetric under exchange of $\boldsymbol{k}_1$ and $\boldsymbol{k}_2$. Let us recall, also, that the expression above only holds for $\boldsymbol{k}_1 + \boldsymbol{k}_2 \neq 0$, and that $T_{\ell m}^{(1)}(\boldsymbol{k}_1, - \boldsymbol{k}_1, \boldsymbol{k}_3) = 0$, since we are only considering the inhomogeneous part (with zero mean) of the free-electron fraction perturbation.
To obtain the harmonic coefficients of $T_{\ell m}^{(1)}$, we first rewrite (denoting $\boldsymbol{\chi} \equiv \chi \hat{n}$),
\begin{align}
(\boldsymbol{k}_1 \cdot \boldsymbol{k}_2) e^{-i \boldsymbol{\chi} \cdot (\boldsymbol{k}_1 + \boldsymbol{k}_2)} &= -\left[\boldsymbol{\nabla}_{\boldsymbol{\chi}} e^{-i \boldsymbol{\chi} \cdot \boldsymbol{k}_1}\right] \cdot \left[\boldsymbol{\nabla}_{\boldsymbol{\chi}} e^{-i \boldsymbol{\chi} \cdot \boldsymbol{k}_2}\right] \nonumber\\
&= -\partial_\chi\left(e^{-i \boldsymbol{\chi} \cdot \boldsymbol{k}_1}\right) \partial_\chi \left(e^{-i \boldsymbol{\chi} \cdot \boldsymbol{k}_2}\right)\nonumber\\
& \quad - \frac1{\chi^2} \left[\boldsymbol{\nabla}_{\hat{n}} e^{-i \boldsymbol{\chi} \cdot \boldsymbol{k}_1}\right] \cdot \left[\boldsymbol{\nabla}_{\hat{n}} e^{-i \boldsymbol{\chi} \cdot \boldsymbol{k}_2}\right],
\end{align}
where $\boldsymbol{\nabla}_{\boldsymbol{\chi}}$ is the gradient with respect to $\boldsymbol{\chi}$, which we have split into its radial part $\hat{n} \partial_\chi$ and its angular part $\frac1{\chi} \boldsymbol{\nabla}_{\hat{n}}$. Using the plane-wave expansion \eqref{eq:plane-wave}, we thus have
\begin{align}
(\hat{k}_1 \cdot \hat{k}_2 ) e^{-i \chi \hat{n} \cdot (\boldsymbol{k}_1 + \boldsymbol{k}_2)} = -(4 \pi)^2 \sum_{\ell_1 \ell_2} (-i)^{\ell_1 + \ell_2} \nonumber\\
\sum_{m_1 m_2}Y_{\ell_1 m_1}(\hat{k}_1) Y_{\ell_2 m_2}(\hat{k}_2)\nonumber\\
\times \Big{[}j_{\ell_1}'(\chi k_1) j_{\ell_2}'(\chi k_2) Y_{\ell_1 m_1}^*(\hat{n}) Y_{\ell_2 m_2}^*(\hat{n})\nonumber\\
+ \frac{j_{\ell_1}(\chi k_1)}{\chi k_1} \frac{j_{\ell_2}(\chi k_2)}{\chi k_2} \boldsymbol{\nabla}_{\hat{n}} Y_{\ell_1 m_1}^*(\hat{n}) \cdot \boldsymbol{\nabla}_{\hat{n}} Y_{\ell_2 m_2}^*(\hat{n}) \Big{]}.
\end{align}
Combining this result with the Legendre-expansion of $e^{-i \chi \hat{n} \cdot \boldsymbol{k}_3} \widetilde{\mathcal{C}}^{(0)}(\eta, k_3, \hat{k}_3 \cdot \hat{n})$ [Eq.~\eqref{eq:mathcalJ_main}], we are now in the position to compute $T_{\ell_1 \ell_2 \ell_3; \ell}^{m_1 m_2 m_3; m}$ defined in Eq.~\eqref{eq:T_multipoles}:
\begin{align}
T_{\ell_1 \ell_2 \ell_3; \ell}^{m_1 m_2 m_3; m}(k_1, k_2, k_3) &= A_{\ell_1 \ell_2, \ell_3}(k_1, k_2, k_3) Q_{\ell_1 \ell_2 \ell_3 \ell}^{m_1 m_2 m_3 m} \nonumber\\
&+ B_{\ell_1 \ell_2, \ell_3}(k_1, k_2, k_3) \widetilde{Q}_{\ell_1 \ell_2, \ell_3 \ell}^{m_1 m_2, m_3 m}, \label{eq:Tmult-final}
\end{align}
where the rotationally-invariant coefficients $A_{\ell_1 \ell_2, \ell_3}$ and $B_{\ell_1 \ell_2, \ell_3}$ are given by
\begin{align}\label{eq:A}
A_{\ell_1 \ell_2, \ell_3}(k_1, k_2, k_3) \equiv&-\int\!\! d \eta ~ g(\eta) j_{\ell_1}'(\chi k_1)\Delta_e (\eta, k_1)\nonumber\\
&\quad\times j_{\ell_2}'(\chi k_2)\Delta_e(\eta, k_2) \mathcal{J}_{\ell_3}(\eta, k_3), \\
B_{\ell_1 \ell_2, \ell_3}(k_1, k_2, k_3) \equiv&-\int \!\!d \eta ~ g(\eta) \frac{j_{\ell_1}(\chi k_1)}{\chi k_1}\Delta_e (\eta, k_1)\nonumber\\
&\quad\times \frac{j_{\ell_2}(\chi k_2)}{\chi k_2} \Delta_e(\eta, k_2) \mathcal{J}_{\ell_3}(\eta, k_3),\label{eq:B}
\end{align}
and the purely geometric terms $Q_{\ell_1 \ell_2 \ell_3 \ell_4}^{m_1 m_2 m_3 m_4}$ and $\widetilde{Q}_{\ell_1 \ell_2, \ell_3 \ell_4}^{m_1 m_2, m_3 m_4}$ are integrals of the product of four spherical harmonics or their gradients:
\begin{align}
&Q_{\ell_1 \ell_2 \ell_3\ell_4}^{m_1 m_2 m_3 m_4} \!\equiv\nonumber\\
&\quad\quad\!\int\! d^2 \hat{n} ~Y^*_{\ell_1 m_1}(\hat{n}) Y^*_{\ell_2 m_2}(\hat{n}) Y^*_{\ell_3 m_3}(\hat{n}) Y_{\ell_4 m_4}^*(\hat{n}), \label{eq:Q_sym}\\
&\widetilde{Q}_{\ell_1 \ell_2, \ell_3 \ell_4}^{m_1 m_2, m_3 m_4} \equiv\nonumber\\
&\quad\quad\int d^2 \hat{n} ~\boldsymbol{\nabla}_{\hat{n}}Y^*_{\ell_1 m_1}(\hat{n})\cdot \boldsymbol{\nabla}_{\hat{n}} Y^*_{\ell_2 m_2}(\hat{n}) Y^*_{\ell_3 m_3}(\hat{n}) Y_{\ell_4 m_4}^*(\hat{n}). \label{eq:Qt_sym}
\end{align}
Note that we have separated the groups of indices on which the functions depend fully symmetrically: $A_{\ell_1 \ell_2, \ell_3}(k_1, k_2, k_3)$ and $B_{\ell_1 \ell_2, \ell_3}(k_1, k_2, k_3)$ are symmetric under exchange of $(\ell_1, k_1)$ with $(\ell_2, k_2)$, $Q_{\ell_1 \ell_2 \ell_3\ell_4}^{m_1 m_2 m_3 m_4}$ is symmetric under exchange of any two $(\ell, m)$ pairs, and $\widetilde{Q}_{\ell_1 \ell_2, \ell_3\ell_4}^{m_1 m_2, m_3 m_4}$ is symmetric under exchange of $(\ell_1, m_1)$ with $(\ell_2, m_2)$, as well as under exchange of $(\ell_3, m_3)$ with $(\ell_4, m_4)$.
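As a brute-force cross-check of these symmetries, the $Q$ symbol can be evaluated by coupling the first pair of harmonics to an intermediate multipole, using $Y^*_{\ell m} = (-1)^m Y_{\ell, -m}$ and the Gaunt coefficient $\int d^2\hat{n}\, Y_{\ell_1 m_1} Y_{\ell_2 m_2} Y_{\ell_3 m_3}$; a minimal sketch (the forecast itself uses the reduced expressions derived in Appendix~\ref{app:Q-sums}):
\begin{verbatim}
from sympy.physics.wigner import gaunt

def Q4(l1, l2, l3, l4, m1, m2, m3, m4):
    # Eq. (eq:Q_sym): couple Y_{l1}Y_{l2} to an intermediate (L, M),
    # then integrate against Y_{l3}Y_{l4}; returns an exact sympy number
    if m1 + m2 + m3 + m4 != 0:
        return 0
    a, b, c, d = -m1, -m2, -m3, -m4   # from Y*_{lm} = (-1)^m Y_{l,-m}
    M = a + b
    total = 0
    for L in range(max(abs(l1 - l2), abs(M)), min(l1 + l2, l3 + l4) + 1):
        total += ((-1)**M * gaunt(l1, l2, L, a, b, -M)
                  * gaunt(L, l3, l4, M, c, d))
    return total
\end{verbatim}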
\subsection{Perturbed temperature angular power spectrum}\label{sec:powerspec}
We have derived all the required transfer functions and are now equipped to compute statistical properties of $\Theta^{(1)}_{\rm inh}$. Because the perturbed temperature anisotropy is cubic in the primordial curvature perturbation, it has a non-vanishing cross-correlation with the standard temperature anisotropy. Using Eqs.~\eqref{eq:thet0} and \eqref{eq:T(1)-def}, we have
\begin{align}
\langle \Theta_{\ell m, \rm inh}^{(1)} \Theta_{\ell' m'}^{*(0)} \rangle &= f_{\rm pbh} \int\!\! D(k_1 k_2 k_3 k') ~ T^{(1)}_{\ell m}(\boldsymbol{k}_1, \boldsymbol{k}_2, \boldsymbol{k}_3) \nonumber\\
&\quad\times T^{*(0)}_{\ell' m'}(\boldsymbol{k}') \langle\zeta(\boldsymbol{k}_1) \zeta(\boldsymbol{k}_2) \zeta(\boldsymbol{k}_3) \zeta^*(\boldsymbol{k}') \rangle.
\end{align}
Using Wick's theorem, and recalling that $T^{(1)}_{\ell m}(\boldsymbol{k}_1, - \boldsymbol{k}_1, \boldsymbol{k}_3) = 0$, and that $T^{(1)}_{\ell m}$ is symmetric in its first two arguments, we get
\begin{align}
\langle \Theta_{\ell m, \rm inh}^{(1)} \Theta_{\ell' m'}^{*(0)} \rangle =& 2 f_{\rm pbh} \int D(k k') T^{(1)}_{\ell m}(\boldsymbol{k}', -\boldsymbol{k}, \boldsymbol{k}) \nonumber\\
&\quad\times T^{*(0)}_{\ell' m'}(\boldsymbol{k}') P_{\zeta}(k) P_{\zeta}(k'). \label{eq:Cl1_inh_int}
\end{align}
From Eq.~\eqref{eq:T1_lm}, we have
\begin{align}
T^{(1)}_{\ell m}(\boldsymbol{k}', -\boldsymbol{k}, \boldsymbol{k}) = -(\hat{k}' \cdot \hat{k}) \int_0^{\eta_0} d \eta ~g(\eta) \Delta_e(\eta, k') \Delta_e(\eta, k) \nonumber\\
\times \int d^2 \hat{n} ~Y_{\ell m}^*(\hat{n}) \widetilde{\mathcal{C}}^{(0)}(\eta, k, \hat{k} \cdot \hat{n}) e^{-i \chi \hat{n} \cdot \boldsymbol{k}'}.
\end{align}
Averaging over the direction $\hat{k}$, we then obtain,
\begin{align}
\int \frac{d^2 \hat{k}}{4 \pi} T^{(1)}_{\ell m}(\boldsymbol{k}', -\boldsymbol{k}, \boldsymbol{k}) \nonumber\\
= i \int_0^{\eta_0} d \eta ~g(\eta) \Delta_e(\eta, k') \Delta_e(\eta, k) \widetilde{\mathcal{C}}^{(0)}_1(\eta, k)\nonumber\\
\times \int d^2 \hat{n} ~Y_{\ell m}^*(\hat{n}) (\hat{k}' \cdot \hat{n}) e^{-i \chi \hat{n} \cdot \boldsymbol{k}'},
\end{align}
where $\widetilde{\mathcal{C}}^{(0)}_1(\eta, k) \equiv i \frac12 \int_{-1}^1 d\mu P_1(\mu) \widetilde{\mathcal{C}}^{(0)}(\eta, k, \mu)$ is the order-1 Legendre coefficient of $\widetilde{\mathcal{C}}^{(0)}$, which is proportional to the baryon-photon relative velocity (or baryon-photon slip):
\begin{align}
\widetilde{\mathcal{C}}^{(0)}_1(\eta, k) = \frac13 \widetilde{v}_{b \gamma}(\eta, k),
\end{align}
where we defined $\boldsymbol{v}_{b \gamma}(\boldsymbol{k}) \equiv (\boldsymbol{v}_b - \boldsymbol{v}_\gamma)(\boldsymbol{k}) = -i \hat{k} \widetilde{v}_{b \gamma}(k) \zeta(\boldsymbol{k})$.
Using the plane-wave expansion Eq.~\eqref{eq:plane-wave}, this expression further simplifies to
\begin{align}
\int \frac{d^2 \hat{k}}{4 \pi} T^{(1)}_{\ell m}(\boldsymbol{k}', -\boldsymbol{k}, \boldsymbol{k}) \nonumber\\
= -\frac{4 \pi}3 (-i)^{\ell} \int_0^{\eta_0} d \eta ~g(\eta) \Delta_e(\eta, k') \Delta_e(\eta, k)\nonumber\\
\times \widetilde{v}_{b \gamma}(\eta, k) j_{\ell}'(k' \chi) Y_{\ell m}^*(\hat{k}'). \label{eq:T1kk-av}
\end{align}
Finally inserting Eqs.~\eqref{eq:T0_lm} and \eqref{eq:T1kk-av} into Eq.~\eqref{eq:Cl1_inh_int}, we arrive at the following simple result
\begin{align}
\langle \Theta_{\ell m, \rm inh}^{(1)}\Theta_{\ell' m'}^{*(0)} \rangle = \frac12 \delta_{\ell \ell'} \delta_{m m'} C_{\ell, \rm inh}^{(1)},
\end{align}
where the cross-power spectrum is given by the conformal time integral
\begin{align}
C_{\ell, \rm inh}^{(1)} &= - \frac{16 \pi}3 f_{\rm pbh} \int_0^{\eta_0} d\eta ~g(\eta) \gamma(\eta) \mu_\ell(\eta), \label{eq:C1_inh}
\end{align}
where we have defined
\begin{align}
\gamma(\eta) &\equiv \int Dk ~ P_\zeta(k) \Delta_e(\eta, k) \widetilde{v}_{b \gamma}(\eta, k)\label{eq:gamma}, \\
\mu_\ell(\eta) &\equiv \int Dk ~P_\zeta(k) \Delta_e(\eta, k) \Delta_\ell(k) j_\ell'(k \chi) \label{eq:mu_l}.
\end{align}
We see that the factorization of the free-electron perturbation transfer function has allowed us to obtain a very simple expression for $C_{\ell, \rm inh}^{(1)}$: it only requires pre-computing tables of $\mu_\ell(\eta)$ and $\gamma(\eta)$, and then computing a one-dimensional integral. It can equivalently be rewritten in the same form as Eq.~\eqref{eq:Cl1_hom}:
\begin{eqnarray}
C_{\ell, \rm inh}^{(1)} &=& 8\pi \int Dk ~P_\zeta(k) \Delta_\ell(k) \Delta_{\ell, \rm inh}^{(1)}(k), \label{eq:C1_inh_2}\\
\Delta_{\ell, \rm inh}^{(1)}(k) &\equiv& - \frac23 f_{\rm pbh} \int_0^{\eta_0} d\eta ~g(\eta)\gamma(\eta) \Delta_e(\eta, k) j_{\ell}'(k \chi).~~~
\end{eqnarray}
Equation \eqref{eq:C1_inh}, or the equivalent form \eqref{eq:C1_inh_2}, constitutes one of the main results of this work.
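To illustrate how inexpensive the factorized form makes this computation, here is a minimal sketch of Eq.~\eqref{eq:C1_inh}, assuming the tables $\gamma(\eta)$ and $\mu_\ell(\eta)$ of Eqs.~\eqref{eq:gamma} and \eqref{eq:mu_l} have been precomputed on a common conformal-time grid:
\begin{verbatim}
import numpy as np
from scipy.integrate import trapezoid

def C1_inh(ell, eta, g, gamma, mu, f_pbh):
    # Eq. (eq:C1_inh): one 1-D conformal-time integral per multipole;
    # mu[ell] is the precomputed table of mu_ell(eta)
    return -16.0*np.pi/3.0 * f_pbh * trapezoid(g * gamma * mu[ell], eta)
\end{verbatim}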
We computed $C_{\ell, \rm inh}^{(1)}$ from Eq.~\eqref{eq:C1_inh}, and checked that Eq.~\eqref{eq:C1_inh_2} gives the same result. We show the result in Fig.~\ref{fig:AK17_jen}, where we compare this term to its counterpart $C_{\ell, \rm hom}^{(1)}$ sourced by the homogeneous part of the free-electron fraction perturbation, for 100-$M_{\odot}$ accreting PBHs. Even though these two contributions should in principle be comparable, given that $\langle \delta_e^2\rangle^{1/2} \sim \overline{\delta}_e$ (see Paper I), we find that $C_{\ell, \rm inh}^{(1)}$ is suppressed by a factor $\sim 10-100$, depending on scale, relative to $C_{\ell, \rm hom}^{(1)}$ for all black hole masses. This turns out to be due to both a poor correlation between $\Theta^{(1)}_{\rm inh}$ and $\Theta^{(0)}$, and a suppression of the characteristic amplitude of $\Theta^{(1)}_{\rm inh}$ itself. We expound on this point in Appendix~\ref{app:auto}.
\begin{figure*}[ht]
\includegraphics[trim={0 0 0 0},width=\columnwidth]{AK17_Jensen22_on_new.pdf}
\includegraphics[trim={0 0 0 0},width=\columnwidth]{inhomo_new_m100.pdf}
\caption{\label{fig:AK17_jen} Fractional change to the temperature anisotropy power spectrum due to accreting PBHs of 100 $M_\odot$ comprising all the dark matter. \emph{Left:} Comparison of the contribution due to the inhomogeneous part of the ionization fraction perturbations, $C_{\ell, \rm inh}^{(1)}$, calculated for the first time in this work, with the one arising from the homogeneous part of the free-electron fraction, $C_{\ell, \rm hom}^{(1)}$, previously computed in AK17; we also overlay the total change to the temperature power spectrum from both. \emph{Right}: the ratio between $C_{\ell, \rm inh}^{(1)}$ and $C_{\ell, \rm hom}^{(1)}$. Although one would expect $C_{\ell, \rm inh}^{(1)}$ to be of the same order of magnitude as $C_{\ell, \rm hom}^{(1)}$ \emph{a priori}, we find in practice that the former is $\sim 10-100$ times smaller than the latter. In both cases, dashed curves correspond to the spatially on-the-spot approximation, which neglects the spatial smearing of energy deposition due to the finite propagation of injected photons.}
\end{figure*}
\subsection{Temperature trispectrum}\label{sec:trispec}
We now compute the connected four-point correlation function of temperature anisotropy,
\begin{align}
\langle \Theta_1 \Theta_2 \Theta_3 \Theta_4 \rangle_c &\equiv \langle \Theta_1 \Theta_2 \Theta_3 \Theta_4 \rangle -\langle\Theta_1\Theta_2\rangle\langle\Theta_3\Theta_4\rangle\nonumber\\
&- \langle\Theta_1\Theta_3\rangle\langle\Theta_2\Theta_4\rangle - \langle\Theta_1\Theta_4\rangle\langle\Theta_2\Theta_3\rangle, \label{eq:4pt-c}
\end{align}
where the numbered subscripts index both $\ell$ and $m$, $\Theta_1\equiv\Theta_{\ell_1 m_1}$, and the subscript $c$ indicates that the unconnected (Gaussian) pieces of the 4-point function are subtracted. Recalling that $\Theta = \Theta^{(0)} + \Theta^{(1)}_{\rm hom} + \Theta^{(1)}_{\rm inh}$, and that $\Theta^{(0)}$ and $\Theta^{(1)}_{\rm hom}$ are both linear in the initial Gaussian curvature perturbation, to lowest order in electron density perturbations the trispectrum is given by
\begin{align}\label{eq:4pt}
\langle \Theta_1 \Theta_2 \Theta_3 \Theta_4\rangle_c &= \langle \Theta_{1, \rm inh}^{(1)} \Theta_2^{(0)} \Theta_3^{(0)} \Theta_4^{(0)} \rangle_c \nonumber\\
&+ \langle \Theta_1^{(0)} \Theta_{2, \rm inh}^{(1)} \Theta_3^{(0)} \Theta_4^{(0)} \rangle_c \nonumber\\
&+ \langle \Theta_1^{(0)} \Theta_2^{(0)} \Theta_{3, \rm inh}^{(1)} \Theta_4^{(0)} \rangle_c \nonumber\\
&+ \langle \Theta_1^{(0)} \Theta_2^{(0)} \Theta_3^{(0)} \Theta_{4, \rm inh}^{(1)} \rangle_c.
\end{align}
We may now compute each term using Eq.~\eqref{eq:T(1)-def} for $\Theta_{\ell m, \rm inh}^{(1)}$. For instance, the last term is
\begin{align}
\langle \Theta_{1}^{(0)} \Theta_2^{(0)} \Theta_3^{(0)} \Theta_{4, \rm inh}^{(1)} \rangle_c = f_{\rm pbh} \int D(k k' k'') T_4^{(1)}(\boldsymbol{k}, \boldsymbol{k}', \boldsymbol{k}'') \nonumber\\
\times \Big{[}\langle \zeta(\boldsymbol{k}) \zeta(\boldsymbol{k}') \zeta(\boldsymbol{k}'') \Theta^{(0)}_1 \Theta^{(0)}_2 \Theta^{(0)}_3 \rangle \nonumber\\
- \langle \zeta(\boldsymbol{k}) \zeta(\boldsymbol{k}') \zeta(\boldsymbol{k}'') \Theta^{(0)}_1\rangle \langle \Theta_2^{(0)} \Theta_3^{(0)} \rangle\nonumber\\
- \langle \zeta(\boldsymbol{k}) \zeta(\boldsymbol{k}') \zeta(\boldsymbol{k}'') \Theta^{(0)}_2\rangle \langle \Theta_1^{(0)} \Theta_3^{(0)} \rangle\nonumber\\
- \langle \zeta(\boldsymbol{k}) \zeta(\boldsymbol{k}') \zeta(\boldsymbol{k}'') \Theta^{(0)}_3\rangle \langle \Theta_1^{(0)} \Theta_2^{(0)} \rangle\Big{]},
\end{align}
where we used the explicit definition of the connected 4-point function, Eq.~\eqref{eq:4pt-c}. Using Wick's theorem to compute the 6-point and 4-point functions of Gaussian fields appearing in the integrand above, simplifying, and renaming dummy integration variables, we arrive at
\begin{align}
\langle \Theta_{1}^{(0)} \Theta_2^{(0)} \Theta_3^{(0)} \Theta_{4, \rm inh}^{(1)} \rangle_c = f_{\rm pbh} \int D(k k' k'') \nonumber\\
\times
\langle \zeta(\boldsymbol{k}) \Theta_1^{(0)}\rangle\langle \zeta(\boldsymbol{k}') \Theta_2^{(0)}\rangle\langle \zeta(\boldsymbol{k}'') \Theta_3^{(0)}\rangle \nonumber\\
\times \left[T_4^{(1)}(\boldsymbol{k}, \boldsymbol{k}', \boldsymbol{k}'') + 5 \textrm{ perms.} \right],
\end{align}
where the 5 permutations involve all other possible permutations of $\boldsymbol{k}, \boldsymbol{k}', \boldsymbol{k}''$. The relevant two-point functions are easily computed with the line-of-sight expression for $\Theta_{\ell m}^{(0)}$, Eq.~\eqref{eq:Theta_lm^0}, and we obtain
\begin{align}
\langle \zeta(\boldsymbol{k}) \Theta_{\ell m}^{(0)}\rangle = 4 \pi (-i)^\ell Y_{\ell m}^*(\hat{k}) \Delta_\ell(k) P_\zeta(k).
\end{align}
Integrating over the wavenumbers' directions, and using the harmonic decomposition of $T^{(1)}$ given in Eq.~\eqref{eq:T_multipoles}, we thus arrive at
\begin{align}
\langle \Theta_{\ell_1 m_1}^{(0)} \Theta_{\ell_2 m_2}^{(0)} \Theta_{\ell_3 m_3}^{(0)} \Theta_{\ell_4 m_4, \rm inh}^{(1)} \rangle_c
= (4 \pi)^3 f_{\rm pbh} \int D(k_1 k_2 k_3) \nonumber\\
\times P_\zeta(k_1) P_\zeta(k_2) P_\zeta(k_3) \Delta_{\ell_1}(k_1) \Delta_{\ell_2}(k_2) \Delta_{\ell_3}(k_3) \nonumber\\
\times \Big{[} T_{\ell_1 \ell_2 \ell_3; \ell_4}^{m_1 m_2 m_3; m_4}(k_1, k_2, k_3) + 5 \textrm{~perms.} \Big{]},
\end{align}
where the 5 permutations involve all other possible permutations of $k_1, k_2, k_3$ simultaneously with the corresponding permutation of the indices $\ell_i, m_i, i = 1, 2, 3$, i.e.~such that the position of the index $\ell_i, m_i$ always corresponds to the position of $k_i$.
We now take advantage of the factorized form of $T_{\ell_1 \ell_2 \ell_3; \ell_4}^{m_1 m_2 m_3; m_4}(k_1, k_2, k_3)$, given in Eqs.~\eqref{eq:Tmult-final}-\eqref{eq:B}. In addition to the function $\mu_\ell(\eta)$ defined in Eq.~\eqref{eq:mu_l}, we define the following functions of time and multipole:
\begin{align}
\nu_\ell(\eta) &\equiv \int Dk ~P_\zeta(k) \Delta_e(\chi, k) \Delta_\ell(k) \frac{j_\ell(k \chi )}{k \chi}, \label{eq:nu_l} \\
\lambda_\ell(\eta) &\equiv \int Dk ~P_\zeta(k) \Delta_\ell(k) \mathcal{J}_\ell(\eta, k). \label{eq:lambda_l}
\end{align}
We then define the following one-dimensional integrals:
\begin{align}
\mathcal{A}_{\ell_1 \ell_2, \ell_3} \equiv&- 2 (4 \pi)^3 \int d\eta ~g(\eta)~ \mu_{\ell_1}(\eta) \mu_{\ell_2}(\eta) \lambda_{\ell_3}(\eta),\\
\mathcal{B}_{\ell_1 \ell_2, \ell_3} \equiv& -2 (4 \pi)^3 \int d\eta ~g(\eta)~ \nu_{\ell_1}(\eta) \nu_{\ell_2}(\eta) \lambda_{\ell_3}(\eta),
\end{align}
which are symmetric in their first two arguments. We then find, using the symmetry of $T^{(1)}$ in its first two arguments, and the symmetries of the $Q$ and $\widetilde{Q}$ symbols defined in Eqs.~\eqref{eq:Q_sym}, \eqref{eq:Qt_sym}:
\begin{align}
&\frac1{f_{\rm pbh}}\langle \Theta_{\ell_1 m_1}^{(0)} \Theta_{\ell_2 m_2}^{(0)} \Theta_{\ell_3 m_3}^{(0)} \Theta_{\ell_4 m_4, \rm inh}^{(1)} \rangle_c \nonumber\\
&= \mathcal{A}_{(\ell_1 \ell_2 \ell_3)} Q_{\ell_1 \ell_2 \ell_3 \ell_4}^{m_1 m_2 m_3 m_4} +\mathcal{B}_{\ell_1\ell_2,\ell_3}\widetilde{Q}_{ \ell_1 \ell_2,\ell_3\ell_4}^{m_1 m_2, m_3 m_4} \nonumber\\
&+ \mathcal{B}_{\ell_2\ell_3,\ell_1}\widetilde{Q}_{ \ell_2 \ell_3,\ell_1\ell_4}^{m_2 m_3, m_1 m_4}+\mathcal{B}_{\ell_3\ell_1,\ell_2}\widetilde{Q}_{ \ell_3 \ell_1,\ell_2\ell_4}^{m_3 m_1, m_2 m_4},
\end{align}
where\footnote{Note that we do not use the standard symmetrization notation, i.e.~do not divide by the number of terms, in order to avoid the proliferation of numerical prefactors.}
\begin{align}
\mathcal{A}_{(\ell_1 \ell_2 \ell_3)} \equiv \mathcal{A}_{\ell_1 \ell_2, \ell_3} + \mathcal{A}_{\ell_2 \ell_3, \ell_1} + \mathcal{A}_{\ell_3 \ell_1, \ell_2}.
\end{align}
Finally, summing over the four permutations in Eq.~\eqref{eq:4pt}, we arrive at the main result of this work, which is the temperature trispectrum sourced by accreting PBHs:
\begin{align}
&\langle \Theta_{\ell_1 m_1} \Theta_{\ell_2 m_2} \Theta_{\ell_3 m_3} \Theta_{\ell_4 m_4}\rangle_c = f_{\rm pbh} \left(\mathcal{T}_{\rm pbh}\right)_{\ell_1 \ell_2 \ell_3 \ell_4}^{m_1 m_2 m_3 m_4}, \label{eq:Tpbh-general}\\
&\left(\mathcal{T}_{\rm pbh}\right)_{\ell_1 \ell_2 \ell_3 \ell_4}^{m_1 m_2 m_3 m_4} \equiv
\mathcal{A}_{(\ell_1 \ell_2 \ell_3 \ell_4)} Q_{\ell_1 \ell_2 \ell_3 \ell_4}^{m_1 m_2 m_3 m_4}\nonumber\\
&+ \mathcal{B}_{\ell_1 \ell_2, (\ell_3 \ell_4)} \widetilde{Q}_{\ell_1 \ell_2, \ell_3 \ell_4}^{m_1 m_2, m_3 m_4} + \mathcal{B}_{\ell_1 \ell_3, (\ell_2 \ell_4)}\widetilde{Q}_{\ell_1 \ell_3, \ell_2 \ell_4}^{m_1 m_3, m_2 m_4} \nonumber\\
&+ \mathcal{B}_{\ell_1 \ell_4, (\ell_2 \ell_3)}\widetilde{Q}_{\ell_1 \ell_4, \ell_2 \ell_3}^{m_1 m_4, m_2 m_3}+ \mathcal{B}_{\ell_2 \ell_3, (\ell_1 \ell_4)} \widetilde{Q}_{\ell_2 \ell_3, \ell_1 \ell_4}^{m_2 m_3, m_1 m_4} \nonumber\\
& + \mathcal{B}_{\ell_2 \ell_4, (\ell_1 \ell_3)} \widetilde{Q}_{\ell_2 \ell_4, \ell_1 \ell_3}^{m_2 m_4, m_1 m_3}+ \mathcal{B}_{\ell_3 \ell_4, (\ell_1 \ell_2)} \widetilde{Q}_{\ell_3 \ell_4, \ell_1 \ell_2}^{m_3 m_4, m_1 m_2}, \label{eq:Trispec-final}
\end{align}
where we have defined the symmetrized coefficients
\begin{align}
\mathcal{A}_{(\ell_1 \ell_2 \ell_3 \ell_4)}&\equiv \mathcal{A}_{(\ell_1 \ell_2 \ell_3)}+ \mathcal{A}_{(\ell_2 \ell_3 \ell_4)}\nonumber\\
&\quad+ \mathcal{A}_{(\ell_3 \ell_4 \ell_1)}+ \mathcal{A}_{(\ell_4 \ell_1 \ell_2)},\\
\mathcal{B}_{\ell_1 \ell_2, (\ell_3 \ell_4)} &\equiv \mathcal{B}_{\ell_1 \ell_2, \ell_3} + \mathcal{B}_{\ell_1 \ell_2, \ell_4}.
\end{align}
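For concreteness, these nested symmetrizations can be assembled mechanically from the one-dimensional integrals. The following minimal Python sketch (assuming, purely for illustration, that the $\mathcal{A}_{\ell_1 \ell_2, \ell_3}$ and $\mathcal{B}_{\ell_1 \ell_2, \ell_3}$ integrals have been precomputed and stored as arrays indexed by multipole) implements the symmetrized coefficients defined above:
\begin{verbatim}
# Minimal sketch: symmetrized coefficients from precomputed arrays
# A[l1, l2, l3] and B[l1, l2, l3] (hypothetical storage layout).
def A_sym3(A, l1, l2, l3):
    # A_{(l1 l2 l3)}: cyclic sum, no 1/3 normalization (see footnote).
    return A[l1, l2, l3] + A[l2, l3, l1] + A[l3, l1, l2]

def A_sym4(A, l1, l2, l3, l4):
    # A_{(l1 l2 l3 l4)}: sum of the four cyclic triples.
    return (A_sym3(A, l1, l2, l3) + A_sym3(A, l2, l3, l4)
            + A_sym3(A, l3, l4, l1) + A_sym3(A, l4, l1, l2))

def B_sym(B, l1, l2, l3, l4):
    # B_{l1 l2, (l3 l4)}: symmetrization over the last pair only.
    return B[l1, l2, l3] + B[l1, l2, l4]
\end{verbatim}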
\section{Trispectrum constraints and sensitivity forecasts}\label{sec:forecast}
In this section we compute and present trispectrum constraints and sensitivity forecasts on the fraction of dark matter made of PBHs, $f_{\rm pbh}$. A full trispectrum analysis of the \emph{Planck} satellite temperature data would be very challenging and is well beyond the scope of this work. Instead, we compute the overlap of the PBH-induced trispectrum with the local-type primordial non-Gaussianity (PNG) trispectrum template, in order to extract an indirect limit on the PBH abundance, given Planck's limits on $g_{\rm NL}^{\rm loc}$ \citep{planck20c}. In addition, we forecast Planck's sensitivity to the trispectrum induced by accreting PBHs. Within the scope of this paper, we ignore biases that may arise from lensing or other nonlinear effects; they should of course be accounted for in a full data analysis.
\subsection{General equations}
Given that the trispectrum induced by accreting PBHs is approximately linear in $f_{\rm pbh}$, as given by Eq.~\eqref{eq:Tpbh-general}, one can build an optimal quartic estimator $\widehat{f}_{\rm pbh}$ for $f_{\rm pbh}$ \cite{smith15a, regan10a}. Its precise expression will not be needed here, and is given in Eq.~(24) of Ref.~\cite{smith15a}. The inverse variance of this estimator is given by Eq.~(25) in Ref.~\cite{smith15a}. Approximating the noise covariance matrix as diagonal in $\ell$, the variance of the estimator is given by
\begin{align}
\sigma_{f_{\rm pbh}}^2 = \left \langle \mathcal{T}_{\rm pbh} \cdot \mathcal{T}_{\rm pbh} \right \rangle^{-1}, \label{eq:var_f_pbh}
\end{align}
where for any two trispectra $\mathcal{T}_A, \mathcal{T}_B$, we define their inverse-noise weighted dot product as
\begin{align}
\left \langle \mathcal{T}_A \cdot \mathcal{T}_B \right \rangle & \equiv\frac{f_{\rm sky}}{4!} \sum_{\ell's}\frac{1}{C'_{\ell_1} C'_{\ell_2} C'_{\ell_3} C'_{\ell_4}}\nonumber\\
&\times\sum_{m's} \left(\mathcal{T}_A\right)_{\ell_1 \ell_2 \ell_3 \ell_4}^{m_1 m_2 m_3 m_4} \left(\mathcal{T}_B\right)_{\ell_1 \ell_2 \ell_3 \ell_4}^{m_1 m_2 m_3 m_4}, \label{eq:dotprod}
\end{align}
where $C_{\ell}' \equiv C_\ell + N_\ell$ is the total variance of the observed CMB temperature, including both the cosmological signal $C_\ell$ and the instrumental noise $N_\ell$, $f_{\rm sky}$ is the fraction of the sky covered by the experiment, and the sums run over all four $\ell$ and $m$ indices.
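To make the index structure explicit, the following brute-force Python sketch evaluates this dot product for trispectra stored as dense arrays on a small multipole grid; the dense $(\ell_1, \ell_2, \ell_3, \ell_4, m_1, m_2, m_3, m_4)$ layout and all names are illustrative assumptions (a realistic evaluation exploits the factorized forms derived above rather than dense sums):
\begin{verbatim}
import numpy as np

def trispec_dot(TA, TB, Cp, f_sky):
    # Inverse-noise weighted dot product of Eq. (dotprod), brute force.
    # TA, TB: dense arrays indexed [l1, l2, l3, l4, m1, m2, m3, m4]
    # on a small test grid; Cp: total spectrum C'_l on the same l grid.
    w = 1.0 / (Cp[:, None, None, None] * Cp[None, :, None, None]
               * Cp[None, None, :, None] * Cp[None, None, None, :])
    return (f_sky / 24.0) * np.einsum('abcd,abcdijkl,abcdijkl->',
                                      w, TA, TB)
\end{verbatim}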
Primordial non-Gaussianity also generates a CMB temperature trispectrum, proportional to a non-Gaussianity parameter $g_{\rm NL}$:
\begin{equation}
\langle \Theta_{\ell_1 m_1} \Theta_{\ell_2 m_2} \Theta_{\ell_3 m_3} \Theta_{\ell_4 m_4} \rangle_c = g_{\rm NL} ~\left(\mathcal{T}_{\rm png}\right)_{\ell_1 \ell_2 \ell_3 \ell_4}^{m_1 m_2 m_3 m_4}.
\end{equation}
One can build an optimal estimator $\widehat{g}_{\rm NL}$ for $g_{\rm NL}$ in the same way as for $f_{\rm pbh}$. The non-Gaussianity sourced by inhomogeneously accreting PBHs would lead to a systematic bias in this estimator, even in the absence of primordial non-Gaussianity. This bias is linear in $f_{\rm pbh}$:
\begin{align}
\langle \Delta \widehat{g}_{\rm NL}\rangle_{\rm pbh} &= f_{\rm pbh}~ \mathcal{R}, \label{eq:bias}\\
\mathcal{R} &\equiv \sigma_{g_{\rm NL}}^2 \left \langle \mathcal{T}_{\rm pbh} \cdot \mathcal{T}_{\rm png} \right \rangle,
\end{align}
where $\sigma_{g_{\rm NL}}^2$ is the variance of the quartic estimator $\widehat{g}_{\rm NL}$, given by
\begin{align}
\sigma_{g_{\rm NL}}^2&\equiv \left \langle \mathcal{T}_{\rm png} \cdot \mathcal{T}_{\rm png} \right \rangle^{-1}. \label{eq:F_png}
\end{align}
Constraints on the amplitude $g_{\rm NL}$ of primordial non-Gaussianity therefore directly translate into bounds on the PBH abundance $f_{\rm pbh}$. In what follows we will specifically consider the local-type primordial trispectrum, which is most tightly constrained by CMB anisotropy observations, and whose shape is given in \citep{smith15a},
\begin{align}
\left(\mathcal{T}^{\rm loc}_{\rm png}\right)_{\ell_1 \ell_2 \ell_3 \ell_4}^{m_1 m_2 m_3 m_4} &= \mathcal{C}_{(\ell_1 \ell_2 \ell_3 \ell_4)}Q_{\ell_1 \ell_2 \ell_3 \ell_4}^{m_1 m_2 m_3 m_4}, \\
\mathcal{C}_{(\ell_1 \ell_2 \ell_3 \ell_4)} &\equiv \mathcal{C}_{\ell_1 \ell_2 \ell_3, \ell_4} + 3 ~ \textrm{perms.}, \\
\mathcal{C}_{\ell_1 \ell_2 \ell_3, \ell_4} &\equiv 6 \int r^2 dr ~ \beta_{\ell_1}(r)\beta_{\ell_2}(r)\beta_{\ell_3}(r) \alpha_{\ell_4}(r),
\end{align}
where $Q_{\ell_1 \ell_2 \ell_3 \ell_4}^{m_1 m_2 m_3 m_4}$ is given by Eq.~\eqref{eq:Q_sym} and we used the standard notation of Refs.~\cite{smith15a, komatsu05a}:
\begin{align}
\alpha_{\ell}(r) &\equiv \frac53 (4 \pi) \int Dk~ \Delta_\ell(k) j_{\ell}(k r) ,\\
\beta_{\ell}(r) &\equiv \frac35 (4 \pi) \int Dk~ \Delta_\ell(k) j_{\ell}(k r) P_{\zeta}(k).
\end{align}
\subsection{Sums over \texorpdfstring{$m$}{m}'s}
Before proceeding with the numerical evaluation of Eqs.~\eqref{eq:var_f_pbh} and \eqref{eq:bias}, we first simplify the sums over $m$'s, which involve purely geometric quantities. Specifically, we define
\begin{align} \label{eq:Qstart}
(\mathcal{Q}^2)_{\ell_1 \ell_2 \ell_3 \ell_4} \equiv& \sum_{m's} \left(Q_{\ell_1 \ell_2 \ell_3 \ell_4}^{m_1 m_2 m_3 m_4}\right)^2,\\
(\mathcal{Q \widetilde{Q}})_{\ell_1 \ell_2, \ell_3 \ell_4} \equiv& \sum_{m's} {Q^*}_{\ell_1 \ell_2 \ell_3 \ell_4}^{m_1 m_2 m_3 m_4} \widetilde{Q}_{\ell_1 \ell_2, \ell_3 \ell_4}^{m_1 m_2, m_3 m_4}, \\
(\mathcal{\widetilde{Q}}^2)_{\ell_1 \ell_2, \ell_3 \ell_4} \equiv& \sum_{m's} \left( \widetilde{Q}_{\ell_1 \ell_2, \ell_3 \ell_4}^{m_1 m_2, m_3 m_4}\right)^2, \\
(\mathcal{\widetilde{Q} \widetilde{Q}}^{\rm T})_{\ell_1 \ell_2, \ell_3 \ell_4} \equiv& \sum_{m's} \widetilde{Q^*}_{\ell_1 \ell_2, \ell_3 \ell_4}^{m_1 m_2, m_3 m_4} \widetilde{Q}_{\ell_3 \ell_4, \ell_1 \ell_2}^{m_3 m_4, m_1 m_2},\\ (\mathcal{\widetilde{Q} \widetilde{Q}}^{\rm S})_{\ell_1, \ell_2 \ell_3, \ell_4} \equiv& \sum_{m's} \widetilde{Q^*}_{\ell_1 \ell_2, \ell_3 \ell_4}^{m_1 m_2, m_3 m_4} \widetilde{Q}_{\ell_1 \ell_3, \ell_2 \ell_4}^{m_1 m_3, m_2 m_4},\label{eq:Qend}
\end{align}
where ``T" stands for transpose and ``S" for ``scrambled". The same symmetry rules apply where each set of indices divided by or surrounded by commas are symmetric. We simplify these quantities in Appendix~\ref{app:Q-sums}, where we reduce them to a single sum of products of 3-J symbols.
Inserting Eq.~\eqref{eq:Trispec-final} into Eq.~\eqref{eq:var_f_pbh}, carrying out the sums over $m$'s, and simplifying, the inverse variance of the estimator $\widehat{f}_{\rm pbh}$ becomes
\begin{align}\label{eq:inv_fpbh}
\left(\sigma_{f_{\rm pbh}}^2\right)^{-1}&=\frac{f_{\rm sky}}{4!}\sum_{\ell's} \frac1{C'_{\ell_1}C'_{\ell_2} C'_{\ell_3} C'_{\ell_4}} \nonumber\\
&\times \Big{[}~~~~~\left(\mathcal{A}_{(\ell_1 \ell_2 \ell_3 \ell_4)}\right)^2 (\mathcal{Q}^2)_{\ell_1 \ell_2 \ell_3 \ell_4}\nonumber\\
&~~~~ + 12 ~\mathcal{A}_{(\ell_1 \ell_2 \ell_3 \ell_4)}\mathcal{B}_{\ell_1 \ell_2, (\ell_3 \ell_4)} (\mathcal{Q \widetilde{Q}})_{\ell_1 \ell_2, \ell_3 \ell_4} \nonumber\\
& ~~~~ + 6 ~\left(\mathcal{B}_{\ell_1 \ell_2, (\ell_3 \ell_4)}\right)^2 (\mathcal{\widetilde{Q}}^2)_{\ell_1 \ell_2, \ell_3 \ell_4} \nonumber\\
&~~~~ +6~\mathcal{B}_{\ell_1 \ell_2, (\ell_3 \ell_4)}\mathcal{B}_{\ell_3 \ell_4, (\ell_1 \ell_2)} (\mathcal{\widetilde{Q} \widetilde{Q}}^{\rm T})_{\ell_1 \ell_2, \ell_3 \ell_4} \nonumber\\
&~~~~ + 24~\mathcal{B}_{\ell_1 \ell_2, (\ell_3 \ell_4)}\mathcal{B}_{\ell_1 \ell_3, (\ell_2 \ell_4)}(\mathcal{\widetilde{Q} \widetilde{Q}}^{\rm S})_{\ell_1, \ell_2 \ell_3, \ell_4} \Big{]}.
\end{align}
Similarly, the bias on local-type non-Gaussianity due to accreting PBHs simplifies to
\begin{align}
\langle \Delta \widehat{g}^{\rm loc}_{\rm NL}\rangle_{\rm pbh}&= f_{\rm pbh}\times \frac{f_{\rm sky}}{4!} \sigma_{g_{\rm NL}^{\rm loc}}^2 \sum_{\ell's} \frac{\mathcal{C}_{(\ell_1 \ell_2 \ell_3 \ell_4)} }{C'_{\ell_1} C'_{\ell_2} C'_{\ell_3} C'_{\ell_4}}\nonumber\\
&\quad\times\Big{[}~~~~\mathcal{A}_{(\ell_1 \ell_2 \ell_3 \ell_4)} (\mathcal{Q}^2)_{\ell_1 \ell_2 \ell_3 \ell_4} \nonumber\\
&\quad\quad + 6~\mathcal{B}_{\ell_1 \ell_2, (\ell_3 \ell_4)} (\mathcal{Q \widetilde{Q}})_{\ell_1 \ell_2, \ell_3 \ell_4} \Big{]},
\end{align}
where the inverse variance of $\widehat{g}_{\rm NL}^{\rm loc}$ is given by
\begin{align}\label{eq:FPNG}
\left(\sigma_{g_{\rm NL}^{\rm loc}}^2\right)^{-1} &= \frac{f_{\rm sky}}{4!} \sum_{\ell's} \frac{\left(\mathcal{C}_{(\ell_1 \ell_2 \ell_3 \ell_4)}\right)^2}{C'_{\ell_1} C'_{\ell_2} C'_{\ell_3} C'_{\ell_4}} (\mathcal{Q}^2)_{\ell_1 \ell_2 \ell_3 \ell_4}.
\end{align}
\subsection{Application to Planck data}
We now apply the above results to the Planck experiment \citep{planck20a,planck20b,planck20c}. The relevant fraction of sky coverage is $f_{\rm sky}=0.78$ \citep{planck20c}, and the instrumental noise $N_\ell$ is obtained by combining the noise of the 100, 143 and 217 GHz frequency channels,
\begin{align}
N_\ell=\left[\sum_c N^{-1}_{\ell,c}\right]^{-1},
\end{align}
where, for each channel $c$, the noise is modelled as a Gaussian with variance per pixel $\sigma^2_c$ and beam size $\theta_{{\rm FWHM},c}$:
\begin{align}
N_{\ell,c}=\left(\frac{\sigma_c~\theta_{{\rm FWHM},c}}{T_0}\right)^2\exp\left[\frac{\ell(\ell+1)\theta^2_{{\rm FWHM},c}}{8\ln 2}\right],
\end{align}
where $T_0=2.73$ K is the CMB monopole. The respective parameters for each channel are\footnote{\url{https://wiki.cosmos.esa.int/planckpla/index.php/Main_Page}}
\begin{center}
\begin{tabular}{ c c c }
$\nu_c$ & $\theta_{\rm FWHM, c}$ & $\sigma_c$ \\
\hline
100 GHz & 9.66$'$ & 10.77 $\mu$K\\
143 GHz & 7.27$'$ & 6.40 $\mu$K\\
217 GHz & 5.01$'$ & 12.48 $\mu$K\\
\end{tabular}
\end{center}
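As an illustration, a minimal Python sketch of this noise model, using the channel parameters from the table above (the unit conventions, with $\sigma_c$ a per-FWHM-pixel noise level and $N_\ell$ dimensionless like $C_\ell$, are our assumptions for this sketch):
\begin{verbatim}
import numpy as np

T0 = 2.73e6  # CMB monopole in muK
channels = [(9.66, 10.77), (7.27, 6.40), (5.01, 12.48)]  # (arcmin, muK)

def N_ell_channel(ell, theta_arcmin, sigma_muK):
    theta = np.radians(theta_arcmin / 60.0)  # FWHM in radians
    return ((sigma_muK * theta / T0) ** 2
            * np.exp(ell * (ell + 1) * theta ** 2 / (8.0 * np.log(2.0))))

def N_ell(ell):
    # Inverse-variance combination of the three channels.
    return 1.0 / sum(1.0 / N_ell_channel(ell, t, s) for t, s in channels)

noise = N_ell(np.arange(2, 3001))
\end{verbatim}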
The \textit{Planck} 2018 limits on $g^{\rm loc}_{\rm NL}$ are given by \citep{planck20c}:
\begin{align}
g_{\rm NL}^{\rm loc} &= \left(- 5.8 \pm 6.5 \right) \times 10^4 \quad (68\%~\textrm{confidence}) \\
&\equiv \widehat{g_{\rm NL}^{\rm loc}} \pm \sigma_{g_{\rm NL}^{\rm loc}}.
\end{align}
As a cross-check of our numerical code, we compared the standard deviation of the local-type trispectrum estimator that we obtain from Eq.~\eqref{eq:FPNG} to the one reported by the Planck collaboration, given above. We find that they agree to within 5\%.
To derive an indirect bound on $f_{\rm pbh}$ from the Planck constraint on $g_{\rm NL}^{\rm loc}$, we proceed as follows. Using Bayes' theorem, and assuming the estimator for $g_{\rm NL}^{\rm loc}$ has a Gaussian distribution, the un-normalized posterior probability distribution for $f_{\rm pbh}$ is given by
\begin{align}
\mathcal{P}(f_{\rm pbh}) \propto \exp\left[-\frac12 \frac{\left(\mathcal{R} f_{\rm pbh} - \widehat{g_{\rm NL}^{\rm loc}} \right)^2}{\sigma_{g_{\rm NL}^{\rm loc}}^2} \right] H(f_{\rm pbh}),
\end{align}
where in this context $H(x)$ designates the Heaviside function, enforcing a positive prior on $f_{\rm pbh}$. The $(1 - \epsilon)$-confidence upper limit on $f_{\rm pbh}$ is then obtained by solving the implicit equation
\begin{equation}
\int_{f_{\rm pbh}}^\infty d f ~\mathcal{P}(f) = \epsilon ~ \int_0^{\infty} d f ~\mathcal{P}(f).
\end{equation}
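A minimal numerical sketch of this procedure (using \texttt{scipy} quadrature and root finding; the bracketing interval is an assumption of the sketch, not a statement about our pipeline):
\begin{verbatim}
import numpy as np
from scipy import integrate, optimize

def fpbh_upper_limit(R, g_hat, sigma_g, eps=0.05):
    # (1 - eps)-confidence upper limit on f_pbh, solving the implicit
    # equation above for the positive-prior (truncated Gaussian) posterior.
    P = lambda f: np.exp(-0.5 * ((R * f - g_hat) / sigma_g) ** 2)
    norm = integrate.quad(P, 0.0, np.inf)[0]
    tail = lambda f: integrate.quad(P, f, np.inf)[0] / norm - eps
    # tail(0) = 1 - eps > 0, and tail -> -eps at large f, so a root exists.
    f_hi = abs(g_hat / R) + 20.0 * sigma_g / abs(R)
    return optimize.brentq(tail, 0.0, f_hi)
\end{verbatim}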
\subsection{Results and discussion}
We are now fully equipped to compute upper limits on $f_{\rm pbh}$ indirectly from the Planck constraint on $g_{\rm NL}^{\rm loc}$, and to forecast Planck's sensitivity to $f_{\rm pbh}$ from the temperature trispectrum. We shall compare these limits and forecasts to Planck power-spectra limits on $f_{\rm pbh}$. We obtain the latter with exactly the same procedure as in AK17, but using Planck 2018 data \cite{planck20a} (instead of 2015). Specifically, we use the foreground-marginalized Plik-lite log-likelihood for $C_\ell$'s at $\ell \geq 30$, which we Taylor-expand near the Planck best-fit cosmology, and account approximately for the low-$\ell$ data by imposing a Gaussian prior on the optical depth to reionization. For the joint $TT,TE,EE$ limits, we use the modified version of \texttt{HYREC} and \texttt{CLASS} as implemented by AK17; in particular we use their approximate homogeneous injection-to-deposition Green's function. In addition to the joint $TT,TE,EE$ limits, we also compute a $TT$-only upper limit on $f_{\rm pbh}$ -- we still retain the optical depth prior, however, so our ``$TT$-only" limits are technically temperature + low-$\ell$ polarization limits. For a fair comparison with our $TTTT$ trispectrum limits and forecasts, for the $TT$ limit we compute the effect of accreting PBHs at first order in $f_{\rm pbh}$, including the ``direct" term only, and using our more accurate injection-to-ionization Green's function. We also include the effect of inhomogeneous ionization perturbations on the temperature power spectrum for completeness, but this makes a negligible difference on the results.
We find that the indirect limit on $f_{\rm pbh}$ obtained from Planck's bounds on $g_{\rm NL}^{\rm loc}$ is systematically one order of magnitude weaker than the $TT$-only power spectrum limit, for all PBH masses. This is due to the weak overlap of the trispectrum induced by primordial non-Gaussianity with the one induced by accreting PBHs: we find that the correlation coefficient of the two shapes is less than $10\%$ across all black hole masses (using the dot product defined in Eq.~\eqref{eq:dotprod}). We therefore do not show this limit on our final figure.
We show our forecast of Planck's 1-$\sigma$ sensitivity to the trispectrum of accreting PBHs in Fig.~\ref{fig:const}, alongside current Planck power-spectra upper limits on $f_{\rm pbh}$. The upper set of curves corresponds to the conservative collisional ionization limit of AK17, while the lower set corresponds to the photoionization limit (see AK17 for details). In both cases the qualitative results are the same: the temperature-only trispectrum is not as sensitive as we had expected a priori, as its sensitivity is comparable to current $TT$ upper limits (rather than an order of magnitude better than joint temperature and polarization limits). Nevertheless, the temperature trispectrum is still more sensitive than temperature-only power spectrum constraints for $M_{\rm pbh} \lesssim 10^3 M_{\odot}$. In particular, the temperature-only trispectrum has the potential to probe PBHs lighter by a factor of $\sim 2$ than those within the current reach of temperature-only power spectrum limits.
Interestingly, the mass dependence of the trispectrum sensitivity forecast is shallower than that of the power spectrum constraints. Moreover, we find that making the spatial on-the-spot approximation (as described in Sec.~\ref{sec:approx}) affects trispectrum forecasts by no more than $20\%$. Both of these features can be explained qualitatively by the different redshift dependence of the trispectrum and power spectrum signals, which we explore in Appendix~\ref{app:slope}.
Fig.~\ref{fig:const} also shows the updated Planck joint temperature and polarization power-spectrum constraints ($TT,TE,EE$). We see that these constraints are tighter than the $TT$-only constraints by about an order of magnitude. This stems from the relatively larger effect of recombination perturbations on the polarization signal (see e.g.~Fig.~13 of AK17), indicating a stronger cross-correlation of the perturbed CMB polarization with the unperturbed field. This provides a strong motivation to extend our work to all temperature and $E$-mode polarization trispectra, $TTTE, TTEE, TEEE, EEEE$, which may be significantly more sensitive to accreting PBHs than the temperature-only trispectrum. In addition, the inhomogeneity in the free-electron fraction ought to induce $B$-mode polarization of magnitude comparable to the corresponding $E$-mode polarization, $B^{(1)}_{\rm inh} \sim E^{(1)}_{\rm inh}$. This means that, to linear order in $f_{\rm pbh}$, trispectra involving one $B$ mode ($TTTB, TTEB, TEEB, EEEB$) ought to carry comparable signal to the corresponding 4-point functions involving temperature and $E$-modes only. Importantly, absent primordial tensor modes or accreting PBHs, the primary (unlensed) CMB $B$-mode polarization vanishes. Therefore, after delensing, one can effectively eliminate cosmic variance in the $B$-mode measurement. We thus expect these $B$-mode trispectra to have a significantly enhanced signal-to-noise ratio relative to their $E$-mode counterparts \cite{Meerburg_16}. We defer to a future publication the extension of this work to polarization trispectra.
\begin{figure*}[ht]
\includegraphics[trim={0 1cm 0 .5cm},width=1.5\columnwidth]{trispec_pbh_CI_jen.pdf}
\caption{\label{fig:const} \emph{Planck} 2018 CMB power spectra constraints (solid lines) and temperature trispectrum forecasted sensitivity (dashed red line) to the fraction of dark matter in PBHs, as a function of PBH mass. Our forecasted sensitivity from the temperature trispectrum is better than $TT$-only constraints for $M_{\rm pbh} \lesssim 10^3 M_{\odot}$ for both the collisional ionization (thick lines) and photoionization (thin lines) limits (see AK17 for details about these different regimes).}
\end{figure*}
\section{Conclusions}\label{sec:conc}
This work is the second part of a series of three papers studying the imprints of inhomogeneously-accreting PBHs on CMB anisotropies, in particular their higher-order statistics. The first part, Ref.~\citep{jensen21a}, inspected in detail how inhomogeneous energy injection from non-uniformly accreting PBHs perturbs recombination. In the present analysis, we compute the perturbed temperature anisotropy and its 2-point and 4-point functions. In the upcoming third paper of this series, we will extend this work to polarization.
Our main results can be summarized as follows: \\
$(i)$ The inhomogeneous part of the free-electron perturbation leads to a sub-10\% effect on the perturbation to the CMB temperature power spectrum. In other words, it is sufficient to only account for the average perturbation to the free-electron fraction when computing the effect of accreting PBHs on the CMB temperature power spectrum. This sub-dominant contribution was not expected a priori and is due to the poor correlation of the perturbed CMB temperature field with the standard temperature anisotropy. It is not guaranteed that the same holds true for CMB polarization power spectra.
\\
$(ii)$ We set new constraints on the PBH abundance, obtained indirectly from Planck's upper limits on local-type primordial non-Gaussianity. Indeed, the shape of the PBH-induced trispectrum overlaps, albeit weakly, with that of primordial non-Gaussianity. This weak correlation implies that our new constraints are not competitive with existing CMB temperature power spectrum constraints. Still, they provide a qualitatively different probe of the PBH abundance, complementary to the usual 2-point function limits.\\
$(iii)$ We forecast the sensitivity of Planck to the temperature trispectrum induced by inhomogeneously-accreting PBHs. Although our numerical results show a weaker sensitivity than what could have been expected from simple order-of-magnitude estimates, we still find that the temperature trispectrum would be sensitive to PBH abundances lower than current bounds from the CMB temperature-only 2-point function, for $M_{\rm pbh} \lesssim 10^3 M_\odot$. This is our most important result, which demonstrates that the CMB trispectrum is indeed a useful probe of PBHs.
The calculation of higher-order CMB statistics is quite involved, and we necessarily had to make several approximations to keep it tractable. First, following previous studies of perturbed recombination, we only accounted for the ``direct" piece of the source term for the perturbation to CMB anisotropies, and neglected the ``feedback" piece. Unlike previous studies, however, we explicitly quantified this approximation in the limiting case of homogeneous ionization perturbations, and showed that it is accurate to better than $\sim 20\%$ in that case. Still, a rigorous and definitive calculation of the trispectrum should eventually include the ``feedback" term self-consistently. Second, we made several approximations in order to derive a factorized quadratic transfer function for the free-electron fraction perturbation. In particular, we conservatively approximated the injection-to-ionization Green's function by a factorized form that bounds it from below. This approximation was needed to obtain a factorized trispectrum, which is much more manageable computationally than the exact trispectrum would be. In order to quantify the error induced by this approximation, we also considered the limit of spatially-on-the-spot energy deposition, which bounds the injection-to-ionization Green's function from above. We found that all our results are nearly unchanged in this limit, giving us confidence in their robustness. Third, in our analysis of the primordial non-Gaussianity bias and our trispectrum sensitivity forecast, we neglected non-Gaussianities induced by CMB lensing. An actual analysis of CMB data should of course correct for the lensing bias.
The most uncertain part of our calculation remains the physics of accretion and radiation. All our numerical results rely on the semi-analytic model of AK17 \cite{yacine17a}, with a simple prescription for the effect of relative velocities. While of course the quantitative results would change with different assumptions about the accretion geometry and radiative efficiency, it seems unavoidable that the PBH accretion luminosity should be strongly modulated by large-scale supersonic relative velocities. We also neglected entirely the effects of non-linear clustering post-recombination \cite{inman19}. We expect relative velocities would also modulate the baryon content of the first halos, hence the accretion rate in these environments. Hence, our results should still be robust qualitatively, regardless of the details of the accretion model, or of the relevance of accretion in non-linear halos. Moreover, the formalism we develop is quite general and could be applied to arbitrary perturbations of recombination spatially modulated by relative velocities, or even more generally quadratic in initial conditions.
Even if the temperature trispectrum is not quite as sensitive to PBHs as we had anticipated from the simple order-of-magnitude estimate presented in the introduction, our results are still very significant and promising. Indeed, we uncovered a completely new CMB observable to probe PBHs, with a sensitivity comparable to, and in some cases better than, current CMB temperature power-spectrum constraints. Importantly, while several energy injection processes could in principle mimic the effect of accreting PBHs in CMB power spectra, to our knowledge the trispectrum signature studied in this work is unique to them. These considerations provide strong motivation to extend this work and study the polarization signal of inhomogeneously accreting PBHs. In addition to trispectra involving $E$-mode polarization ($TTTE, TTEE, TEEE, EEEE$), we also expect $B$-mode non-Gaussianity, in the form of $TTTB, TTEB, TEEB, EEEB$ trispectra at leading order in the PBH abundance. These $B$-mode trispectra ought to have amplitudes comparable to their $E$-mode counterparts, but much lower noise. We defer the computation of these promising observables to the third and last installment of this series of publications.
\section*{Acknowledgements}
YAH is a CIFAR-Azrieli Global scholar and acknowledges funding from CIFAR. This work was supported in part by NSF grant No.~1820861. We thank Daan Meerburg for suggesting to seek a factorized form for the trispectrum, and Colin Hill for comments on this manuscript.
\begin{appendix}
\section{Correlation functions involving a function of relative velocity}\label{app:vbcsq}
In what follows we denote by $\boldsymbol{v} \equiv \boldsymbol{v}_{\rm bc}$ the relative velocity of baryons and dark matter. We need to compute $(N+1)$-point functions of the form
\begin{align}
\braket{F(\boldsymbol{k}) \delta_1(\boldsymbol{k}_1) \cdots \delta_{N}(\boldsymbol{k}_{N})} = P_N(\boldsymbol{k}, \boldsymbol{k}_1, \cdots, \boldsymbol{k}_{N})\nonumber\\
\times (2 \pi)^3 \delta^{(3)}(\boldsymbol{k} + \boldsymbol{k}_1 + \cdots + \boldsymbol{k}_N), \label{eq:BFd1d1-def}
\end{align}
where $F$ depends on position only through the magnitude $v$ of the relative velocity field, i.e.~$F(\boldsymbol{x}) = F(v(\boldsymbol{x}))$, and has zero mean, $\braket{F} = 0$; the $\delta_1, \cdots, \delta_N$ are zero-mean scalars linearly related to the primordial curvature perturbation. This $(N+1)$-point function is non-zero only if $N$ is even, given that $F$ is an even function of the relative velocity. The $(N+1)$-spectrum $P_N$ is the Fourier transform of the $(N+1)$-point correlation function
\begin{equation}
\xi_N(\bs{x}_1, \cdots, \bs{x}_N) \equiv \braket{F(v(\boldsymbol{0})) \delta_1(\bs{x}_1) \cdots \delta_N(\bs{x}_N)}.
\end{equation}
The goal of this appendix is to derive an approximate expression for $\xi_N$, from which one can also approximate $P_N$.
In full generality, provided $\boldsymbol{v}, \delta_1, \cdots, \delta_N$ are Gaussian-distributed, we have
\begin{align}
\xi_N(\bs{x}_1, \cdots, \bs{x}_N) = \int d^3 v~ d\delta_1\cdots d \delta_N ~ F(v) \delta_1 \cdots \delta_N \nonumber\\
\times \frac{1}{\sqrt{(2 \pi)^{N+3} \det(\boldsymbol{C})}} \exp\left[- \frac12 \boldsymbol{X}^{\rm T} \cdot \boldsymbol{C}^{-1} \cdot \boldsymbol{X} \right],
\end{align}
where
\begin{align}
\boldsymbol{X}^{\rm T} \equiv (\widetilde{\boldsymbol{v}}, \widetilde{\boldsymbol{\delta}}^{\rm T}) \equiv \left(\frac{\boldsymbol{v}}{\sigma_{1d}}, \frac{\delta_1}{\sigma_{\delta_1}}, \cdots, \frac{\delta_N}{\sigma_{\delta_N}}\right),
\end{align}
with $\sigma_{1d}^2 \equiv \braket{v^2}/3$ and $\sigma_{\delta_i}^2\equiv \braket{\delta_i^2}$. $\boldsymbol{C}$ is the $(N+3)$ by $(N+3)$ normalized covariance matrix of $\widetilde{\boldsymbol{v}}(\boldsymbol{0}), \widetilde{\delta_1}(\bs{x}_1), \cdots , \widetilde{\delta_N}(\bs{x}_N)$. Explicitly, this matrix is given by $\boldsymbol{C} = \boldsymbol{C}_0 + \boldsymbol{\Delta}$, with
\begin{align}
&\boldsymbol{C}_0 \equiv \begin{pmatrix}
\mbox{\normalfont\Large\bfseries 1}_{3\times 3}
& \hspace*{-\arraycolsep}\vline\hspace*{-\arraycolsep} & \mbox{\normalfont\Large\bfseries 0}_{3 \times N} \\[8pt]
\hline
\mbox{\normalfont\Large\bfseries 0}_{N \times 3} & \hspace*{-\arraycolsep}\vline\hspace*{-\arraycolsep} &
\boldsymbol{C}_{\widetilde{\delta}}
\end{pmatrix}, \\
&\boldsymbol{\Delta} \equiv \begin{pmatrix}
\mbox{\normalfont\Large\bfseries 0}_{3 \times 3} & \hspace*{-\arraycolsep}\vline\hspace*{-\arraycolsep} &
\begin{matrix}
\boldsymbol{\Xi}_1^{\rm T}\\
\boldsymbol{\Xi}_2^{\rm T}\\
\boldsymbol{\Xi}_3^{\rm T}
\end{matrix} \\
\hline
\begin{matrix}
\boldsymbol{\Xi}_1 & \boldsymbol{\Xi}_2 & \boldsymbol{\Xi}_3
\end{matrix}
& \hspace*{-\arraycolsep}\vline\hspace*{-\arraycolsep} & \mbox{\normalfont\Large\bfseries 0}_{N \times N}
\end{pmatrix},
\end{align}
where $\boldsymbol{C}_{\widetilde{\delta}}$ is the $N\times N$ normalized covariance matrix of the $\widetilde{\delta}$'s, and $\boldsymbol{\Xi}_i, i = 1, 2, 3$, are the $N$-dimensional column vectors
\begin{equation}
\boldsymbol{\Xi}_i \equiv \begin{pmatrix} \braket{{\widetilde{v}_i(0)\widetilde{\delta}_1(\bs{x}_1)}}\\ \vdots\\ \braket{\widetilde{v}_i(0)\widetilde{\delta}_N(\bs{x}_N)} \end{pmatrix}.
\end{equation}
In words, the matrix $\boldsymbol{C}_0$ includes all correlations except for the velocity-$\delta$ correlations, which are included in $\boldsymbol{\Delta}$.
So far, these expressions are exact. We expect that, in general, $\boldsymbol{\Delta}$ is small for \emph{any} separation. Indeed, this is always true in the large-separation limit. Moreover, statistical isotropy implies that $\braket{\boldsymbol{v}(0) \delta(\boldsymbol{x})} \rightarrow 0$ when $\boldsymbol{x} \rightarrow 0$, since there is no non-null isotropic rank-1 tensor. This can be seen in Fig.~\ref{fig:v_corr}, where, as an example, we correlate $v_i$ with the monopole of the $\Theta^{(0)}$ line-of-sight source ${S}^{(0)}$ (c.f.~the first line of Eq.~\eqref{eq:S0}).
\begin{figure}[ht]
\includegraphics[trim={0 0 0 0},width=\columnwidth]{v_corr.pdf}
\caption{\label{fig:v_corr} Correlation function of the monopole terms of the line-of-sight source for $\Theta^{(0)}$ near recombination with the relative velocity between CDM and baryons at various redshifts. Namely, we plot the variance-normalized correlation function $\widetilde{c}_r$, where $\langle {\bf v}(z){S^{(0)}_0}(z'=1100)\rangle \equiv c_r(r,z)\hat{r}$, and the subscript implies the monopole terms only. Even at intermediate scales it remains well below unity, which justifies the expansion of the covariance matrix discussed in Appendix~\ref{app:vbcsq}.}
\end{figure}
We may therefore expand $\boldsymbol{C}^{-1}$ around $\boldsymbol{C}_0^{-1}$ to compute $\xi_N$. As we will see, it is necessary to include terms up to second order in $\boldsymbol{\Delta}$:
\begin{equation}
\boldsymbol{C}^{-1} = \boldsymbol{C}_0^{-1} - \boldsymbol{C}_0^{-1}\boldsymbol{\Delta} \boldsymbol{C}_0^{-1} + \boldsymbol{C}_0^{-1}\boldsymbol{\Delta} \boldsymbol{C}_0^{-1} \boldsymbol{\Delta} \boldsymbol{C}_0^{-1} + \mathcal{O}(\Delta^3),
\end{equation}
from which we get, at second order in $\Delta$,
\begin{eqnarray}
&&\exp\left[- \frac12 \boldsymbol{X}^{\rm T} \boldsymbol{C}^{-1} \boldsymbol{X} \right] = \Lambda \times \exp\left[- \frac12 \boldsymbol{X}^{\rm T} \boldsymbol{C}_0^{-1} \boldsymbol{X} \right] ,\nonumber\\
&&\Lambda \equiv 1+ \frac12 \boldsymbol{\tilde{X}}^{\rm T} \boldsymbol{\Delta} \boldsymbol{\tilde{X}} \nonumber\\
&&~~ + \frac18 \left(\boldsymbol{\tilde{X}}^{\rm T} \boldsymbol{\Delta} \boldsymbol{\tilde{X}} \right)^2 - \frac12 \boldsymbol{\tilde{X}}^{\rm T} \boldsymbol{\Delta}\boldsymbol{C}_0^{-1} \boldsymbol{\Delta} \boldsymbol{\tilde{X}},~~~~~~~~~
\end{eqnarray}
with $\boldsymbol{\tilde{X}} \equiv \boldsymbol{C}_0^{-1} \boldsymbol{X}$. With this approximation, we thus have
\begin{equation}
\xi_N \approx \sqrt{\frac{\det(\bf{C}_0)}{\det(\bf{C})}}\langle F(v) \delta_1 \cdots \delta_N \Lambda\rangle_0 \approx \langle F(v) \delta_1 \cdots \delta_N \Lambda\rangle_0, \label{eq:xi_N-approx}
\end{equation}
where the average $\langle \cdots \rangle_0$ is over the ``unperturbed" $(N+3)$-dimensional Gaussian distribution with covariance matrix $\bf{C}_0$, which is the product of two uncorrelated Gaussian distributions: an isotropic Gaussian distribution for $\boldsymbol{v}(\boldsymbol{0})$ and an $N$-dimensional Gaussian for $(\delta_1(\boldsymbol{x}_1), \cdots, \delta_N(\boldsymbol{x}_N))$, with covariance matrix $\boldsymbol{C}_\delta$. The second equality is valid to lowest order in $\Delta$.
Upon integrating over velocities, the contribution of the first term in $\Lambda$ (i.e.~1) vanishes, since $\braket{F(v)} = 0$. Let us now compute the remaining terms. First, note that
\begin{equation}
\tilde{\boldsymbol{X}}^{\rm T} \boldsymbol{\Delta} = \left( \boldsymbol{\delta}^{\rm T} \boldsymbol{C}_\delta^{-1} \boldsymbol{\Xi}_1, \boldsymbol{\delta}^{\rm T} \boldsymbol{C}_\delta^{-1} \boldsymbol{\Xi}_2, \boldsymbol{\delta}^{\rm T} \boldsymbol{C}_\delta^{-1} \boldsymbol{\Xi}_3, \frac{v_i}{\sigma_{1 \rm d}^2} \boldsymbol{\Xi}_i^{\rm T} \right),
\end{equation}
where the first three terms are scalars, and the last term contains an implicit sum over $i$, and is an $N$-dimensional vector. We therefore have
\begin{equation}
\Lambda_1 \equiv \boldsymbol{\tilde{X}}^{\rm T} \boldsymbol{\Delta} \boldsymbol{\tilde{X}} = \frac{2v_i}{\sigma_{\rm 1d}^2} \boldsymbol{\Xi}_i^{\rm T} \boldsymbol{C}_\delta^{-1} \boldsymbol{\delta} = \frac{2v_i}{\sigma_{\rm 1d}^2} \boldsymbol{\delta}^{\rm T} \boldsymbol{C}_{\delta}^{-1} \boldsymbol{\Xi}_i.
\end{equation}
The second term of $\Lambda$, proportional to $\Lambda_1$, is therefore linear in $\boldsymbol{v}$, so that $\langle F(v) \delta_1 \cdots \delta_N \Lambda_1\rangle_0 = 0$, since $\langle v_i F(v) \rangle = 0$ by isotropy.
We thus need only include the third and last terms in $\Lambda$, which are quadratic in $\Delta$.
Let us start with the third term, proportional to $\Lambda_1^2$. From our previous results, we have
\begin{equation}
\Lambda_1^2 = \frac4{\sigma_{\rm 1d}^4} v_i v_j \boldsymbol{\Xi}_i^{\rm T} \boldsymbol{C}_\delta^{-1} \boldsymbol{\delta} \boldsymbol{\delta}^{\rm T} \boldsymbol{C}_\delta^{-1}\boldsymbol{\Xi}_j,
\end{equation}
where repeated indices are summed over. Using $\langle F(v) v_i v_j \rangle_0 = \frac13 \delta_{ij} \langle v^2 F(v)\rangle_0$, we thus find
\begin{align}
\Big{\langle} F(v) \delta_1 \cdots \delta_N \Lambda_1^2 \Big{\rangle}_0 = \frac{4}{3 \sigma_{1d}^4} \langle v^2 F(v) \rangle \nonumber\\
\times \boldsymbol{\Xi}_i^{\rm T} \boldsymbol{C}_{\delta}^{-1} \langle \delta_1 \cdots \delta_N \boldsymbol{\delta} \boldsymbol{\delta}^{\rm T} \rangle_0 \boldsymbol{C}_{\delta}^{-1} \boldsymbol{\Xi}_i.
\end{align}
On the other hand, we have
\begin{eqnarray}
\Lambda_2 &\equiv& \boldsymbol{\tilde{X}}^{\rm T} \boldsymbol{\Delta}\boldsymbol{C}_0^{-1} \boldsymbol{\Delta} \boldsymbol{\tilde{X}} = \frac{v_i v_j}{\sigma_{1d}^4} \boldsymbol{\Xi}_i^{\rm T} \boldsymbol{C}_{\delta}^{-1} \boldsymbol{\Xi}_j \nonumber\\
&+& \textrm{terms independent of $v$}.
\end{eqnarray}
This implies
\begin{align}
\Big{\langle} F(v) \delta_1 \cdots \delta_N \Lambda_2 \Big{\rangle}_0 = \frac{\braket{v^2 F(v)}}{3 \sigma_{1d}^4} \langle \delta_1 \cdots \delta_N \rangle \boldsymbol{\Xi}_i^{\rm T} \boldsymbol{C}_{\delta}^{-1} \boldsymbol{\Xi}_i.
\end{align}
Therefore, combining terms, we obtain
\begin{eqnarray}
\xi_N(\bs{x}_1, \cdots, \bs{x}_N) &\approx& \frac18 \Big{\langle} F(v) \delta_1 \cdots \delta_N \Lambda_1^2 \Big{\rangle}_0 \nonumber\\
&-& \frac12 \Big{\langle} F(v) \delta_1 \cdots \delta_N \Lambda_2 \Big{\rangle}_0\nonumber\\
&=& \braket{v^2 F(v)}S(\bs{x}_1, \cdots, \bs{x}_N),
\end{eqnarray}
where we have defined
\begin{equation}
S \equiv \frac1{6 \sigma_{1d}^4} \boldsymbol{\Xi}_i^{\rm T} \boldsymbol{C}_{\delta}^{-1} \Big{\langle} \delta_1 \cdots \delta_N \left(\boldsymbol{\delta \delta}^{\rm T} - \boldsymbol{C}_\delta \right)\Big{\rangle} \boldsymbol{C}_{\delta}^{-1} \boldsymbol{\Xi}_i.
\end{equation}
We see that, in this approximation, the shape of the $(N+1)$-point correlation function is entirely determined by $S(\bs{x}_1, \cdots, \bs{x}_N)$, regardless of the function $F(v)$. The latter only affects the overall amplitude of the correlation function, and only through its moment $\langle v^2 F(v) \rangle$.
Therefore, to compute the $(N+1)$-point correlation function, one may substitute $F(v)$ with a simpler function $\tilde{F}(v)$, as long as $\langle v^2 \tilde{F}(v) \rangle = \langle v^2 F(v) \rangle$. The simplest such function is $\tilde{F}(v) \equiv b_F \left(\frac{v^2}{3 \sigma_{1d}^2} -1\right)$. Using the isotropic Gaussian moments $\braket{v^2} = 3 \sigma_{1d}^2$ and $\braket{v^4} = 15 \sigma_{1d}^4$, it is such that
\begin{equation}
\langle v^2 \tilde{F}(v) \rangle = 2 b_F \sigma_{1 \rm d}^2.
\end{equation}
Hence, we may use $\tilde{F}(v)$ instead of $F(v)$ provided the parameter $b_F$ is given by
\begin{equation}
b_F = \frac1{2 \sigma_{1d}^2} \braket{v^2 F(v)}.
\end{equation}
This result was proven in configuration space but also holds in the Fourier domain, where it is most useful: we have proven that, for any $(N+1)$-point function involving $F(v)$ and $N$ scalar fields (provided the $\delta$--$v_i$ correlations are sufficiently small at all separations), we may use $\tilde{F}(v) = b_F \left(\frac{v^2}{3 \sigma_{1d}^2} -1\right)$ in order to compute such correlation functions. Importantly, this means that the shape we derive for the trispectrum should be relatively insensitive to the details of accretion physics -- the shape retains some dependence on them only because, in practice, the bias parameter $b_F$ is redshift-dependent, in a way that depends on the details of accretion.
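This equivalence is straightforward to check with a short Monte Carlo; the sketch below, with an arbitrary illustrative choice of $F(v)$, verifies that $\tilde{F}$ reproduces the moment $\braket{v^2 F(v)}$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
sigma_1d = 1.0
v = np.linalg.norm(rng.normal(0.0, sigma_1d, size=(10**6, 3)), axis=1)

# Arbitrary even, zero-mean function of v (illustrative choice only):
F = np.exp(-0.5 * v**2)
F -= F.mean()

b_F = np.mean(v**2 * F) / (2.0 * sigma_1d**2)       # definition of b_F
F_tilde = b_F * (v**2 / (3.0 * sigma_1d**2) - 1.0)  # equivalent quadratic form

# The two moments agree up to Monte Carlo noise:
print(np.mean(v**2 * F), np.mean(v**2 * F_tilde))
\end{verbatim}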
\section{Numerical resolution and convergence}\label{app:conv}
In this appendix we describe our sampling of $\eta, k$ and $\ell$ integrals and sums.
Because each conformal-time integral relevant to the PBH-induced trispectrum includes the visibility function $g(\eta)$, we sample $\eta$ more finely during recombination. Starting from $z_{\rm max} = 1400$, we sample $\eta$ with logarithmic step size $\Delta\ln \eta = 10^{-3}$ until $z_{\rm rec} = 900$, after which we increase the step size to $\Delta \ln \eta = 2\times 10^{-2}$ until $z_{\rm re} = 10$, and then finally we sample linearly in $\eta$ until $z = 0$ with step size $\Delta\eta = 50$ Mpc.
For $k$ integrals, we compute quantities on a grid from $k_{\rm min}=10^{-5}$ Mpc$^{-1}$ up to a maximum wave number $k_{\rm max} = 5000 \eta_0^{-1}$ with a step size $\Delta k={\rm min}(\epsilon k, \kappa_0)$, where $\epsilon = 0.006$ and $\kappa_0 = 10^{-4}$ Mpc$^{-1}$, i.e.~the spacing transitions from logarithmic at low $k$ to linear at high $k$.
Finally, our $\ell$ sampling consists of the floors of an array of real $\ell$ values spaced logarithmically in $2 \leq \ell < 400$ with $\Delta \ln \ell = 0.0225$, and linearly in $400 \leq \ell < \ell_{\max} = 3000$ with $\Delta \ell = 19.5$. Note that these values were chosen to produce an $\ell$ sampling similar to the standard output of \texttt{CLASS}, with almost double the resolution.
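For reference, the following sketch constructs the $k$ and $\ell$ grids just described (the value of the conformal age $\eta_0$ is illustrative only):
\begin{verbatim}
import numpy as np

eta0 = 14_000.0  # conformal age in Mpc (illustrative value)

# k grid: Delta k = min(eps * k, kappa0), i.e. logarithmic spacing
# at low k, linear spacing at high k.
k_min, k_max, eps, kappa0 = 1e-5, 5000.0 / eta0, 0.006, 1e-4  # Mpc^-1
ks = [k_min]
while ks[-1] < k_max:
    ks.append(ks[-1] + min(eps * ks[-1], kappa0))
ks = np.array(ks)

# l grid: floors of log-spaced values for 2 <= l < 400, linear above.
l_log = np.floor(np.exp(np.arange(np.log(2.0), np.log(400.0), 0.0225)))
l_lin = np.floor(np.arange(400.0, 3000.0, 19.5))
ells = np.unique(np.concatenate([l_log, l_lin])).astype(int)
\end{verbatim}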
We reproduce the standard CMB temperature angular power spectrum, Eq.~\eqref{eq:C_l}, and compare to the output of \texttt{CLASS} \citep{CLASS}. We find a sub-percent fractional difference for all $\ell <\ell_{\rm max}$.
We also recompute all results with increased resolution, prescribed via
\begin{align}
(\Delta\ln \eta,\,\Delta\eta,\,k_{\rm max}&,\epsilon, \kappa_0)\rightarrow\nonumber\\
&\left(\frac{2}{3}\Delta\ln \eta,\,\frac{2}{3}\Delta\eta,\,\frac{3}{2}k_{\rm max},\frac{2}{3}\epsilon, \frac{2}{3}\kappa_0\right).
\end{align}
We find that, for both the power spectrum and trispectrum calculations, the results change only at the sub-percent level, far below the theoretical uncertainty of the problem at hand.
Lastly, the trispectrum results depend on the intermediate quantity $\mathcal{J}_\ell$, given in Eq.~\eqref{eq:mathcalJ_main} as an infinite double sum. We truncate this sum at a maximum $\ell_2 = \ell_{\rm cut}$ (which automatically truncates the $\ell_1$ sum due to the triangle inequality). We find that our trispectrum results are converged to within $0.1\%$ by $\ell_{\rm cut} = 50$.
\section{Spin-weighted spherical harmonics}\label{app:spin}
Spin-weighted spherical harmonics are related to Wigner $D$-matrices. They become regular spherical harmonics when their spin is zero, $(_0Y_{\ell m})=Y_{\ell m}$, and inherit similar orthogonality and completeness relations. They have the familiar property $(_sY_{\ell m})^*=(-1)^{s+m}(_{-s}Y_{\ell -m})$, as well as a product rule similar to the Gaunt relation involving Wigner 3$j$ symbols \citep{Qtheory},
\begin{align}\label{eq:product_rule}
~_{s_1}Y_{\ell_1 m_1}(\hat{n}) ~_{s_2}Y_{\ell_2 m_2}(\hat{n}) =\sum_{s_3,\ell_3m_3}g^{-s_1(-s_2)(-s_3)}_{\ell_1\ell_2\ell_3}\nonumber\\
\times\threej{\ell_1}{\ell_2}{\ell_3}{m_1}{m_2}{m_3} ~_{s_3}Y^*_{\ell_3 m_3}(\hat{n}),
\end{align}
where the $g$-symbols are defined by
\begin{align}\label{eq:g_sym}
g_{\ell_1 \ell_2 \ell_3}^{s_1 s_2 s_3} &\equiv \sqrt{\frac{(2 \ell_1 +1)(2 \ell_2 +1)(2 \ell_3 +1)}{4 \pi}} \threej{\ell_1}{\ell_2}{\ell_3}{s_1}{s_2}{s_3},\nonumber\\
g_{\ell_1 \ell_2 \ell_3}&\equiv g_{\ell_1 \ell_2 \ell_3}^{0\, 0\, 0}.
\end{align}
For shorthand we also define the Gaunt coefficient,
\begin{align}\label{def:G}
\mathcal{G}^{\ell_1\ell_2\ell_3}_{m_1 m_2m_3}\equiv\,& g_{\ell_1 \ell_2 \ell_3}
\begin{pmatrix}
\ell_1 & \ell_2 & \ell_3\\
m_1 & m_2 & m_3
\end{pmatrix}.
\end{align}
For the Wigner $3j$ symbols to be nonzero, the $\ell$'s in the first row must be non-negative and obey the triangle inequality. Likewise, the azimuthal modes in the bottom row must sum to zero, $m_1 + m_2 + m_3 = 0$, and each must satisfy $-\ell_i\le m_i\le \ell_i$.
The Wigner 3$j$ symbols also have an orthogonality condition that we utilize,
\begin{align}\label{eq:wig_orth}
\sum_{m_1 m_2} \threej{\ell_1}{\ell_2}{\ell_3}{m_1}{m_2}{m_3}& \threej{\ell_1}{\ell_2}{\ell_3'}{m_1}{m_2}{m_3'} =\nonumber\\
&\quad\quad\frac{\delta_{\ell_3 \ell_3'} \delta_{m_3 m_3'}}{2 \ell_3 + 1} \{ \ell_1 \ell_2 \ell_3 \},
\end{align}
where $\{\ell_1 \ell_2 \ell_3 \}$ is 1 if the three $\ell$'s satisfy the triangle inequality and 0 otherwise.
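These selection rules and the orthogonality condition are easily verified symbolically; a minimal check of Eq.~\eqref{eq:wig_orth} using the \texttt{sympy} implementation of the Wigner 3$j$ symbols (our choice of tool, for illustration only):
\begin{verbatim}
from sympy.physics.wigner import wigner_3j

# Check Eq. (wig_orth) for l3 = l3', m3 = m3': the sum over m1, m2
# (with m2 = -m3 - m1 enforced by the selection rule) gives 1/(2 l3 + 1).
l1, l2, l3, m3 = 2, 3, 4, 1
total = sum(wigner_3j(l1, l2, l3, m1, -m3 - m1, m3) ** 2
            for m1 in range(-l1, l1 + 1)
            if abs(m3 + m1) <= l2)
print(total, 1 / (2 * l3 + 1))  # both equal 1/9
\end{verbatim}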
When summing over the azimuthal modes of the product of spin-weighted spherical harmonics, we have,
\begin{align}\label{eq:d_sum}
\sum_m(_sY_{\ell m}(\hat{n}))(_{s'}Y_{\ell m}(\hat{n}'))^*=(-1)^s\frac{2\ell+1}{4\pi}d^\ell_{ss'}\left(\mu\right),
\end{align}
where we have introduced the Wigner small $d$-functions and $\mu\equiv \hat{n}\cdot\hat{n}'$ \citep{smith15a}. If $s=s'=0$, then the $d$-functions reduce to normal Legendre polynomials. These $d$-functions themselves satisfy the orthogonality condition,
\begin{align}\label{eq:d_orth}
\int^1_{-1}d\mu~ d^{\ell_{1}}_{ss'}(\mu)d^{\ell_{2}}_{ss'}(\mu)=\frac{2}{2\ell_1+1}\delta_{\ell_1\ell_2},
\end{align}
and are equipped with the identity,
\begin{align}
d^\ell_{-s(-s')}(\mu)=d^\ell_{s's}(\mu)=(-1)^{s+s'}d^\ell_{ss'}(\mu).
\end{align}
They also have the property that their product can be expanded via \cite{Qtheory},
\begin{align}\label{eq:d_prod}
&d^{\ell_{1}}_{s_1s_1'}(\mu)d^{\ell_{2}}_{s_2s_2'}(\mu)=\nonumber\\
&\quad\quad\sum_{\ell,s,s'}(2\ell+1)\threej{\ell_1}{\ell_2}{\ell}{s_1}{s_2}{s} d^{\ell}_{s s'}(\mu){\threej{\ell_1}{\ell_2}{\ell}{s_1'}{s_2'}{s'}}.
\end{align}
\section{Sums of products of \texorpdfstring{$Q$}{Q} and \texorpdfstring{$\widetilde{Q}$}{Q~} symbols}
\label{app:Q-sums}
Computing the multipoles of the nonlinear perturbation of temperature anisotropy introduces integrals of products of four (spin-weighted) spherical harmonics, denoted as the $Q$-symbols in Eq.~\eqref{eq:Q_sym} and \eqref{eq:Qt_sym}. In this appendix we lay out the math to simplify the sums and products of these $Q$-symbols necessary for the first-order trispectrum calculations. We borrow the tools introduced in Appendix~\ref{app:spin}.
Let us start with $(\mathcal{Q}^2)_{\ell_1 \ell_2 \ell_3 \ell_4}$, defined in Eq.~\eqref{eq:Qstart}. Given the definition of $Q_{\ell_1 \ell_2 \ell_3 \ell_4}^{m_1 m_2 m_3 m_4}$, it is given by
\begin{equation}
(\mathcal{Q}^2)_{\ell_1 \ell_2 \ell_3 \ell_4} = \sum_{m's} \int d^2 \hat{n} \int d^2 \hat{n}' \prod_i Y_{\ell_i m_i}(\hat{n})Y^*_{\ell_i m_i}(\hat{n}').
\end{equation}
Let us now use Eq.~\eqref{eq:d_sum}, which reduces to Legendre polynomials in this case:
\begin{equation}
\sum_{m_i} Y_{\ell_i m_i}(\hat{n})Y^*_{\ell_i m_i}(\hat{n}') = \frac1{4 \pi} (2 \ell_i +1) P_{\ell_i}(\mu),
\end{equation}
where $\mu\equiv \hat{n}\cdot\hat{n}'$.
We may carry out one of the angular integrals and get
\begin{align}
(\mathcal{Q}^2)_{\ell_1 \ell_2 \ell_3 \ell_4} &= \frac1{(4 \pi)^2} \frac12 \prod_{i} (2 \ell_i + 1)\nonumber\\
&\times\int_{-1}^1 d \mu~ P_{\ell_1}(\mu) P_{\ell_2}(\mu) P_{\ell_3}(\mu) P_{\ell_4}(\mu).
\end{align}
Now, recall the product rule for Wigner $d$-functions (or Legendre polynomials in this case), Eq~\eqref{eq:d_prod},
\begin{equation}
P_{\ell_1} P_{\ell_2} = \sum_{\ell} (2 \ell + 1) {\threej{\ell_1}{\ell_2}{\ell}{0}{0}{0}}^2 P_{\ell}.
\end{equation}
Therefore, using the orthogonality relation Eq.~\eqref{eq:d_orth}, we obtain
\begin{align}
(\mathcal{Q}^2)_{\ell_1 \ell_2 \ell_3 \ell_4} =& \frac1{(4 \pi)^2} \prod_{i} (2 \ell_i + 1) \sum_{\ell} (2 \ell +1)\nonumber\\
&\quad\quad\times{\threej{\ell_1}{\ell_2}{\ell}{0}{0}{0}}^2{\threej{\ell}{\ell_3}{\ell_4}{0}{0}{0}}^2 \nonumber\\
=& \sum_{\ell} \frac1{2 \ell +1} (g_{\ell_1 \ell_2 \ell})^2 (g_{\ell \ell_3 \ell_4})^2,
\end{align}
with a result that should be independent of the grouping of the two pairs of $\ell$'s.
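This grouping independence provides a useful numerical consistency check; a short sketch, again using \texttt{sympy} for the 3-J symbols:
\begin{verbatim}
import numpy as np
from sympy.physics.wigner import wigner_3j

def g(l1, l2, l3):
    # g-symbol of Eq. (g_sym) with zero spins.
    w = float(wigner_3j(l1, l2, l3, 0, 0, 0))
    return np.sqrt((2*l1 + 1) * (2*l2 + 1) * (2*l3 + 1) / (4*np.pi)) * w

def Q2(l1, l2, l3, l4):
    # Single-sum reduction of (Q^2)_{l1 l2 l3 l4} derived above.
    return sum(g(l1, l2, l)**2 * g(l, l3, l4)**2 / (2*l + 1)
               for l in range(abs(l1 - l2), l1 + l2 + 1))

# The result is independent of how the two pairs of l's are grouped:
print(Q2(2, 3, 4, 5), Q2(2, 4, 3, 5), Q2(2, 5, 3, 4))
\end{verbatim}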
Generalizing this to sums involving $\widetilde{Q}$ is relatively straightforward, but requires the general Wigner small $d$-functions. We note that the angular derivatives present in $\widetilde{Q}$ can be expressed in terms of spin-1 spin-weighted spherical harmonics \citep{smith15a}. That is,
\begin{align}
\boldsymbol{\nabla}_{\hat{n}}Y_{\ell_1 m_1} \cdot \boldsymbol{\nabla}_{\hat{n}} Y_{\ell_2 m_2} &= -\frac12 \sqrt{\ell_1(\ell_1 +1) \ell_2 (\ell_2 + 1)}\nonumber\\
\quad &\times \sum_{s = \pm 1} ~_s Y_{\ell_1 m_1} ~_{-s}Y_{\ell_2 m_2},
\end{align}
such that,
\begin{align}\label{eq:Qtil_sym}
&\widetilde{Q}_{\ell_1 \ell_2, \ell_3 \ell_4}^{m_1 m_2, m_3 m_4} = - \frac12 \sqrt{\ell_1(\ell_1 +1) \ell_2 (\ell_2 + 1)}\nonumber\\
&\quad\sum_{s = \pm 1}\int d^2 \hat{n} ~_sY^*_{\ell_1 m_1}(\hat{n})~_{-s}Y^*_{\ell_2 m_2}(\hat{n}) Y^*_{\ell_3 m_3}(\hat{n}) Y_{\ell_4 m_4}^*(\hat{n}).
\end{align}
For short we define
\begin{equation}
\widetilde{g}_{\ell_1 \ell_2, \ell} \equiv \sum_{s = \pm 1} g_{\ell_1, \ell_2, \ell}^{s, -s, 0},
\end{equation}
which is symmetric in its first two indices. Using the properties of $d$-functions outlined in Appendix~\ref{app:spin}, we obtain, for Eqs.~\eqref{eq:Qstart}--\eqref{eq:Qend},
\begin{align}
&(\mathcal{Q \widetilde{Q}})_{\ell_1 \ell_2, \ell_3 \ell_4} = - \frac12 \sqrt{\ell_1 (\ell_1 +1) \ell_2 (\ell_2 +1)} \nonumber\\
&\quad\quad\times\sum_{\ell}\frac1{2 \ell +1} g_{\ell_1 \ell_2 \ell}~\widetilde{g}_{\ell_1 \ell_2, \ell} (g_{\ell_3 \ell_4 \ell})^2, \\
&(\mathcal{\widetilde{Q}}^2)_{\ell_1 \ell_2, \ell_3 \ell_4} = \frac14 \ell_1 (\ell_1 +1) \ell_2 (\ell_2 +1) \nonumber\\
&\quad\quad\times\sum_{\ell} \frac1{2 \ell +1} (\widetilde{g}_{\ell_1 \ell_2, \ell})^2 (g_{\ell \ell_3 \ell_4})^2, \\
&(\mathcal{\widetilde{Q} \widetilde{Q}}^{\rm T})_{\ell_1 \ell_2, \ell_3 \ell_4} = \frac14 \sqrt{\ell_1 (\ell_1 +1) \ell_2 (\ell_2 +1)} \nonumber\\
&\quad\quad\times\sqrt{\ell_3 (\ell_3 +1) \ell_4 (\ell_4 +1)} \nonumber\\
&\quad\quad\times\sum_{\ell} \frac1{2 \ell +1} (g_{\ell_1 \ell_2 \ell} ~\widetilde{g}_{\ell_1 \ell_2, \ell}) (g_{\ell_3 \ell_4 \ell} ~\widetilde{g}_{\ell_3 \ell_4, \ell}), \\
&(\mathcal{\widetilde{Q} \widetilde{Q}}^{\rm S})_{\ell_1, \ell_2 \ell_3, \ell_4} = - \frac14 \ell_1 (\ell_1 +1) \sqrt{\ell_2(\ell_2 +1) \ell_3 (\ell_3 + 1)} \nonumber\\
&\quad\times \sum_{s = \pm 1} \sum_{\ell} \frac1{2 \ell +1}\widetilde{g}_{\ell_1 \ell_2, \ell} g_{\ell, \ell_1, \ell_2}^{s, -s, 0} ~g_{\ell, \ell_3 ,\ell_4}^{-s, s, 0}~g_{\ell \ell_3 \ell_4}.
\end{align}
\section{Perturbed temperature anisotropy auto-power-spectrum}\label{app:auto}
In this paper we found that the temperature-only trispectrum induced by accreting PBHs was not as sensitive as we expected a priori. Additionally, the amplitude of the power spectrum perturbation sourced by inhomogeneities in the free-electron fraction, $C^{(1)}_{\ell,\rm inh} \equiv 2 \langle \Theta^{(1)}_{\ell m, \rm inh} \Theta^{(0)*}_{\ell m} \rangle$ is up to two orders of magnitude smaller than its counterpart $C^{(1)}_{\ell,\rm hom} \equiv 2 \langle \Theta^{(1)}_{\ell m, \rm hom} \Theta^{(0)*}_{\ell m} \rangle$, as revealed in Fig.~\ref{fig:AK17_jen}. In this appendix, we show this is due to a combination of a poor correlation between $\Theta^{(1)}_{\rm inh}$ and the standard CMB temperature anisotropy $\Theta^{(0)}$, and a suppression of the characteristic amplitude of $\Theta^{(1)}_{\rm inh}$ itself, relative to its counterpart $\Theta^{(1)}_{\rm hom}$. We do so by computing and comparing the \textit{auto}-power-spectra of $\Theta^{(1)}_{\rm hom}$ and $\Theta^{(1)}_{\rm inh}$. The results are shown in Fig.~\ref{fig:auto}.
From Eq.~\eqref{eq:Theta1_lm_hom}, the auto-power-spectrum of $\Theta^{(1)}_{\rm hom}$ is trivially
\begin{align}\label{eq:homo_auto}
C_{\ell,\rm hom}^{(11)}=4\pi\int Dk~ P_\zeta(k) \left[\Delta_{\ell, \rm hom}^{(1) \rm d}(k)\right]^2,
\end{align}
where $\langle\Theta^{(1)}_{\ell m,\rm hom}\Theta^{*(1)}_{\ell' m',\rm hom}\rangle\equiv \delta_{\ell\ell'}\delta_{mm'}C_{\ell,\rm hom}^{(11)}$.
The auto-power-spectrum of the inhomogeneous-ionization counterpart, $\langle\Theta^{(1)}_{\ell m,\rm inh}\Theta^{*(1)}_{\ell' m',\rm inh}\rangle\equiv \delta_{\ell\ell'}\delta_{mm'}C_{\ell,\rm inh}^{(11)}$, is much more involved. In what follows, we use the shorthand notation
\begin{align}
\int D(k_1 k_2 k_3)P_\zeta(k_1)P_\zeta(k_2)P_\zeta(k_3)\equiv \int \mathcal{D}^3\!P.
\end{align}
We begin similarly as we did for the trispectrum calculation in Sec.~\ref{sec:trispec}. Starting with Eq.~\eqref{eq:T(1)-def}, using Wick's theorem, and exploiting the fact that $T^{(1)}_{\ell m}(\boldsymbol{k}_1, - \boldsymbol{k}_1, \boldsymbol{k}_3) = 0$ and $T^{(1)}_{\ell m}$ is symmetric in its first two ${\bm k}$ arguments, we find,
\begin{align}
\langle&\Theta_{\ell m, \rm inh}^{(1)}\Theta_{\ell' m', \rm inh}^{*(1)}\rangle =\nonumber\\
&\quad\quad f_{\rm pbh}^2\int \mathcal{D}^3\!P\left\{4T_{\ell m}^{(1)}(\boldsymbol{k}_1, \boldsymbol{k}_2, -\boldsymbol{k}_2)T_{\ell' m'}^{*(1)}(\boldsymbol{k}_1, \boldsymbol{k}_3, -\boldsymbol{k}_3)\right.\nonumber\\
&\quad\quad\quad\quad+2T_{\ell m}^{(1)}(\boldsymbol{k}_1, \boldsymbol{k}_2, \boldsymbol{k}_3)T_{\ell' m'}^{*(1)}(\boldsymbol{k}_1, \boldsymbol{k}_2, \boldsymbol{k}_3)\nonumber\\
&\quad\quad\quad\quad+4\left.T_{\ell m}^{(1)}(\boldsymbol{k}_1, \boldsymbol{k}_2, \boldsymbol{k}_3)T_{\ell' m'}^{*(1)}(\boldsymbol{k}_3, \boldsymbol{k}_2, \boldsymbol{k}_1)\right\}.
\end{align}
The first term can be computed with the same method as for the inhomogeneous power spectrum in Sec.~\ref{sec:powerspec}. That is,
\begin{align}
\int \mathcal{D}^3\!P &~T_{\ell m}^{(1)}(\boldsymbol{k}_1, \boldsymbol{k}_2, -\boldsymbol{k}_2)T_{\ell' m'}^{*(1)}(\boldsymbol{k}_1, \boldsymbol{k}_3, -\boldsymbol{k}_3)=\nonumber\\
&\quad\delta_{\ell\ell'}\delta_{m m'}\frac{16\pi}9\int_0^{\eta_0} d \eta\int_0^{\eta_0} d \eta' g(\eta)g(\eta')\nonumber\\
&\quad \quad \quad\quad\quad\quad\quad\quad\quad\times\mathcal{A}_\ell(\eta,\eta')\gamma(\eta)\gamma(\eta'),
\end{align}
where $\gamma(\eta)$ is defined in Eq.~\eqref{eq:gamma} and,
\begin{align}
\mathcal{A}_\ell(\eta,\eta')&\equiv \int\! Dk P_\zeta(k) \Delta_e(\eta, k) j_{\ell}'(k\chi)\Delta_e(\eta', k) j_{\ell}'(k\chi').
\end{align}
The remaining two terms are not as simple. Using Eq.~\eqref{eq:T_multipoles} and integrating over all three $\hat{k}$'s (absorbing the resulting factors of $4\pi$ for notational convenience), the second term can be written as,
\begin{align}
&\int \mathcal{D}^3\!P~T_{\ell m}^{(1)}(\boldsymbol{k}_1, \boldsymbol{k}_2, \boldsymbol{k}_3)T_{\ell' m'}^{*(1)}(\boldsymbol{k}_1, \boldsymbol{k}_2, \boldsymbol{k}_3)=\nonumber\\
&\quad\quad (4\pi)^3\sum_{m_i,\ell_i}\int \mathcal{D}^3\!P ~T_{\ell_1 \ell_2\ell_3; \ell}^{m_1 m_2 m_3; m}T_{\ell_1 \ell_2\ell_3; \ell'}^{*m_1 m_2 m_3; m'},
\end{align}
where we have suppressed the $k$ dependence in $T_{\ell_1 \ell_2 \ell_3; \ell_4}^{m_1 m_2 m_3; m_4}$, defined in Eq.~\eqref{eq:Tmult-final}, and the sum is over $\ell_i$ and $m_i$, with $i = 1, 2, 3$. Without having to expand the terms with spherical harmonics, we can exploit the fact that the Universe is statistically isotropic and instead compute,
\begin{align}
\langle\Theta_{\ell m, \rm inh}^{(1)}\Theta_{\ell' m', \rm inh}^{*(1)}\rangle=\frac{\delta_{\ell\ell'}\delta_{mm'}}{2\ell+1}\sum_{m''}\langle\Theta_{\ell m'', \rm inh}^{(1)}\Theta_{\ell m'', \rm inh}^{*(1)}\rangle.
\end{align}
This enables us to use the machinery we derived in Appendix~\ref{app:Q-sums} and write the second term as,
\begin{align}
\int \mathcal{D}^3\!P~T_{\ell m}^{(1)}(\boldsymbol{k}_1, \boldsymbol{k}_2,& \boldsymbol{k}_3)T_{\ell' m'}^{*(1)}(\boldsymbol{k}_1, \boldsymbol{k}_2, \boldsymbol{k}_3)=\nonumber\\
(4\pi)^3\frac{\delta_{\ell\ell'}\delta_{mm'}}{2\ell+1}\!\!\sum_{m_i,\ell_i}\!&\int \!\!\mathcal{D}^3\!P\!\left\{\vphantom{\widetilde{Q}^2}A^2_{\ell_1 \ell_2, \ell_3}(k_1, k_2, k_3) (\mathcal{Q}^2)_{\ell_1 \ell_2 \ell_3 \ell}\right.\nonumber\\
& + 2AB_{\ell_1 \ell_2, \ell_3}(k_1, k_2, k_3)(\mathcal{Q \widetilde{Q}})_{\ell_1 \ell_2, \ell_3 \ell}\nonumber\\
&\left.+B^2_{\ell_1 \ell_2, \ell_3}(k_1, k_2, k_3)(\mathcal{\widetilde{Q}}^2)_{\ell_1 \ell_2, \ell_3 \ell}\right\}\nonumber.
\end{align}
We apply the same logic to the third term. We then take advantage of the factorized forms of Eqs.~\eqref{eq:Tmult-final}-\eqref{eq:B} to write a computationally manageable final solution as,
\begin{align}\label{eq:inh_auto}
C_{\ell,\rm inh}^{(11)}=f_{\rm pbh}^2\Bigr[4\mathfrak{A}_\ell+\sum_{\ell_1\ell_2\ell_3}\left(2\mathfrak{B}_{\ell_1\ell_2,\ell_3;\ell}+4\mathfrak{C}_{\ell_1,\ell_2,\ell_3;\ell}\right)\Bigr],
\end{align}
where
\begin{align}
&\mathfrak{A}_\ell\equiv\!\frac{4\pi}9\int_0^{\eta_0}\!\!\! d \eta\int_0^{\eta_0} \!\!\!d \eta' g(\eta)g(\eta')\mathcal{A}_\ell(\eta,\eta')\gamma(\eta)\gamma(\eta'),\\
&\mathfrak{B}_{\ell_1\ell_2,\ell_3;\ell}\equiv\frac{(4\pi)^3}{2\ell+1}\int d \eta\int d \eta' g(\eta)g(\eta')\nonumber\\
&\times\left\{\mathcal{A}_{\ell_1}(\eta,\eta')\mathcal{A}_{\ell_2}(\eta,\eta') J_{\ell_3}(\eta,\eta')(\mathcal{Q}^2)_{\ell_1 \ell_2 \ell_3 \ell}\right.\nonumber\\
&\quad+2\mathcal{K}_{\ell_1}(\eta,\eta')\mathcal{K}_{\ell_2}(\eta,\eta') J_{\ell_3}(\eta,\eta')(\mathcal{Q \widetilde{Q}})_{\ell_1 \ell_2, \ell_3 \ell}\nonumber\\
&\quad\left.+\mathcal{B}_{\ell_1}(\eta,\eta')\mathcal{B}_{\ell_2}(\eta,\eta') J_{\ell_3}(\eta,\eta')(\mathcal{\widetilde{Q}}^2)_{\ell_1 \ell_2, \ell_3 \ell}\right\},
\end{align}
\begin{align}
&\mathfrak{C}_{\ell_1,\ell_2,\ell_3;\ell}\equiv\frac{(4\pi)^3}{2\ell+1}\int d \eta \int d\eta' g(\eta)g(\eta')\nonumber\\
&\times\left\{\widetilde{\mathcal{A}}_{\ell_1}(\eta,\eta')\widetilde{\mathcal{A}}_{\ell_2} (\eta',\eta)\mathcal{A}_{\ell_3}(\eta,\eta')(\mathcal{Q}^2)_{\ell_1 \ell_2 \ell_3 \ell}\right.\nonumber\\
&\quad\!\!\!+2\widetilde{\mathcal{A}}_{\ell_1}(\eta,\eta'){\mathcal{K}}_{\ell_2}(\eta,\eta') \widetilde{\mathcal{B}}_{\ell_3}(\eta',\eta)(\mathcal{Q \widetilde{Q}})_{\ell_3 \ell_2, \ell_1 \ell}\nonumber\\
&\quad\!\!\!\left.+\widetilde{\mathcal{B}}_{\ell_1}(\eta,\eta')\widetilde{\mathcal{B}}_{\ell_2}(\eta',\eta) \mathcal{B}_{\ell_3}(\eta,\eta')(\mathcal{\widetilde{Q} \widetilde{Q}}^{\rm S})_{\ell_1, \ell_2 \ell_3, \ell} \right\}\!,
\end{align}
with
\begin{align}
\mathcal{B}_{\ell}(\eta,\eta')&\equiv\!\int\!\! D(k)P_\zeta(k) \frac{j_{\ell}(\chi k)}{\chi k}\Delta_e (\eta, k)\frac{j_{\ell}(\chi' k)}{\chi' k}\Delta_e (\eta', k),\nonumber\\
\mathcal{K}_{\ell}(\eta,\eta')&\equiv\!\int\!\! D(k) P_\zeta(k)j_{\ell}'(\chi k)\Delta_e (\eta, k)\frac{j_{\ell}(\chi' k)}{\chi' k}\Delta_e (\eta', k),\nonumber\\
J_{\ell}(\eta,\eta')&\equiv\!\int\!\! D(k) P_\zeta(k)\mathcal{J}_{\ell}(\eta, k)\mathcal{J}_{\ell}(\eta', k),\nonumber\\
\widetilde{\mathcal{A}}_{\ell}(\eta,\eta')&\equiv\!\int\!\! D(k) P_\zeta(k)j_{\ell}'(\chi k)\Delta_e (\eta, k)\mathcal{J}_{\ell}(\eta', k),\nonumber\\
\widetilde{\mathcal{B}}_{\ell}(\eta,\eta')&\equiv\!\int\!\! D(k)P_\zeta(k) \frac{j_{\ell}(\chi k)}{\chi k}\Delta_e (\eta, k)\mathcal{J}_{\ell}(\eta', k).
\end{align}
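Although we do not rely on any particular implementation here, each of these kernels is a one-dimensional $k$-integral of spherical Bessel functions against transfer functions, and can be tabulated on a grid of $(\eta,\eta')$. As a purely illustrative sketch, $\mathcal{B}_\ell$ could be evaluated as below; the measure $D(k)$ (taken logarithmic here), the primordial spectrum parameters, and the toy transfer function \texttt{Delta\_e} are placeholder assumptions rather than the actual inputs of our computation.
\begin{verbatim}
import numpy as np
from scipy.special import spherical_jn

# Placeholder assumptions (NOT the paper's actual inputs): a logarithmic
# measure D(k) dk = dk/k, a nearly scale-invariant curvature spectrum,
# and a toy free-electron transfer function Delta_e with an arbitrary
# damping scale. Real inputs would come from a Boltzmann code.
A_s, n_s, k_piv = 2.1e-9, 0.965, 0.05           # assumed spectrum params
P_zeta  = lambda k: A_s * (k / k_piv)**(n_s - 1.0)
Delta_e = lambda eta, k: np.exp(-(k / 0.2)**2)  # toy transfer function

def B_ell(ell, chi, chi_p, eta, eta_p, nk=4000):
    """Tabulate B_ell(eta,eta') = int D(k) P_zeta(k)
    [j_ell(chi k)/(chi k)] Delta_e(eta,k)
    [j_ell(chi' k)/(chi' k)] Delta_e(eta',k)."""
    k   = np.logspace(-4, 0, nk)                # k grid in Mpc^-1
    jl  = spherical_jn(ell, chi   * k) / (chi   * k)
    jlp = spherical_jn(ell, chi_p * k) / (chi_p * k)
    f   = P_zeta(k) * jl * Delta_e(eta, k) * jlp * Delta_e(eta_p, k)
    return np.trapz(f / k, k)                   # D(k) dk = dk/k assumed

# chi, chi' are the distances to eta, eta' (placeholder values below)
print(B_ell(ell=10, chi=1.40e4, chi_p=1.39e4, eta=200.0, eta_p=210.0))
\end{verbatim}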
We plot the results, Eq.~\eqref{eq:inh_auto} and Eq.~\eqref{eq:homo_auto}, in Fig.~\ref{fig:auto}. We see there is ultimately an order of magnitude difference between the amplitudes of the inhomogeneous and homogeneous temperature perturbation auto-power-spectra. We also see that the correlation between our newly computed inhomogeneous temperature perturbation and the standard CMB temperature anisotropy is very poor. These two facts are likely the culprits behind both the unexpected sensitivity of the trispectrum forecast \textit{and} the two orders of magnitude difference in the power spectra amplitudes we observe in Fig.~\ref{fig:AK17_jen}. Additionally, it can be seen that, were it not for the very small correlation at large $\ell$, the scale suppression due to photon propagation would have a much bigger effect on both the inhomogeneous power spectrum and trispectrum results.
\begin{figure}[htb]
\includegraphics[trim={0cm 1cm 0.5cm .5cm},width=.9\columnwidth]{auto_and_corr.pdf}
\caption{\label{fig:auto} Top: auto-power-spectrum of the perturbed temperature anisotropy due to accreting PBHs, defined as $\langle\Theta^{(1)}_{\ell m}\Theta^{*(1)}_{\ell' m'}\rangle\equiv \delta_{\ell\ell'}\delta_{mm'}C_{\ell}^{(11)}$, normalized by the standard angular power spectrum. Bottom: correlation coefficients between $\Theta^{(1)}$ and $\Theta^{(0)}$. In both cases we assume 100-$M_\odot$ PBHs comprising all the dark matter, but the qualitative trends are general for all PBH masses. The suppressed amplitude of the auto-power-spectrum of $\Theta^{(1)}_{\rm inh}$ (purple) compared to $\Theta^{(1)}_{\rm hom}$ (red), together with the poor correlation, explains the large difference in amplitude for the computed power spectra in Sec.~\ref{sec:powerspec}.}
\end{figure}
\section{Redshift dependence of the temperature trispectrum induced by accreting PBHs}\label{app:slope}
In this appendix we inspect the redshift dependence of the temperature trispectrum from accreting PBHs, by reproducing the forecast analysis of Sec.~\ref{sec:forecast}, but artificially imposing that the free-electron fraction perturbation vanishes outside of redshift bins of size $\Delta z = 50$. In a given redshift bin, we compute the signal-to-noise, $S/N$, assuming a Planck-like experiment for both the temperature-only trispectrum and power spectrum. That is, for the trispectrum we compute $(S/N)_{\rm tri}\equiv 1/\sigma_{f_{\rm pbh}}$ from Eq.~\eqref{eq:inv_fpbh}. For the power spectrum we compute the similar forecasted quantity,
\begin{align}
(S/N)_{\rm ps}\equiv \left[\frac{f_{\rm sky}}{2}\sum_\ell (2\ell+1)\left(\frac{C_{\ell}^{(1)}}{C_\ell'}\right)^2\right]^{1/2},
\end{align}
where $C_{\ell}^{(1)}=C_{\ell,\rm hom}^{(1)}+C_{\ell, \rm inh}^{(1)}$ is the total (cf. Fig.~\ref{fig:AK17_jen}) perturbed $TT$ power spectrum due to accreting PBHs (considering only the ``direct'' term discussed in Sec.~\ref{sec:homo_de}). Note that a rigorous treatment would properly account for correlations between different redshift bins, and involve a principal component analysis. Still, this simple estimation of $S/N$ should give us a reasonable qualitative understanding of the redshift dependence of the signal.
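For concreteness, the sum in this estimator is straightforward to evaluate; a minimal sketch, with placeholder spectra and an assumed $f_{\rm sky}$, is:
\begin{verbatim}
import numpy as np

def sn_powerspectrum(C1, Cprime, f_sky=0.7, ell_min=2):
    """(S/N)_ps = sqrt( f_sky/2 * sum_ell (2l+1) (C_l^(1)/C_l')^2 ).
    C1, Cprime are arrays indexed from ell_min; Cprime is the total
    (signal plus noise) spectrum. f_sky = 0.7 is an assumed value."""
    ells = np.arange(ell_min, ell_min + len(C1))
    return np.sqrt(0.5 * f_sky * np.sum((2*ells + 1) * (C1/Cprime)**2))

# Toy usage with made-up spectra (placeholders, not the paper's values):
ells = np.arange(2, 2001)
Cp = 1.0 / ells**2          # stand-in total TT spectrum
C1 = 1e-3 * Cp              # stand-in perturbed spectrum
print(sn_powerspectrum(C1, Cp))
\end{verbatim}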
We compare the two $S/N$ as a function of redshift in Fig.~\ref{fig:SN} for 100-$M_\odot$ PBHs. We see that the temperature trispectrum $S/N$ is rather sharply peaked around $z \sim 900-1000$, in contrast with the temperature power spectrum signal, which receives comparable contributions from a broad range of redshifts $200 \lesssim z \lesssim 1200$.
This is consistent with the following observations. First, by inspecting the on-the-spot energy deposition limit discussed in Sec.~\ref{sec:approx}, we found that the trispectrum is negligibly affected by photon propagation, which is more suppressive at late times. Namely, the strongest spatial fluctuations for the accreting PBHs due to relative velocities occur on scales of a few tens of Mpc, as shown in Fig.~13 of Paper~I, but this is not noticeably suppressed until $z\approx 800$ when inspecting Fig.~\ref{fig:G_e}. Second, we find that the trispectrum constraints converge much more quickly than the power spectrum constraints when varying the maximum multipole of the zeroth-order collision term in the line-of-sight source. Namely, the trispectrum is unaffected by the higher-order multipoles of the zeroth-order temperature anisotropy, which are induced at later times. Third, and more subtly, the $f_{\rm pbh}$-$M_{\rm pbh}$ power-law dependence is weaker for the trispectrum constraints than it is for the power spectrum constraints. As found in AK17, the luminosity of a spherically accreting PBH is proportional to $M^3$ at all times when excluding the radiative efficiency. The radiative efficiency, however, turns out to have an inverse dependence on black hole mass whose power depends on redshift. This power converges to zero at late times, implying that the mass dependence of the $f_{\rm pbh}$ constraints is weaker if the signal receives support from earlier redshifts. This can be seen directly in Fig.~8 of AK17, where they plot the mean luminosity as a function of redshift for various $M_{\rm pbh}$.
\begin{figure}[htb]
\includegraphics[trim={0cm 1cm 0.5cm .25cm},width=.95\columnwidth]{tri_v_ps_SN_fixed.pdf}
\caption{\label{fig:SN} Forecasted Planck signal-to-noise ratio for the $TT$-only power spectrum and $TTTT$ trispectrum induced by accreting PBHs. For ease of comparison we normalize the curves such that they integrate to unity over redshift. Each point is computed assuming the perturbed free-electron fraction is only nonzero in redshift bins of size $\Delta z=50$.}
\end{figure}
\end{appendix} |
2212.00032 | \section{Introduction} \label{sec:intro}
\begin{figure*}[ht!]
\plotone{jwst_muse_alma.pdf}
\caption{Three-colour image of NGC~628, with ALMA CO in blue, MUSE H$\alpha$ in green, and {\it JWST} 21$\mu$m in red. The blue, red, and green boxes show the extent of each corresponding observation. The two spurs we focus on are shown as cyan (CO-rich), and orange (CO-poor) contours (see \S\ref{sec:spur_offset_timescale}). Also shown are the spiral arms from the environmental mask, in gray. The position angle (21$^\circ$; corresponding to $\theta = 0^\circ$ in Figure \ref{fig:polar_unwrap}) is indicated in the lower-right, and a 1~kpc scalebar is shown in the lower-left.}
\label{fig:jwst_muse_alma}
\end{figure*}
\begin{figure*}[ht!]
\plotone{spurmap.pdf}
\caption{A holistic overview of the spurs analysed in this study. {\it Left:} three-colour image of NGC~628 produced from the F770W (blue), F1000W (green), and F1130W (red) band filters of {\it JWST} \citep{2022Lee}, overlaid in orange with the continuum-subtracted {\it HST} H$\alpha$, with the CO-rich and CO-poor spurs (see \S\ref{sec:spur_offset_timescale}) highlighted in cyan and orange, respectively. {\it Right, top row}: from left to right, three-colour MIRI (same as left panel) and NIRCam (red: F200W, green: F300M, blue: F335M) zooms of the spurs, as well as MUSE H$\alpha$ and ALMA CO. {\it Right, middle row}: from left to right, increasing {\it JWST} NIRCam wavelengths, showing the stellar light. {\it Right, bottom row}: from left to right, increasing {\it JWST} MIRI wavelengths, showing the ISM emission.}
\label{fig:spurmaps}
\end{figure*}
\begin{figure*}[ht!]
\plotone{polar_unwrap.pdf}
\caption{Polar deprojection of NGC~628 in CO (top), 21$\mu$m (middle), and MUSE H$\alpha$ (bottom). $0^\circ$ is defined as the position angle (see Fig. \ref{fig:jwst_muse_alma}), and $\theta$ increases from left to right. The nominal co-rotation radius from \cite{2021Williams} is shown as a horizontal dashed white line. We show the approximate ridge of three spiral arms as red lines (determined from the CO). We note there is a clear offset between this CO ridge and the 21$\mu$m/H$\alpha$ ridges \citep[see also, e.g.][]{2018Kreckel}. The cyan contour indicates the `CO-rich' spur, and the orange the `CO-poor' spur.}
\label{fig:polar_unwrap}
\end{figure*}
Spiral arms are a distinctive characteristic of star-forming galaxies, featuring large, curved arcs across the galaxy discs as gas is compressed and star formation occurs. Historically, spiral arms have been seen as the sites where the majority of stars form within galaxies \citep[e.g.][]{1953MorganWhitfordCode, 1969Roberts, 2013Louie}. This is thought to stem from the high gas densities achieved in the spiral arms, combined with low shear \citep{2007Elmegreen}, which favours cloud (and from this, star) formation \citep[see reviews by][]{2014DobbsBaba, 2022Chevance}. Star formation triggering is also thought to take place in spiral arms, given the shocks that occur at these locations \citep{1969Roberts}, and the potential for cloud-cloud collisions \citep[e.g.][]{1998Kennicutt, 2000Tan, 2014Longmore, 2021Fukui, 2020Chevance}. Here, gas compressed in colliding molecular clouds can undergo an episode of star formation.
Modern views of the process of star formation in spiral arms built from observations and simulations emphasise that the pattern of star formation in and around these locations is complex \citep[e.g.][]{2010DobbsPringle, 2017Chandar, 2017Schinnerer, 2020KimKimOstriker}. This is thought to reflect that spiral arms are not smooth, singular structures but instead themselves host a complex array of substructures. These structures have a variety of names, such as spurs or feathers \citep[see][for a discussion of the nomenclature]{2006LaVigne}. These features protrude from the spiral arms, are fairly regularly spaced with azimuth, and are predicted in simulations to extend to kpc scales. The origin of these spurs is currently unclear. Several mechanisms have been proposed, including gravitational instabilities \citep[e.g.][]{2006Dobbs}, magneto-Jeans instabilities \citep[e.g.][]{2006KimOstriker}, wiggle instabilities \citep[e.g.][]{2004WadaKoda,2022Mandowara}, supernova feedback, or formation on the edges of superbubbles \citep[primarily feedback driven expansions of gas; see e.g.][]{1997Oey, 2020KimKimOstriker}. Depending on the formation mechanism, these spurs are also thought to be viable sites of further fragmentation and collapse. Thus, star formation may not occur exclusively in the high density spiral arm ridge. Measuring the star formation as a function of distance from spiral arms is critical for testing spur formation pathways, and can help us to better understand star formation associated with the spiral-arm passage of gas.
Spurs can be seen in molecular gas tracers \citep{2008Corder, 2009Koda, 2017Schinnerer, 2022Stuber}, as well as in the dust morphology \citep{2006LaVigne}. The goal of this work is to determine whether stars are forming natively within spurs, or whether stars formed in the dense spiral ridge and drifted to their present positions coincident with the spurs. Certainly for M~51, the former appears to be the case \citep{2017Schinnerer}, as typical extragalactic star formation rate tracers (H$\alpha$, 24$\mu$m) are coincident with CO in the spurs, rather than with the ridge of the CO spiral arm. Localizing the natal site of star formation requires the use of a tracer of the youngest, most embedded phase of the star formation process (i.e. with timescales \textless10~Myr). This ensures that we can catch star formation `in the act'. For this, the mid- and far-infrared are ideal, as bright and compact emission at these wavelengths directly traces the hot dust heated by young, embedded stars. However, given the limited resolution of the {\it Spitzer} MIPS \citep{2004Rieke} instrument (which had a resolution of $\sim$300~pc at a distance of 10~Mpc, but was the only viable instrument in this wavelength range before {\it JWST}), localising the mid-infrared (MIR) emission to spurs or spiral arms has until now been challenging in galaxies outside the Local Group. This means that establishing whether star formation within spurs is unique to M~51 or a general feature of all disc galaxies is still an open question, with important implications for star formation models.
In this Letter, we use new {\it JWST} observations taken as part of the PHANGS-JWST Treasury Program \citep[PI J.~C.~Lee;][]{2022Lee} to study star formation in the spiral arms of NGC~628. We test whether star formation off of the spiral arms in NGC~628 could be from stars forming within spiral arms and then drifting, or whether stars are formed locally within spurs. The structure of this Letter is as follows: we briefly describe why NGC~628 is an ideal target for this study, and the data provenance, in \S\ref{sec:data}; we identify our spiral arm region of interest and the timescales for the offset between spiral arm and spur in \S\ref{sec:spur_offset_timescale}; and we conclude in \S\ref{sec:conclusions}.
\section{NGC~628 and Data} \label{sec:data}
As an archetypal grand-design spiral galaxy, NGC~628 is an ideal target for studies of spiral arms, given its clear arm structure and lack of a bar. Located at a distance of 9.84~Mpc \citep{2017McQuinn, 2021Anand, 2021bAnand}, NGC~628 is almost face-on \citep[$i=8.9^\circ$;][]{2020Lang}, and aligned nearly north-up with a position angle of $20.7^\circ$ \citep{2020Lang}. It is also the only galaxy in PHANGS--MUSE with a robustly measured spiral arm pattern speed from stellar kinematics by \citet[][the rest of the pattern speeds in this work being attributed to bars]{2021Williams}, which is necessary to obtain timescales for the spur offset (\S\ref{sec:spur_offset_timescale}). We present a three-colour composite image in Figure \ref{fig:jwst_muse_alma} including the data we will use in this study, and Figure \ref{fig:spurmaps} presents a more holistic overview of the spurs, which shows the wealth of high-quality observations, and rich detail present in the PHANGS (and especially the PHANGS-JWST) data that exists for this galaxy. Returning to Figure \ref{fig:jwst_muse_alma}, in blue, we show a CO(${\it J}= 2-1$), hereafter CO, moment 0 map from ALMA \citep{2021aLeroy, 2021bLeroy}, tracing the cold molecular gas across the galactic disc. Here, a number of gas spurs are visible as structures that are almost perpendicular to the spiral arm. These data have a resolution of $\sim$1\arcsec ($\sim$50~pc) and a sensitivity of $\sim$1~K~km~s$^{-1}$. In green, we show H$\alpha$ emission from VLT-MUSE observations as part of PHANGS-MUSE \citep{2022Emsellem}, tracing young stars that are producing ionising radiation but have blown a hole in their natal cloud. The MUSE data also have a resolution of around 1\arcsec, with an H$\alpha$ sensitivity of $\sim1.5 \times 10^{37}$~erg~s$^{-1}$~kpc$^{-2}$. H$\alpha$ emission traces star formation over timescales of \textless10~Myr \citep[e.g.][]{2006Moustakas, 2012Leroy, 2012KennicuttEvans, 2014Boquien}. In red, we show 21$\mu$m {\it JWST} data \citep{2022Lee}, with a resolution of 0.67\arcsec\ and a surface brightness sensitivity of around 0.3~MJy~sr$^{-1}$. In star-forming regions, this wavelength traces young, highly embedded star formation \citep[see, e.g., radiative transfer models by][]{2014DeLooze, 2019Williams}, with an emitting timescale in the star-forming regions of $10$\,Myr for NGC~628 \citep{2022Kim}. Finally, we overlay spiral arms as defined by \cite{2021Querejeta} in gray, based on {\it Spitzer} 3.6$\mu$m imaging, which traces the spiral arms from the old stars.
\section{Spur Offset and Timescale}\label{sec:spur_offset_timescale}
\begin{figure*}[ht!]
\plotone{radial_sfr_sfe.pdf}
\caption{Profiles of 21$\mu$m, Balmer-corrected H$\alpha$ (equivalent to SFR), CO, and SFE values across the CO-rich and CO-poor spurs (the mask generated from the contours in Fig. \ref{fig:jwst_muse_alma}) as a function of galactocentric radius. The solid line shows the rolling median of the data, and the shaded regions the 16$^{\rm th}$ and 84$^{\rm th}$ percentiles. Each intensity is normalised by its 50$^{\rm th}$ percentile value within the spur mask, and is offset for visual clarity. The shaded gray region indicates the parts of the spur within the spiral arm mask. For reference, the offsets are 1, 2, 3, and 4 for 21$\mu$m, SFR, CO, and SFE, respectively. These lines correspond to median values of 1.6~MJy~sr$^{-1}$ (21$\mu$m), $5.3\times10^{-3}$~M$_\odot$~yr$^{-1}$~kpc$^{-2}$ \citep[SFR surface density, assuming a][conversion factor]{2007Calzetti}, 4.7~K~km~s$^{-1}$ (CO), and $3.4\times10^{-7}$~yr$^{-1}$ (SFE) for the CO-rich spur, and 1.2~MJy~sr$^{-1}$, $4.2\times10^{-3}$~M$_\odot$~yr$^{-1}$~kpc$^{-2}$, 3.7~K~km~s$^{-1}$, and $3.3\times10^{-7}$~yr$^{-1}$ for the CO-poor spur.}
\label{fig:radial_sfr_sfe}
\end{figure*}
\begin{figure}[ht!]
\plotone{timescales.pdf}
\caption{Offset timescale (see Eq. \ref{eq:timescale}) as a function of galactocentric radius for the CO-rich (blue) and CO-poor (red) spurs, calculated between a galactocentric radius of 3 and 4~kpc.}
\label{fig:timescales}
\end{figure}
In Figure \ref{fig:polar_unwrap}, we perform a polar (i.e.\ $r$, $\theta$ space) remapping (i.e.\ deprojection and derotation) of Figure \ref{fig:jwst_muse_alma}. Here, 0$^\circ$ corresponds to the position angle of the galaxy shown in Figure \ref{fig:jwst_muse_alma}, with $\theta$ increasing in a clockwise direction. In this polar projection, the spiral arms appear as nearly straight lines (these are well-described as log-spirals in \citealt{2021Querejeta}, but the difference here is minor), and the spurs off the arms become more clear. There is also a clear offset between these three tracers in the spiral arms, as a consequence of the spiral pattern \citep[e.g.][]{2009Egusa, 2018Kreckel}. This is not a focus of this paper, but we note that the high resolution now available with these tracers may be useful for future direct measurements of spiral pattern speeds. There is a significant amount of CO, H$\alpha$, and 21$\mu$m emission in spurs off the spiral arms, which is visible off all the spiral arms in Figure \ref{fig:jwst_muse_alma}.
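The remapping itself is a standard inclination and position-angle deprojection of sky-plane offsets into galaxy-plane polar coordinates. A minimal sketch, using the geometry quoted in \S\ref{sec:data} and with an illustrative choice of rotation and sign conventions, is:
\begin{verbatim}
import numpy as np

def deproject(dx, dy, inc_deg=8.9, pa_deg=20.7):
    """Map sky-plane offsets from the galaxy centre (dx east, dy north,
    in kpc) to deprojected galactocentric radius and azimuth. The
    inclination and position angle are the Lang et al. (2020) values;
    the rotation sense of theta is an illustrative choice."""
    pa, inc = np.radians(pa_deg), np.radians(inc_deg)
    # Rotate onto the major/minor axes, then stretch the minor axis.
    x_maj = dx * np.sin(pa) + dy * np.cos(pa)
    x_min = (-dx * np.cos(pa) + dy * np.sin(pa)) / np.cos(inc)
    r = np.hypot(x_maj, x_min)                    # radius in kpc
    theta = np.degrees(np.arctan2(x_min, x_maj)) % 360.0
    return r, theta
\end{verbatim}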
We will focus on two spurs in particular, both between a (deprojected) galactocentric radius of 3 to 4~kpc. These spurs are present near co-rotation \citep{2021Williams}, implying the drift times from the spiral arm will be quite long. The first has a maximum offset from the spiral arm of $\theta \simeq 40^\circ$, and has clearly associated CO, H$\alpha$ and 21$\mu$m emission. We will refer to this as the `CO-rich' spur. The second has a maximum offset from the spiral arm of $\theta \simeq 55^\circ$, and is well detected in H$\alpha$ and 21$\mu$m, but has no associated emission in the `strict mask' CO moment 0 image shown in Figure \ref{fig:jwst_muse_alma} (a high-confidence, but lower-completeness, mask; see \citealt{2021aLeroy} for details). It is only barely detected in the `broad mask' moment 0 image, which has lower confidence but higher completeness compared to the strict mask, and is not shown here (see \citealt{2021aLeroy}). Both of these spurs are also clearly detected in the other MIRI (7.7, 10, and 11.3$\mu$m) bands, and the emission is coincident with that at 21$\mu$m. We have selected these two as a test case, as they are neighbouring spurs but quite different in their ISM composition. We reserve a more thorough cataloguing and study of spurs for future work with the larger PHANGS-JWST sample.
The fraction of the total 21$\mu$m flux outside the spiral arms as defined by the environmental mask (excluding the central 1.2~kpc diameter region based on the photometric decomposition by \citealt{2015Salo}, where disentangling the spiral arms from any potential stellar bulge or nuclear component is difficult) is around 60\%, indicating a non-negligible amount of star formation outside the spiral arms. We caution that the environmental mask is defined by {\it Spitzer} data, and the gas and stellar spiral arms may not necessarily coincide. However, the spiral arm width follows an empirical definition based on the CO emission, to attempt to overcome this \citep{2021Querejeta}. There may also be a significant amount of diffuse emission at 21$\mu$m that is unlikely to originate from star formation \citep{2022Leroy}. In this sense, this percentage is likely an upper limit on the amount of 21$\mu$m flux that can be ascribed to star formation.
We next investigate how the flux profiles of the CO, H$\alpha$ and 21$\mu$m vary with galactocentric radius along these two spurs, to better understand the role the spiral arms have in enhancing the star formation rate (SFR) and star formation efficiency (SFE; SFR per unit molecular gas mass). Using the spur contour in Figure \ref{fig:jwst_muse_alma} as a mask, we calculate radial profiles of the intensity of the three tracers for the two spurs. We use Balmer decrement-corrected H$\alpha$ as a proxy for the SFR, and also calculate the profile for SFE (i.e. corrected H$\alpha$/CO). We show the profiles in Figure \ref{fig:radial_sfr_sfe}. Between the spiral arms, the SFR appears to be relatively constant, arguing against the idea of an evolutionary sequence in which stars further out in the spurs formed at an earlier time than those closer to the spiral arm. However, there is an increase towards the spiral arms -- the inner arm for the CO-rich spur, and the outer for the CO-poor. This agrees with simulations showing that the spiral arms act to concentrate star-forming regions, leading to an overall increase in the SFR surface density \citep[e.g.][]{2020KimKimOstriker}. Yet, comparing the 21$\mu$m and H$\alpha$ profiles to the CO profile, we see a good correspondence, with the tracers tending to upturn at the same radii. Indeed, the SFE profile bears this out -- the SFE sometimes shows strong variation along the spurs, but the increases in SFE are localised and do not correlate with the spiral arm positions (although there is a slight increase towards the outer spiral arm in the case of the CO-poor spur). Taken together, these results suggest that the spiral arms gather together gas and star-forming regions, but have little impact on how efficiently stars are formed, as seen in larger (but lower resolution) samples \citep{2021Querejeta} or in simulations \citep[e.g.][]{2015Dobbs}.
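A sketch of this profile extraction (a binned running median with 16$^{\rm th}$/84$^{\rm th}$ percentiles; the bin width is an illustrative choice, not necessarily the one used for Figure \ref{fig:radial_sfr_sfe}) is:
\begin{verbatim}
import numpy as np

def spur_profile(intensity, radius, spur_mask, dr=0.1):
    """Running median and 16th/84th percentiles of a tracer map inside
    a spur mask, binned by galactocentric radius (kpc). The bin width
    dr is an illustrative choice. Returns columns: r, p16, p50, p84."""
    r, v = radius[spur_mask], intensity[spur_mask]
    edges = np.arange(r.min(), r.max() + dr, dr)
    rows = []
    for r0, r1 in zip(edges[:-1], edges[1:]):
        sel = (r >= r0) & (r < r1)
        if sel.any():
            rows.append((0.5*(r0 + r1),
                         *np.percentile(v[sel], [16, 50, 84])))
    return np.array(rows)

# The SFE profile follows from dividing the Balmer-corrected H-alpha
# and CO profiles evaluated on the same radial bins.
\end{verbatim}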
We estimate the timescale for both of these features to appear, assuming they have drifted from the spiral arm with the passage of the density wave and at the same pattern speed. Following \cite{2009Egusa} we compute the timescale required for this spiral arm offset to occur (neglecting any non-circular motion), as
\begin{equation}\label{eq:timescale}
t = 76.8\, {\rm Myr}\, \left(\frac{\Delta \theta}{45^\circ} \right) \left(\frac{\Omega(r) - \Omega_P}{10 \, {\rm km\, s^{-1} \, kpc^{-1}}} \right)^{-1} ,
\end{equation}
where $\Omega$ is the angular rotation velocity, $\Omega_P$ the pattern speed (both in km~s$^{-1}$~kpc$^{-1}$), $\Delta \theta$ the offset in degrees (i.e. the distance from spiral arm to spur along the $x$-axis in Fig. \ref{fig:polar_unwrap}, which varies from 0 where the spur meets the spiral arm to some maximum offset), and $t$ the timescale in units of Myr. We use $\Omega_P = 31.1^{+4.0}_{-2.9}~{\rm km~s^{-1}~kpc^{-1}}$ from \cite{2021Williams}. This is a conservative value, as, for example, gravity will act to pull the gas back towards the spiral arm, lengthening the timescales. We obtain $\Omega(r)$ from the measured rotation curves in \citet{2020Lang}, which vary from around $36~{\rm km~s^{-1}~kpc^{-1}}$ to $39~{\rm km~s^{-1}~kpc^{-1}}$. We estimate the maximum spur offset from Figure \ref{fig:polar_unwrap}, and assume it varies linearly (as the spurs are mostly vertical in the polar projection) with $r$ up to a maximum offset at a galactocentric radius of 4~kpc. We assign a relatively conservative uncertainty of 5$^\circ$ to these values.
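As a worked example of Eq.~\eqref{eq:timescale} under these assumptions, the following sketch adopts a linear growth of the offset with radius and an assumed, mildly declining $\Omega(r)$ within the quoted 36--39~km~s$^{-1}$~kpc$^{-1}$ range:
\begin{verbatim}
import numpy as np

def offset_timescale(dtheta_deg, omega, omega_p=31.1):
    """Eq. (1): t [Myr] = 76.8 (dtheta/45 deg) / ((Omega-Omega_P)/10),
    with Omega and Omega_P in km/s/kpc (Omega_P from Williams et al.
    2021)."""
    return 76.8 * (dtheta_deg / 45.0) / ((omega - omega_p) / 10.0)

# CO-rich spur: maximum offset ~40 deg, assumed to grow linearly with
# radius between r = 3 and 4 kpc; Omega(r) is interpolated within the
# quoted 36-39 km/s/kpc range (the declining shape is an assumption).
r = np.linspace(3.0, 4.0, 50)
dtheta = 40.0 * (r - 3.0)                     # linear ramp, in degrees
omega = np.interp(r, [3.0, 4.0], [39.0, 36.0])
print(offset_timescale(dtheta, omega)[-1])    # ~140 Myr at the tip
\end{verbatim}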
The calculated timescales are shown in Figure \ref{fig:timescales}, and are indeed quite long, as expected. These numbers are also likely lower bounds, as processes like gravitational attraction towards the spiral arm ridge will only serve to make these timescales longer. We see that the values range from close to 0 at the point where the spur joins the spiral arm up to more than 100~Myr at the farthest extent of the spur, significantly longer than the timescale over which we would expect the H$\alpha$ and 21$\mu$m emission to be visible if star formation were initiated in the arms (\textless10~Myr, see \S\ref{sec:data}). The same conclusion was found by \cite{2017Schinnerer} in M~51, perhaps indicating that this is a general result within galaxies. Altogether, our analysis suggests that stars can form in-situ within spurs, rather than moving from the spiral arms. This has been seen in some recent simulations \citep[e.g.][]{2020Smith, 2021Tress}, and combined with results showing the star formation efficiency may not be higher in the spiral arms \citep[e.g.][]{2018Ragan,2021Querejeta}, these results point towards a picture where the spiral arms merely gather gas together, rather than being instrumental in causing the onset of star formation.
The fact that one of these spurs is rich in CO and the other poor is also intriguing, given their close proximity. It seems possible that these spurs could potentially be forming from superbubble expansion \citep{2020KimKimOstriker}, as these spurs are on the edge of one of the large bubbles catalogued in \cite{2022Barnes} and \citet{2022Watkins}, and so should be in roughly the same evolutionary state. Perhaps, then, some feedback mechanisms have been more efficient at destroying gas in one spur compared to the other, or maybe the CO-poor one is older. This could be addressed both by observing the coincidence of spurs and bubbles, and using stellar clusters from combined {\it HST/JWST} observations as `clocks'. This is beyond the scope of this work, but would be an interesting future study with a full PHANGS-JWST sample of spurs, bubbles, and stellar clusters.
\section{Conclusions}\label{sec:conclusions}
In this Letter, we have combined ALMA, VLT-MUSE and new {\it JWST} observations in the context of the PHANGS collaboration to examine the youngest, highly embedded stage of star formation in a CO-rich and CO-poor spur off the prominent northern spiral arm of NGC~628. These were chosen as a test case, as they are next to each other but clearly quite different in their ISM composition. Both of these spurs show an increase in star formation towards spiral arms, but little indication of an increase in the star formation efficiency. Given the angular offset of these spurs, assuming they are formed on the arm and drifted off due to the difference between circular rotation speed and arm pattern speed, we infer a timescale of around 100~Myr or more, an order of magnitude higher than the timescales of the H$\alpha$ and 21$\mu$m emission \citep{2021Kim, 2022Kim}. These results imply that stars are forming in-situ within the spurs, rather than being produced within the spiral arms and then travelling there.
This work represents an initial exploration into how {\it JWST} observations will redefine our view of the earliest phases of galactic-scale star formation, and how this affects the structure of the ISM and the process of star formation in different environments. In particular, combining a spur catalogue with both exposed (measured from {\it HST}) and embedded (measured from {\it JWST}) stellar clusters will help to understand the evolutionary sequence of the structure of the ISM \citep[e.g.][]{2017Chandar}. With the full 19 galaxies of the PHANGS-JWST sample, we will be able to form a new picture of the highly complex, filamentary nature of the ISM.
\section*{Acknowledgments}
The authors would like to thank the anonymous referee for their constructive comments, which have improved this manuscript. TGW would also like to thank David Williams, for everything over the years. This work was carried out as part of the PHANGS collaboration. The analysis scripts underlying this work are available at \url{https://github.com/thomaswilliamsastro/jwst_ngc628}. All the {\it JWST} data used in this paper can be found in MAST: \dataset[10.17909/436y-rd76]{http://dx.doi.org/10.17909/436y-rd76}.
This work is based on observations made with the NASA/ESA/CSA JWST. The data were obtained from the Mikulski Archive for Space Telescopes at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-03127. The observations are associated with JWST program 2107.
Based on observations collected at the European Southern Observatory under ESO programmes 094.C-0623 (PI: Kreckel), 095.C-0473, 098.C-0484 (PI: Blanc), 1100.B-0651 (PHANGS-MUSE; PI: Schinnerer), as well as 094.B-0321 (MAGNUM; PI: Marconi), 099.B-0242, 0100.B-0116, 098.B-0551 (MAD; PI: Carollo) and 097.B-0640 (TIMER; PI: Gadotti). This paper makes use of the following ALMA data:
ADS/JAO.ALMA\#2012.1.00650.S.
ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ.
TGW and ES acknowledge funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 694343).
JS acknowledges the support of the Natural Sciences and Engineering Research Council of Canada (NSERC) through a Canadian Institute for Theoretical Astrophysics (CITA) National Fellowship.
JMDK gratefully acknowledges funding from ERC via the ERC Starting Grant ``MUSTANG'' (grant agreement number 714907). COOL Research DAO is a Decentralized Autonomous Organization supporting research in astrophysics aimed at uncovering our cosmic origins.
JPe acknowledges support by the DAOISM grant ANR-21-CE31-0010 and by the Programme National ``Physique et Chimie du Milieu Interstellaire'' (PCMI) of CNRS/INSU with INC/INP, co-funded by CEA and CNES.
MC gratefully acknowledges funding from the Deutsche Forschungsgemeinschaft (DFG) through an Emmy Noether Research Group (grant number CH2137/1-1).
MB acknowledges support from FONDECYT regular grant 1211000 and by the ANID BASAL project FB210003.
EJW, RSK, and SCOG acknowledge funding from DFG via the Collaborative Research Center ``The Milky Way System'' (SFB 881, funding ID 138713538, subprojects A1, B1, B2, B8, and P1).
KK, OE gratefully acknowledge funding from DFG in the form of an Emmy Noether Research Group (grant number KR4598/2-1, PI Kreckel).
FB would like to acknowledge funding from ERC via the ERC Consolidator Grant ``Empire'' (grant agreement No.~726384).
JK gratefully acknowledges funding from DFG through the DFG Sachbeihilfe (grant number KR4801/2-1).
ER acknowledges the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), funding reference number RGPIN-2022-03499.
RSK and SCOG acknowledge support from ERC via the ERC Synergy Grant ``ECOGAL'' (project ID 855130) and from the Heidelberg Cluster of Excellence (EXC 2181 - 390900948) ``STRUCTURES'', funded by the German Excellence Strategy. RSK also thanks the German Ministry for Economic Affairs and Climate Action for funding in project ``MAINN'' (funding ID 50OO2206).
MQ acknowledges support from the Spanish grant PID2019-106027GA-C44, funded by MCIN/AEI/10.13039/501100011033.
KG is supported by the Australian Research Council through the Discovery Early Career Researcher Award (DECRA) Fellowship DE220100766 funded by the Australian Government.
AKL gratefully acknowledges support by grants 1653300 and 2205628 from the National Science Foundation, by award JWST-GO-02107.009-A, and by a Humboldt Research Award from the Alexander von Humboldt Foundation.
G.A.B. acknowledges the support from ANID Basal project FB210003.
\vspace{5mm}
\facilities{JWST, ALMA, VLT-MUSE}
\software{astropy \citep{astropy:2013, astropy:2018},
numpy \citep{harris2020array},
scipy \citep{2020SciPy-NMeth},
scikit-image \citep{van2014scikit},
matplotlib \citep{Hunter:2007},
uncertainties\footnote{Uncertainties: a Python package for calculations with uncertainties, Eric O. LEBIGOT, \url{http://pythonhosted.org/uncertainties/}}
} |
0810.0175 | \section{Introduction}
In recent history, a substantial amount of scientific interest has
been directed towards photomagnetic materials from a technological,
application-oriented point of view, due to their favorable
properties.\cite{sato04} Among materials displaying photo-induced
spin-crossover\cite{decurtins84,decurtins85,hauser99,guttlich00,boillot04}
and valence
tautomerism\cite{carducci97,caneschi98,hendrickson04,carbonera04,carb04},
an important subclass is formed by the so-called Prussian Blue
Analogues. These molecular heterobimetallic coordination compounds
exhibit intervalence charge transfer transitions induced by various
external stimuli (temperature\cite{ohk05},
pressure\cite{ksenofontov03,morimoto03}, visible
light\cite{sato04,ohk05} and X-rays\cite{margadonna04}). A prominent
member of this subclass is Rb$_{x}$Mn[Fe(CN)$_6$]$_{\frac{(2+x)}{3}}$$\cdot$zH$_2$O\
\cite{tokoro03,morimoto03,ohkoshi05,kat03}, which undergoes a
temperature-induced charge transfer transition from a high
temperature cubic lattice, space group $F$\={4}$3m$ (HT phase), to a
low temperature tetragonal ($I$\={4}$m2$) phase (LT phase). This
reversible, entropy-driven\cite{luzon08,cobo07} phase transition,
which is described by Fe\superscript{III}($t^{5}_{2g}$,
S=$\nicefrac{1}{2}$)-CN-Mn\superscript{II}($t^{3}_{2g}e^{2}_{g}$,
S=$\nicefrac{5}{2}$) \textbf{$\rightleftharpoons$}
Fe\superscript{II}($t^{6}_{2g}$,
S=$0$)-CN-Mn\superscript{III}($t^{3}_{2g}e^{1}_{g}$, S=$2$), occurs
not only under the influence of temperature (HT$\rightarrow$LT at
$\sim$ 225 K, LT$\rightarrow$HT at $\sim$ 290 K), but can also be
induced by visible light irradiation at various
temperatures\cite{tokoro03,tokoro05,mo03,cobo07,tokoro08}, by
hydrostatic pressure\cite{morimoto03} and possibly by X-ray
radiation.\cite{margadonna04} In addition, this type of Prussian
Blue Analogue has demonstrated a variety of other interesting
properties such as a pressure-induced magnetic pole
inversion\cite{egan06} and multiferroicity\cite{ohkoshi07}. The
capability of these materials to display switching phenomena,
however, is known to depend rather crucially on their exact
stoichiometry.\cite{ver06,cobo07,ohkoshi05} Even though it has been
established that the degree of conversion in these materials is
maximized for systems closest to a Rb:Mn:Fe stoichiometry of 1:1:1,
there appears to be an intrinsic limit to the maximum conversion
achieved. To our knowledge, no Rb$_{x}$Mn[Fe(CN)$_6$]$_{\frac{(2+x)}{3}}$$\cdot$zH$_2$O\ system has ever been shown
to undergo a complete transition from either configuration
(Fe\superscript{III}Mn\superscript{II} or
Fe\superscript{II}Mn\superscript{III}) to the other. That is, all
data on Rb$_{x}$Mn[Fe(CN)$_6$]$_{\frac{(2+x)}{3}}$$\cdot$zH$_2$O\ systems seem to indicate the presence of at least
small amounts of the HT configuration
(Fe\superscript{III}Mn\superscript{II}) when in the LT phase
(configuration Fe\superscript{II}Mn\superscript{III}) and often also
vice versa. The present understanding is that this incompleteness
originates from the intrinsic local inhomogeneities of these
materials, such as Fe(CN)$_{6}$-vacancies and alkali ion
nonstoichiometry. This paper investigates where exactly the
intrinsic incompleteness of the transition stems from in two ways:
Firstly, by comparing the charge transfer (CT) properties of the
bulk material to those of the surface material for both a 'near
perfect' (close to 1:1:1 stoichiometry) sample and a less
stoichiometric sample. Secondly, by quantitatively investigating the
effect of substituting part of the metal ions involved in the CT
transition by CT-inactive ions. Three different samples, Rb$_{0.97}$Mn[Fe(CN)$_6$]$_{0.98}$$\cdot$1.03H$_2$O, Rb$_{0.81}$Mn[Fe(CN)$_6$]$_{0.95}$$\cdot$1.24H$_2$O, and
Rb$_{0.70}$Cu$_{0.22}$Mn$_{0.78}$[Fe(CN)$_6$]$_{0.86}$$\cdot$2.05H$_2$O are quantitatively
compared utilizing different experimental techniques, which are all
capable of distinguishing between the two configurations
(Fe\superscript{III}Mn\superscript{II} (HT) and
Fe\superscript{II}Mn\superscript{III} (LT)). Magnetic measurements
are performed to obtain information on the bulk properties of the
various samples, while XPS spectroscopy is used to extract the
surface properties. Finally, Raman scattering is employed as a
tertiary probe to investigate the materials' properties.
\section{Experimental Methods}
\subsection{Sample synthesis} All chemicals (of analytical grade) were purchased at Sigma-Aldrich and used without further purification.
Rb$_{0.97}$Mn[Fe(CN)$_6$]$_{0.98}$$\cdot$1.03H$_2$O\ (sample A) and Rb$_{0.81}$Mn[Fe(CN)$_6$]$_{0.95}$$\cdot$1.24H$_2$O\ (sample B) are, respectively, samples 3 and 4
of a previous publication.\cite{ver06} Their synthesis and detailed
initial characterization can be found there. Rb$_{0.70}$Cu$_{0.22}$Mn$_{0.78}$[Fe(CN)$_6$]$_{0.86}$$\cdot$2.05H$_2$O\ (sample C) was
prepared similarly, by slowly adding a mixed aqueous solution (25 mL)
containing CuCl$_{2}$$\cdot$$2$H$_{2}$O (0.085 g, 0.02 M) and
MnCl$_{2}$$\cdot$$4$H$_{2}$O (0.396 g, 0.08 M) to a mixed aqueous
solution (25 mL) containing K$_{3}$[Fe(CN)$_{6}$] (0.823 g, 0.1 M)
and RbCl (3.023 g, 1 M). The addition time was 20 minutes and the
resulting solution was stirred mechanically and kept at a
temperature of 50$^{\circ}$C both during the addition time and for
the subsequent hour. A brown powder precipitated. This was
centrifuged and washed twice with distilled water of room
temperature. The powder was allowed to dry in air for about 12 hours
at room temperature. Yield (based on Mn + Cu): 81 \%. Elemental
analysis (details in Supporting Information) showed that the
composition of sample C was
Rb$_{0.70}$Cu$_{0.22}$Mn$_{0.78}$[Fe(CN)$_{6}$]$_{0.86}$$\cdot$2.05H$_{2}$O.
X-ray powder diffraction showed that the sample was
primarily (weight fraction 80.4(8) \%) in the typical $F$\={4}$3m$
phase with the other fraction (19.6(8) \%) in the $I$\={4}$m2$
phase. The sample was confirmed to be single phase; phase separation
into Rb$_{x}$Mn[Fe(CN)$_6$]$_{\frac{(2+x)}{3}}$$\cdot$zH$_2$O\ and
Rb$_{x}$Cu[Fe(CN)$_6$]$_{\frac{(2+x)}{3}}$$\cdot$zH$_2$O fractions
was excluded. See for details the Supporting Information. As noted
in a previous paper\cite{ver06}, the samples under discussion here
deviate from a perfect Rb:Mn(+Cu):Fe stoichiometry of 1:1:1. This
deviation is ascribed to [Fe(CN)$_{6}$]$^{3-}$
vacancies\cite{ver06,ohkoshi05,cobo07} which are filled by H$_{2}$O
molecules, consistent with the hydration found in the materials.
\subsection{Instrumentation and measurement}
\emph{Magnetic measurements.} Magnetic measurements were performed
on a Quantum Design MPMS magnetometer equipped with a
superconducting quantum interference device (SQUID). Samples were
prepared by fixing 20-30 mg of the compound (0.5 mg weight accuracy)
between two pieces of cotton wool in a gelcap. For the magnetic
susceptibility measurements of samples A and B, the samples were
first slowly cooled from room temperature to 5 K (to ensure the
samples are not quenched in their HT phase\cite{tokoro06}). Then the
field was kept constant at 0.1 T while the temperature was varied
from 5 K to 350 K and back to 150 K (rate $\leq$ 4 K/min.). For the
magnetic susceptibility measurements of sample C the field was kept
constant at 0.1 T while the temperature was slowly varied from 330 to 5
K and back to 330 K.
\emph{X-ray photoemission spectroscopy.} X-ray photoemission
spectroscopy (XPS) data were collected at the IFW Leibniz Institute
for Solid State and Materials Research in Dresden, using a SPECS
PHOIBOS-150 spectrometer equipped with a monochromatic Al
K$_{\alpha}$ X-ray source ($h\nu = 1486.6$ eV); the photoelectron
take-off angle was 90\ensuremath{^\circ} and an electron flood gun
was used to compensate for sample charging. The spectrometer
operated at a base pressure of $1\cdot10^{-10}$ Torr. Evaporated
gold films supported on mica served as substrates. Each powdered
microcrystalline sample was dispersed in distilled-deionized water,
stirred for 5 minutes, and a few drops of the resulting suspension
were left to dry in air on a substrate. Directly after drying, the
samples were introduced into ultra high vacuum and placed on a He
cooled cryostat equipped with a Lakeshore cryogenic temperature
controller to explore the 50-350 K temperature range. All binding
energies were referenced to the nitrogen signal (cyanide groups) at
398 eV.\cite{ver06} No X-ray induced sample degradation was
detected. Spectral analysis included a Tougaard background
subtraction\cite{tougaard05} and peak deconvolution employing
Gaussian line shapes using the WinSpec program developed at the LISE
laboratory, University of Namur, Belgium.
\emph{Raman scattering.} Inelastic light scattering experiments in
the spectral region 2000-2300 cm$^{-1}$ (the spectral region of the
C-N stretching vibration) were performed in a 180\ensuremath{^\circ}
backscattering configuration, using a triple grating micro-Raman
spectrometer (T64000-Jobin Yvon), consisting of a double grating
monochromator (acting as a spectral filter) and a polychromator
which disperses the scattered light onto a liquid N$_{2}$ cooled CCD
detector. The spectral resolution was better than 2 cm$^{-1}$ for
the spectral region considered. Sample preparation was identical to
that for XPS measurements and samples were placed in a liquid He
cooled optical flow-cryostat (Oxford Instruments), where the
temperature was stabilized with an accuracy of 0.1 K throughout the
whole temperature range (from 300 to 50 K). A fraction of the second
harmonic output of a Nd:YVO$_{4}$ laser (532.6 nm, Verdi-Coherent)
was used as an excitation source and focused on the samples using a
50x microscope objective (Olympus, N.A. 0.5). The power density on
the samples was of the order of 600 W cm$^{-2}$.
\section{Results and Discussion}
\subsection{Magnetic susceptibility measurements}
The inverse of the molar magnetic susceptibility, $\chi_{M}^{-1}$,
of the three samples is plotted in fig. \ref{Inverse_Chi}, as a
function of temperature. In all three samples the magnetic
properties show a thermal hysteresis; when heating the sample from 5
K up, a decrease in $\chi_{M}^{-1}$, accompanied by a decrease in
the slope of the $\chi_{M}^{-1},T$-curve, occurs at a characteristic
temperature T$_{1/2}$$\uparrow$, signaling the charge transfer (CT)
transition from the Fe$^{\textrm{II}}$-CN-Mn$^{\textrm{III}}$ (LT)
configuration to the Fe$^{\textrm{III}}$-CN-Mn$^{\textrm{II}}$ (HT)
configuration. Subsequent cooling shows the samples undergoing the
reverse transition at a temperature T$_{1/2}$$\downarrow$, which is
significantly lower than T$_{1/2}$$\uparrow$. Temperatures
T$_{1/2}$$\uparrow$ and T$_{1/2}$$\downarrow$ are defined as the
temperatures at which half of the CT transition has occurred in the
respective heating and cooling runs. Values of T$_{1/2}$$\downarrow$
= 240 K and T$_{1/2}$$\uparrow$ = 297 K for sample A (Rb$_{0.97}$Mn[Fe(CN)$_6$]$_{0.98}$$\cdot$1.03H$_2$O),
T$_{1/2}$$\downarrow$ = 197 K and T$_{1/2}$$\uparrow$ = 283 K for
sample B (Rb$_{0.81}$Mn[Fe(CN)$_6$]$_{0.95}$$\cdot$1.24H$_2$O) and T$_{1/2}$$\downarrow$ = 170 K and
T$_{1/2}$$\uparrow$ = 257 K for sample C (Rb$_{0.70}$Cu$_{0.22}$Mn$_{0.78}$[Fe(CN)$_6$]$_{0.86}$$\cdot$2.05H$_2$O), as extracted from
corresponding $\chi_{M}T$-curves, yield hysteresis widths of 57, 86
and 87 K, respectively.
The magnetic properties of these samples are comparable to those
reported for other Rb$_{x}$Mn[Fe(CN)$_6$]$_{\frac{(2+x)}{3}}$$\cdot$zH$_2$O\
compounds.\cite{ver06,ohkoshi05,ohk02,cobo07} The samples discussed
here are consistent with the correlation between stoichiometry and
hysteresis properties as found by Ohkoshi \emph{et
al.}\cite{ohkoshi05} and Cobo \emph{et al.}\cite{cobo07} That is,
with increasing amount of Fe(CN)$_{6}$ vacancies,
T$_{1/2}$$\downarrow$, T$_{1/2}$$\uparrow$ and the amount of Rb in
the sample decrease, while
$\Delta$T=(T$_{1/2}$$\uparrow$-T$_{1/2}$$\downarrow$) and the amount
of H$_{2}$O in the sample increase. For all samples, the
susceptibility of both the LT and HT phase was fit to Curie-Weiss
behavior ($\chi_{M}^{-1}=(T-\theta)/C$); the corresponding
fits are depicted by the blue (LT) and red (HT) lines in figure
\ref{Inverse_Chi}. Fits to the LT phase data were done in the
temperature range from 20 K up to approximately 10 K below the
respective T$_{1/2}$$\uparrow$ temperatures, while the fits to the
data of the HT phases were done in the range from approximately 10 K
above the respective T$_{1/2}$$\downarrow$ temperatures up to the
maximum measurement temperature (325 K). From these fits we
extracted the corresponding Curie constants (\emph{C}), which are
reported in table \ref{Magn measurements}. Theoretical values of the
Curie constant, $C$, were calculated\cite{chiT} for the assumed HT
and LT phases of samples A (Rb$_{0.97}$Mn[Fe(CN)$_6$]$_{0.98}$$\cdot$1.03H$_2$O), B (Rb$_{0.81}$Mn[Fe(CN)$_6$]$_{0.95}$$\cdot$1.24H$_2$O) and C (Rb$_{0.70}$Cu$_{0.22}$Mn$_{0.78}$[Fe(CN)$_6$]$_{0.86}$$\cdot$2.05H$_2$O) and are also
given in table \ref{Magn measurements}. The fits also yielded the
characteristic temperatures $\theta$ for all samples in both the HT
and LT phases. Upon incorporation of ferromagnetic interactions in
the system (present in the HT phase of sample C, where
Fe\superscript{III} and Cu\superscript{II} ions interact
ferromagnetically), one would expect to see a shift of the negative
$\theta$ to smaller values (antiferromagnetic interactions remain
dominant). Indeed, such a shift can be seen ($\theta$ $= -7.7$ K,
$-17.6$ K and $-6.2$ K for samples A, B and C, respectively).
However, the $\theta$ values extracted from the HT fits are the
result of an extrapolation over approximately 200 K, which would
place the observed shift within the expected error bars. The
$\theta$ values corresponding to the LT fits do not show an obvious
trend ($\theta$ $= 9.6$ K, $4.1$ K and $6.5$ K, respectively), which
can be explained by the fact that in the LT phase the majority of the Fe
ions have assumed the $S = 0$, Fe\superscript{II} configuration. The
result is that the ferromagnetic Mn-Mn interaction dominates the
extracted parameters.
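For reference, the Curie-Weiss fit itself reduces to a linear regression of $\chi_{M}^{-1}$ against $T$; a minimal sketch, with placeholder data arrays and fit windows, is:
\begin{verbatim}
import numpy as np

def curie_weiss_fit(T, chi_inv):
    """Fit chi_M^{-1} = (T - theta)/C: slope = 1/C, intercept =
    -theta/C. T in K, chi_inv in mol emu^-1; returns (C, theta)."""
    slope, intercept = np.polyfit(T, chi_inv, 1)
    return 1.0/slope, -intercept/slope

# Usage on the HT branch of a sample (placeholder data and window):
# sel = (T > T_half_down + 10.0) & (T <= 325.0)
# C_HT, theta_HT = curie_weiss_fit(T[sel], chi_inv[sel])
\end{verbatim}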
\begin{table*}[htb]
\centering \caption{Curie constants (\emph{C}, emu K mol$^{-1}$) and
'inactive fractions' (IFs, see text) for all samples} \label{Magn
measurements}
\setlength{\extrarowheight}{3pt}
\begin{tabular*}{17.0 cm}{@{\extracolsep{\fill}}ccD{.}{.}{3.3}cc}
\hline \hline
Sample (phase) & \multicolumn{1}{c}{$C$ (exp.)\footnotemark[1]} & \multicolumn{1}{c}{$C$ (calc.)\cite{chiT}} & IF (bulk, \%)\footnotemark[2]$^{,}$\cite{inactivefraction} & IF (surface, \%)\footnotemark[2]$^{,}$\cite{inactivefraction}\\% & T$_{c}$, from fit\\
\hline
A (HT-phase) & 4.87 $\pm$ 0.13 & 4.75 & 1 $\pm$ 1.9 & 72 $\pm$ 0.5\\
A (LT-phase) & 3.04 $\pm$ 0.09 & 3.03\footnotemark[3] & 1 $\pm$ 1.9 & 72 $\pm$ 0.5\\
B (HT-phase) & 5.03 $\pm$ 0.06 & 4.73 & 28 $\pm$ 1.1 & 65 $\pm$ 0.5\\
B (LT-phase) & 3.53 $\pm$ 0.05 & 3.07\footnotemark[3] & 28 $\pm$ 1.1 & 65 $\pm$ 0.5\\
C (HT-phase) & 3.65 $\pm$ 0.13 & 3.82 & 24 $\pm$ 0.8 & 78 $\pm$ 0.5\\
C (LT-phase) & 2.78 $\pm$ 0.11 & 2.45\footnotemark[3] & 24 $\pm$ 0.8 & 78 $\pm$ 0.5\\
\hline \hline
\end{tabular*}
\footnotetext\protect[1]\protect{Errors are mostly due to the
uncertainty in the weight of the samples. This error is the same for
both phases of a particular compound. The fitting error is of the
order of 0.02 emu K mol$^{-1}$}.\\
\footnotetext\protect[2]\protect{The inactive fraction is defined as
the fraction of the magnetic species not undergoing the CT
transition, even though CT is stoichiometrically possible. I.e., the
magnetic species that do not undergo CT due to Fe:Mn
nonstoichiometry are excluded.}\\
\footnotetext\protect[3]\protect{The calculation of the C value
requires assumptions on the degree of charge transfer in the
compound. These calculated values correspond to the case of maximum
stoichiometrically possible degree of conversion in each of the
samples. See text for details.}
\end{table*}
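For transparency, the kind of spin-only bookkeeping behind the calculated $C$ values in table \ref{Magn measurements} can be sketched as follows, assuming $g = 2$ for all ions and a maximum conversion set by the smaller of the Fe and Mn contents (the actual calculation follows Ref.~\cite{chiT}):
\begin{verbatim}
# Spin-only Curie constants per formula unit, assuming g = 2 for all
# ions: each ion contributes 0.5*S(S+1) emu K mol^-1. The maximum
# stoichiometric conversion is set by the smaller of the Fe and Mn
# contents; Cu(II) (S = 1/2) is assumed CT-inactive and divalent.
def c_ion(S):
    return 0.5 * S * (S + 1.0)

def curie_calc(n_fe, n_mn, n_cu=0.0):
    x = min(n_fe, n_mn)                   # maximum possible CT
    C_HT = n_fe*c_ion(0.5) + n_mn*c_ion(2.5) + n_cu*c_ion(0.5)
    C_LT = ((n_fe - x)*c_ion(0.5)         # residual Fe(III), S = 1/2
            + x*c_ion(2.0)                # Mn(III), S = 2 (Fe(II): S = 0)
            + (n_mn - x)*c_ion(2.5)       # residual Mn(II), S = 5/2
            + n_cu*c_ion(0.5))            # Cu(II), S = 1/2
    return C_HT, C_LT

print(curie_calc(0.98, 1.00))             # sample A: ~(4.74, 3.03)
print(curie_calc(0.95, 1.00))             # sample B: ~(4.73, 3.07)
print(curie_calc(0.86, 0.78, 0.22))       # sample C: ~(3.82, 2.45)
\end{verbatim}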
\begin{figure}[htb]
\centering
\includegraphics[width=7.5cm]{fig1}
\caption{$\chi^{-1}_{M}$ as a function of temperature for samples A
(Rb$_{0.97}$Mn[Fe(CN)$_6$]$_{0.98}$$\cdot$1.03H$_2$O), B (Rb$_{0.81}$Mn[Fe(CN)$_6$]$_{0.95}$$\cdot$1.24H$_2$O) and C (Rb$_{0.70}$Cu$_{0.22}$Mn$_{0.78}$[Fe(CN)$_6$]$_{0.86}$$\cdot$2.05H$_2$O), showing the magnetic properties of the
materials. The Curie-Weiss law was fitted to the data of both the HT
and LT phases of the samples, the corresponding fits are indicated
by the blue (LT) and red (HT) lines in the graphs. From the fits,
experimental Curie constants ($C$) were extracted (see text). Dashed
arrows indicate the temperature dependence of the data as measured
in successive heating and cooling runs, while vertical indicate the
corresponding transition temperatures.} \label{Inverse_Chi}
\end{figure}
The theoretical $C$ values calculated for sample A are, within the
experimental accuracy, in good agreement with those obtained from
the Curie-Weiss fit. In combination with the fact that in the
calculations\cite{chiT} all Fe ions and a stoichiometric amount of
Mn ions were assumed to undergo the charge transfer (CT) transition
in going from the HT to the LT configuration, these $C$ values
indicate that nearly all metal ions in sample A (which is very close
to the 'perfect' 1:1:1 stoichiometry) undergo CT, resulting in a
near maximum change in the magnetic properties. The experimental $C$
value for the HT phase of sample B is slightly higher than the
calculated value, by about 0.3 emu K mol$^{-1}$. This might
indicate that the sample contains trace amounts of magnetic
impurities, which would increase the experimental C-value. The Curie
constant calculated for the LT phase of sample B is also lower than
its experimental value. Aside from aforementioned considerations,
this difference can be explained by realizing that while the
calculations assume a maximum possible degree of CT in the samples,
this is not necessarily the case (Sample B deviates somewhat from
the 1:1:1 stoichiometry). Thus, the difference can be ascribed to a
fraction of the magnetic species not undergoing the CT
transition, even though the stoichiometry of the material would
allow for it (i.e. excluding the magnetic species that do not
undergo CT due to nonstoichiometry of Fe and Mn). This fraction will
hereafter be referred to as the "inactive fraction" (IF) of the
material. The estimated inactive fraction of each sample is also
given in table \ref{Magn measurements} for both bulk (or more
accurately, for bulk + surface material, as extracted from the
magnetic measurements\cite{inactivefraction}) and surface material
(as extracted from XPS spectra\cite{inactivefraction} (vide infra)).
The Curie constants calculated for sample C are somewhat lower due
to the partial substitution of Mn ions by Cu$^{\textrm{II}}$
entities ($S$ = 1/2) in the sample (the Cu ions are assumed to
remain divalent at all temperatures, which is verified by the XPS
measurements (vide infra)), the experimentally found value for the
HT phase is consistent with calculations. In the calculation of the
Curie constant of the LT phase, all the Mn ions and the
corresponding number of Fe ions were assumed to have undergone the
CT transition. Whether or not a metal ion undergoes the CT
transition in these systems is known, however, to depend on the
local stoichiometry in the system.\cite{ver06,cobo07,ohkoshi05}
Therefore, the calculation, which effectively ignores the effect of
the Cu ions on the CT probability of the other metal ions, is only a
lower limit to the LT phase Curie constant, which corresponds to the
case of maximum magnetic change across the transition for the given
stoichiometry (Rb$_{0.70}$Cu$_{0.22}$Mn$_{0.78}$[Fe(CN)$_6$]$_{0.86}$$\cdot$2.05H$_2$O) (i.e., assuming IF = 0 in the material). And
indeed, the $C$-value found for the LT phase of sample C is somewhat
larger than this calculated lower limit, as would be intuitively
expected upon the incorporation of Cu ions into the lattice. For
comparison, assuming a random distribution of the Cu ions on the Mn
positions, the expected Curie constant for the LT phase would be
3.48 emu K mol$^{-1}$ (IF = 75.2 \%) if only the Fe ions surrounded
by 6 Mn ions (Fe[-CN-Mn]$_{6}$ clusters) are assumed to undergo the
CT transition, 2.91 emu K mol$^{-1}$ (IF = 33.1 \%) when also the Fe
ions in a Mn$_{5}$Cu environment would transfer an electron and 2.50
emu K mol$^{-1}$ (IF = 3.5 \%) when in addition even the Fe ions in
Mn$_{4}$Cu$_{2}$ surroundings would undergo CT. The experimentally
found $C$ value (2.78 emu K mol$^{-1}$) indicates the magnetic
properties are quite robust upon Cu-substitution; an estimated 76 \%
of the maximum possible CT is still achieved (IF $\simeq$
24 \%), which is even more than is the case for sample B (IF
$\simeq$ 28 \%). Moreover, it suggests the substitution of one (and
possibly even two) Mn ions by Cu ions in the Fe[-CN-Mn]$_{6}$
cluster does not 'deactivate' the CT-capability of the Fe-center,
nor does it appear to reduce the cooperativity in the material,
since the CT transition occurs in a similarly short temperature
interval in all samples (see fig. \ref{Inverse_Chi}).
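The random-substitution estimates quoted above follow from simple binomial statistics for the six metal neighbours of each Fe centre; a minimal sketch reproducing the quoted inactive fractions, assuming a random Cu fraction of 0.22 on the Mn sublattice and CT limited by the Mn content, is:
\begin{verbatim}
from math import comb

def p_active(n_min, x_cu=0.22):
    """Probability that an Fe centre has at least n_min Mn among its
    six metal neighbours, for a random Cu fraction x_cu on the Mn
    sublattice."""
    return sum(comb(6, n) * (1 - x_cu)**n * x_cu**(6 - n)
               for n in range(n_min, 7))

# IF = 1 - (active Fe)/(maximum stoichiometric CT); sample C carries
# 0.86 Fe and 0.78 Mn per formula unit (CT limited by the Mn content).
n_fe, n_mn = 0.86, 0.78
for n_min in (6, 5, 4):
    IF = 1.0 - p_active(n_min) * n_fe / n_mn
    print(n_min, round(100*IF, 1))   # -> 75.2, 33.1, 3.5 per cent
\end{verbatim}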
\subsection{X-ray Photoemission Spectroscopy (XPS).}
XPS is a direct method for identifying the surface composition of a
compound as well as the oxidation state of the various elements at
the surface. Hence this technique is well suited to follow the phase
transition as a function of temperature. Indeed, as the structural
change of the compound is accompanied by a charge transfer between
metallic ions, XPS will accurately quantify the corresponding
changes in the oxidation states of the elements at the surface of
the material.
\begin{figure}[htb]
\centering
\includegraphics[width=5.0cm]{fig2}
\caption{Fe $2p$\subscript{3/2} core level photoemission spectra of
sample A (Rb$_{0.97}$Mn[Fe(CN)$_6$]$_{0.98}$$\cdot$1.03H$_2$O) collected at 300 and 140 K; fits to the raw data are
depicted as solid lines. Spectra labeled a) are recorded during the
cooling cycle, whereas the spectrum labeled b) refers to the
measurement done after warming back up to 300 K. The binding energy
scale is corrected for the temperature dependent shift (see text)
for clarity.} \label{XPS_A}
\end{figure}
\textbf{Sample A} (Rb$_{0.97}$Mn[Fe(CN)$_6$]$_{0.98}$$\cdot$1.03H$_2$O). Figure \ref{XPS_A} shows the Fe
$2p$\subscript{3/2} core level photoemission spectrum of sample A at
room temperature and at 140 K, as well as fits to the raw data. At
both temperatures, the Fe $2p$\subscript{3/2} signal consists of
three distinct contributions: the Fe\superscript{II} line at 708.8
eV binding energy, the Fe\superscript{III} line at 710.5 eV and the
Fe\superscript{III} satellite at 711.7 eV, the latter appearing 1.2
eV higher in binding energy than the Fe\superscript{III} main line and
having 30 \% of its intensity.\cite{ver06} By comparing the relative
intensities of the Fe\superscript{II} and Fe\superscript{III}
components in the room temperature spectrum, one deduces that the
surface material of the compound is composed of 76 \%
Fe\superscript{III} and 24 \% Fe\superscript{II}. No spectral
changes are detected with respect to the room temperature data when
slowly cooling the sample to 252 K (rate $\sim$2 K/min., spectrum
not shown here), just above the HT to LT phase transition (see
figure \ref{Inverse_Chi}). As previously mentioned, further cooling
to 140 K induces a transition from the HT to the LT phase,
accompanied by an electron transfer from the Mn\superscript{II} to
the Fe\superscript{III} ions which is described as
Fe\superscript{III}($t^{5}_{2g}$,
S=$\nicefrac{1}{2}$)-CN-Mn\superscript{II}($t^{3}_{2g}e^{2}_{g}$,
S=$\nicefrac{5}{2}$) \textbf{$\rightarrow$}
Fe\superscript{II}($t^{6}_{2g}$,
S=$0$)-CN-Mn\superscript{III}($t^{3}_{2g}e^{1}_{g}$, S=$2$). The
photoemission spectrum of the Fe $2p$\subscript{3/2} clearly
qualitatively supports this transition, since the
Fe\superscript{III}/Fe\superscript{II} ratio after the transition is
48 \%/52 \%. However, the relative spectral intensities of the
Fe\superscript{III} and Fe\superscript{II} components also show that
the Fe\superscript{III}\textbf{$\rightarrow$}Fe\superscript{II}
conversion at the surface is far from complete (IF = 72 \%), in
contrast to the magnetic susceptibility measurements, which indicate
a near complete conversion for the bulk (see also table \ref{Magn
measurements}). This is explained by the surface sensitivity of the
XPS technique, since the surface composition and structure can
differ substantially from that of the corresponding bulk due to
surface reconstruction. Thus, probing this surface stoichiometry
using XPS shows the estimated\cite{inactivefraction} inactive
fraction of the surface material to be substantially larger than
that of the bulk in sample A.
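Schematically, and assuming equal photoionization cross sections and
probing depths for both Fe valence states, the surface
Fe\superscript{III} fraction quoted here follows from the fitted
component areas as
\[
n_{\textrm{Fe}^{\textrm{III}}}=\frac{I_{\textrm{Fe}^{\textrm{III}}}+I_{\textrm{sat}}}{I_{\textrm{Fe}^{\textrm{III}}}+I_{\textrm{sat}}+I_{\textrm{Fe}^{\textrm{II}}}},
\]
where $I_{\textrm{sat}}$ is the intensity of the Fe\superscript{III}
satellite; the surface IF is then obtained by comparing the observed
change in $n_{\textrm{Fe}^{\textrm{III}}}$ across the transition with
the maximum change that is stoichiometrically
possible.\cite{inactivefraction}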
\textbf{Sample B} (Rb$_{0.81}$Mn[Fe(CN)$_6$]$_{0.95}$$\cdot$1.24H$_2$O). The left panel of figure \ref{XPS_B} shows
the Fe $2p$\subscript{3/2} core level photoemission spectrum (fits
and raw data) of sample B for various temperatures, collected
starting from 325 K, then while cooling to a minimum temperature of
50 K and successively warming up back to 325 K (cooling and warming
rates both $\sim$2 K/min.). As in the photoemission spectrum of
sample A, the Fe $2p$\subscript{3/2} signal consists of three
distinct features: the Fe\superscript{II} line at 708.8 eV binding
energy, the Fe\superscript{III} line at 710.5 eV and the
Fe\superscript{III} satellite at 711.7 eV. Additionally, a fourth
small feature appears at the high binding energy side of the Fe
$2p$\subscript{3/2} signal. This contribution, which grows larger
with decreasing temperature, is attributed to the
Fe\superscript{II} shake-up satellite. By comparing the relative
Fe\superscript{III}/Fe\superscript{II} intensity ratio for 325 K,
135 K and 50 K one observes the same trend as mentioned for sample
A. The sample initially (at 325 K) consists of 85 \% of
Fe\superscript{III} and 15 \% of Fe\superscript{II} and upon
cooling, a decrease in the Fe\superscript{III} peak intensity is
observed, whereas the Fe\superscript{II} peak intensity
simultaneously increases, consistent with the charge transfer
between Fe\superscript{III} and Mn\superscript{II} ions. Spectra
recorded at 135 K and 50 K both reveal similar
Fe\superscript{III}/Fe\superscript{II} ratios, namely 51 \%/49 \%
and 50 \%/50 \%, respectively, in qualitative agreement with the
magnetic susceptibility measurements (figure \ref{Inverse_Chi}). As
for sample A, quantitative differences are due to the surface
sensitivity of the XPS technique and the increased inactive fraction
of the surface material with respect to that of the bulk ($\sim$72
\% vs. $\sim$28 \%, respectively).
\begin{figure}[htb]
\centering
\includegraphics[width=7.5cm]{fig3}
\caption{Left panel: Fe $2p$\subscript{3/2} core level photoemission
spectra of sample B (Rb$_{0.81}$Mn[Fe(CN)$_6$]$_{0.95}$$\cdot$1.24H$_2$O) collected at 325, 135 and 50 K; solid lines
depict fits to the raw data. Spectra labeled a) are recorded during
the cooling cycle, whereas the spectrum labeled b) refers to the
measurement done after warming back up to 325 K. The binding energy
scale is corrected for the temperature dependent shift (see text)
for clarity. The right panel shows the evolution of the Fe
$2p$\subscript{3/2} core level photoemission spectrum as sample B is
warmed up from 182 to 325 K. Consecutive spectra are $\sim$8 K
apart. The binding energy scale is not rescaled here.} \label{XPS_B}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[width=7.5cm]{fig4}
\caption{Comparison between Fe\superscript{III} fractions in bulk
and surface material of sample B when heated through the CT
transition. Bulk data (estimated from $\chi_{M}$T values) shows a
much larger conversion fraction across a rather abrupt transition,
whereas surface data (as extracted from XPS spectra) shows a
substantially smaller and smooth conversion.} \label{comparison}
\end{figure}
In order to visualize the continuous conversion across the CT
transition in the surface material, we also recorded a series of Fe
$2p$\subscript{3/2} spectra while slowly warming up the sample at a
constant rate ($\sim$3.5 K/min.). The right panel of figure
\ref{XPS_B} shows this sequence starting at 182 K and reaching 325 K
with a temperature step of $\sim$8 K between spectra. The series
clearly shows a continuous conversion from the LT to the HT
configuration, testified to by the steadily decreasing intensity of
the Fe\superscript{II} peak and the increasing Fe\superscript{III}
spectral intensity. Thus, in contrast to the bulk material, the
surface material undergoes a smooth transition from the LT to the HT
phase, becoming progressively Fe\superscript{III}Mn\superscript{II}
due to the charge transfer from Fe\superscript{II} to
Mn\superscript{III} ions (see also fig. \ref{comparison}). In the
right panel of figure \ref{XPS_B} a slight shift of the Fe
$2p$\subscript{3/2} peak toward higher binding energy is observed
when increasing the temperature. A similar binding energy shift is
observed for the Mn spectra (not shown). Comparable shifts were
observed for all samples and cannot simply be attributed to sample
charging, since the Rb and Fe shifts occur in opposite directions.
These shifts are due to charge delocalization in the vicinity of the
CN groups.\cite{arrio05}
To illustrate the large differences between surface and bulk
behavior across the CT transition, the fraction of
Fe\superscript{III} versus temperature is plotted in figure
\ref{comparison} for both bulk and surface material of sample B,
during a heating process. Data for the bulk curve are estimated from
corresponding $\chi_{M}$T values, using the same formula as was used
to calculate Curie constants.\cite{chiT} Data for the surface curve
are extracted from the XPS spectra of the warming sequence in the
right panel of figure \ref{XPS_B}. The figure nicely visualizes the
differences: bulk material shows a high degree of conversion (IF
$\sim$ 28 \%) and displays a rather abrupt transition, while the surface
material shows a much smoother transition involving a substantially
smaller fraction of the Fe ions. Also, neither the HT phase nor the LT phase of
the surface material consists of only one configuration
(Fe\superscript{III}Mn\superscript{II} or
Fe\superscript{II}Mn\superscript{III}). The differences are
attributed to a strongly increased degree of disorder and
inhomogeneity at the surface of the material, which increases the
inactive fraction and effectively eliminates cooperativity between
the metal centers, resulting in a smooth transition across a broad
temperature range.
\textbf{Sample C} (Rb$_{0.70}$Cu$_{0.22}$Mn$_{0.78}$[Fe(CN)$_6$]$_{0.86}$$\cdot$2.05H$_2$O). The left panel of figure \ref{XPS_C} shows
the Fe $2p$\subscript{3/2} core level photoemission spectrum (fits
and raw data) of sample C for six temperatures, recorded during
cooling down from 325 K to 50 K and subsequent warming up back to
325 K. As in samples A and B discussed above, the Fe
$2p$\subscript{3/2} signal of sample C consists of the same three
distinct features; the Fe\superscript{II} line at 708.8 eV, the
Fe\superscript{III} line at 710.5 eV and the Fe\superscript{III}
satellite at 711.7 eV. What distinguishes sample C from the others
is the fact that it contains $\approx$ 22 \% of copper on the Mn
positions. As expected, due to the inclusion of Cu into the lattice,
sample C (fig. \ref{XPS_C}) shows a lower absolute
Fe\superscript{III} to Fe\superscript{II} surface conversion in
comparison to samples A and B: starting out from 77 \%
Fe\superscript{III} and 23 \% Fe\superscript{II} at 325 K, the
sample shows a minimum ratio of 57 \% Fe\superscript{III} and 43 \%
Fe\superscript{II} at 50 K. Comparison of the surface inactive
fractions (in which the maximum degree of CT that is
stoichiometrically possible is taken into account), however, shows
that the degree of conversion is not anomalous (see table \ref{Magn
measurements}). As for the previous two samples A and B, the surface
IF is much higher than the corresponding bulk IF ($\sim$ 78 \% vs.
$\sim$ 24 \%), showing that also for sample C the phase transition
is far from complete at the surface of the sample.
\begin{figure}[htb] \centering
\includegraphics[width=7.5cm]{fig5}
\caption{Left panel: Fe $2p$\subscript{3/2} core level photoemission
spectra of sample C (Rb$_{0.70}$Cu$_{0.22}$Mn$_{0.78}$[Fe(CN)$_6$]$_{0.86}$$\cdot$2.05H$_2$O) collected at 325, 210, 100 and 50 K. Solid
lines depict fits to the raw data. Spectra indicated with an a)
refer to measurements recorded during the cooling run, whereas
spectra indicated with a b) were recorded during the subsequent
heating run. The binding energy scale is corrected for the
temperature dependent shift (see text) for clarity. Right panel:
Corresponding Cu $2p$\subscript{3/2} core level photoemission
spectra of sample C recorded at different temperatures during the
same cooling and subsequent heating cycle.} \label{XPS_C}
\end{figure}
Additional information about the role of Cu comes from the Cu
$2p$\subscript{3/2} core level photoemission data shown in the right
panel of figure \ref{XPS_C}. One observes that the main peak at 935
eV\cite{chawla92} remains constant in intensity and binding energy
throughout the temperature loop, revealing that the chemical
environment of the copper does not change across the phase
transition. One can therefore conclude that copper is not involved
in the charge transfer process.
\subsection{Raman Spectroscopy.}
Inelastic light scattering is employed to indirectly
determine the electronic and magnetic properties of the materials by
addressing the vibrational stretching mode of their CN-moieties. In
fact, the frequency of this vibrational mode, $\nu_{\textrm{CN}}$, is highly
sensitive to the local environment of the CN-moiety. Upon
coordination, $\nu_{\textrm{CN}}$\ will typically shift from its unbound ion
frequency, 2080 cm$^{-1}$, to a higher frequency, characteristic of
the local environment.\cite{nak86} The extent of this shift is
dependent on the electronegativity, valence and coordination number
of the metal ion(s) coordinated to the CN-group and whether they are
coordinated to the C or the N atom. Table \ref{CN-vibrations} shows
the typical frequency ranges where different CN stretching modes are
expected to be observed, when the cyano-moiety is in a bimetallic
bridge (an Fe-CN-M environment, where M = 3d metal ion). The given
frequency ranges are estimates based on literature and experimental
data of materials containing the specific or closely related
CN-environments (varying the N-bound metal
ion).\cite{nak86,reg90,sat99,ber88,ohk05,ver06} Across the thermal
phase transition, in addition to the intervalence charge transfer,
the Rb$_{x}$Mn[Fe(CN)$_6$]$_{\frac{(2+x)}{3}}$$\cdot$zH$_2$O\ lattice also contracts by approximately 10 \%. This
volume change decreases the average bond lengths in the system,
thereby generally increasing the vibrational frequencies.
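For orientation, if the $\sim$10 \% volume contraction were isotropic
(the actual transition is from a cubic to a tetragonal structure, see
below), the corresponding average change in linear dimensions, and
hence in bond lengths, would be
\[
\frac{\Delta a}{a}\simeq 1-(1-0.10)^{1/3}\approx 3.5\ \%.
\]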
\begin{table}[htb]
\centering \caption{Specific frequencies and frequency ranges for CN
stretching modes, $\nu_{\textrm{CN}}$\, of CN-moieties in different environments in
Prussian Blue Analogues (M = 3d metal ion).} \label{CN-vibrations}
\setlength{\extrarowheight}{3pt}
\begin{tabular*}{8.0 cm}{@{\extracolsep{\fill}}cc}
\hline \hline
CN-moiety & $\nu_{\textrm{CN}}$\ (cm$^{-1}$)\\
\hline
CN$^{-}$(aq) & 2080 \cite{nak86}\\
Fe$^{\textrm{II}}$-CN-Mn$^{\textrm{II}}$ & 2065 \cite{ber88}\\
Fe$^{\textrm{II}}$-CN-Mn$^{\textrm{III}}$ & 2095, 2096, 2114 \cite{ohk05,cobo07}\\
Fe$^{\textrm{II}}$-CN-Cu$^{\textrm{II}}$ & 2100 \cite{ber88}\\
\multirow{2}{*}{Fe$^{\textrm{III}}$-CN-Mn$^{\textrm{II}}$} & 2146, 2152, 2155,\\
& 2159, 2165, 2170 \cite{ber88,ohk05,ver06,cobo07}\\
Fe$^{\textrm{III}}$-CN-Cu$^{\textrm{II}}$ & 2172 \cite{ber88,ver06}\\
\hline
Fe$^{\textrm{II}}$-CN-M$^{\textrm{II}}$ & 2065-2100 \cite{nak86,reg90,sat99,ber88}\\
Fe$^{\textrm{II}}$-CN-M$^{\textrm{III}}$ & 2090-2140 \cite{nak86,ver06,sat99,ohk05}\\
Fe$^{\textrm{III}}$-CN-M$^{\textrm{II}}$ & 2146-2185 \cite{nak86,ver06,reg90,ber88,sat99,ohk05}\\
Fe$^{\textrm{III}}$-CN-M$^{\textrm{III}}$ & 2180-2210 \cite{nak86,reg90}\\
\hline
\end{tabular*}
\end{table}
The Raman spectrum of all three samples at room temperature (in the
HT phase) in the spectral window 2000-2250 cm$^{-1}$ is shown in
Fig. \ref{HTspectra}. Samples were heated to 330 K prior to the
measurements to ensure they were in the HT phase. Group
theory analysis predicts that the vibrational stretching mode of the
free CN$^{-}$ ion ($A_{1}$ symmetry) splits up into an $A_{1}$, an
$E$ and a $T_{2}$ normal mode, when the CN moiety is placed on the
$C_{2v}$ site of the $F\bar{4}3m$ ($T_{d}^{2}$) space group (the
space group in the HT phase\cite{mor02,mor03,kat03,ver06}).
\begin{figure}[htb]
\centering
\includegraphics[width=7.5cm]{fig6}
\caption{Raman scattering spectra for all samples recorded at room
temperature. Multiple Lorentzian contributions (color filled peaks)
were summed to obtain a fit (solid yellow line) to the data (black
circles). Red peaks correspond to the HT configuration of the
material (Fe$^{\textrm{III}}$-CN-Mn$^{\textrm{II}}$), while blue
peaks represent the LT configuration
(Fe$^{\textrm{II}}$-CN-Mn$^{\textrm{III}}$). The orange peak in the
lowest panel (sample C) represents the CN-vibrations corresponding
to Fe$^{\textrm{III}}$-CN-Cu$^{\textrm{II}}$ configurations.}
\label{HTspectra}
\end{figure}
From the corresponding Raman tensors\cite{kuz98} it is clear that
the $A_{1}$, and $E$ normal modes are expected to be observed in the
parallel polarization spectra of figure \ref{HTspectra} (due to
their non-zero diagonal tensor components). Indeed, all spectra show
a double peak structure (red colored lines), with vibrations at 2156
and 2165 cm$^{-1}$, which are ascribed to the $A_{1}$ and $E$ normal
modes of the Fe$^{\textrm{III}}$-CN-Mn$^{\textrm{II}}$ (HT phase)
moieties. These peaks are observed in the expected frequency range
(see table \ref{CN-vibrations}) and are consistent with
IR-data\cite{ohk05,ber88,sat99} and previous Raman
measurements.\cite{cobo07,ver06,ver08} Also, the spectra of all
samples show some additional Raman intensity at lower wavenumbers
(blue colored lines), consistent with the presence of
LT-configuration moieties
(Fe$^{\textrm{II}}$-CN-Mn$^{\textrm{III}}$, table
\ref{CN-vibrations}), as was observed in the XPS data. In samples A
and B, one can distinguish clear peak features at $\sim$2080 and
$\sim$2118 cm$^{-1}$ on top of the broad LT-phase intensity, whereas
sample C shows only a featureless broad band. The presence of
Fe$^{\textrm{III}}$-CN-Cu$^{\textrm{II}}$ cyano bridges in sample C
is reflected in its spectrum through a shoulder on the high
wavenumber side of the double peak structure (orange filled peak).
The incorporation of Cu$^{\textrm{II}}$ ions into the lattice is
also evident in the width of the lines; inhomogeneous broadening as
a result of the increased degree of disorder causes the peaks in
sample C to be broader than those in samples A and B,
which is arguably also the reason the LT peak features are not
distinguishable in sample C. Unfortunately, no reliable quantitative
estimation regarding the Fe\superscript{III}/Fe\superscript{II}
ratios can be made from the intensities of the Raman lines, since
these involve a phonon-dependent proportionality factor. In
addition, possible photo-induced LT to HT switching at these
temperatures would further complicate such estimations. Nonetheless,
changes in the intensity of a given phonon line as a function of, for
instance, temperature do provide information on the relative abundance
of the corresponding configuration.
\subsubsection*{Temperature dependence}
Table \ref{CN-vibrations} illustrates the effect of the variation of
valence of either of the metal ions in an Fe-CN-M (cyano) bridge on
the vibrational stretching frequency of the CN-moiety ($\nu_{\textrm{CN}}$). The
predominant trend is an increase in $\nu_{\textrm{CN}}$\ with increasing oxidation
state of either of the metal ions, where the valence of the C-bound
metal ion appears to have a larger effect than that of the N-bound
metal ion.\cite{nak86,mil99} Thus, across the temperature induced CT
transition in these materials, when the local environment changes
from Fe$^{\textrm{III}}$-CN-Mn$^{\textrm{II}}$ to
Fe$^{\textrm{II}}$-CN-Mn$^{\textrm{III}}$ in the cooling run, a net
downshift in vibrational frequencies is expected.
\begin{figure*}[htb]
\centering
\includegraphics[width=17.5 cm]{fig7}
\caption{Temperature dependence of the Raman spectrum of samples A
(Rb$_{0.97}$Mn[Fe(CN)$_6$]$_{0.98}$$\cdot$1.03H$_2$O), B (Rb$_{0.81}$Mn[Fe(CN)$_6$]$_{0.95}$$\cdot$1.24H$_2$O) and C (Rb$_{0.70}$Cu$_{0.22}$Mn$_{0.78}$[Fe(CN)$_6$]$_{0.86}$$\cdot$2.05H$_2$O) across the hysteresis range in a cooling
run. Multiple Lorentzian contributions (color filled peaks, red =
HT, blue = LT, orange = Fe$^{\textrm{III}}$-CN-Cu$^{\textrm{II}}$)
were summed to obtain a fit (solid yellow line) to the data (black
circles). Spectra are normalized to the total integrated scattering
intensity in the 2000-2250 cm$^{-1}$ window.} \label{Tdep}
\end{figure*}
The evolution of the parallel polarization Raman spectrum of the
three samples across the hysteresis range in a cooling run is shown
in figure \ref{Tdep}. At a temperature of 250 K, all samples still
show the spectrum typical of the HT phase, which is discussed above.
Upon cooling the samples through their respective CT transitions,
however, the intensity of the HT (red) lines goes down and
simultaneously, the intensity of the broad signal at lower
wavenumbers increases and evolves into several new peaks at
$\sim$2202, $\sim$2125, $\sim$2108, $\sim$2089 and $\sim$2080
cm$^{-1}$ (blue lines). In addition, the HT lines are slightly
shifted to higher frequencies, due to the contraction of the
lattice. This suggests that, although no CT has occurred in the
inactive fraction of the samples, the lattice does contract. Even
though no quantitative Fe\superscript{III}/Fe\superscript{II} ratio
can be extracted from the Raman spectra, the presence of both HT
(red) and LT (blue) lines seems to indicate an incomplete CT
transition, in accordance with the XPS results. Consequently,
multiple different Fe-CN-M environments are present in the LT phase
of the samples, which explains the large number of lines observed in
their Raman spectra. Next to the residual HT
Fe$^{\textrm{III}}$-CN-Mn$^{\textrm{II}}$ configuration (red lines)
and the LT Fe$^{\textrm{II}}$-CN-Mn$^{\textrm{III}}$ configuration,
also Fe$^{\textrm{II}}$-CN-Mn$^{\textrm{II}}$ and
Fe$^{\textrm{III}}$-CN-Mn$^{\textrm{III}}$ configurations are
present in the LT phase. The latter configuration is confidently
assigned to the 2202 cm$^{-1}$ line, since it is the only
configuration in which $\nu_{\textrm{CN}}$\ is expected to increase (Table
\ref{CN-vibrations}). Less straightforward is the assignment of the
other configurations since their cyano vibrations are expected to
occur in overlapping frequency ranges. In addition, the modes
arising from these configurations are also expected to have split
into multiple normal modes due to the crystal symmetry. For
comparison, when assuming the space group $I\bar{4}m2$ (that of
Rb$_{x}$Mn[Fe(CN)$_6$]$_{\frac{(2+x)}{3}}$$\cdot$zH$_2$O\ in the LT phase\cite{ver06,mor02,kat03}), the CN
vibrational mode splits up into two A$_{1}$ modes, a B$_{1}$, a B$_{2}$ and
an E mode, of which three (the two A$_{1}$ and the B$_{1}$) would be observed
in parallel polarization spectra. Nevertheless, based on the
expected frequencies (table \ref{CN-vibrations}) and the expected
symmetry splitting the 2125, 2108 and 2089 cm$^{-1}$ lines are
assigned to symmetry split normal modes of the
Fe$^{\textrm{II}}$-CN-Mn$^{\textrm{III}}$ LT configuration, in
agreement with Cobo \textit{et al.}\cite{cobo07}, while the 2080
cm$^{-1}$ line is again tentatively assigned to the
Fe$^{\textrm{II}}$-CN-Mn$^{\textrm{II}}$ configuration. Because
the Fe$^{\textrm{II}}$-CN-Mn$^{\textrm{II}}$ and
Fe$^{\textrm{III}}$-CN-Mn$^{\textrm{III}}$ configurations occur at
the interface between a metal ion that has undergone CT and one that
has not (generally an environment with local inhomogeneities), the
symmetry splitting of the modes assigned to these configurations is
lost in the inhomogeneously broadened width of their corresponding
lines (2080 and 2202 cm$^{-1}$). One may also
expect the Raman spectrum of sample C in the LT phase to show
vibrations arising from the Fe$^{\textrm{II}}$-CN-Cu$^{\textrm{II}}$
($\approx$ 2100 cm$^{-1}$) and
Fe$^{\textrm{III}}$-CN-Cu$^{\textrm{II}}$ ($\approx$ 2175 cm$^{-1}$)
configurations; however, due to their low intensity and frequency
overlap with more intense lines, it is not possible to resolve these
lines in the present data. Overall, the temperature dependence of
the Raman spectrum is in good agreement with the XPS
measurements, nicely demonstrating the temperature-induced CT
transition. In addition to the changes in the vibrational spectra,
all samples show a pronounced color change across the CT transition.
The samples are substantially darker in their LT phase and are more
susceptible to laser-induced degradation.
\subsubsection*{Photoactivity}
\begin{figure}[htb]
\centering
\includegraphics[width=7.5cm]{fig8.eps}
\caption{Raman spectra of all three samples at 100 K and at 90 K, below
the hysteresis in a cooling run. In all samples, the spectra at 90 K
show an increase in intensity of the HT lines and a decrease of the
LT lines with respect to the corresponding 100 K spectra. This is
attributed to the photoactivity of the material (see text). All
spectra are normalized to the total integrated scattering intensity
in the 2000-2250 cm$^{-1}$ spectral window.} \label{90K}
\end{figure}
Remarkably, at a temperature of 90 K, the Raman spectra of the
samples show the material has regained some intensity in the HT
(red) lines at the cost of the LT (blue) lines (see fig. \ref{90K}),
breaking the general trend of an increasing LT fraction with
decreasing temperature. Also, the double peak structure in sample C
again shows a clear shoulder on the high frequency side, arising
from the Fe$^{\textrm{III}}$-CN-Cu$^{\textrm{II}}$ cyano bridge.
These features are explained in terms of the photoactivity of the
material upon excitation of the sample using the Raman laser probe
(532 nm). Around 90 K, Rb$_{x}$Mn[Fe(CN)$_6$]$_{\frac{(2+x)}{3}}$$\cdot$zH$_2$O\ has been shown to be photo-excited
into a metastable state upon 532 nm laser
excitation.\cite{mo03,cobo07} The resulting metastable phase is
described as 'HT-like', meaning that the predominant valence
configuration is Fe$^{\textrm{III}}$-CN-Mn$^{\textrm{II}}$, which is
consistent with the present and earlier\cite{cobo07} Raman spectra
at these temperatures. A local laser induced heating effect is
excluded, since the effect is not observed just below the
corresponding hysteresis loops. Also, the metastable phase is stable
in the absence of laser irradiation and only relaxes to the
Fe$^{\textrm{II}}$-CN-Mn$^{\textrm{III}}$ ground state above a
certain relaxation temperature (see below). In addition to the
spectral changes, the sample is also observed to undergo a change in
its optical properties under 532 nm excitation at this temperature:
the excited material takes the substantially lighter HT appearance
(the inverse of the color change seen when cooling through the CT
transition). A similar effect is seen at temperatures around 90 K: spectra recorded
at 100, 80, 70, and 50 K also show increased intensity in the HT
lines. The effect becomes increasingly less pronounced as the
temperature deviates more from 90 K, which appears to be the
temperature of maximum efficiency in photo-conversion. As the
photo-conversion is accompanied by a color change, the
(meta)stability of the photo-excited state is easily monitored
visually, observing the sample color under a microscope.
Consequently, the photo-excited `HT-like' state was found to
persist for at least 2 hours after excitation (without laser
irradiation) at 90 K, showing no signs of relaxation to the darker
LT ground state. During a subsequent slow heating ($\sim$ 0.5
K/min.) process the photo-excited state was visually monitored and
found to relax to the LT ground state at a temperature of $\sim$ 123
K (see movie clip in Supporting Information), consistent with the
relaxation temperature reported by Tokoro \emph{et
al.}\cite{tokoro03} (120 K). A more elaborate study of the
photo-conversion of the material as a function of temperature,
excitation wavelength and intensity is required to elucidate the
nature of this fascinating metastable photo-induced phase, the
conversion mechanism involved and the striking temperature
dependence of the effect.
\section{Conclusions}
In conclusion, this work demonstrates the temperature-induced charge
transfer transition in different Prussian Blue Analogue samples
through a number of different experimental techniques, revealing the
substantially reduced conversion fraction of the surface material with
respect to the bulk material. All three techniques, magnetic
susceptibility measurements, XPS and Raman spectroscopy, show the
thermally induced charge transfer transition, which can be described
as Fe\superscript{III}($t^{5}_{2g}$,
S=$\nicefrac{1}{2}$)-CN-Mn\superscript{II}($t^{3}_{2g}e^{2}_{g}$,
S=$\nicefrac{5}{2}$) \textbf{$\rightarrow$}
Fe\superscript{II}($t^{6}_{2g}$,
S=$0$)-CN-Mn\superscript{III}($t^{3}_{2g}e^{1}_{g}$, S=$2$).
Magnetic measurements indicate that the bulk material shows a high degree
of conversion (near maximal) in sample A (Rb$_{0.97}$Mn[Fe(CN)$_6$]$_{0.98}$$\cdot$1.03H$_2$O), while the conversion
fraction is lower in sample B (Rb$_{0.81}$Mn[Fe(CN)$_6$]$_{0.95}$$\cdot$1.24H$_2$O). This is in line with expectation,
as sample A is much closer to a Rb:Mn:Fe stoichiometry of 1:1:1.
However, X-ray photoemission spectroscopy reveals a substantially
lower HT $\rightarrow$ LT conversion at the surface of all
samples: the fraction of metal centers not undergoing the charge
transfer transition is by far dominant at the surface, even in the
highly stoichiometric sample A. This shows the \textit{intrinsic}
incompleteness of the conversion in such systems to be due to surface
reconstruction.
Additionally, the CT transition is found to be much smoother and
more continuous at the surface of the samples, because cooperativity
is effectively eliminated when the HT to LT conversion fraction is
very low.
Though substitution of a fraction of the
Mn\superscript{II} ions by Cu\superscript{II} ions (in sample C, Rb$_{0.70}$Cu$_{0.22}$Mn$_{0.78}$[Fe(CN)$_6$]$_{0.86}$$\cdot$2.05H$_2$O)
is shown to reduce the degree of HT to LT conversion, the reduction
is comparable to the fraction of Cu ions substituted: for
sample C, which contains 22 \% of Cu on the Mn-positions, still
76 \% of the maximum possible Fe\superscript{III}($t^{5}_{2g}$,
$S=\nicefrac{1}{2}$)-CN-Mn\superscript{II}($t^{3}_{2g}e^{2}_{g}$,
$S=\nicefrac{5}{2}$) \textbf{$\rightarrow$} Fe\superscript{II}
($t^{6}_{2g}$, $S=0$)-CN- Mn\superscript{III}($t^{3}_{2g}e^{1}_{g}$,
$S=2$) conversion is observed, comparable to the
percentage found for sample B, which has no Cu incorporated in the
lattice. Thus, the random substitution has little to no effect on
the charge transfer capability of individual metal clusters. In
fact, a simple numerical analysis shows that local
Fe[-CN-Mn]\subscript{5}[-CN-Cu] and even
Fe[-CN-Mn]\subscript{4}[-CN-Cu]\subscript{2} clusters are not
deactivated with respect to charge transfer. Temperature-dependent Raman
spectroscopy agrees with the above results, clearly showing
the charge transfer transition to be incomplete in all samples.
Summarizing, these results show that the maximum total degree of HT
$\rightarrow$ LT conversion in these systems, found for highly
stoichiometric samples, is intrinsically limited by the fact that
surface reconstruction deactivates the charge-transfer capability of
metal clusters at the surface of the material.
At temperatures of 50-100 K, a remarkable photoactivity of the
material is observed. Raman spectra in this temperature interval
show the material to be photo-excited from the LT state into a
metastable `HT-like' state, meaning that the predominant valence
configuration in this state is
Fe\superscript{III}Mn\superscript{II}. This photo-conversion, which
appears to be most efficient at 90 K, is accompanied by substantial
color changes and is found to be stable below a relaxation
temperature of $\sim$ 123 K. How this state is related to the
photo-excited (meta)stable phase at very low temperatures is not
clear at the moment; further investigations are required to
determine its exact nature and its fascinating temperature dependence.
\\
\textbf{Acknowledgment.} The authors would like to thank Roland
Hubel for technical support during the XPS measurements at the IWF
in Dresden. This work is part of the research programme of the
`Stichting voor Fundamenteel Onderzoek der Materie (FOM)', which is
financially supported by the `Nederlandse Organisatie voor
Wetenschappelijk Onderzoek (NWO)'.
\\ |
0810.1114 | \section{#1}\setcounter{equation}{0}}
\newcommand{\vir}{\raisebox{0.75mm}{,}}
\setcounter{tocdepth}{5}
\numberwithin{equation}{section}
\begin{document}
\enlargethispage{3cm}
\thispagestyle{empty}
\begin{center}
{\bf NONCOMMUTATIVE COORDINATE ALGEBRAS}
\end{center}
\vspace{0.3cm}
\begin{center}
Michel DUBOIS-VIOLETTE
\footnote{Laboratoire de Physique Th\'eorique, UMR 8627\\
Universit\'e Paris XI,
B\^atiment 210\\ F-91 405 Orsay Cedex\\
Michel.Dubois-Violette$@$u-psud.fr}\\
\end{center}
\vspace{0.5cm}
\vspace{1cm}
\begin{center}
{\sl Dedicated to Alain Connes}
\end{center}
\vspace{1cm}
\begin{center}
{\sl It is good to read between the lines; it tires the eyes less}.\\
Sacha Guitry
\end{center}
\begin{abstract}
We discuss the noncommutative generalizations of polynomial algebras which, after appropriate completions, can be used as coordinate algebras in various noncommutative settings (noncommutative differential geometry, noncommutative algebraic geometry, etc.). These algebras have finite presentations and are completely characterized and classified by their (noncommutative) volume forms.
\end{abstract}
\tableofcontents
\section*{Introduction}
The universal skeletons for coordinate algebras in classical geometry (differential geometry, algebraic geometry, etc.) are polynomial algebras. The appropriate function algebras are obtained by completions with respect to the adapted topologies
and either by a gluing process or by taking quotient algebras.\\
Our aim here is to discuss the noncommutative generalizations of polynomial algebras which can be used similarly in various noncommutative settings, noncommutative differential geometry \cite{ac:1980}, \cite{ac:1986a}, \cite{ac:1994}, noncommutative algebraic geometry, etc., as well as in applications in physics.\\
At the very beginning, one has to face the question of which class of algebras should be considered as generalizations of the algebras of polynomial functions on finite-dimensional vector spaces. It seems clear that one must stay within the class of the $\mathbb N$-graded algebras which are connected, generated in degree 1 and finitely presented. There is a minimal choice, which is the class of quadratic algebras which are Koszul (see below) of finite global dimension, have polynomial growth and satisfy a version of Poincaré duality referred to as the Gorenstein property in \cite{art-sch:1987}. A bigger class is the class of regular algebras in the sense of \cite{art-sch:1987}, which shall be referred to as the class of AS-regular algebras in the following. We shall consider here a bigger class still, in that we shall drop the condition of polynomial growth included in the AS-regularity condition. We shall refer to this bigger class of algebras as regular algebras. Although polynomial growth is a very natural condition for noncommutative coordinate algebras (and from the point of view of deformation theory), it turns out that for our analysis we do not need it, and that by imposing polynomial growth one eliminates algebras which, in spite of the fact that they do not admit an interpretation as (noncommutative) coordinate algebras, are very interesting and furthermore relevant for physics. Of course, at any stage one can restrict attention to the subclass of algebras with polynomial growth (or which are quadratic, etc.). For global dimensions $D=2$ and $D=3$, the regular algebras are $N$-homogeneous and Koszul. We shall recall what this Koszul property means. It is a very desirable property that one can formulate, for the moment, only for $N$-homogeneous algebras (i.e. algebras with relations of degree $N$). This is why, for global dimensions $D\geq 4$, we shall impose $N$-homogeneity and Koszulity.\\
In the following we shall review various concepts and results. We shall in particular give a survey of the results of \cite{mdv:2005}, \cite{mdv:2007}, in which we shall insist on the conceptual points and drop technical proofs. We shall illustrate the main points with many examples. A central result is that the algebras under consideration are completely specified by multilinear forms on finite-dimensional vector spaces. Given such an algebra, the corresponding multilinear form, which is unique up to a nonvanishing scale factor, plays the role of the (noncommutative) volume form. Furthermore, isomorphic algebras correspond to multilinear forms which are in the same orbit of the corresponding linear group ($GL(g)$ for $g$ generators). The determination of the moduli space of these algebras is of course of mathematical interest by itself. Concerning physics, the classification of these algebras can become of great importance since, in a noncommutative geometrical approach to the quantum theory of space and gravitation, one should expect the occurrence at some approximation of a superposition of noncommutative geometries.\\
It is worth noticing that the results of \cite{mdv:2007} have recently been generalized to the quiver case in \cite{boc-sch-wem:2008}. The correspondence between \cite{mdv:2007} and \cite{boc-sch-wem:2008} should read: multilinear forms or volumes $\leftrightarrow$ superpotentials.\\
Finally one should point out that this article is not only a survey but that it also contains new results and concepts.\\
Let us give some indications concerning the notation. Throughout the paper $\mathbb K$ denotes a field, all vector spaces and algebras are over $\mathbb K$, the dual of a vector space $E$ is denoted by $E^\ast$ and the symbol $\otimes$ denotes the tensor product over $\mathbb K$. Without other specification, an algebra will always be an associative unital algebra. A graded algebra will be an $\mathbb N$-graded algebra $\cala=\oplus_{n\in \mathbb N} \cala_n$. Such a graded algebra is said to be connected whenever $\cala_0=\mathbb K\mbox{\rm 1\hspace {-.6em} l}$. Given an $(r,s)$-matrix $A$, we denote by $A^t$ its transpose, an $(s,r)$-matrix. We use the Einstein summation convention over repeated up-down indices in formulas.
\section{Regular algebras}
The aim of this section is to make explicit the general class of algebras that we wish to investigate and to set up some notations.
\subsection{Graded algebras}
The algebras that we shall consider will be connected $\mathbb N$-graded algebras which are finitely generated in degree 1 and finitely presented with homogeneous relations of degrees $\geq 2$. These algebras are the objects of the category $\mathbf{GrAlg}$, the morphisms of this category being the homogeneous algebra homomorphisms of degree 0.\\
An algebra ${\cala}\in \mathbf{GrAlg}$ is of the form $\cala=A(E,R)=T(E)/[R]$ where $E=\cala_1$ is finite-dimensional and $R=\oplus_{n\geq 2} R_n$
is a finite-dimensional graded subspace of $T(E)$ such that (independence)
\[
R_n\cap [\oplus_{m<n} R_m]=\{0\}
\]
for any $n$ $(R_n=\{0\}$ for $n<2)$ and where $[F]$ denotes for any subset $F\subset T(E)$ the two-sided ideal generated by $F$. The graded vector space $R$ is the space of independent relations of $\cala$. \\
By choosing a basis $(x^\lambda)_{\lambda\in \{1,\dots,g\}}$ of $E$ and a homogeneous basis $(f_\alpha)_{\alpha\in \{1,\dots,r\}}$ of $R$ one can also write
\[
\cala =\mathbb K\langle x^1,\dots,x^g\rangle/[f_1,\dots,f_r]
\]
where $f_\alpha \in E^{\otimes^{N_\alpha}}$, $N_\alpha\geq 2$. Notice that $r(={\mbox{dim}} R)$ is well defined (i.e. only depends on $\cala$).\\
If $R$ is concentrated in degree $N\ (\geq 2)$, i.e. if $R\subset E^{\otimes^N}$, then $\cala$ will be said to be an $N$-homogeneous algebra. The $N$-homogeneous algebras form a full subcategory $\mathbf{H_N{\mbox{\bf Alg}}}$ of $\mathbf{GrAlg}$, \cite{ber:2001a}, \cite{ber-mdv-wam:2003}.
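As a simple illustration, the polynomial algebra on two variables is the object of $\mathbf{GrAlg}$ given by
\[
\mathbb K[x^1,x^2]=A(E,R)=\mathbb K\langle x^1,x^2\rangle/[x^1x^2-x^2x^1]
\]
with $E=\mathbb K x^1\oplus \mathbb K x^2$ and $R=\mathbb K(x^1\otimes x^2-x^2\otimes x^1)\subset E^{\otimes^2}$, so $g=2$, $r=1$ and $\mathbb K[x^1,x^2]$ is 2-homogeneous, i.e. quadratic.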
\subsection{Dimension}\label{Dim}
Let $\cala\in \mathbf{GrAlg}$ be as above, so
$\cala=\mathbb K\langle x^1,\dots,x^g\rangle/[f_1,\dots,f_r]$, and define $M_{\alpha\lambda}\in E^{\otimes^{N_{\alpha}-1}}$ by setting
$f_\alpha=M_{\alpha\lambda} \otimes x^\lambda \in E^{\otimes^{N_\alpha}}$. Then the presentation of $\cala$ by generators and relations is equivalent to the exactness of the sequence of left $\cala$-modules \cite{art-sch:1987}
\begin{equation}
\cala^r \stackrel{M}{\rightarrow} \cala^g \stackrel{x}{\rightarrow} \cala \stackrel{\varepsilon}{\rightarrow} \mathbb K \rightarrow 0
\label{pr}
\end{equation}
where $M$ means right multiplication (in $\cala$) by the matrix $(M_{\alpha\lambda})$, $x$ means right multiplication by the column $(x^\lambda)$ and $\varepsilon$ is the projection onto $\cala_0=\mathbb K$. In more intrinsic notations the exact sequence (\ref{pr}) reads for $\cala=A(E,R)$
\begin{equation}
\cala\otimes R \rightarrow \cala\otimes E \stackrel{m}{\rightarrow} \cala\stackrel{\varepsilon}{\rightarrow} \mathbb K\rightarrow 0
\label{PR}
\end{equation}
where $m$ is the multiplication of $\cala$ and the first arrow is as in (\ref{pr}). The exact sequence (\ref{PR}) corresponding to the presentation of $\cala$ extends as a minimal projective resolution
\[
\dots \rightarrow {\mathcal E}_n \rightarrow {\mathcal E}_{n-1}\rightarrow \dots \rightarrow {\mathcal E}_0\rightarrow \mathbb K \rightarrow 0
\]
of the left $\cala$-module $\mathbb K$ which is in fact a free resolution \cite{car:1958}
\begin{equation}
\dots \rightarrow \cala\otimes E_n\rightarrow \cala\otimes E_{n-1}\rightarrow \dots \rightarrow \cala\rightarrow \mathbb K \rightarrow 0
\label{mr}
\end{equation}
and it follows from the very definition of ${\mbox{Ext}}_\cala(\mathbb K,\mathbb K)$ that one can make the identifications
\begin{equation}
E^\ast_n={\mbox{Ext}}^n_\cala(\mathbb K,\mathbb K)
\label{ext}
\end{equation}
which read $R^\ast={\mbox{Ext}}^2_\cala(\mathbb K,\mathbb K)$ and $E^\ast={\mbox{Ext}}^1_\cala(\mathbb K, \mathbb K)$ for $n=2$ and $n=1$. The Yoneda algebra ${\mbox{Ext}}_\cala(\mathbb K,\mathbb K)$ is the cohomology of a graded differential algebra from which it follows that it carries a canonical $A_\infty$-structure \cite{kad:1980}, \cite{lu-pal-wu-zha:2006}. It turns out that one can reconstruct the graded algebra $\cala$ from the $A_\infty$-algebra ${\mbox{Ext}}_\cala(\mathbb K, \mathbb K)$ \cite{kel:2001}, \cite{lu-pal-wu-zha:2006}. Thus the $A_\infty$-algebra ${\mbox{Ext}}_\cala(\mathbb K, \mathbb K)$ is a natural dual of the graded algebra $\cala$. In the case of a $N$-homogeneous algebra $\cala$, there is another natural dual of $\cala$ which is its Koszul dual $\cala^!$ \cite{ber-mdv-wam:2003} (see below). In the case of a Koszul algebra these two notions are strongly connected and coincide in the quadratic case $(N=2)$, \cite{ber-mar:2006}.\\
The length of the resolution (\ref{mr}) is the projective dimension of the left module $\mathbb K$. It is classical \cite{car:1958}, \cite{art-tat-vdb:1990} that the left global dimension of $\cala$ (for $\cala\in \mathbf{GrAlg}$) coincides with the projective dimension of $\mathbb K$ as left module and that it also coincides with the right global dimension (and with the projective dimension of $\mathbb K$ as right module). Furthermore it has been shown recently \cite{ber:2005} that this dimension also coincides with the Hochschild dimension of $\cala$ in homology as well as in cohomology. So for an algebra $\cala\in \mathbf{GrAlg}$ there is a unique definition of the dimension from a homological point of view, which will be referred to as {\sl its global dimension} in the sequel. In the following, we shall only consider algebras in $\mathbf{GrAlg}$ with finite global dimension.\\
It is worth noticing here that there is another dimension for $\cala\in \mathbf{GrAlg}$ which is the Gelfand-Kirillov dimension but since in the following polynomial growth plays no role (and therefore will not be assumed) we shall only consider the global dimension for our general analysis.
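To illustrate the above notions on the simplest example, for $\cala=\mathbb K[x^1,x^2]$ the minimal projective resolution of the left module $\mathbb K$ is the Koszul resolution
\[
0\rightarrow \cala \stackrel{(x^2,-x^1)}{\longrightarrow} \cala^2\stackrel{x}{\rightarrow} \cala \stackrel{\varepsilon}{\rightarrow} \mathbb K \rightarrow 0
\]
where the first arrow is right multiplication by the row $(x^2,-x^1)$ and the second is right multiplication by the column $(x^\lambda)$; one has ${\mbox{dim}}(E_0)=1$, ${\mbox{dim}}(E_1)=2$, ${\mbox{dim}}(E_2)=1$, so the global dimension of $\mathbb K[x^1,x^2]$ is 2.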
\subsection{Poincaré duality}
We now assume that $\cala=A(E,R)\in \mathbf{GrAlg}$ is of finite global dimension $D$. The (free) resolution (\ref{mr}) of $\mathbb K$ reads then
\[
0\rightarrow \cala\otimes E_D\rightarrow \dots \rightarrow \cala\otimes E_1\rightarrow \cala \rightarrow \mathbb K \rightarrow 0
\]
with $E_1=\cala_1=E$.\\
By applying the functor ${\mbox{Hom}}_\cala(\bullet,\cala)$ to the chain complex
\[
0\rightarrow \cala\otimes E_D\rightarrow \dots \rightarrow \cala\otimes E \rightarrow \cala\rightarrow 0
\]
of (free) left $\cala$-modules, one obtains a cochain complex ${\mathcal E}'$
\[
0\rightarrow {\mathcal E}'_0\rightarrow {\mathcal E}'_1\rightarrow \dots \rightarrow {\mathcal E}'_D\rightarrow 0
\]
of right $\cala$-modules. The cohomology $H({\mathcal E}')$ of this complex is by definition ${\mbox{Ext}}_\cala(\mathbb K,\cala)$ that is one has
\begin{equation}
H^n({\mathcal E}')={\mbox{Ext}}^n_\cala(\mathbb K, \cala)
\label{extA}
\end{equation}
for any $n\in \mathbb N$.\\
By definition $\cala$ is said to be Gorenstein if one has
${\mbox{Ext}}^D_\cala(\mathbb K,\cala)=\mathbb K$ and ${\mbox{Ext}}^n_\cala(\mathbb K, \cala)=0$
for $n\not= D$. This means that
\[
0\rightarrow {\mathcal E}'_0\rightarrow \dots \rightarrow {\mathcal E}'_D\rightarrow \mathbb K \rightarrow 0
\]
is a free resolution of $\mathbb K$ as right $\cala$-module. This resolution is then a minimal projective resolution of the right $\cala$-module $\mathbb K$ which implies the isomorphisms
\[
E^\ast_n \simeq E_{D-n}
\]
of vector spaces and therefore
\begin{equation}
{\mbox{dim}} (E_n)={\mbox{dim}} (E_{D-n})
\label{DP}
\end{equation}
for $0\leq n\leq D$.\\
Thus the Gorenstein property is a variant of the Poincaré duality property.
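For instance, on the resolution of $\mathbb K$ over $\mathbb K[x^1,x^2]$ recalled in $\S\ref{Dim}$ above one has ${\mbox{dim}}(E_0)={\mbox{dim}}(E_2)=1$ and ${\mbox{dim}}(E_1)=2$, in accordance with (\ref{DP}) for $D=2$; more generally, for $D=2$ the Gorenstein property forces a single relation, $r={\mbox{dim}}(E_2)={\mbox{dim}}(E_0)=1$, as will be seen in the next section.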
\subsection{Regularity}
Let $\cala=A(E,R)$ be a graded algebra of $\mathbf{GrAlg}$; $\cala$ will be said to be {\sl regular} if it is of finite global dimension, $g\ell{\mbox{dim}} (\cala)=D<\infty$, and is Gorenstein. This definition of regularity is directly inspired by that of \cite{art-sch:1987}, which will be referred to as AS-regularity; the only difference is that we have dropped the condition of polynomial growth, since we do not need it for the analysis in the sequel and since it would eliminate very interesting examples. \\
This is the class of algebras that we would like to analyse, and we shall do so for low global dimensions $D=2$ and $D=3$. For higher global dimensions, we shall slightly restrict the class of algebras under consideration. In order to understand this, let us recall the following result \cite{ber-mar:2006}.
\begin{proposition}\label{R2-3}
Let $\cala$ be a regular algebra of global dimension $D$.\\
$\mathrm{(i)}$ If $D=2$ then $\cala$ is quadratic and Koszul.\\
$\mathrm{(ii)}$ If $D=3$ then $\cala$ is $N$-homogeneous with $N\geq 2$ and Koszul.
\end{proposition}
Thus for $D<4$, regularity implies $N$-homogeneity (with $N=2$ for $D=2$) and Koszulity. We shall explain later what the Koszul property is. This is a very desirable property that one can formulate, for the moment, only for homogeneous algebras. This is why we shall restrict attention in the following to regular algebras which are $N$-homogeneous (with $N\geq 2$) and Koszul. In view of the above proposition this is no restriction for regular algebras of global dimension $D=2$ and $D=3$; however, one knows examples of regular algebras of global dimension 4 and higher which are not homogeneous.
\section{Global dimension $D=2$}\label{globdim2}
This section is devoted to the description of the regular algebras of global dimension $D=2$.
\subsection{General results}
Let us use the notations of the beginning of $\S \ref{Dim}$ so let $\cala=\mathbb K\langle x^1,\dots,x^g \rangle/[f_1,\dots,f_r]$ and consider the exact sequence (\ref{pr}) corresponding to the presentation of $\cala$. The algebra $\cala$ has global dimension $D=2$ if and only if (\ref{pr}) extends as an exact sequence
\[
0\rightarrow \cala^r\stackrel{M}{\rightarrow} \cala^g \stackrel{x}{\rightarrow} \cala \stackrel{\varepsilon}{\rightarrow} \mathbb K \rightarrow 0
\]
i.e. as a free resolution of $\mathbb K$ of length $D=2$.\\
Assume now that $D=2$ and that $\cala$ is Gorenstein. Then the Gorenstein property implies that $r=1$, that degree $(M)$ = degree $(x)$ = 1 so $M=(B_{\rho\lambda}x^\rho)$ and that the matrix $(B_{\lambda\mu})\in M_g(\mathbb K)$ is invertible. The above free resolution of $\mathbb K$ reads then
\begin{equation}
0\rightarrow \cala \stackrel{x^tB}{\rightarrow} \cala^g \stackrel{x}{\rightarrow} \cala\stackrel{\varepsilon}{\rightarrow} \mathbb K \rightarrow 0
\label{res2}
\end{equation}
with obvious notations.\\
Conversely, let $b$ be a nondegenerate bilinear form on $\mathbb K^g$ with matrix elements $B_{\lambda\mu}$ in the canonical basis and let $\cala$ be the (quadratic) algebra generated by $g$ generators $x^\lambda$ with relation $B_{\lambda\mu}x^\lambda x^\mu=0$, then $\cala$ is Gorenstein of global dimension $D=2$. One has the following theorem \cite{mdv:2007}.
\begin{theorem}\label{REG2}
Let $b$ be a nondegenerate bilinear form on $\mathbb K^g$ $(g\geq 2)$ with components $B_{\mu\nu}=b(e_\mu,e_\nu)$ in the canonical basis $(e_\lambda)$ of $\mathbb K^g$. Then the quadratic algebra $\cala$ generated by the elements $x^\lambda$ $(\lambda\in \{1,\dots,g\})$ with the relation $B_{\mu\nu}x^\mu x^\nu=0$ is regular of global dimension $D=2$. Conversely any regular algebra of global dimension $D=2$ is of the above kind for some $g\geq 2$ and some nondegenerate bilinear form $b$ on $\mathbb K^g$. Furthermore two such algebras $\cala$ and $\cala'$ are isomorphic if and only if $g=g'$ and $b'=b\circ L$ for some $L\in GL(g,\mathbb K)$.
\end{theorem}
The last part of this theorem is almost obvious and gives a description of the moduli space of the regular algebras of global dimension $D=2$.\\
The right action $b\mapsto b\circ L$ of the linear group on bilinear forms is a particular case of the right action of the linear group $GL(V)$ on multilinear forms on a vector space $V$ defined for a $n$-linear form $w$ by
\begin{equation}
w\circ L (v_1,\dots,v_n)=w(Lv_1,\dots,Lv_n)
\label{GLA}
\end{equation}
for any $v_k\in V$, $k\in \{1,\dots,n\}$.\\
For reasons which will become clear, the algebra $\cala$ (regular of global dimension $D=2$) associated to the nondegenerate bilinear form $b$ on $\mathbb K^g$ as in Theorem \ref{REG2} will be denoted $\cala(b,2)$ in the following.
\subsection{Poincaré series and polynomial growth}\label{Poi}
Let $\cala$ be a regular algebra of global dimension $D=2$. Then the exact sequence (\ref{res2}) splits as
\[
0\rightarrow \cala_{n-2}\stackrel{x^t B}{\rightarrow}\cala^g_{n-1}\stackrel{x}{\rightarrow} \cala_n\rightarrow 0
\]
for $n\not= 0$ with of course $\cala_0=\mathbb K$ and $\cala_n=0$ for $n<0$. It follows that the Poincaré series $P_\cala(t)$ of $\cala$ is given by
\begin{equation}
P_\cala (t)=\frac{1}{1-gt+t^2}
\label{poinca2}
\end{equation}
in view of the Euler-Poincaré formula.\\
For $g=2$ one has
\[
P_\cala(t)=\left(\frac{1}{1-t}\right)^2
\]
so $\cala$ has then polynomial growth (with $GK{\mbox{dim}}=2$) while for $g>2$ one has
\[
P_\cala(t)=\frac{1}{(1-k^{-1}t)(1-kt)}
\]
with
\[
k=\frac{1}{2}(g+\sqrt{g^2-4})>1
\]
so $\cala$ has then exponential growth.\\
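As a consistency check, for $g=2$ one has
\[
P_\cala(t)=\frac{1}{(1-t)^2}=\sum_{n\geq 0}(n+1)t^n
\]
so that ${\mbox{dim}}(\cala_n)=n+1$, which is indeed the dimension of the space of homogeneous polynomials of degree $n$ in two variables; by (\ref{poinca2}) the same dimensions hold for all the regular algebras of global dimension 2 with $g=2$ listed below.\\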
Let us now discuss the case of the regular algebras of global dimension 2 with $g=2$ generators, i.e. which have polynomial growth. In view of Theorem \ref{REG2} these algebras are classified by the $GL(2,\mathbb K)$-orbits of nondegenerate bilinear forms on $\mathbb K^2$. Assuming that $\mathbb K$ is algebraically closed, it is easy to classify these $GL(2,\mathbb K)$-orbits of nondegenerate bilinear forms according to the rank $\mathbf{rk}$ of their symmetric parts \cite{mdv-lau:1990}:\\
\noindent (0) $\mathbf{rk}=0$ - there is only one orbit which is the orbit of the bilinear form $b=\varepsilon$ with matrix of components
\[
B=\left(
\begin{array}{cc}
0 & -1\\
1 & 0
\end{array}
\right)
\]
which corresponds to the relations $x^1x^2-x^2x^1=0$ so $\cala$ is isomorphic to the polynomial algebra $\mathbb K[x^1,x^2]$, \\
\noindent (1) $\mathbf{rk}=1$ - there is only one orbit which is the orbit of the bilinear form $b$ with matrix of components
\[
B=\left(
\begin{array}{cc}
0 & -1\\
1 & 1
\end{array}
\right)
\]
which corresponds to the relations $x^1x^2-x^2x^1-(x^2)^2=0$,\\
\noindent (2) $\mathbf{rk}=2$ - there is a 1-parameter family of orbits which are the orbits of the bilinear forms $b=\varepsilon_q$ with matrices of components
\[
B=\left(
\begin{array}{cc}
0 & -1\\
q & 0
\end{array}
\right)
\]
for $q\in \mathbb K$ with $q^2-q\not= 0$ modulo $q\sim q^{-1}$ which corresponds to the relations $x^1x^2-qx^2x^1=0$.\\
The case (0) corresponds to the ordinary plane, the case (1) corresponds to the Jordanian plane and the cases (2) correspond to the Manin planes. One thus recovers the usual description of the algebras which are regular in the sense of \cite{art-sch:1987} i.e. AS-regular of global dimension 2, \cite{irv:1979},
\cite{art-sch:1987}.
\subsection{Hecke symmetries}\label{Hec}
Any linear mapping
\[
R:\mathbb K^g \otimes \mathbb K^g \rightarrow \mathbb K^g \otimes \mathbb K^g
\]
is characterized by its components $R^{\mu\nu}_{\lambda\rho}$ defined by
\[
R(e_\lambda \otimes e_\rho)=R^{\mu\nu}_{\lambda\rho} e_\mu \otimes e_\nu
\]
in the canonical basis $(e_\lambda)$ of $\mathbb K^g$.\\
Let $b$ be a nondegenerate bilinear form on $\mathbb K^g$ with components $B_{\lambda\rho}=b(e_\lambda,e_\rho)$ and let $K^{\mu\nu}$ be the components of a bilinear form on the dual vector space of $\mathbb K^g$ in the dual basis of $(e_\lambda)$. Define then the endomorphism $R$ of $\mathbb K^g \otimes \mathbb K^g$ by setting
\begin{equation}
R^{\mu\nu}_{\lambda\rho}=\delta^\mu_\lambda \delta^\nu_\rho + K^{\mu\nu} B_{\lambda\rho}
\label{defR}
\end{equation}
for $\mu,\nu, \lambda,\rho \in \{1,\dots,g\}$. Assume now that the above $R$ defined by (\ref{defR}) satisfies the Yang-Baxter equation
\begin{equation}
(I\otimes R) (R\otimes I) (I\otimes R)=(R\otimes I)(I\otimes R)(R\otimes I)
\label{YB}
\end{equation}
on $(\mathbb K^g)^{\otimes^3}$ where $I$ denotes the identity mapping of $\mathbb K^g$ onto itself. One verifies that (\ref{YB}) is equivalent to
\begin{equation}
\left\{
\begin{array}{l}
KBK^tB^t + (1+{\mbox{tr}} (KB^t)) \mbox{\rm 1\hspace {-.6em} l} = 0 \\
K^t B^t KB + (1+ {\mbox{tr}} (KB^t)) \mbox{\rm 1\hspace {-.6em} l} = 0
\label{eqYB}
\end{array}
\right.
\end{equation}
where $K$ and $B$ are the matrices $(K^{\mu\nu})$ and $(B_{\lambda\rho})$ of $M_g(\mathbb K)$ and where the product is the matrix product. Equations (\ref{eqYB}) imply then that one has
\begin{equation}
(R-\mbox{\rm 1\hspace {-.6em} l})(R-(1+{\mbox{tr}} (K B^t))\mbox{\rm 1\hspace {-.6em} l})=0
\label{Hk}
\end{equation}
which means that $R$ is a Hecke symmetry in the terminology of \cite{gur:1990}.\\
Given the nondegenerate bilinear form $b$, one can always solve (\ref{eqYB}).
For instance
\begin{equation}
K=qB^{-1}
\label{sol0}
\end{equation}
with $q\in \mathbb K$ such that
\begin{equation}
q +q^{-1}+{\mbox{tr}} (B^{-1}B^t)=0
\label{q}
\end{equation}
is a solution of (\ref{eqYB}). The corresponding Hecke symmetries will be called {\sl the standard Hecke symmetries associated with} (the nondegenerate bilinear form) $b$ while more generally the Hecke symmetries associated with the solutions of (\ref{eqYB}) will be said to be {\sl associated with} $b$. There are generically two standard Hecke symmetries corresponding to the two roots of Equation (\ref{q}).\\
Notice that (\ref{eqYB}) implies that $K\not=0$, so if $R$ is a Hecke symmetry associated with $b$, the defining relation of $\cala(b,2)$, namely $B_{\mu\nu}x^\mu x^\nu=0$, is equivalent to the quadratic relations
\begin{equation}
x^\mu x^\nu=R^{\mu\nu}_{\lambda\rho} x^\lambda x^\rho
\label{Hcom}
\end{equation}
for $\mu,\nu\in \{1,\dots,g\}$.\\
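Explicitly, inserting (\ref{defR}) into (\ref{Hcom}) gives
\[
x^\mu x^\nu=x^\mu x^\nu + K^{\mu\nu}B_{\lambda\rho}x^\lambda x^\rho
\]
i.e. $K^{\mu\nu}(B_{\lambda\rho}x^\lambda x^\rho)=0$ for all $\mu,\nu\in\{1,\dots,g\}$, which is equivalent to $B_{\lambda\rho}x^\lambda x^\rho=0$ since $K\not=0$.\\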
In the case $g=2$ with $b=\varepsilon_q$ i.e.
\[
B=\left(\begin{array}{c c}
0 & -1\\
q & 0
\end{array}
\right)
\]
with $q\not=0$, which includes cases (0) and (2) of \S
\ref{Poi}, one can take
\[
K=\left( \begin{array}{cc}
0 & 1\\
-p & 0
\end{array}
\right)
,\ p\in \mathbb K
\]
as solution of (\ref{eqYB}). Equation (\ref{Hk}) reads then
\[
(R-\mbox{\rm 1\hspace {-.6em} l})(R+pq\,\mbox{\rm 1\hspace {-.6em} l})=0
\]
and for $p=q$, $R$ is a standard Hecke symmetry for $b=\varepsilon_q$. In the classical situation $q=1$, i.e. for $\cala=\mathbb K [x^1,x^2]$, both standard Hecke symmetries coincide and reduce to the flip
\[
x\otimes y \mapsto y\otimes x
\]
of $\mathbb K^2\otimes \mathbb K^2$.
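This can be checked directly from (\ref{defR}): for $p=q=1$ one has $B_{12}=-B_{21}=-1$ and $K^{12}=-K^{21}=1$, the other components vanishing, so
\[
R(e_1\otimes e_2)=e_1\otimes e_2+K^{\mu\nu}B_{12}\,e_\mu\otimes e_\nu=e_1\otimes e_2-(e_1\otimes e_2-e_2\otimes e_1)=e_2\otimes e_1
\]
while $R(e_\lambda\otimes e_\lambda)=e_\lambda\otimes e_\lambda$ since $B_{11}=B_{22}=0$.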
\subsection{Actions of quantum groups}
There are quantum groups acting on the noncommutative planes corresponding to the regular algebras of global dimension $D=2$. For the Manin planes corresponding to the $\cala(\varepsilon_q,2)$ these are the quantum groups $SL_q(2)$, $GL_q(2)$ and $GL_{p,q}(2)$ \cite{man:1987}, \cite{man:1988}.\\
For the noncommutative plane corresponding to $\cala(b,2)$ where $b$ is a nondegenerate bilinear form on $\mathbb K^g$, the generalization of $SL_q(2)$ is the quantum group of the nondegenerate bilinear form $b$, \cite{mdv-lau:1990}.
Let us recall the definition of this object. Let ${\mathcal H}(b)$ be the unital associative algebra generated by the $g^2$ elements $u^\mu_\nu$ ($\mu,\nu \in \{1,\dots, g\}$) with the relations
\begin{equation}
B_{\lambda\rho} u^\lambda_\mu u^\rho_\nu=B_{\mu\nu} \mbox{\rm 1\hspace {-.6em} l}
\label{Binv1}
\end{equation}
and
\begin{equation}
B^{\mu\nu} u^\lambda_\mu u^\rho_\nu=B^{\lambda\rho} \mbox{\rm 1\hspace {-.6em} l}
\label{Binv2}
\end{equation}
where the $B^{\mu\nu}$ are the matrix elements of the inverse matrix $B^{-1}$ of the matrix $B$ of the components $B_{\mu\nu}=b(e_\mu,e_\nu)$ of $b$. One verifies easily that there is a unique structure of Hopf algebra on ${\mathcal H}(b)$ with coproduct $\Delta$, counit $\varepsilon$ and antipode $S$ such that
\begin{eqnarray}
\Delta (u^\mu_\nu)& = &u^\mu_\rho \otimes u^\rho_\nu\label{cp}\\
\varepsilon(u^\mu_\nu) & = & \delta^\mu_\nu \label{cu}\\
S(u^\mu_\nu) & = & B^{\mu\lambda}B_{\rho\nu} u^\rho_\lambda\label{anti}
\end{eqnarray}
the product and the unit being the original ones on ${\mathcal H}(b)$.\\
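For instance, the antipode axiom $S(u^\mu_\rho)u^\rho_\nu=\varepsilon(u^\mu_\nu)\mbox{\rm 1\hspace {-.6em} l}$ follows directly from (\ref{anti}) and (\ref{Binv1}) : one has
\[
S(u^\mu_\rho)u^\rho_\nu=B^{\mu\lambda}B_{\sigma\rho}u^\sigma_\lambda u^\rho_\nu=B^{\mu\lambda}B_{\lambda\nu}\mbox{\rm 1\hspace {-.6em} l}=\delta^\mu_\nu\mbox{\rm 1\hspace {-.6em} l}
\]
and, similarly, $u^\mu_\rho S(u^\rho_\nu)=\delta^\mu_\nu\mbox{\rm 1\hspace {-.6em} l}$ follows from (\ref{Binv2}).\\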
There is a canonical algebra-homomorphism $\Delta_L:\cala(b,2)\rightarrow {\mathcal H}(b)\otimes \cala(b,2)$ such that
\[
\Delta_L(x^\lambda)=u^\lambda_\mu \otimes x^\mu
\]
for $\lambda\in \{1,\dots,g\}$. This equips $\cala(b,2)$ with a structure of ${\mathcal H}(b)$-comodule. The dual object of ${\mathcal H}(b)$ is the quantum group of the nondegenerate bilinear form $b$. The analysis of the category of representations of this quantum group has been done in \cite{bic:2003b}.
To the coaction $\Delta_L$ of ${\mathcal H}(b)$ on $\cala(b,2)$ corresponds an action of this quantum group on the noncommutative plane corresponding to $\cala(b,2)$.\\
The (quadratic) homogeneous part of the relations (\ref{Binv1}) and (\ref{Binv2}) reads
\begin{equation}
u^\mu_\alpha u^\nu_\beta R^{\alpha\beta}_{\lambda\rho}=R^{\mu\nu}_{\alpha\beta} u^\alpha_\lambda u^\beta_\rho
\label{Bhom}
\end{equation}
where $R$ is a standard Hecke symmetry of $b$. In fact (\ref{Bhom}) together with (\ref{cp}) and (\ref{cu}) defines a bialgebra with counit for any $R$. In the case where $R$ is a standard Hecke symmetry, $B^{\mu\nu}B_{\rho\lambda} u^\lambda_\mu u^\rho_\nu$ is in the center and the Hopf algebra ${\mathcal H}(b)$ corresponding to the quantum group of $b$ is the quotient of the bialgebra by the ideal generated by the element
\[
B^{\mu\nu} B_{\rho\lambda}u^\lambda_\mu u^\rho_\nu-g \mbox{\rm 1\hspace {-.6em} l}
\]
of the center. In fact ${\mathcal H}(b)$ is a quotient of a bigger Hopf algebra associated with the homogeneous relations (\ref{Bhom}) which is the generalization of the Hopf algebra corresponding to $GL_q(2)$ in the case $b=\varepsilon_q$, ($g=2$). More generally, if $R$ is an arbitrary Hecke symmetry associated with $b$, there is a Hopf algebra associated with the quadratic relations (\ref{Bhom}) which coacts on $\cala(b,2)$ and corresponds to the generalization of $GL_{p,q}(2)$.
\section{Global dimension $D=3$}\label{globdim3}
In this section we shall analyse regular algebras of global dimension $D=3$ and describe some representative examples. For global dimensions $D\geq 3$, what replaces the bilinear forms of global dimension $D=2$ (last section) are multilinear forms, so we start this section with a discussion of multilinear forms.
\subsection{Multilinear forms}
Let $V$ be a vector space with ${\mbox{dim}} (V)\geq 2$, $Q$ be an element of the linear group $GL(V)$ and $m$ be an integer with $m\geq 2$. Then an $m$-linear form $w$ on $V$ (i.e. a linear form on $V^{\otimes^m}$) will be said to be $Q$-{\sl cyclic} if one has
\begin{equation}
w(X_1,\dots,X_m)=w(QX_m,X_1,\dots,X_{m-1})
\label{Qcycl}
\end{equation}
for any $X_1,\dots,X_m\in V$.\\
Let $w$ be $Q$-cyclic then one has
\[
w(X_1,\dots,X_m)=w(QX_k,\dots,QX_m,X_1,\dots, X_{k-1})
\]
for $1\leq k\leq m$ so in particular one has
\[
w(X_1,\dots,X_m)=w(QX_1,\dots,QX_m)
\]
for any $X_1,\dots,X_m\in V$ which also reads $w=w\circ Q$ and means that $w$ is invariant by $Q$.\\
Let now $w$ be an arbitrary $Q$-invariant $m$-linear form on $V$, then the $m$-linear form $\pi_Q(w)$ on $V$ defined by
\[
\pi_Q(w) (X_1,\dots,X_m)=\frac{1}{m}\sum^m_{k=1} w(QX_k,\dots,QX_m,X_1,\dots,X_{k-1})
\]
for any $X_1,\dots,X_m\in V$ is $Q$-cyclic and this defines a projection $\pi_Q$ of the space of $Q$-invariant $m$-linear forms onto the space of $Q$-cyclic $m$-linear forms on $V$. This projection is $GL(V)$-equivariant in the sense that if $w$ is $Q$-invariant (resp. $Q$-cyclic) then $w\circ L$ is $L^{-1}QL$-invariant (resp. $L^{-1}QL$-cyclic) for any $L\in{\mbox{GL}}(V)$.\\
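Notice that if $w$ is already $Q$-cyclic then, by the iterated cyclicity relation above, each term of the sum defining $\pi_Q(w)$ equals $w(X_1,\dots,X_m)$ so that
\[
\pi_Q(w)=w
\]
i.e. $\pi_Q$ restricts to the identity on the $Q$-cyclic forms, which shows that it is indeed a projection.\\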
The $m$-linear form $w$ on $V$ will be said to be {\sl preregular} if it satisfies the following conditions (i) and (ii) :\\
(i) $w(X,X_1,\dots,X_{m-1})=0$ for any $X_1,\dots,X_{m-1}\in V$ implies $X=0$,\\
(ii) there is a $Q_w\in GL(V)$ such that $w$ is $Q_w$-cyclic.\\
Condition (i) implies that the $Q_w$ of (ii) is unique, and Conditions (i) and (ii) imply that $w$ satisfies the following condition (i') which is stronger than (i) : \\
(i') for any $k\in \{0,\dots, m-1\}$, $w(X_1,\dots, X_k,X,X_{k+1},\dots, X_{m-1})=0$ for any $X_1,\dots, X_{m-1} \in V$ implies $X=0$.\\
An $m$-linear form $w$ on $V$ satisfying (i') will be said to be 1-{\sl site-nondegenerate}.\\
The set of preregular $m$-linear forms on $V$ is invariant by the action of $GL(V)$ and one has
\begin{equation}
Q_{w\circ L}=L^{-1}Q_wL
\label{Prinv}
\end{equation}
for any preregular $m$-linear form $w$ on $V$.\\
A bilinear form $b$ on $\mathbb K^g$ is preregular if and only if it is nondegenerate ; one has then $Q_b=(B^{-1})^tB$ where $B$ is the matrix of components of $b$.\\
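Indeed, in components the $Q_b$-cyclicity $b(X,Y)=b(Q_bY,X)$ reads $B_{\mu\nu}=B_{\lambda\mu}(Q_b)^\lambda_\nu$, that is $B=B^tQ_b$ in matrix form, whence
\[
Q_b=(B^t)^{-1}B=(B^{-1})^tB
\]
which makes sense precisely when $B$ is invertible, i.e. when $b$ is nondegenerate.\\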
The condition of preregularity will be involved throughout the paper. We now introduce a stronger condition which is involved specifically in the description of the regular algebras of global dimension $D=3$. Let $N$ be an integer with $N\geq 2$, then a $(N+1)$-linear form $w$ on $V$ will be said to be 3-{\sl regular} if it is preregular and satisfies the following condition (iii) :\\
(iii)\hspace{1cm} If $L_0$ and $L_1$ are endomorphisms of $V$ satisfying
\[
w(L_0X_0,X_1,X_2,\dots,X_N)=w(X_0,L_1 X_1,X_2,\dots,X_N)
\]
for any $X_0,\dots,X_N\in V$, then $L_0=L_1=k\mbox{\rm 1\hspace {-.6em} l}$ for some $k\in \mathbb K$.\\
The set of 3-regular $(N+1)$-linear forms is also invariant by $GL(V)$.\\
Condition (iii) is a sort of 2-site nondegeneracy condition. Consider the stronger condition (iii') :\\
(iii')\hspace{1cm} $\sum_i w (Y_i,Z_i,X_1,\dots,X_{N-1})=0$ for any $X_1,\dots,X_{N-1}\in V$ implies
\[
\sum_i Y_i\otimes Z_i=0.
\]
It is clear that (iii') $\Rightarrow$ (iii); however, (iii') is strictly stronger. For instance let $\varepsilon$ be the completely antisymmetric $(N+1)$-linear form on $\mathbb K^{N+1}$ with $\varepsilon(e_0,\dots,e_N)=1$. Then $\varepsilon$ is 3-regular but one has
\[
\varepsilon (Y,Z,X_1,\dots,X_{N-1})+\varepsilon (Z,Y,X_1\dots, X_{N-1})=0
\]
identically and this does not imply $Y\otimes Z + Z\otimes Y=0$.
\subsection{General results for $D=3$}
Let $w$ be a preregular $(N+1)$-linear form on $\mathbb K^g$ with components $W_{{\lambda_0}\dots {\lambda_N}}=w(e_{\lambda_0},\dots,e_{\lambda_N})$ in the canonical basis ($e_\lambda$) of $\mathbb K^g$ and let $\cala(w,N)$ be the $N$-homogeneous algebra generated by the $g$ elements $x^\lambda$ ($\lambda\in\{1,\dots,g\}$) with the $g$ relations
\begin{equation}
W_{\lambda\lambda_1\dots\lambda_N}x^{\lambda_1}\dots x^{\lambda_N}=0
\label{Re3}
\end{equation}
for $\lambda\in \{1,\dots,g\}$. In other words one has
$\cala(w,N)=A(E,R)$ with $E=\oplus_\lambda \mathbb K x^\lambda$
and $R=\sum_\lambda \mathbb K W_{\lambda\lambda_1\dots \lambda_N} x^{\lambda_1}\otimes \dots \otimes x^{\lambda_N}$. Condition (i) implies that ${\mbox{dim}}(R)=g$ that is that the latter sum is direct and that the relations (\ref{Re3}) are independent. \\
Let us now use again the notations of the beginning of \S \ref{Dim} so let $\cala\in \mathbf{GrAlg}$ with $\cala=\mathbb K\langle x^1,\dots,x^g\rangle/[f_1,\dots,f_r]$ and consider the exact sequence (\ref{pr}) corresponding to the presentation of $\cala$. Then $\cala$ has global dimension $D=3$ if and only if (\ref{pr}) extends as an exact sequence
\[
0\rightarrow \cala^s\rightarrow \cala^r \stackrel{M}{\rightarrow} \cala^g \stackrel{x}{\rightarrow}\cala \stackrel{\varepsilon}{\rightarrow} \mathbb K \rightarrow 0
\]
i.e. as a free resolution of $\mathbb K$ of length $D=3$. Assume now that $\cala$ is regular. Then the Gorenstein property (Poincaré duality) implies immediately that $r=g$, that $s=1$, that the above resolution reads with an appropriate choice of the relations $f_\lambda$
\begin{equation}
0\rightarrow \cala\stackrel{x^t}{\rightarrow} \cala^g \stackrel{M}{\rightarrow} \cala^g\stackrel{x}{\rightarrow} \cala \stackrel{\varepsilon}{\rightarrow} \mathbb K \rightarrow 0
\label{res3}
\end{equation}
and that $w=x^\lambda\otimes f_\lambda$ is homogeneous, say of degree $N+1$, and is preregular \cite{art-sch:1987}. So $\cala=\cala(w,N)$ as above. In fact one has the following theorem
\cite{mdv:2007}.
\begin{theorem}\label{REG3}
Let $\cala$ be a regular algebra of global dimension $D=3$. Then $\cala=\cala(w,N)$ for some $N\geq 2$, some $g\geq 2$ and some 3-regular $(N+1)$-linear form $w$ on $\mathbb K^g$.
\end{theorem}
The Poincaré series of $\cala=\cala(w,N)$ as in the above theorem (i.e. $\cala$ regular with $D=3$) is given by
\begin{equation}
P_\cala(t)=\frac{1}{1-gt+gt^N-t^{N+1}}
\label{poinca3}
\end{equation}
in view of (\ref{res3}).\\
If one compares this theorem with Theorem \ref{REG2} for $D=2$, one sees that there are two missing items : first there is no converse of the statement in Theorem \ref{REG3} and second there is no characterization of the isomorphism classes. Concerning the first point, it was conjectured in \cite{mdv:2007} that given a 3-regular $(N+1)$-linear form $w$ on $\mathbb K^g$ then $\cala(w,N)$ is a regular algebra with $D=3$, but unfortunately this is wrong and we shall give counter-examples (see below). This means that one has to find some slightly stronger condition than 3-regularity for $w$ (for $D=3$).\\
Concerning the second point the following result holds (independently of the regularity of the algebras) \cite{mdv:2007}.
\begin{proposition}\label{Iso3}
Let $w$ be a 3-regular $(N+1)$-linear form on $\mathbb K^g$ and let $w'$ be a 3-regular $(N'+1)$-linear form on $\mathbb K^{g'}$. Then $\cala(w,N)$ and $\cala(w',N')$ are isomorphic if and only if $g'=g$, $N'=N$ and $w'=w\circ L$ for some $L\in GL(g,\mathbb K)$.
\end{proposition}
The conditions $g'=g$ and $N'=N$ are clear but the 3-regularity is really involved in the proof of this proposition (see \cite{mdv:2007}).\\
Following \cite{art-sch:1987} one deduces from (\ref{poinca3}) that a regular algebra of global dimension $D=3$ has polynomial growth if and only if $g=3$ and $N=2$ or $g=2$ and $N=3$; otherwise it has exponential growth (for $g\geq 2$ and $N\geq 2$).
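This dichotomy can be read off directly from (\ref{poinca3}) : for $g=3$ and $N=2$ one has
\[
1-3t+3t^2-t^3=(1-t)^3,\ \ \text{so}\ \ P_\cala(t)=(1-t)^{-3}
\]
which is the Poincaré series of the polynomial algebra in 3 variables, while for $g=2$ and $N=3$ one has $1-2t+2t^3-t^4=(1-t)^3(1+t)$, so that $P_\cala(t)=1/((1-t)^3(1+t))$; in both cases ${\mbox{dim}}(\cala_n)$ grows polynomially in $n$.\\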
\subsection{Examples and counter-examples}
All AS-regular algebras of global dimension $D=3$ of course give examples and our notations $w, M, Q_w$ come from \cite{art-sch:1987}. In fact, the classification of the regular algebras of global dimension $D=3$ with polynomial growth is based on the possible Jordan decompositions of the corresponding $Q_w$'s. Let us give some representative examples.\\
(a) {\sl The 3-dimensional Sklyanin algebra} \cite{ode-fei:1989}, \cite{ode:2002}. This is the algebra $\cala$ generated by 3 elements $x, y, z$ with relations
\begin{equation}
\left\{
\begin{array}{l}
xy-qyx=pz^2\\
yz-qzy=px^2\\
zx-qxz=py^2
\end{array}
\right.
\label{Sk3}
\end{equation}
where $p,q\in \mathbb K$ with $(p,q)\not= (0,0)$ and $(p^3+1,q^3+1)\not= (0,0)$.\\
This algebra is AS-regular with $D=3$. One has $\cala=\cala(w,2)$ with
\begin{equation}
\begin{array}{lll}
w & = & x \otimes y \otimes z + y \otimes z \otimes x + z \otimes x \otimes y\\
& - & q(x \otimes z\otimes y + y\otimes x\otimes z + z \otimes y \otimes x)\\
& - & p (x\otimes x \otimes x + y \otimes y \otimes y + z \otimes z \otimes z)
\end{array}
\label{wSk3}
\end{equation}
where we have identified the 3-linear form $w$ on $\mathbb K^3$ with the corresponding element of $(\mathbb K^{3\ast})^{\otimes^3}$. One verifies that $w$ is 3-regular and one has
\begin{equation}
Q_w=\mbox{\rm 1\hspace {-.6em} l}
\label{QSk3}
\end{equation}
for the corresponding element of $GL(3,\mathbb K)$.\\
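As an illustration, the relations (\ref{Sk3}) are recovered from (\ref{Re3}) as follows : collecting the terms of (\ref{wSk3}) whose first tensor factor is $z$ yields
\[
xy-q\,yx-p\,z^2=0
\]
which is the first equation of (\ref{Sk3}), and similarly for the terms with first factor $x$ or $y$.\\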
\noindent (b) {\sl The $q$-deformed 3-dimensional polynomial algebra}.
This is the algebra $\cala$ generated by 3 elements $x,y,z$ with relations
\begin{equation}
\left\{
\begin{array}{l}
xy=qcyx\\
yz=qazy\\
zx=qbxz
\end{array}
\right.
\label{qdef3}
\end{equation}
with $q,a,b,c\in \mathbb K$, $abc=1$ and $q\not=0$. This algebra is AS-regular with $D=3$ and one has $\cala=\cala(w,2)$ with
\begin{equation}
w=bx\otimes y\otimes z+cy\otimes z\otimes x+az\otimes x\otimes y-q(abx\otimes z\otimes y+bcy\otimes x\otimes z+caz\otimes y\otimes x)
\label{wqdef3}
\end{equation}
with the same conventions as above. One verifies that $w$ is 3-regular and one has
\begin{equation}
Q_w=\left(
\begin{array}{ccc}
b/c & 0 & 0\\
0 & c/a & 0\\
0 & 0 & a/b
\end{array}
\right )
\label{Qqdef3}
\end{equation}
\noindent (c) {\sl Type E quadratic AS-algebra \cite{art-sch:1987}}.
This is the algebra $\cala$ generated by 3 elements $x,y,z$ with relations
\begin{equation}
\left\{
\begin{array}{l}
x^2+\zeta^{-1}yz + \zeta\ zy =0\\
y^2+\zeta^{-4}zx + \zeta^4 xz=0\\
z^2+\zeta^{-7} xy + \zeta^7yx=0
\end{array}
\right.
\label{E}
\end{equation}
where $\zeta\in \mathbb K$ is a primitive 9th root of 1, $\zeta^9=1$.\\
This algebra is AS-regular with $D=3$ and $\cala=\cala(w,2)$ with
\begin{equation}
\begin{array}{lll}
w & = & x\otimes z\otimes x + y\otimes x\otimes y + z\otimes y\otimes z\\
& + & \zeta\ z\otimes x\otimes x +\zeta^{-1}x\otimes x\otimes z\\
& + & \zeta^4 x\otimes y\otimes y + \zeta^{-4}y\otimes y \otimes x\\
& + & \zeta^7 y\otimes z\otimes z + \zeta^{-7} z\otimes z \otimes y
\end{array}
\label{wE}
\end{equation}
which defines a 3-regular 3-linear form on $\mathbb K^3$.\\
One has
\begin{equation}
Q_w=\left(
\begin{array}{ccc}
\zeta & 0 & 0\\
0 & \zeta^4 & 0\\
0 & 0 & \zeta^7
\end{array}
\right)
\label{QE}
\end{equation}
for the corresponding element of $GL(3,\mathbb K)$.\\
It is worth noticing that the algebras of Case (a) and Case (b) are deformations of the polynomial algebra $\mathbb K[x,y,z]$ while this is not so for Case (c). In fact the algebra with relations (\ref{E}) is quite rigid.\\
\noindent (d) {\sl Counter-example to the converse of Theorem \ref{REG3}}.
Let $\cala$ be the algebra generated by 3 elements $x,y,z$ with relations
\begin{equation}
\left\{
\begin{array}{l}
x^2+yz=0\\
y^2+zx=0\\
xy=0
\label{Cex}
\end{array}
\right.
\end{equation}
Then $\cala=\cala(w,2)$ where the 3-linear form $w$ on $\mathbb K^3$ is given by
\begin{equation}
w=x\otimes x\otimes x + y \otimes y \otimes y + x\otimes y\otimes z + y\otimes z \otimes x + z\otimes x \otimes y
\label{wCex}
\end{equation}
with the same conventions as before. One verifies that $w$ is again 3-regular and one has $Q_w=\mbox{\rm 1\hspace {-.6em} l}$. However $\cala$ is not regular of global dimension $D=3$. Indeed the candidate for (\ref{res3}) is
\[
0\rightarrow \cala \stackrel{x^t}{\rightarrow} \cala^3\stackrel{M}{\rightarrow} \cala^3\stackrel{x}{\rightarrow} \cala \stackrel{\varepsilon}{\rightarrow} \mathbb K \rightarrow 0
\]
with $x^t=(x,y,z)$ and
\[
M=\left(
\begin{array}{ccc}
x & 0 & y\\
z & y & 0\\
0 & x & 0\\
\end{array}
\right)
\]
but this complex is {\sl not} exact at the second position : one has
\[
(yz,0,0)\in \ker (M)
\]
while $(yz,0,0)$ is not in the image of $x^t$.\\
This algebra is discussed in \cite{art-sch:1987} and there is a similar one which is cubic with 2 generators.\\
\noindent (e) {\sl The Yang-Mills algebra \cite{ac-mdv:2002b}}.
The Yang-Mills algebra is the cubic algebra $\cala$ generated by $g$ elements $\nabla_\lambda$ ($\lambda\in \{1,\dots,g\}$) with relations
\begin{equation}
g^{\lambda\mu}[\nabla_\lambda,[\nabla_\mu,\nabla_\nu]]=0
\label{YM}
\end{equation}
for $\nu\in \{1,\dots,g\}$, where the $g^{\lambda\mu}$ are the components of a symmetric nondegenerate bilinear form on $\mathbb K^g$. The use here of covariant rather than contravariant notation has a physical origin. This algebra is regular of global dimension $D=3$.
One has $\cala=\cala(w,3)$ where $w$ is the 4-linear form on $\mathbb K^g$ with components
\begin{equation}
W^{\alpha_1\alpha_2\alpha_3\alpha_4}=g^{\alpha_1\alpha_2}g^{\alpha_3\alpha_4}+g^{\alpha_2\alpha_3}g^{\alpha_4\alpha_1}-2g^{\alpha_1\alpha_3}g^{\alpha_2\alpha_4}
\label{wYM}
\end{equation}
for $\alpha_k\in \{1,\dots,g\}$. This 4-linear form on $\mathbb K^g$ is 3-regular, in fact it satisfies the strong condition (iii'), and one has $Q_w=\mbox{\rm 1\hspace {-.6em} l}$.\\
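Indeed, expanding the double commutator and using the symmetry of $g^{\lambda\mu}$ one gets
\[
g^{\lambda\mu}[\nabla_\lambda,[\nabla_\mu,\nabla^\nu]]=g^{\lambda\mu}\nabla_\lambda\nabla_\mu\nabla^\nu-2\,g^{\lambda\mu}\nabla_\lambda\nabla^\nu\nabla_\mu+\nabla^\nu g^{\lambda\mu}\nabla_\lambda\nabla_\mu
\]
with $\nabla^\nu=g^{\nu\rho}\nabla_\rho$, i.e. $W^{\nu\alpha_1\alpha_2\alpha_3}\nabla_{\alpha_1}\nabla_{\alpha_2}\nabla_{\alpha_3}$ with the components (\ref{wYM}), so the relations (\ref{YM}) are exactly the relations of $\cala(w,3)$.\\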
\noindent (f) {\sl The super Yang-Mills algebra \cite{ac-mdv:2007}}.
There is a ``super" version of the Yang-Mills algebra which is the cubic algebra $\tilde \cala$ generated by $g$ elements $S_\lambda\ (\lambda\in \{1,\dots,g\})$ with relations
\begin{equation}
g^{\lambda\mu}[S_\lambda,[S_\mu,S_\nu]_+]=0
\label{SYM}
\end{equation}
for $\nu\in\{1,\dots,g\}$, where the $g^{\lambda\mu}$ are as above and $[A,B]_+=AB+BA$.\\
This algebra is again regular of global dimension 3 and $\tilde \cala=\cala(\tilde w,3)$ where $\tilde w$ is the 4-linear form on $\mathbb K^g$ with components
\begin{equation}
\tilde W^{\alpha_1\alpha_2\alpha_3\alpha_4}=g^{\alpha_2\alpha_3}g^{\alpha_4\alpha_1}-g^{\alpha_1\alpha_2}g^{\alpha_3\alpha_4}
\label{wSYM}
\end{equation}
for $\alpha_k\in \{1,\dots,g\}$. This $\tilde w$ is 3-regular (and satisfies (iii')) and $Q_{\tilde w}=-\mbox{\rm 1\hspace {-.6em} l}$. Notice that the equations (\ref{SYM}) are equivalent to
\begin{equation}
[S_\lambda,g^{\mu\nu}S_\mu S_\nu]=0
\label{CS2}
\end{equation}
i.e. to the fact that $g^{\mu\nu}S_\mu S_\nu$ is central.\\
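This equivalence is immediate : expanding the anticommutator and using the symmetry of $g^{\lambda\mu}$, the two middle terms cancel and one gets
\[
g^{\lambda\mu}[S_\lambda,[S_\mu,S_\nu]_+]=g^{\lambda\mu}S_\lambda S_\mu S_\nu-S_\nu\,g^{\lambda\mu}S_\lambda S_\mu=[g^{\lambda\mu}S_\lambda S_\mu,S_\nu]
\]
which vanishes for all $\nu$ if and only if $g^{\mu\nu}S_\mu S_\nu$ is central.\\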
Before leaving this section, it is worth noticing that the Yang-Mills algebra is by its very definition the universal enveloping algebra of a graded Lie algebra. In the case $g=2$ this is an AS-regular algebra considered in \cite{art-sch:1987} which is the universal enveloping algebra of the graded 3-dimensional Lie algebra with basis ($\nabla_1,\nabla_2$) in degree 1 and $C$ in degree 2 with Lie bracket defined by
\[
[\nabla_1,\nabla_2]=C,\quad [\nabla_1,C]=0,\quad [\nabla_2,C]=0.
\]
In the case $g>2$, the Yang-Mills algebra has exponential growth.\\
Similar considerations apply to the super Yang-Mills algebra, where the above Lie algebra is replaced by a super Lie algebra.
\section{Homogeneous algebras}
The aim of this section is to describe properties of $N$-homogeneous algebras and to introduce and discuss the Koszul property \cite{ber:2001a}, \cite{ber-mdv-wam:2003}.
\subsection{Koszul duality}
Let $\cala\in {\mathbf H_N}{\mbox{\bf Alg}}$ be an $N$-homogeneous algebra, that is $\cala=A(E,R)$ with $R\subset E^{\otimes^N}$. One defines the (Koszul) {\sl dual} $\cala^!$ of $\cala$ to be the $N$-homogeneous algebra
\begin{equation}
\cala^!=A(E^\ast, R^\perp)
\label{Kd}
\end{equation}
where $R^\perp \subset E^{\ast\otimes^N}=(E^{\otimes^N})^\ast$ is the annihilator of $R$, i.e. the subspace
\[
R^\perp = \{\omega\in (E^{\otimes^N})^\ast\vert \omega(x)=0,\ \forall x\in R\}
\]
of $(E^{\otimes^N})^\ast$ identified with $E^{\ast\otimes^N}$
(there is a canonical identification of $(E^{\otimes^N})^\ast$ with $E^{\ast\otimes^N}$ since $E$ is finite-dimensional). One has canonically
\begin{equation}
(\cala^!)^!=\cala
\label{Kdd}
\end{equation}
and to any morphism $f:\cala\rightarrow \cala'=A(E',R')$ of $\mathbf{H_N}{\mbox{\bf Alg}}$ corresponds a morphism $f^!:\cala'{^!}\rightarrow \cala^!$ which is induced by
the transpose of the restriction $f \restriction E: E\rightarrow E'$ of $f$ to $E$. The correspondence $(\cala\mapsto \cala^!,f\mapsto f^!)$ defines a contravariant involutive functor $((f^!)^!=f)$.
\subsection{The Koszul $N$-complex $K(\cala)$}
Let $\cala=A(E,R)$ be an $N$-homogeneous algebra with dual $\cala^!=\oplus_n \cala^!_n$ and consider the dual vector spaces $\cala^{!\ast}_n$ of the $\cala^!_n$. One has
\begin{equation}
\left\{
\begin{array}{l}
\cala^{!\ast}_n=E^{\otimes^n}\ \ \text{for}\ \ n<N\\
\cala^{!\ast}_n=\cap_{r+s=n-N} E^{\otimes^r}\otimes R \otimes E^{\otimes^s}\ \ \text{for}\ \ n\geq N
\end{array}
\right.
\label{dstar}
\end{equation}
so that for any $n\in \mathbb N$ one has $\cala^{!\ast}_n\subset E^{\otimes^n}$. Let us define then the sequence of homomorphisms of (free) left $\cala$-modules
\begin{equation}
\dots \stackrel{d}{\rightarrow} \cala \otimes \cala^{!\ast}_{n+1} \stackrel{d}{\rightarrow} \cala\otimes \cala^{!\ast}_n\stackrel{d}{\rightarrow}\dots \stackrel{d}{\rightarrow} \cala\rightarrow 0
\label{KNC}
\end{equation}
where $d:\cala\otimes \cala^{!\ast}_{n+1}\rightarrow \cala\otimes \cala^{!\ast}_n$ is induced by the map
\[
a\otimes (e_0\otimes e_1 \otimes\dots \otimes e_n)\mapsto ae_0 \otimes (e_1\otimes \dots \otimes e_n)
\]
of $\cala\otimes E^{\otimes^{n-1}}$ into $\cala\otimes E^{\otimes^n}$. Then one has
\begin{equation}
d^N=0
\label{NCd}
\end{equation}
since $\cala^{!\ast}_n\subset R\otimes E^{\otimes^{n-N}}$ for $n\geq N$. Thus (\ref{KNC}) defines an $N$-complex which will be referred to as {\sl the Koszul $N$-complex of $\cala$} and denoted by $K(\cala)$.\\
As for any $N$-complex \cite{mdv:1998a}, one obtains from $K(\cala)$ a family $C_{p,r}(K(\cala))$ of ordinary complexes, called {\sl the contractions of} $K(\cala)$, by putting together alternately $p$ and $N-p$ arrows $d$ of $K(\cala)$. The complex $C_{p,r}(K(\cala))$ is defined as
\begin{equation}
\dots \stackrel{d^{N-p}}{\rightarrow} \cala\otimes \cala^{!\ast}_{Nk+r} \stackrel{d^p}{\rightarrow} \cala\otimes \cala^{!\ast}_{Nk-p+r} \stackrel{d^{N-p}}{\rightarrow} \cala\otimes \cala^{!\ast}_{N(k-1)+r}\stackrel{d^p}{\rightarrow} \dots
\label{Cpr}
\end{equation}
for $0\leq r<p\leq N-1$ \cite{ber-mdv-wam:2003}, (one verifies that all such complexes are exhausted by these couples $(p,r)$). For the homology of these complexes one has the following result \cite{ber-mdv-wam:2003}.
\begin{proposition}\label{CPR}
Let $\cala=A(E,R)$ be an $N$-homogeneous algebra with $N\geq 3$. Assume that $(p,r)$ is distinct from $(N-1,0)$ and that $C_{p,r}(K(\cala))$ is exact at degree $k=1$. Then $R=0$ or $R=E^{\otimes^N}$.
\end{proposition}
In other words, except for $C_{N-1,0}(K(\cala))$, a nontrivial acyclicity of $C_{p,r}(K(\cala))$ leads to the trivial algebras $\cala=T(E)$ or $\cala=T(E)^!$.
\subsection{Koszul complexes and Koszul property}
The last proposition points out the complex $C_{N-1,0}(K(\cala))$ which will be denoted by $\calk(\cala,\mathbb K)$ and referred to as {\sl the Koszul complex of $\cala$}. It coincides with the Koszul complex originally introduced in \cite{ber:2001a} without mention of the $N$-complex $K(\cala)$. Of course for a quadratic algebra $\cala$, i.e. for $N=2$, one has $K(\cala)=\calk (\cala,\mathbb K)$ and this coincides with the definition of \cite{pri:1970} (see also \cite{man:1987}, \cite{man:1988}).\\
An $N$-homogeneous algebra $\cala$ will be said to be a {\sl Koszul algebra} whenever its Koszul complex $\calk(\cala,\mathbb K)$ is acyclic in positive degrees, (i.e. $H_n(\calk (\cala,\mathbb K))=0$ for $n\geq 1$). This is the generalization given in
\cite{ber:2001a} of the definition of \cite{pri:1970} to $N$-homogeneous algebras. There are very good reasons explained in \cite{ber:2001a} for this generalization. We content ourselves here with observing that, among the contractions of $K(\cala)$, the Koszul complex $\calk(\cala,\mathbb K)$ is distinguished by the fact that it terminates as a projective resolution of $\mathbb K$. Indeed the presentation of $\cala=A(E,R)$ is equivalent to the exactness of the sequence
\[
\cala\otimes R \stackrel{d^{N-1}}{\rightarrow} \cala\otimes E \stackrel{d}{\rightarrow} \cala \stackrel{\varepsilon}{\rightarrow} \mathbb K \rightarrow 0
\]
as observed before and, on the other hand one has $\cala^{!\ast}_1=E$ and $\cala^{!\ast}_N=R$ so $\calk(\cala,\mathbb K)$ terminates as
\[
\dots \stackrel{d}{\rightarrow} \cala\otimes R \stackrel{d^{N-1}}{\rightarrow} \cala\otimes E \stackrel{d}{\rightarrow} \cala\rightarrow 0
\]
thus if $\cala$ is a Koszul algebra, one has a free resolution of $\mathbb K$ which is then in fact a minimal projective resolution of the trivial left $\cala$-module $\mathbb K$ given by
\begin{equation}
\calk(\cala,\mathbb K)\stackrel{\varepsilon}{\rightarrow} \mathbb K \rightarrow 0
\label{KRes}
\end{equation}
which is referred to as the Koszul resolution of (the left $\cala$-module) $\mathbb K$.\\
Notice that if $\cala$ is a regular algebra of global dimension 2 (resp. 3) then (\ref{res2}) (resp. (\ref{res3})) are the Koszul resolutions of $\mathbb K$ (with a slight abuse of language) so that $\cala$ is then a Koszul algebra as announced in Proposition \ref{R2-3}. One has the following result \cite{mdv-pop:2002}.
\begin{proposition}\label{PSKN}
Let $\cala$ be a Koszul $N$-homogeneous algebra. One has
\[
P_\cala(t)Q_\cala(t)=1
\]
where the series $Q_\cala(t)$ is defined by
\[
Q_\cala(t)=\sum_{n\in \mathbb N} ({\mbox{dim}} (\cala^!_{Nn})t^{Nn} - {\mbox{dim}} (\cala^!_{Nn+1})t^{Nn+1})
\]
and where $P_\cala(t)=\sum_n {\mbox{dim}}(\cala_n)t^n$
is the Poincaré series of $\cala$.
\end{proposition}
In fact the Koszul $N$-complex splits into sub-$N$-complexes for the total degree
\[
K(\cala)=\oplus K^{(n)}(\cala)
\]
which induces a splitting of the Koszul complex into finite-dimensional subcomplexes
\[
\calk(\cala,\mathbb K)=\oplus \calk^{(n)}(\cala,\mathbb K)
\]
and the proposition follows from the Euler-Poincaré formula applied to each component.\\
Notice that in the quadratic case, one has $Q_\cala(t)=P_{\cala^!}(-t)$. \\
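For instance, for the algebra $\cala$ of polynomial functions on $\mathbb K^g$, whose Koszul dual is the exterior algebra $\wedge\mathbb K^g$, one has $P_\cala(t)=(1-t)^{-g}$ and $P_{\cala^!}(-t)=(1+(-t))^g=(1-t)^g$, so that
\[
P_\cala(t)Q_\cala(t)=(1-t)^{-g}(1-t)^g=1
\]
as it must be since this algebra is Koszul.\\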
If $\cala$ is a Koszul $N$-homogeneous algebra, one has clearly
\begin{equation}
\cala^!_{Nn}\simeq {\mbox{Ext}}^{2n}_\cala(\mathbb K,\mathbb K),\cala^!_{Nn+1}\simeq {\mbox{Ext}}^{2n+1}_\cala(\mathbb K,\mathbb K)
\label{ExtN}
\end{equation}
and therefore by setting
\begin{equation}
Y_\cala(t)=\sum_{n\in \mathbb N} ({\mbox{dim}} ({\mbox{Ext}}^{2n}_\cala(\mathbb K,\mathbb K))t^{Nn}-{\mbox{dim}}({\mbox{Ext}}^{2n+1}_\cala(\mathbb K,\mathbb K))t^{Nn+1})
\label{PYN}
\end{equation}
one has $P_\cala(t)Y_\cala(t)=1$. In \cite{kri:2007} it is shown that, conversely, if $\cala$ is an $N$-homogeneous algebra such that one has
\begin{equation}
P_\cala(t)Y_\cala(t)=1
\label{NumK}
\end{equation}
then $\cala$ is Koszul. This gives an interesting numerical criterion for Koszulity which has to be compared with the fact that there are $N$-homogeneous algebras $\cala$ satisfying $P_\cala(t)Q_\cala(t)=1$ which are not Koszul, (of course (\ref{ExtN}) then fails).\\
In (\ref{KNC}) the factors $\cala$ are considered as left $\cala$-modules. By considering $\cala$ as a right $\cala$-module and by exchanging the factors, one obtains an $N$-complex $\tilde K(\cala)$ of right $\cala$-modules.
\begin{equation}
\dots\stackrel{\tilde d}{\rightarrow} \cala^{!\ast}_{n+1} \otimes \cala \stackrel{\tilde d}{\rightarrow} \cala^{!\ast}_n\otimes \cala \stackrel{\tilde d}{\rightarrow}\dots \stackrel{\tilde d}{\rightarrow} \cala\rightarrow 0
\label{RNC}
\end{equation}
where $\tilde d:\cala^{!\ast}_{n+1}\otimes \cala\rightarrow \cala^{!\ast}_n\otimes \cala$ is induced by the mapping $(e_1\otimes \dots\otimes e_{n+1})\otimes a \mapsto (e_1\otimes \dots \otimes e_n)\otimes e_{n+1}a$ of $E^{\otimes^{n+1}}\otimes \cala$ into $E^{\otimes^n}\otimes \cala$. The fact that $\tilde d^N=0$ follows from $\cala^{!\ast}_n\subset E^{\otimes^{n-N}}\otimes R$ for $n\geq N$. Let us consider the sequences $(L,R)$
\begin{equation}
\dots \stackrel{d_L,d_R}{\rightarrow} \cala\otimes \cala^{!\ast}_{n+1} \otimes \cala \stackrel{d_L,d_R}{\rightarrow}\cala \otimes \cala^{!\ast}_n \otimes \cala\rightarrow \dots \stackrel{d_L,d_R}{\rightarrow} \cala\otimes\cala\rightarrow 0
\label{BBNC}
\end{equation}
where $d_L=d\otimes I$ and $d_R=I\otimes \tilde d$, $I$ being the identity mapping of $\cala$ onto itself. One has $d^N_L=d^N_R=0$ and $d_L$ and $d_R$ are homomorphisms of $(\cala,\cala)$-bimodules, i.e. of left $\cala\otimes \cala^{opp}$-modules. The two $N$-differentials $d_L$ and $d_R$ commute so one has
\[
(d_L-d_R)\left(\sum^{N-1}_{p=0}d^p_L d^{N-p-1}_R\right)=\left(\sum^{N-1}_{p=0}d^p_L d^{N-p-1}_R\right)(d_L-d_R)=d^N_L-d^N_R=0.
\]
It follows that one defines a complex of free $\cala\otimes \cala^{opp}$-modules $\calk(\cala,\cala)$ by setting
\begin{equation}
\left\{
\begin{array}{l}
\calk_{2m}(\cala,\cala)=\cala\otimes \cala^{!\ast}_{Nm}\otimes \cala\\
\\
\calk_{2m+1}(\cala,\cala)=\cala\otimes \cala^{!\ast}_{Nm+1}\otimes \cala
\end{array}
\right.
\label{KCB1}
\end{equation}
with differential $\delta'$ defined by
\begin{equation}
\left\{
\begin{array}{l}
\delta'=d_L-d_R:\calk_{2m+1}(\cala,\cala)\rightarrow \calk_{2m}(\cala,\cala)\\
\\
\delta'=\sum^{N-1}_{p=0}d^p_Ld^{N-p-1}_R:\calk_{2(m+1)}(\cala,\cala)\rightarrow \calk_{2m+1}(\cala,\cala)
\end{array}
\right.
\label{KCB2}
\end{equation}
which will be referred to as {\sl the bimodule Koszul complex of} $\cala$.\\
It turns out that $\calk(\cala,\cala)$ is acyclic in positive degrees if and only if $\calk(\cala,\mathbb K)$ is acyclic in positive degrees that is if and only if $\cala$ is a Koszul algebra. On the other hand one has the obvious exact sequence of bimodules
\[
\cala\otimes E\otimes \cala \stackrel{\delta'}{\rightarrow} \cala\otimes \cala \stackrel{m}{\rightarrow} \cala \rightarrow 0
\]
where $m$ denotes the product of $\cala$. This means that $H_0(\calk(\cala,\cala))=\cala$ and therefore whenever $\cala$ is Koszul one has a free resolution
\[
\calk(\cala,\cala)\stackrel{m}{\rightarrow}\cala\rightarrow 0
\]
of the left $\cala\otimes \cala^{opp}$-module $\cala$ which is a minimal projective resolution of $\cala$ and will be referred to as the Koszul resolution of $\cala$.
\subsection{Small complex and Poincaré duality for Koszul algebras}
Let $\cala$ be a $N$-homogeneous Koszul algebra and let $\calm$ be a $(\cala,\cala)$-bimodule considered as a right $\cala\otimes\cala^{opp}$-module. Then by interpreting the Hochschild homology $H(\cala,\calm)$ of $\cala$ with values in $\calm$ as ${\mbox{Tor}}^{\cala\otimes \cala^{opp}}(\calm,\cala)$ \cite{car-eil:1973}, one sees that the homology of the complex $\calm \otimes_{\cala\otimes \cala^{opp}}\calk(\cala,\cala)$ is the $\calm$-valued Hochschild homology of $\cala$. We shall refer to this latter complex as the {\sl small Hochschild complex} of the Koszul algebra $\cala$ with coefficients in $\calm$ and denote it by ${\mathcal S}(\cala,\calm)$. It reads
\begin{equation}
\dots \stackrel{\delta}{\rightarrow} \calm \otimes \cala^{!\ast}_{N(m+1)}\stackrel{\delta}{\rightarrow}\calm \otimes \cala^{!\ast}_{Nm+1}\stackrel{\delta}{\rightarrow}\calm\otimes \cala^{!\ast}_{Nm}\stackrel{\delta}{\rightarrow} \dots
\label{SKC}
\end{equation}
where $\delta$ is obtained from $\delta'$ by applying the factors $d_L$ to the right of $\calm$ and the factors $d_R$ to the left of $\calm$.\\
By construction the lengths of the complexes $\calk(\cala,\mathbb K)$ and $\calk(\cala,\cala)$ coincide. Assume that $\cala$ is a Koszul algebra, then this implies that the projective dimension of the trivial $\cala$-module $\mathbb K$ coincides with the Hochschild dimension of $\cala$ which is a particular case of the general result of
\cite{ber:2005}.\\
The Koszul complex $\calk(\cala,\mathbb K)$ is a chain complex since its differential is of degree -1, the same is true for $\calk(\cala,\cala)$. By applying the functor ${\mbox{Hom}}_\cala(\bullet, \cala)$ to the chain complex of free left $\cala$-modules $\calk(\cala,\mathbb K)$ one obtains the cochain complex ${\mathcal L}(\cala,\mathbb K)$ of free right $\cala$-modules
\[
0\rightarrow {\mathcal L}^0(\cala,\mathbb K)\rightarrow \dots \rightarrow {\mathcal L}^n(\cala,\mathbb K)\rightarrow \dots
\]
where ${\mathcal L}^n(\cala,\mathbb K)= {\mbox{Hom}}_\cala(\calk_n(\cala,\mathbb K),\cala)$.
Assume that $\cala$ is Koszul of global dimension $D$. Then ${\mathcal L}^n(\cala,\mathbb K)=0$ for $n>D$ and $\cala$ is Gorenstein if and only if $H^n({\mathcal L}(\cala,\mathbb K))=0$ for $n<D$ and $H^D({\mathcal L}(\cala,\mathbb K))=\mathbb K$. When $\cala$ is Koszul of global dimension $D$ and Gorenstein, this implies a precise form of the Poincaré duality between the Hochschild homology and the Hochschild cohomology of $\cala$, \cite{ber-mar:2006},
\cite{vdb:1998}, \cite{vdb:2002}. In the case of a regular algebra $\cala=\cala(w,N)$ of global dimension 3, it reads for an $\cala$-bimodule $\calm$
\begin{equation}
H_n(\cala,\calm)=H^{3-n}(\cala,\calm)
\label{HPd3}
\end{equation}
for $0\leq n\leq 3$ when $Q_w=\mbox{\rm 1\hspace {-.6em} l}$, (when $Q_w\not = \mbox{\rm 1\hspace {-.6em} l}$ it induces an automorphism $\sigma_w$ of $\cala$ and one has to twist by $\sigma_w$ the left multiplication of $\calm$ by $\cala$ on the right-hand side of (\ref{HPd3})).\\
The complex ${\mathcal L}(\cala, \mathbb K)$ is also a contraction of a natural $N$-complex $L(\cala)$. This $N$-complex $L(\cala)$ is the cochain $N$-complex of free right $\cala$-modules obtained by applying the functor ${\mbox{Hom}}_\cala(\bullet,\cala)$ to the Koszul $N$-complex $K(\cala)$ (which is a chain $N$-complex of free left $\cala$-modules). The right $\cala$-module $L^n(\cala)$ identifies canonically with $\cala^!_n\otimes \cala$ while the $N$-differential of $L(\cala)$ is then the left multiplication by $x^\ast_\lambda\otimes x^\lambda$ in $\cala^! \otimes \cala$ where $(x^\ast_\lambda)$ is the dual basis of $(x^\lambda)$ ($E=\oplus_\lambda \mathbb K x^\lambda$). One has ${\mathcal L}(\cala,\mathbb K)=C_{1,0}(L(\cala))$, i.e. ${\mathcal L}^0(\cala,\mathbb K)=\cala=L^0(\cala),{\mathcal L}^1(\cala,\mathbb K)=L^N(\cala)$, etc.
\subsection{Examples of Koszul algebras}
All regular algebras of global dimensions $D=2$ and $D=3$ are Koszul so in particular the examples of regular algebras of Sections 2 and 3 are examples of Koszul algebras.
We shall describe regular Koszul algebras of higher global dimension $D$ in Section 5. Let us give here some examples of Koszul algebras which are generically not regular.\\
\noindent (a) {\sl Koszul duals of quadratic algebras}.
It is well known and not hard to show that if $\cala$ is a quadratic algebra, then its Koszul dual $\cala^!$ is Koszul if and only if $\cala$ is Koszul. Even if $\cala$ is regular, $\cala^!$ is generically not regular.\\
For instance the exterior algebra $\wedge \mathbb K^g$ is the Koszul dual of the algebra of polynomial functions on $\mathbb K^g$ which is regular and Koszul of global dimension $g$, however $\wedge\mathbb K^g$ is not of finite global dimension.\\
It is worth noticing here that if $\cala$ is a $N$-homogeneous algebra with $N>2$, then the Koszulity of $\cala$ does not imply the Koszulity of its Koszul dual $\cala^!$, (this is due to the jumps in degrees in the Koszul resolution). For instance the Koszul dual $\cala^!$ of the Yang-Mills algebra $\cala$ (\S 3.3, example (e)) is such that $P_{\cala^!}(t)Q_{\cala^!}(t)\not=1$ (by direct computation) so it is not Koszul in view of Proposition \ref{PSKN}.\\
\noindent (b) {\sl Degenerate bilinear form} \cite{ber:2008}. In the following $b$ is a bilinear form on $\mathbb K^g$ with $g\geq 2$, $B=(B_{\mu\nu})$ is the matrix of components $B_{\mu\nu}=b(e_\mu,e_\nu)$ of $b$ in the canonical basis of $\mathbb K^g$ and $\cala=\cala(b,2)$ is the quadratic algebra generated by $g$ elements $x^\lambda$ with the relation
\[
B_{\mu\nu} x^\mu x^\nu=0
\]
i.e. we generalize the notation of Section 2 to cases where $b$ can be degenerate. In
\cite{ber:2008} one finds the following results (Propositions 5.4 and 5.5 in \cite{ber:2008}) which contain Theorem \ref{REG2}.
\begin{proposition}\label{Roland}
Assume that $b\not=0$, then $\cala=\cala(b,2)$ has the following properties :\\
1) $\cala$ is Koszul,\\
2) $\cala$ has global dimension $D=2$ except in the case where $b$ is symmetric of rank 1 in which case $D=\infty$,\\
3) $\cala$ is Gorenstein if and only if $b$ is nondegenerate.
\end{proposition}
Thus for $b$ degenerate one has a lot of examples of Koszul algebras which are not regular. In \cite{ber:2008} there is a similar statement for $N$-homogeneous algebras with one relation ($r=1$) which, although slightly more involved, permits the construction of examples (see e.g. Example (c) in the next section \S 5.3).\\
(c) {\sl The self-duality algebra} \cite{ac-mdv:2002b}. In the case $g=4$ and $g^{\lambda\mu}=\delta^{\lambda\mu}$, the Yang-Mills algebra (Example (e) in \S\ 3.3) admits the 2 nontrivial quotients $\cala^{(+)}$ and $\cala^{(-)}$ where $\cala^{(\varepsilon)}$ $(\varepsilon=\pm)$ is the quadratic algebra generated by the 4 elements $\nabla_\lambda$ ($\lambda\in\{1,2,3,4\}$) with relations
\begin{equation}
[\nabla_4,\nabla_k]=\varepsilon[\nabla_\ell, \nabla_m]
\label{SD}
\end{equation}
for any cyclic permutation $(k,\ell,m)$ of (1,2,3). Let us fix $\varepsilon=+$ and call $\cala^{(+)}$ the self-duality algebra (the study of $\cala^{(-)}$ is similar). In
\cite{ac-mdv:2002b} it was shown that this algebra is Koszul of global dimension $D=2$ and that the Koszul resolution reads
\begin{equation}
0\rightarrow (\cala^{(+)})^3\rightarrow (\cala^{(+)})^4\rightarrow \cala^{(+)}\stackrel{\varepsilon}{\rightarrow}\mathbb K \rightarrow 0
\label{resSD}
\end{equation}
from which it follows that
\begin{equation}
P_{\cala^{(+)}}(t)=\frac{1}{(1-t)(1-3t)}
\label{poincaSD}
\end{equation}
so $\cala^{(+)}$ has exponential growth and is not Gorenstein.\\
It follows from the definition that $\cala^{(+)}$ is the universal enveloping algebra of a Lie algebra which is the semi-direct product of the free Lie algebra $L(\nabla_1,\nabla_2,\nabla_3)$ by the derivation $\delta$ given by
\begin{equation}
\delta(\nabla_k)=[\nabla_\ell, \nabla_m]
\label{DerL}
\end{equation}
for any cyclic permutation $(k,\ell,m)$ of (1,2,3). Formula (\ref{poincaSD}) as well as all the above properties of $\cala^{(+)}$ also follow directly from this structure.\\
\noindent (d) {\sl The super self-duality algebra} \cite{ac-mdv:2007}. In a similar way as in the last example, for $g=4$ and $g^{\lambda\mu}=\delta^{\lambda\mu}$, the super Yang-Mills algebra (Example (f) in \S\ 3.3) admits the 2 nontrivial quotients $\tilde \cala^{(+)}$ and $\tilde\cala^{(-)}$ where $\tilde\cala^{(\varepsilon)}$ ($\varepsilon=\pm$) is the quadratic algebra generated by the 4 elements $S_\lambda$ ($\lambda\in \{1,2,3,4\}$) with relations
\begin{equation}
i[S_4,S_k]_+=\varepsilon[S_\ell,S_m]
\label{SSD}
\end{equation}
for any cyclic permutation $(k,\ell,m)$ of (1,2,3). Let us fix $\varepsilon=+$ and call $\tilde\cala^{(+)}$ the super self-duality algebra. This algebra is again a Koszul algebra of global dimension 2 which is not Gorenstein and has Poincaré series given by
\begin{equation}
P_{\tilde \cala^{(+)}}(t)=\frac{1}{(1-t)(1-3t)}
\label{PSSD}
\end{equation}
so it also has exponential growth. This algebra is directly related to the 4-dimensional Sklyanin algebra (see \cite{ac-mdv:2007}).
\section{Arbitrary global dimension $D$}
In the previous sections, we have seen that the regular algebras of global dimensions $D=2$ and $D=3$ are $N$-homogeneous (with $N=2$ for $D=2$) and Koszul. This very desirable property makes it possible to write explicit canonical resolutions. On the other hand, for the moment the Koszul property can only be formulated for $N$-homogeneous algebras. This is why in this section we shall restrict attention to Koszul homogeneous algebras, and our aim is then to formulate the generalization of Theorem \ref{REG3} for arbitrary global dimension $D$. Notice however that for global dimensions $D\geq 4$, regularity does not imply $N$-homogeneity. It is worth mentioning here that for $D=4$ the AS-regular algebras, i.e. the regular algebras with polynomial growth, have been recently classified \cite{lu-pal-wu-zha:2004}.\\
We shall need a class of $N$-homogeneous algebras associated with preregular multilinear forms that we now describe.
\subsection{Homogeneous algebras associated to multilinear forms}
In this subsection $m$ and $N$ are integers with $m\geq N\geq 2$ and $w$ is a preregular $m$-linear form on $\mathbb K^g$ ($g\geq 2$) with components $W_{\lambda_1\dots\lambda_m}=w(e_{\lambda_1},\dots,e_{\lambda_m})$ in the canonical basis $(e_\lambda)$ of $\mathbb K^g$. Let $\cala=\cala(w,N)$ be the $N$-homogeneous algebra generated by the elements $x^\lambda$ ($\lambda\in \{1,\dots,g\}$) with the relations
\begin{equation}
W_{\lambda_1\dots\lambda_{m-N}\mu_1\dots \mu_N}x^{\mu_1}\dots x^{\mu_N}=0
\label{ReD}
\end{equation}
for $\lambda_k\in \{1,\dots,g\}$. Thus one has $\cala=A(E,R)$ with $E=\oplus_\lambda\mathbb K x^\lambda$ and
\[
R=\sum_{\lambda_k} \mathbb K\, W_{\lambda_1\dots \lambda_{m-N}\mu_1\dots \mu_N}\,x^{\mu_1}\otimes \dots\otimes x^{\mu_N}\subset E^{\otimes^N}.
\]
Notice that this generalizes the definitions of Section \ref{globdim2} (which is the case $m=N=2$) and Section \ref{globdim3} (which is the case $m=N+1$).\\
Let us define the subspaces ${\mathcal W}_n\subset E^{\otimes^n}$ for $m\geq n\geq 0$ by
\begin{equation}
\left\{
\begin{array}{l}
{\mathcal W}_n=E^{\otimes^n}\ \ \ \text{for}\ \ \ N-1\geq n\geq 0\\
{\mathcal W}_n=\sum_{\lambda_k}\mathbb K W_{\lambda_1\dots \lambda_{m-n}\mu_1\dots\mu_n}x^{\mu_1}\otimes \dots \otimes x^{\mu_n}\ \ \ \text{for}\ \ \ m\geq n\geq N
\end{array}
\right.
\label{SubK}
\end{equation}
so in particular ${\mathcal W}_1=E$ and ${\mathcal W}_N=R$. The twisted cyclicity of $w$ (property (ii) of \S 3.1) and (\ref{dstar}) imply the following proposition.
\begin{proposition}\label{SubNC}
The sequence
\begin{equation}
0\rightarrow \cala\otimes {\mathcal W}_m\stackrel{d}{\rightarrow} \cala\otimes {\mathcal W}_{m-1}\stackrel{d}{\rightarrow} \dots \stackrel{d}{\rightarrow} \cala\rightarrow 0
\label{SubNK}
\end{equation}
is a sub-$N$-complex of the Koszul $N$-complex $K(\cala)$ of $\cala$.
\end{proposition}
In fact one has ${\mathcal W}_n \subset \cala^{!\ast}_n$ and $d(\cala\otimes {\mathcal W}_{n+1})\subset \cala\otimes {\mathcal W}_n$. In particular one has ${\mathcal W}_m=\mathbb K w\subset \cala^{!\ast}_m$ so $w$ is a linear form on $\cala^!_m$. We define then the linear form $\omega_w$ on the algebra $\cala^!$ by setting
\begin{equation}
\omega_w =w\circ p_m
\label{omegaw}
\end{equation}
where $p_m:\cala^!\rightarrow \cala^!_m$ is the canonical projection onto the degree $m$. With $E=\oplus_\lambda \mathbb K x^\lambda$, $w$ is canonically a $m$-linear form on $E^\ast$ and $Q_w$ an element of $GL(E^\ast)$. With these identifications one has the following theorem \cite{mdv:2007}.
\begin{theorem}\label{MOD}
The element $Q_w$ of $GL(E^\ast)$ induces an automorphism $\sigma_w$ of the $N$-homogeneous algebra $\cala^!=A(E^\ast, R^\perp)$ and one has
\begin{equation}
\omega_w(xy)=\omega_w(\sigma_w(y)x)
\label{pmod}
\end{equation}
for any $x,y \in \cala^!$. The subset of $\cala^!$
\[
{\mathcal I}=\{ y\in \cala^!\vert \omega_w(xy)=0,\ \ \forall x\in \cala^!\}
\]
is a two-sided ideal of $\cala^!$ and the quotient algebra ${\mathcal F}(w,N)=\cala^!/{\mathcal I}$ equipped with the linear form induced by $\omega_w$ is a graded Frobenius algebra.
\end{theorem}
To prove this theorem, one first verifies by using the $Q_w$-invariance of $w$ that one has $Q^{\otimes^N}_w R^\perp \subset R^\perp$ which implies the existence of $\sigma_w$. Then (\ref{pmod}) is just a translation of the $Q_w$-cyclicity of $w$. By definition ${\mathcal I}$ is a left ideal and (\ref{pmod}) implies that it is also a right ideal. The quotient ${\mathcal F}=\cala^!/{\mathcal I}$ is a finite-dimensional graded algebra and the pairing induced by $(x,y)\mapsto \omega_w(xy)$ is nondegenerate and is a Frobenius pairing on ${\mathcal F}$.
\begin{corollary}\label{AUT}
Considered as an element of $GL(E)$, the transpose $Q^t_w=Q^w$ of $Q_w$ induces an automorphism $\sigma^w$ of the $N$-homogeneous algebra $\cala=A(E,R)$.
\end{corollary}
Let us end this subsection by noting that, at this level of generality and for $N=2$ (i.e. in the quadratic case), the multilinear form $w$ induces a (twisted) noncommutative $m$-form for $\cala$. For this let $^w\cala$ be the $(\cala,\cala)$-bimodule which coincides with $\cala$ as right $\cala$-module and is such that the structure of left $\cala$-module is given by the left multiplication by $(-1)^{(m-1)n}(\sigma^w)^{-1}(a)$ for $a\in \cala_n$. One has the following result \cite{mdv:2007}.
\begin{proposition}\label{FORM}
In the case $N=2$, that is for $\cala=\cala(w,2)$, $\mbox{\rm 1\hspace {-.6em} l} \otimes w$ is canonically a nontrivial $^w\cala$-valued Hochschild $m$-cycle on $\cala$.
\end{proposition}
In this statement, $\mbox{\rm 1\hspace {-.6em} l}$ is interpreted as an element of $^w\cala$ while $w\in E^{\otimes^m}$ is interpreted as an element of $\cala^{\otimes^m}$ ($E=\cala_1\subset \cala$) so that $\mbox{\rm 1\hspace {-.6em} l} \otimes w$ is a $^w\cala$-valued Hochschild $m$-chain.
\subsection{General results for Koszul-Gorenstein algebras}
For the $N$-homo\-geneous algebras which are Koszul of finite global dimension $D$ and which are Gorenstein, (a particular class of regular algebras if $D\geq 4$), one has the following theorem \cite{mdv:2007}.
\begin{theorem}\label{KGD}
Let $\cala$ be an $N$-homogeneous algebra which is Koszul of finite global dimension $D$ and Gorenstein. Then $\cala=\cala(w,N)$ for some preregular $m$-linear form $w$ on $\mathbb K^g$, for some $g$. If $N\geq 3$ then $m=Np+1$ and $D=2p+1$ for some $p\geq 1$ while for $N=2$ one has $m=D$.
\end{theorem}
For the proof we refer to \cite{mdv:2007}.\\
Under the assumptions of Theorem \ref{KGD} the Koszul resolution of the trivial left $\cala$-module $\mathbb K$ reads
\[
0\rightarrow \cala\otimes {\mathcal W}_m \stackrel{d}{\rightarrow} \cala \otimes {\mathcal W}_{m-1} \stackrel{d^{N-1}}{\rightarrow} \dots \stackrel{d}{\rightarrow}\cala\otimes {\mathcal W}_N \stackrel{d^{N-1}}{\rightarrow}\cala\otimes E\stackrel{d}{\rightarrow}\cala \stackrel{\varepsilon}{\rightarrow}\mathbb K \rightarrow 0
\]
or, by setting
\begin{equation}
\left\{
\begin{array}{l}
\nu_N(2k)=Nk\\
\nu_N(2k+1)=Nk+1
\end{array}
\right.
\label{nuN}
\end{equation}
for $k\in \mathbb N$,
\begin{equation}
0\rightarrow \cala\otimes {\mathcal W}_{\nu_N(D)}\stackrel{d'}{\rightarrow} \dots \stackrel{d'}{\rightarrow} \cala\otimes {\mathcal W}_{\nu_N(k)} \stackrel{d'}{\rightarrow} \cala\otimes {\mathcal W}_{\nu_N(k-1)}\stackrel{d'}{\rightarrow}\dots \stackrel{d'}{\rightarrow}\cala\stackrel{\varepsilon}{\rightarrow}\mathbb K\rightarrow 0
\label{KResD}
\end{equation}
where $d'$ is defined by
\begin{equation}
\left\{
\begin{array}{l}
d'=d^{N-1}:\cala\otimes {\mathcal W}_{\nu_N(2k)}\rightarrow \cala\otimes {\mathcal W}_{\nu_N(2k-1)}\\
d'=d : \cala\otimes {\mathcal W}_{\nu_N(2k+1)}\rightarrow \cala\otimes {\mathcal W}_{\nu_N(2k)}
\end{array}
\right.
\label{diffW}
\end{equation}
for $k\in \mathbb N$.\\
Notice that one has
\begin{equation}
{\mbox{dim}}({\mathcal W}_{\nu_N(k)})={\mbox{dim}} ({\mathcal W}_{\nu_N(D-k)})
\label{PoinDual}
\end{equation}
for $0\leq k\leq D$. In particular $\cala\otimes {\mathcal W}_{\nu_N(D)}=\cala\otimes w$ so one sees that $\mbox{\rm 1\hspace {-.6em} l} \otimes w$ is the generator of the top module of the Koszul resolution which again corresponds to the interpretation of $\mbox{\rm 1\hspace {-.6em} l}\otimes w$ as a volume form.\\
It is worth noticing here that it has already been shown in \cite{bon-pol:1994} that the quadratic algebras which are Koszul and regular are determined by multilinear forms ($D$-linear for global dimension $D$) which correspond to volume forms in this noncommutative setting.\\
Let us come back to a more general situation. Assume that $D$ and $N$ are given integers with $D\geq 2$ and $N\geq 2$ and that $N=2$ whenever $D$ is an even integer. Let then $w$ be a preregular $m$-linear form on $\mathbb K^g$ with $m=D$ for $N=2$ and $m=Np+1$ for $D=2p+1$ and consider the $N$-homogeneous algebra $\cala=\cala(w,N)$. The complex
\begin{equation}
0\rightarrow \cala \otimes {\mathcal W}_{\nu_N(D)}\stackrel{d'}{\rightarrow} \dots \stackrel{d'}{\rightarrow} \cala\otimes {\mathcal W}_{\nu_N(k)}\stackrel{d'}{\rightarrow} \dots \stackrel{d'}{\rightarrow} \cala\rightarrow 0
\label{CWD}
\end{equation}
is still well defined, with $\nu_N$ as in (\ref{nuN}) and $d'$ as in (\ref{diffW}), and is a subcomplex of the Koszul complex $\calk(\cala,\mathbb K)$ of $\cala$ in view of Proposition \ref{SubNC}. It is clear that if this complex is acyclic in positive degrees, it coincides with the Koszul complex of $\cala$ and that $\cala$ is then Koszul of global dimension $D$ and Gorenstein. Thus as remarked in \cite{boc-sch-wem:2008} one has the following result which gives a sort of converse of Theorem \ref{KGD}.
\begin{proposition}\label{CKGD}
Let $\cala=\cala(w,N)$ be as above then $\cala$ is Koszul of global dimension $D$ and Gorenstein if and only if the complex $\mathrm{(\ref{CWD})}$ is acyclic in positive degrees.
\end{proposition}
A weaker assumption is that the complex (\ref{CWD}) coincides with the Koszul complex. In the case where $D=3$, one has the following proposition \cite{mdv:2007}.
\begin{proposition}\label{Eq3R}
Let $w$ be a preregular $(N+1)$-linear form on $\mathbb K^g$ and let $\cala=\cala(w,N)$ then the following conditions are equivalent.\\
\noindent $\mathrm{(a)}$ $\cala^{!\ast}_{N+1}=\mathbb K w$.\\
\noindent $\mathrm{(b)}$ The complex $\mathrm{(\ref{CWD})}$ coincides with the Koszul complex $\calk(\cala,\mathbb K)$ of $\cala$.\\
\noindent $\mathrm{(c)}$ $w$ is 3-regular.\\
\end{proposition}
Let us consider $\cala=\cala(w,N)$ with Koszul dual $\cala^!=\oplus_n \cala^!_n$ and let us define the graded algebra
\begin{equation}
\cala'=\cala'(w,N)=\oplus_n \cala'_n
\label{Aprime1}
\end{equation}
to be $\cala^!$ for $N=2$ and to be defined for $N>2$ by
\begin{equation}
\cala'_n=\cala^!_{\nu_N(n)}
\label{Aprime2}
\end{equation}
for $n\in \mathbb N$ with product $(x,y)\mapsto x\bullet y$ defined by
\begin{equation}
x\bullet y = \pi(xy)
\label{Aprime3}
\end{equation}
where $\pi:\cala^!\rightarrow \cala'$ is the canonical projection of $\cala^!$ onto $\cala'=\oplus_n \cala^!_{\nu_N(n)}\subset \cala^!$ defined by setting $\pi(\cala^!_k)=0$ whenever $k$ is not in $\nu_N(\mathbb N)$. Thus this product is defined for two homogeneous elements $x$ and $y$ by
\[
x\bullet y=0
\]
whenever $x$ and $y$ are both of odd degree and by
\[
x\bullet y = xy\ \ (\text{the product in}\ \cala^!)
\]
otherwise. It is clear that this product is associative. One has the following result.
\begin{theorem}\label{APR}
Assume that $D,N$ and $w$ are as above, that is $N=2$ for $D$ even and $w$ is a preregular $m$-linear form on $\mathbb K^g$ with $m=D$ for $N=2$ and $m=Np+1$ for $D=2p+1$. Then the following conditions are equivalent.\\
\noindent $\mathrm{(a)}$ $\cala'(w,N)$ equipped with the linear form induced by $\omega_w$ is a Frobenius algebra.\\
\noindent $\mathrm{(b)}$ The complex $\mathrm{(\ref{CWD})}$ coincides with the Koszul complex $\calk(\cala,\mathbb K)$ of $\cala(w,N)$.\\
\end{theorem}
\noindent \underbar{Proof}.
The proof of this theorem is almost tautological since conditions (a) and (b) are both equivalent to ${\mathcal W}_{\nu_N(n)}=\cala^{!\ast}_{\nu_N(n)}={\cala'_n}^\ast$ for $n\in \mathbb N$. $\square$\\
This is of course directly inspired by \cite{ber-mar:2006} and implies Theorem 1.2 of
\cite{ber-mar:2006} since when $\cala(w,N)$ is Koszul one has $\cala^!_{\nu_N(n)}={\mbox{Ext}}^n_\cala(\mathbb K, \mathbb K)$ and the product of $\cala'(w,N)$ is essentially the Yoneda product (\cite{ber-mar:2006}, Proposition 3.1). Let us recall Theorem 1.2 of \cite{ber-mar:2006}, which is an important result.
\begin{theorem}\label{BerM}
Let $\cala$ be a $N$-homogeneous algebra which is Koszul of finite global dimension. Then $\cala$ is Gorenstein if and only if the Yoneda algebra $E(\cala)={\mbox{Ext}}_\cala(\mathbb K,\mathbb K)$ is Frobenius.
\end{theorem}
As pointed out before this follows from Theorem \ref{KGD} and Theorem \ref{APR} by using the fact that one has $\cala'=E(\cala)$ whenever $\cala$ is Koszul.\\
\noindent \underbar{Remarks}.\\
1) One sees that, with $D,N$ and $w$ as in Theorem \ref{APR}, one has two natural Frobenius algebras associated with $\cala(w,N)$. The first one is the algebra ${\mathcal F}(w,N)=\cala^!/{\mathcal I}$ of Theorem \ref{MOD}, the other one is the algebra ${\mathcal F}'(w,N)=\cala'/{\mathcal I}'$ where
\[
{\mathcal I}'=\{ y\in \cala' \vert \omega_w(x\bullet y)=0,\ \ \forall x\in \cala'\}
\]
is a two-sided ideal since $\sigma_w$ induces an automorphism of $\cala'$ satisfying $\omega_w(x\bullet y)=\omega_w(\sigma_w(y)\bullet x)$. These two Frobenius algebras coincide for $N=2$ but are different for $N>2$.\\
2) $D,N$ and $w$ being as in Theorem \ref{APR}, it is tempting in view of Proposition \ref{Eq3R} to say that $w$ is $D$-{\sl regular} whenever the equivalent conditions (a) and (b) are satisfied. In fact Condition (a) contains several nondegeneracy conditions. This notion involves both $D$ and $N$ as above.
\subsection{Examples}
Of course one has already all the examples of Section 3. Let us give two quadratic examples and a class of $N$-homogeneous examples.\\
\noindent (a) {\sl The extended 4-dimensional Sklyanin algebra} \cite{ac-mdv:2002a}, \cite{ac-mdv:2003}, \cite{ac-mdv:2008}.
In connection with a problem of $K$-homology, the following quadratic algebra $\cala_{\mathbf u}$ has been introduced in \cite{ac-mdv:2002a} and analyzed in detail in \cite{ac-mdv:2003}, \cite{ac-mdv:2008}. The algebra $\cala_{\mathbf u}$ is the quadratic algebra generated by 4 elements $x^\lambda$ ($\lambda\in \{0,1,2,3\}$) with relations
\begin{equation}
\cos (\varphi_0-\varphi_k)[x^0,x^k]=i\sin (\varphi_\ell -\varphi_m)[x^\ell,x^m]_+
\label{ASk1}
\end{equation}
\begin{equation}
\cos(\varphi_\ell-\varphi_m)[x^\ell,x^m]=i\sin(\varphi_0-\varphi_k)[x^0,x^k]_+
\label{ASk2}
\end{equation}
for any cyclic permutation $(k,\ell,m)$ of (1,2,3). The parameter ${\mathbf u}$ is the element
${\mathbf u}=\left(e^{i(\varphi_1-\varphi_0)},e^{i(\varphi_2-\varphi_0)},e^{i(\varphi_3-\varphi_0)}\right)$ of $T^3$. Thus there are a priori 3 scalar parameters $\varphi_1-\varphi_0,\varphi_2-\varphi_0$ and $\varphi_3-\varphi_0$. However for generic values of these parameters one can show that $\cala_{\mathbf u}$ only depends on two scalar parameters and that, by an appropriate linear change of generators, it then reduces to the 4-dimensional Sklyanin algebra introduced in \cite{skl:1982} and studied in \cite{smi-sta:1992} from the point of view of general regularity.\\
The algebra $\cala_{\mathbf u}$ is Koszul of global dimension $D=4$ and is Gorenstein whenever none of the 6 relations (\ref{ASk1}), (\ref{ASk2}) becomes trivial and one then has the nontrivial Hochschild cycle (in $Z(\cala,\cala)$)
\[
\begin{array}{lll}
w=\tilde{ch}_{\frac{3}{2}}(U_{\mathbf u})& = &-\sum_{\alpha, \beta,\gamma,\delta}\varepsilon_{\alpha \beta \gamma\delta} \cos (\varphi_\alpha-\varphi_\beta+\varphi_\gamma-\varphi_\delta)x^\alpha \otimes x^\beta \otimes x^\gamma \otimes x^\delta \\
\\
& + & i\sum_{\mu,\nu}\sin(2(\varphi_\mu-\varphi_\nu))x^\mu\otimes x^\nu \otimes x^\mu \otimes x^\nu
\end{array}
\]
which defines a 4-linear form on $\mathbb K^4$ which is preregular with
\[
Q_w=-\mbox{\rm 1\hspace {-.6em} l}
\]
i.e. $w$ is graded-cyclic. One verifies that one has then $\cala_{\mathbf u}=\cala(w,2)$ and that $\mbox{\rm 1\hspace {-.6em} l}\otimes w$ is a Hochschild 4-cycle, i.e. $\mbox{\rm 1\hspace {-.6em} l} \otimes w \in Z_4(\cala,\cala)$.\\
\noindent (b) {\sl The $q$-deformed $D$-dimensional polynomial algebra}. This is the algebra $\cala$ generated by $D$ elements $x^\lambda$ ($\lambda\in \{1,\dots,D\}$) with relations
\begin{equation}
x^\mu x^\nu=q^{\mu\nu} x^\nu x^\mu
\label{qdefD}
\end{equation}
for $\mu,\nu\in \{1,\dots,D\}$ where the $q^{\mu\nu}\in \mathbb K$ satisfy
\begin{equation}
q^{\mu\nu}q^{\nu\mu}=1,\ \ \ q^{\lambda\lambda}=1
\label{qrel}
\end{equation}
for any $\lambda,\mu,\nu\in \{1,\dots,D\}$.\\
This algebra is Koszul of global dimension $D$ and Gorenstein. One has $\cala=\cala(w,2)$ with
\begin{equation}
w=\sum_{\pi\in {\mathfrak {S}}_D} \chi(\pi)x^{\pi(1)}\otimes \dots \otimes x^{\pi(D)}
\label{wqdefD}
\end{equation}
where ${\mathfrak {S}}_D$ is the group of permutations of $\{1,\dots,D\}$ and where $\chi:{\mathfrak {S}}_D\rightarrow \mathbb K$ is given by $\chi(\pi)=\prod_{(\mu\nu)}(-q^{\mu\nu})$, the product $\prod_{(\mu\nu)}$ being taken over the factors $b^{\mu\nu}$ occurring in the image of $\pi$ under the standard embedding
\[
{\mathfrak {S}}_D\hookrightarrow\{\prod_{(\mu\nu)} b^{\mu\nu}, \mu<\nu\} \subset {\mathfrak B}_D
\]
of ${\mathfrak {S}}_D$ into the group of braids ${\mathfrak B}_D$.\\
One has then
\begin{equation}
(Q_w)^\mu_\nu = \left(\prod_{\lambda\not= \mu}(-q^{\lambda\mu})\right)\delta^\mu_\nu
\label{QqdefD}
\end{equation}
for the matrix elements of the corresponding $Q_w\in GL(D,\mathbb K)$.\\
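For $D=2$ this reduces to the setting of Section \ref{globdim2} : the sum (\ref{wqdefD}) has only two terms, namely
\[
w=x^1\otimes x^2-q^{12}\,x^2\otimes x^1
\]
(the identity has $\chi=1$ and the transposition, image of the single braid generator $b^{12}$, has $\chi=-q^{12}$), and the corresponding single relation $W_{\mu\nu}x^\mu x^\nu=0$ reads $x^1x^2=q^{12}x^2x^1$, i.e. one recovers $\cala(\varepsilon_q,2)$ with $q=q^{12}$ up to normalization of the bilinear form.\\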
\noindent (c) {\sl Precommutative examples} \cite{ber:2001a},\cite{ber-mar:2006}. Let the integers $g$ and $N$ be such that $g\geq N\geq 2$ and let $\varepsilon$ be the completely antisymmetric $g$-linear form on $\mathbb K^g$ with $\varepsilon(e_1,\dots,e_g)=1$. Consider the $N$-homogeneous algebra $\cala=\cala(\varepsilon, N)$ i.e. the algebra generated by $g$ elements $x^\lambda$ ($\lambda\in \{1,\dots,g\}$) with the relations
\[
\varepsilon_{\lambda_1\dots \lambda_{g-N}\ \mu_1\dots \mu_N}x^{\mu_1}\dots x^{\mu_N}=0
\]
where $\varepsilon_{\lambda_1\dots \lambda_g}=\varepsilon(e_{\lambda_1},\dots,e_{\lambda_g})$. It is clear that $\varepsilon$ is preregular with
\[
Q_\varepsilon= (-1)^{g-1}\mbox{\rm 1\hspace {-.6em} l}
\]
as associated element of $GL(g,\mathbb K)$.\\
It was shown in \cite{ber:2001a} where this algebra was introduced that $\cala(\varepsilon,N)$ is a Koszul algebra of finite global dimension and it was shown in
\cite{ber-mar:2006} that it is Gorenstein if and only if either $N=2$ or $N>2$ and $g=Np+1$ for some integer $p\geq 1$. For $N=2$ this reduces to the algebra of polynomial functions on $\mathbb K^g$ while for $N>2$ and $g=Np+1$ this is a regular algebra of global dimension $D=2p+1$. In the latter case, the ideal ${\mathcal I}$ of Theorem \ref{MOD} is generated by the quadratic elements $\alpha\beta + \beta \alpha$ of $\cala(\varepsilon,N)^!$
so that the quotient Frobenius algebra ${\mathcal F} (\varepsilon, N)=\cala^!/{\mathcal I}$ reduces to the exterior algebra $\wedge\mathbb K^g$ which is precisely the Koszul dual algebra of the quadratic algebra of polynomial functions on $\mathbb K^g$. Thus by this process one recovers the quadratic relations implying the original $N$-homogeneous ones.\\
Notice that for $N>2$ the algebra $\cala(\varepsilon, N)$ has exponential growth \cite{ber:2001a}.\\
In \cite{hai-kri-lor:2008} a twisted version of this example, associated with a Hecke symmetry, is introduced and analyzed with similar results; that paper even contains a super version of these examples. See also \cite{gur:1990} (and \cite{wam:1993}) for the quadratic case associated with a Hecke symmetry.\\
\noindent \underbar{Remark}. In contrast to the previous example for $N>2$, in the cases of the Yang-Mills algebra and the super Yang-Mills algebra the ideal ${\mathcal I}$ of Theorem \ref{MOD} vanishes, that is, the Koszul duals are then Frobenius. The reason is that in these cases the 3-regular (4-linear) multilinear forms $w$ given respectively by (\ref{wYM}) and (\ref{wSYM}) satisfy the stronger condition (iii') of \S 3.1.
\subsection{Classical limit versus infinitesimal preregularity}
We now consider perturbations of the algebra $\mathbb K[x^1,\dots,x^g]$ of polynomial functions on $\mathbb K^g$. More precisely one has $\mathbb K[x^1,\dots,x^g]=\cala(\varepsilon,2)$ where $\varepsilon$ is the $g$-linear form on $\mathbb K^g$ which is completely antisymmetric with $\varepsilon_{1\ 2\dots g}=1$, where $\varepsilon_{\lambda_1\dots\lambda_g}=\varepsilon(e_{\lambda_1},\dots, e_{\lambda_g})$ are the components of $\varepsilon$ in the canonical basis $(e_\lambda)$ of $\mathbb K^g$. Let $w_t$ be a 1-parameter family of preregular $g$-linear forms on $\mathbb K^g$ with $w_0=\varepsilon$ and let us investigate what happens formally at first order in $t$. One writes
\begin{equation}
\left\{
\begin{array}{l}
w_t=\varepsilon + t \dot w + o(t^2)\\
Q_{w_t}=(-1)^{g-1}\mbox{\rm 1\hspace {-.6em} l} + t \dot Q + o(t^2)
\end{array}
\right.
\label{1-prer}
\end{equation}
and the first order $Q_{w_t}$-cyclicity reads
\begin{equation}
\dot W_{\lambda_1\dots\lambda_g}=\dot Q^\lambda_{\lambda_g} \varepsilon_{\lambda\lambda_1\dots \lambda_{g-1}}+(-1)^{g-1} \dot W_{\lambda_g\lambda_1\dots \lambda_{g-1}}
\label{1cycl}
\end{equation}
with $\dot W_{\lambda_1\dots \lambda_g}=\dot w(e_{\lambda_1},\dots, e_{\lambda_g})$.
This equation implies
\begin{equation}
{\mbox{tr}} (\dot Q)=\dot Q^\lambda_\lambda=0
\label{detQ}
\end{equation}
which suggests ${\mbox{det}} (Q_{w_t})=1$ for a finite version. So a natural question is the following: is a quadratic AS-regular algebra $\cala(w,2)$ always such that ${\mbox{det}}(Q_w)=1$? By looking at Example (c) of \S 3.3, one can see that the answer is no. Notice however that the quadratic AS-algebra of type $E$ is isolated.
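To make explicit why (\ref{detQ}) suggests ${\mbox{det}}(Q_{w_t})=1$, note that at first order (\ref{1-prer}) implies
\[
{\mbox{det}}(Q_{w_t})={\mbox{det}}\big((-1)^{g-1}\mbox{\rm 1\hspace {-.6em} l}\big)\,\big(1+t\,(-1)^{g-1}\,{\mbox{tr}}(\dot Q)\big)+o(t)=1+o(t)\,,
\]
in view of (\ref{detQ}) and since $(-1)^{g(g-1)}=1$ for any integer $g$.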
\section{Introduction and outline}
The development of the theory of infinite-dimensional integrable systems
was a remarkable advance of mathematical physics over the last forty years.
One of the key properties of such systems
is that they can be written as the compatibility condition of an
overdetermined linear system, called the Lax pair.
In turn, the existence of a Lax pair is deeply related to many other
features of these systems.
Among them is the inverse scattering transform (IST),
a nonlinear analogue of the Fourier transform which can be used
to solve the initial value problem (IVP).
The IST was successfully used in the late 1960's and early 1970's
to solve IVPs on infinite domains
or with periodic or quasi-periodic boundary conditions (BCs)
for a variety of
nonlinear partial differential equations (PDEs), differential-difference
equations, fully discrete equations, integro-differential equations, etc.\
(e.g., see Refs.~\cite{AblowitzClarkson,AS1981,BBEIM1994,FT1987} and
references therein).
Following the solution of IVPs, a natural issue was
the solution of initial-boundary value problems (IBVPs).
After some early results \cite{JMP16p1054,NLTY2p37,JMP32p99,PhysD35p167}, however,
the issue remained essentially open for over twenty years.
Recently, renewed interest in the problem
has led to a number of developments
(e.g., see Refs.\ \cite{JPA30p3505,JPA23p2507,IP16p1813,JETPL74p481,%
PRSLA453p1411,JMP41p4188,IMA67p559,CMP230p1,JNLMP10p47,CPAM58p639,%
NLTY18p1771,PRSLA456p805,JMPv41p414,IP22p209,FAA21p86}
and references therein).
Particularly important among these is the method developed by A.~S.\ Fokas
\unskip~\cite{PRSLA453p1411,JMP41p4188,IMA67p559,CMP230p1,JNLMP10p47,CPAM58p639,NLTY18p1771,PRSLA456p805}.
\unskip\break
Fokas'\ method, which is a significant extension of the IST,
is based on the simultaneous spectral analysis of
both parts of the Lax pair.
A crucial role is also played by a relation, called the
global algebraic relation,
that couples all known and unknown boundary values.
Indeed, it is the
analysis of the global relation that allows one to express the
unknown boundary datum in terms of known ones plus the initial datum.
Importantly, the method also yields a new approach to IBVPs for
linear PDEs,
which allows the solution of new kinds of problems.
At the same time,
the effort to extend the properties of integrable nonlinear PDEs
to discrete integrable systems
has been an ongoing theme in the last thirty years
(e.g., see Refs.~\cite{AblowitzClarkson,NLTY13p889,JMP17p1011,APT2003,%
PRB9p1924,PTP51p703,PLA207p263,JETP40p269,jphysa37p11819,PR18p1}
and references therein).
The purpose of this work is to show that, \textit{mutatis mutandis},
an approach similar to that for PDEs can also be used to solve IBVPs
for linear and integrable nonlinear differential-difference equations
(DDEs).\break
We demonstrate this claim by solving IBVPs for
the discrete analogue of the linear and nonlinear Schr\"odinger equations
on the natural numbers.
Note that the integrable discrete nonlinear Schr\"odinger (IDNLS)
equation is an important model
since it arises in a number of physical and mathematical contexts
(e.g., see references in Ref.~\cite{APT2003}).
The outline of this work is the following.
In section~\ref{s:DLS} we solve the IBVP
on the natural numbers for the discrete linear Schr\"odinger (DLS) equation,
namely the linear DDE
\begin{equation}
i\.q_n + \frac{q_{n+1}-2q_n+q_{n-1}}{h^2}= 0\,
\label{e:DLS}
\end{equation}
where
$q_n=q_n(t)\in{\mathbb{C}}$,
$n\in{\mathbb{N}}$,
$\.f\equiv df/dt$ denotes time derivative
and~$h$ is the lattice spacing.
Then, in sections~\ref{s:IDNLS} and~\ref{s:idnlsdata}
we consider the IBVP for the
integrable nonlinear counterpart of~\eref{e:DLS},
namely the IDNLS equation or
Ablowitz-Ladik (AL) equation\unskip~\cite{JMP16p598,JMP17p1011},
\begin{equation}
i\.q_n+ \frac{q_{n+1}-2q_n+q_{n-1}}{h^2}-\nu |q_n|^2(q_{n+1}+q_{n-1})= 0\,
\label{e:IDNLS}
\end{equation}
(where as usual the cases $\nu=-1$ and $\nu=1$ will be called respectively
focusing and defocusing).
In particular, in section~\ref{s:idnlsdata}
we discuss the elimination of the unknown boundary datum,
the linearizable boundary conditions,
and we write down the soliton solutions.
Finally, in order to appreciate the similarities and differences between the
method in the discrete versus the continuum case,
in section~\ref{s:continuum} we review the solution of
IBVPs for the continuum limits of both equations,
namely the linear and nonlinear Schr\"odinger equations,
and we discuss explicitly the correspondence between the method
in the discrete case and that in the continuum limit.
The proof of various statements in the text is confined to the Appendix,
which also contains
a list of notations and frequently used formulae.
In both the linear and the nonlinear problem
we will require the initial datum to be absolutely summable
and the boundary datum~$q_0(t)$ to be smooth,
even though the method can be formulated under weaker conditions.
The constant~$h$ can be eliminated from~\eref{e:DLS} and~\eref{e:IDNLS}
via the rescalings $t'=t/h^2$ and $q'_n(t)=hq_n(t)$.
Thus, for simplicity we will consider the rescaled problems throughout
(thus effectively setting $h=1$);
however, we will omit the primes
except when considering the limit $h\to0$
to recover the solution of the continuum cases.
The intended meaning should be clear from the context.
Also, for brevity we will occasionally omit functional dependences
when doing so does not cause ambiguity.
\section{Discrete linear Schr\"odinger equation}
\label{s:DLS}
Here we solve the linear problem~\eref{e:DLS},
which serves to introduce some of the tools that will be used in
the nonlinear case.
In section~\ref{s:linearLaxpair} we derive a Lax pair for~\eref{e:DLS}.
Then, in section~\ref{s:1.3} we solve the IVP, and in section~\ref{s:1.4}
we solve IBVPs via spectral methods.
\paragraph{IVP and IBVP for DLS via Fourier methods.}
Let us briefly review the solution of
the IVP and the IBVP via Fourier methods.
Doing so will serve to introduce quantities that will also be used later.
Consider first the IVP,
namely~\eref{e:DLS} with $n\in{\mathbb{Z}}$ and with $q_n(0)$ given.\break
We require that the initial datum $q_n(0)$
decays rapidly enough as $n\to\pm\infty$ to belong to $\ell^1({\mathbb{Z}})$,
the space of sequences $\{a_n\}_{n\in{\mathbb{Z}}}$ such that
$\mathop{\textstyle\truesum}\limits\nolimits_{n=-\infty}^\infty|a_n|<\infty$.
Introduce the transform pair as
\numparts
\label{e:Fourierpair}
\begin{eqnarray}
\^q(k,t)= \mathop{\textstyle\truesum}\limits_{n=-\infty}^\infty q_n(t)/z^n=
\mathop{\textstyle\truesum}\limits_{n=-\infty}^\infty {\rm e}^{-ink}q_n(t)\,,
\\
q_n(t)= \frac1{2\pi i}\mathop{\textstyle\trueoint}\limits_{|z|=1}\!z^{n-1}{\^q(z,t)}\,{\rm d} z=
\frac1{2\pi}\,\, \mathop{\textstyle\trueint}\limits_{\!\!-\pi}^{\,\,\pi} {\rm e}^{ink} \^q(k,t)\,{\rm d} k\,,
\end{eqnarray}
\endnumparts
where $z={\rm e}^{ik}$, and the contour $|z|=1$ is oriented counterclockwise.
The transformation $k\to z$ maps $k\in{\mathbb{R}}$ into~$|z|=1$
and $\mathop{\rm Im}\nolimits\,k\gl0$ into~$|z|\lg1$
(with $k=\pm i\infty$ corresponding respectively to $z=0$ and $z=\infty$).
Use of~\eref{e:Fourierpair} yields
the solution of the IVP in Ehrenpreis form as
\begin{equation}
q_n(t)=
\frac1{2\pi i} \mathop{\textstyle\trueoint}\limits_{|z|=1}\!z^{n-1}{\rm e}^{-i\omega(z)t}\,{\^q(z,0)}\,{\rm d} z=
\frac1{2\pi}\,\, \mathop{\textstyle\trueint}\limits_{\!\!-\pi}^{\,\,\pi} {\rm e}^{i(nk-\omega(k)t)}
\^q(k,0)\,{\rm d} k\,,
\label{e:DLSsoln}
\end{equation}
where the linear dispersion relation is
\begin{equation}
\omega(z)=2-(z+1/z)= 2(1-\cos\,k)\,.
\label{e:DLSdisprel}
\end{equation}
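Indeed, \eref{e:DLSdisprel} can be checked directly: substituting the
monochromatic solution $q_n(t)=z^n{\rm e}^{-i\omega(z)t}$ into~\eref{e:DLS}
(with $h=1$) gives
\[
i\.q_n= \omega(z)\,q_n\,,\qquad
q_{n+1}-2q_n+q_{n-1}= (z-2+1/z)\,q_n\,,
\]
so that $\omega(z)=2-(z+1/z)$, as stated.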
Now consider the IBVP, namely~\eref{e:DLS}
with $n\in{\mathbb{N}}$ and $t\in{\mathbb{R}}^+$,
with $q_n(0)$ and $q_0(t)$~given.
We assume $q_n(0)\in\ell^1({\mathbb{N}})$ and
$q_0(t)\in{\mathcal C}({\mathbb{R}}^+_0)$.
Introduce the Fourier sine series and its inverse
as
\[
\^q\o{s}(z,t)= \mathop{\textstyle\truesum}\limits_{n=1}^\infty q_n(t)(1/z^n-z^n)\,,\qquad\!\!
q_n(t)= \frac1{4\pi i} \mathop{\textstyle\trueoint}\limits_{|z|=1}(z^n-1/z^n)\,\^q\o{s}(z,t)\,{\rm d} z/z\,.
\]
Use of this pair yields the solution of the IBVP as
\begin{eqnarray}
\fl
q_n(t)= \frac1{4\pi i}
\mathop{\textstyle\trueoint}\limits_{|z|=1}(z^n-1/z^n)/z\,\,{\rm e}^{-i\omega(z)t}\,\^q\o{s}(z,0)\,{\rm d} z
- \frac1{4\pi}\mathop{\textstyle\trueoint}\limits_{|z|=1}(z^n-1/z^n)/z\,\,{\rm e}^{-i\omega(z)t}\^g(z,t)\,{\rm d} z\,,
\label{e:dLSIBVPsoln}
\\
\noalign{\noindent where}
\^g(z,t)= (z-1/z)\mathop{\textstyle\trueint}\limits_0^t {\rm e}^{i\omega(z)t'}\,q_0(t')\,{\rm d} t'\,.
\nonumber
\end{eqnarray}
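For completeness, we sketch how~\eref{e:dLSIBVPsoln} arises.
Applying the sine series to~\eref{e:DLS} for $n\in{\mathbb{N}}$,
the boundary term produced by the second difference is proportional
to~$q_0(t)$, and one finds
\[
i\partial_t\^q\o{s}(z,t)= \omega(z)\,\^q\o{s}(z,t)+(z-1/z)\,q_0(t)\,,
\]
whose solution is
$\^q\o{s}(z,t)= {\rm e}^{-i\omega(z)t}\big(\^q\o{s}(z,0)-i\,\^g(z,t)\big)$;
inserting this into the inversion formula then yields~\eref{e:dLSIBVPsoln}.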
\subsection{A Lax pair for the discrete linear Schr\"odinger equation}
\label{s:linearLaxpair}
A Lax pair formulation,
first discovered for nonlinear PDEs~\cite{CPAM21p467},
is also possible for linear PDEs,
and in fact it is the key to solving a wide class of IBVPs
\cite{JMP41p4188,PRSLA456p805}.
Here we show how a Lax pair for the DLS equation~\eref{e:DLS}
can be obtained by taking the linear limit of the Lax pair of the
IDNLS equation~\eref{e:IDNLS}.
(As in the continuum limit,
an algorithmic way also exists to obtain the Lax pair associated to
any linear discrete evolution equation.
The corresponding formalism will be presented elsewhere.)
It is well-known that the IDNLS~\eref{e:IDNLS} is a reduction of the
Ablowitz-Ladik (AL) system~\eref{e:AL} \cite{JMP17p1011}. A Lax pair
for~\eref{e:AL} is given by the overdetermined linear
system~\eref{e:ALLP}. To obtain the linear limit of~\eref{e:ALLP},
let $\_Q_n=O(\epsilon)$, and take $\Phi_n(z,t)=\@v_n(z,t)=
(v_{1,n},v_{2,n})^t$ to be a two-component vector. The leading order
solution of \eref{e:ALLP} is then $\@v_n(z,t)=
\_Z^n{\rm e}^{i(z-1/z)^2\sigma_3t/2}\@v_o$, where
$\@v_o=(v_{1,o},v_{2,o})^t$ is an arbitrary constant vector.
Choosing $v_{2,o}=1$ and keeping terms up to $O(\epsilon)$ then
yields the following \textit{scalar} linear system for $v_{1,n}$:
\numparts
\begin{eqnarray}
v_{1,n+1} - z\,v_{1,n} = q_nz^{-n}{\rm e}^{-i(z-1/z)^2 t/2}\,,
\label{e:Lp2.1}
\\
\.v_{1,n} - \txtfrac i2(z-1/z)^2 v_{1,n} = i(zq_n-q_{n-1}/z)z^{-n}{\rm e}^{-i(z-1/z)^2 t/2}\,.
\label{e:Lp2.2}
\end{eqnarray}
\endnumparts
Enforcing the compatibility of~\eref{e:Lp2.1}
and~\eref{e:Lp2.2} now yields the discrete linear Schr\"odinger
equation~\eref{e:DLS}. To eliminate the dependence on $z^n$ from the
right-hand side (RHS) of~\eref{e:Lp2.1}, we now perform the
rescaling $z'=z^2$ and $\phi_n= z^{n-1}{\rm e}^{i(z-1/z)^2 t/2}v_{1,n}$.
Dropping primes for simplicity, we then obtain the following Lax
pair for~\eref{e:DLS}: \begin{eqnarray} \phi_{n+1} - z\,\phi_n = q_n\,, \qquad
\.\phi_n + i\omega(z) \phi_n = i (q_n-q_{n-1}/z)\,,
\label{e:LaxpairL} \end{eqnarray} where $\omega(z)$ is given
by~\eref{e:DLSdisprel} as before. Indeed, although it may not be
obvious at this point, the meaning of the variable $z$
in~\eref{e:LaxpairL} coincides exactly with that of~$z$
in~\eref{e:Fourierpair}.
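Explicitly, differentiating the first of~\eref{e:LaxpairL} with respect
to~$t$ and eliminating $\.\phi_{n+1}$ and $\.\phi_n$ via the second, the
terms involving the eigenfunction combine into
$-i\omega(z)(\phi_{n+1}-z\phi_n)=-i\omega(z)\,q_n$, and one obtains
\[
\.q_n= -i\omega(z)\,q_n+i\big(q_{n+1}-q_n/z\big)-iz\big(q_n-q_{n-1}/z\big)
= i\,(q_{n+1}-2q_n+q_{n-1})\,,
\]
with all the $z$-dependence canceling; this is precisely~\eref{e:DLS}.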
The rescaling $z'=z^2$ between the linear and the nonlinear problem
is the discrete analogue of the rescaling $k'=2k$ in the continuum limit.
Such rescaling will reflect on the location of the jumps
in the Riemann-Hilbert problem (RHP) for the IBVP in the nonlinear problem,
which will differ from the corresponding locations in the linear problem.
\subsection{IVP for DLS via spectral analysis of the Lax pair}
\label{s:1.3}
We now solve the IVP for~\eref{e:DLS} using spectral methods.
Doing so will introduce some of the ideas that will be useful
for the IBVP and nonlinear case.
Making use of the integrating factor $z^n{\rm e}^{-i\omega(z)t}$
[with $\omega(z)$ as in~\eref{e:DLSdisprel}],
we introduce the modified eigenfunction
\begin{equation}
\psi_n(z,t)= z^{-n}\,e^{i\omega(z)t}\phi_n(z,t)\,,
\label{e:Psidef}
\end{equation}
which
satisfies the following modified Lax pair:
\begin{equation}
\psi_{n+1}-\psi_n= {\rm e}^{i\omega(z)t}q_n/z^{n+1}\,,\qquad
\.\psi_n= {\rm e}^{i\omega(z)t}i(q_n-q_{n-1}/z)/z^n\,.
\label{e:LaxpairL0}
\end{equation}
Of course the above linear system is also compatible if $q_n(t)$
satisfies~\eref{e:DLS}.
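For instance, the first of~\eref{e:LaxpairL0} follows at once
from~\eref{e:Psidef} and the first of~\eref{e:LaxpairL}:
\[
\psi_{n+1}-\psi_n= z^{-(n+1)}{\rm e}^{i\omega(z)t}\,(\phi_{n+1}-z\,\phi_n)
= {\rm e}^{i\omega(z)t}q_n/z^{n+1}\,.
\]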
It is then easy to define~$\phi_n\o{1,2}(z,t)$ as
the solutions of~\eref{e:LaxpairL}
which vanish as $n\to\mp\infty$, respectively:
\begin{eqnarray}
\phi_n\o1(z,t)= \mathop{\textstyle\truesum}\limits_{m=-\infty}^{n-1}q_m(t)\,z^{n-m-1},
\qquad
\phi_n\o2(z,t)= -\mathop{\textstyle\truesum}\limits_{m=n}^\infty q_m(t)\,z^{n-m-1}.
\label{e:PhiIVPL}
\end{eqnarray}
Note that $\phi_n\o1(z,t)$ is analytic as a function of~$z$ for~$|z|<1$
and continuous on $|z|=1$,
while $\phi_n\o2(z,t)$ is analytic for~$|z|>1$ and bounded for $|z|=1$.
The jump conditions obtained by evaluating $\phi_n\o{1,2}(z,t)$ on $|z|=1$
then yield a scalar RHP:
$\phi_n\o1(z,t)-\phi_n\o2(z,t)= z^{n-1}\^q(z,t)\,$,
where $\^q(z,t)$ is given by~\eref{e:Fourierpair}.
However,
the difference $\phi_n\o1-\phi_n\o2$ solves the \textit{homogeneous} version
of~\eref{e:LaxpairL},
and hence it depends on $n$ and $t$ only through the factor
$z^n\,{\rm e}^{-i\omega(z)t}$.
Evaluating~\eref{e:PhiIVPL} at $(n,t)=(0,0)$
we can then rewrite the jump condition as:
\begin{eqnarray}
\phi_n\o1(z,t)-\phi_n\o2(z,t)= z^{n-1}{\rm e}^{-i\omega(z)t}\^q(z,0)\,,
\qquad |z|=1\,.
\label{e:RHPL}
\end{eqnarray}
Equations~\eref{e:PhiIVPL} imply $\phi_n\o1(0,t)= q_{n-1}(t)\ne0$,
and $\phi_n\o2(z,t)\to0$ as $z\to\infty$.
Thus, the RHP defined by~\eref{e:RHPL} is trivially solved by
applying standard Cauchy projectors, namely:
\begin{equation}
\phi_n(z,t)= \frac1{2\pi i}\mathop{\textstyle\trueoint}\limits_{|\zeta|=1} \zeta^{n-1}\,
{\rm e}^{-i\omega(\zeta)t}\,\frac{\^q(\zeta,0)}{\zeta-z}\,{\rm d}\zeta\,,
\label{e:RHPLsoln}
\end{equation}
where the contour is oriented counterclockwise, as usual.
Then, inserting~\eref{e:RHPLsoln} into the LHS of the first
of~\eref{e:LaxpairL},
one obtains the solution of the IVP as~\eref{e:DLSsoln}.
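The last step is immediate: using~\eref{e:RHPLsoln},
\[
q_n(t)= \phi_{n+1}(z,t)-z\,\phi_n(z,t)
= \frac1{2\pi i}\mathop{\textstyle\trueoint}\limits_{|\zeta|=1}
\zeta^{n-1}{\rm e}^{-i\omega(\zeta)t}\,\^q(\zeta,0)\,
\frac{\zeta-z}{\zeta-z}\,{\rm d}\zeta\,,
\]
and the factor $(\zeta-z)/(\zeta-z)$ cancels, leaving
precisely~\eref{e:DLSsoln}.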
The continuum limit of \eref{e:DLSsoln} yields the solution of
the linear Schr\"odinger equation.
Indeed, reinstating the lattice spacing~$h$,
the solution of the IVP for the DLS~\eref{e:DLSsoln} is
\numparts
\label{e:DLSsln2}
\begin{eqnarray}
q_n(t)=
\frac1{2\pi}\,\, \mathop{\textstyle\trueint}\limits_{\!\!-\pi/h}^{\,\,\pi/h} {\rm e}^{i(nkh-\omega(k)t)}
\^q(k,0)\,{\rm d} k\,,
\\
\noalign{\noindent where now $\omega(k)= 2(1-\cos\,kh)/h^2$ and}
\^q(k,t)= h\mathop{\textstyle\truesum}\limits_{n=-\infty}^\infty {\rm e}^{-inkh}q_n(t)\,.
\end{eqnarray}
\endnumparts
Then, taking the limit $h\to0$ of~\eref{e:DLSsln2}
with $x_n=nh$ fixed,
one obtains~\eref{e:LSIVPsoln} and the first of~\eref{e:FTpair}.
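For instance, expanding the dispersion relation for small~$h$ gives
\[
\omega(k)= \frac{2(1-\cos\,kh)}{h^2}= k^2-\frac{k^4h^2}{12}+O(h^4)\,,
\]
so that $\omega(k)\to k^2$ as $h\to0$, while the integration limits
$\pm\pi/h$ tend to $\pm\infty$ and the sum $h\mathop{\textstyle\truesum}\limits_n$ becomes an
integral over~$x$.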
\subsection{IBVP for DLS via spectral analysis of the Lax pair}
\label{s:1.4}
We now use spectral methods to solve the IBVP for the DLS,
namely~\eref{e:DLS} for $n\in{\mathbb{N}}$ and $t\in{\mathbb{R}}^+$,
with $q_n(0)$ and $q_0(t)$ given, where
as before we assume $q_n(0)\in\ell^1({\mathbb{N}})$
and $q_0(t)\in{\mathcal C}({\mathbb{R}}^+_0)$.
Before we do so, however, we address the issue of the
well-posedness of the linear system~\eref{e:LaxpairL}.
In the continuum limit,
the $t$-part of the Lax pair evaluated at $x=0$
depends on $q(0,t)$ and $q_x(0,t)$, only one of which is given.
Use of the global relation allows one to obtain
the unknown BC in terms of the given one.
In the discrete case,
evaluation of the $t$-part of the Lax pair for $n=0$
requires the knowledge of $q_{-1}(t)$.
Thus, \textit{the role of the unknown boundary datum in the discrete case
is played by the fictitious function $q_{-1}(t)$.}
In analogy with the continuum limit,
the solution method proceeds as though this function is given;
a posteriori we will then show that this unknown boundary datum is
determined in terms of known initial-boundary data via the global relation.
A similar problem arises with Fourier methods,
where one must define an appropriate transform so that the unknown
boundary data do not appear in the expression for the solution.
A similar situation also occurs in IBVPs
for Burgers' equation \cite{NLTY2p37,JMP32p99}, where the solution
depends on an unknown function that must be determined a posteriori.
There, similarly to nonlinear PDEs solvable by the IST,
the IBVP is reduced to a
nonlinear integro-differential equation~\cite{JMP32p99},
which can be linearized for special kinds of BCs~\cite{NLTY2p37}.
\begin{figure}[t!]
\smallskip
\centerline{\includegraphics[width=0.995\textwidth]{figs/distinguished.eps}}
\caption{The distinguished points for the eigenfunctions
$\phi_n\o1$, $\phi_n\o2$ and $\phi_n\o3$.} \label{f:IBVPzeropts}
\end{figure}
\paragraph{Eigenfunctions and analyticity.}
As in the continuum case \cite{JMP41p4188,CMP230p1,CPAM58p639},
to solve the IBVP
we consider \textit{simultaneous} solutions of both
the $x$-part and the $t$-part of the Lax pair.
To do this we again use $\psi_n(z,t)$, defined in~\eref{e:Psidef}.
Integrating~\eref{e:LaxpairL0},
we then define three eigenfunctions
uniquely determined in terms of their normalizations:
namely, $\phi_n\o{j}(z,t)$ for $j=1,2,3$,
so that $\phi_n\o{j}(z,t)=0$ respectively at $(n,t)=(0,0)$,
as $(n,t)\to(\infty,t)$ and at $(n,t)=(0,T)$
(cf.\ Fig.~\ref{f:IBVPzeropts}):
\numparts
\label{e:PhiIBVP}
\begin{eqnarray}
&\phi_n\o1(z,t)= \mathop{\textstyle\truesum}\limits_{m=0}^{n-1}q_m(t)\,z^{n-m-1}
+ iz^n\mathop{\textstyle\trueint}\limits_0^t {\rm e}^{-i\omega(z)(t-t')}\big(q_0(t')-q_{-1}(t')/z\big)\,{\rm d} t',
\\
&\phi_n\o2(z,t)= - \mathop{\textstyle\truesum}\limits_{m=n}^\infty q_m(t)\, z^{n-m-1},
\\
&\phi_n\o3(z,t)= \mathop{\textstyle\truesum}\limits_{m=0}^{n-1}q_m(t)\,z^{n-m-1}
- iz^n\mathop{\textstyle\trueint}\limits_t^T {\rm e}^{-i\omega(z)(t-t')}\big(q_0(t')-q_{-1}(t')/z\big)\,{\rm d} t'.
\end{eqnarray}
\endnumparts
We introduce the domains $D_\pm=\{z\in{\mathbb{C}}:\mathop{\rm Im}\nolimits \omega(z)\gl0\}$,
which will also be convenient to decompose as
$D_\pm=D_{\pm\#in}\cup D_{\pm\#out}$, where
$D_{\pm\#in}$ and $D_{\pm\#out}$ are respectively
the portions of $D_\pm$ inside and outside the unit disk
(cf.~Fig.~\ref{f:DpmL}),
namely
\begin{eqnarray}
D_{+\#in}=\{z\in{\mathbb{C}}:|z|<1\,\wedge\,\mathop{\rm Im}\nolimits\,z>0\}\,,
\qquad
D_{+\#out}=\{z\in{\mathbb{C}}:|z|>1\,\wedge\,\mathop{\rm Im}\nolimits\,z<0\}\,,
\nonumber
\\
D_{-\#in}=\{z\in{\mathbb{C}}:|z|<1\,\wedge\,\mathop{\rm Im}\nolimits\,z<0\}\,,
\qquad
D_{-\#out}=\{z\in{\mathbb{C}}:|z|>1\,\wedge\,\mathop{\rm Im}\nolimits\,z>0\}\,.
\nonumber
\end{eqnarray}
We then note that:
\begin{itemize}
\item
$\phi_n\o2$ coincides with the eigenfunction in the IVP,
hence it is analytic for $|z|>1$ and continuous and bounded for $|z|\ge1$,
and $\phi_n\o2(z,t)\to0$ as $z\to\infty$;
\item
$\phi_n\o1$ and $\phi_n\o3$
are analytic in the punctured complex $z$-plane ${\mathbb{C}}^{\,[\raise0.08ex\hbox{\scriptsize$\slash$}\kern-0.34em0]}$;
\item
for all $t>0$, ${\rm e}^{i\omega(z)t}\to0$ as $z\to0,\infty$ in $D_+$
and ${\rm e}^{-i\omega(z)t}\to0$ as $z\to0,\infty$ in $D_-$;
as a result,
$\phi_n\o1$ and $\phi_n\o3$ are bounded
respectively for $z\in \=D_{-\#in}$ and $z\in \=D_{+\#in}$.
\end{itemize}
Note that~\eref{e:PhiIBVP} do not define
$\phi_0\o1(z,t)$ and $\phi_0\o3(z,t)$ at $z=0$.
In \ref{s:asymptotics}, however, we compute the asymptotics
of these eigenfunctions as $z\to0$,
and we show that
$\phi_0\o1(z,t)=O(1)$ as $z\to0$ with $\mathop{\rm Im}\nolimits z\le0$ and
$\phi_0\o3(z,t)=O(1)$ as $z\to0$ with $\mathop{\rm Im}\nolimits z\ge0$.
\begin{figure}[t!]
\smallskip
\rightline{\includegraphics[width=0.405\textwidth]{figs/dlsregions3.eps}\qquad
\includegraphics[width=0.405\textwidth]{figs/dlscontours.eps}}
\caption{(Left) The regions $D_+$ (shaded) and $D_-$ (white) of the $z$-plane where $\mathop{\rm Im}\nolimits[\omega(z)]\gl0$.
(Right) The contours $C_{1,2}$, $C_{2,3}$ and $C_{3,1}$
that define the Riemann-Hilbert problem in the linear case (see text for details).}
\label{f:DpmL}
\end{figure}
\paragraph{Jump conditions and Riemann-Hilbert problem.}
The difference between eigenfunctions at $|z|=1$
and $z\in[-1,1]$ yields a scalar RHP whose solution
will enable us to reconstruct the potential in terms of the scattering data.
As before, the difference between any eigenfunctions
solves the homogeneous version of~\eref{e:LaxpairL}.
Evaluating these differences at $(n,t)=(0,0)$
we then obtain the jumps as
(of course any two of the jumps uniquely determine
the third one):
\numparts
\label{e:Phijumps}
\begin{eqnarray}
\fl
\phi_n\o1(z,t) - \phi_n\o2(z,t)= z^{n-1}{\rm e}^{-i\omega(z)t}\,\^q(z,0)
&|z|=1~\wedge~\mathop{\rm Im}\nolimits\,z\le0\,,
\label{e:Phijumps12}
\\
\fl
\phi_n\o1(z,t) - \phi_n\o3(z,t)= z^{n-1}{\rm e}^{-i\omega(z)t}\,\^F(z,T)
&\!\!\mathop{\rm Im}\nolimits\,z=0~\wedge~|z|\le1\,,
\label{e:Phijumps13}
\\
\fl
\phi_n\o3(z,t) - \phi_n\o2(z,t)= z^{n-1}{\rm e}^{-i\omega(z)t}\,\big(\^q(z,0) - \^F(z,T)\big),\qquad
&|z|=1~\wedge~\mathop{\rm Im}\nolimits\,z\ge0\,,
\label{e:Phijumps23}
\end{eqnarray}
\endnumparts
with $\^F(z,t)= i(z\^f_0(z,t)-\^f_{-1}(z,t))$,
and where $\^q(z,t)$ and $\^f_n(z,t)$ are respectively the
$z$-transforms of the initial and boundary data; namely:
\label{e:DLSztransforms}
\begin{eqnarray}
\^q(z,t)= \mathop{\textstyle\truesum}\limits_{m=0}^\infty q_m(t)/z^m\,,
\qquad
\^f_n(z,t)= \mathop{\textstyle\trueint}\limits_0^t {\rm e}^{i\omega(z)t'} q_n(t')\,{\rm d} t'\,.
\end{eqnarray}
Note that $\^q(z,t)$ is analytic for $|z|>1$ and continuous and bounded
for $|z|\ge1$,
while the $\^f_n(z,t)$ are analytic $\forall z\ne0$ and
continuous and bounded for $z\in \=D_+$.
Moreover, $\^q(z,t)\to q_0(t)$ as $z\to\infty$,
while $\^f_n(z,t)\to0$ as $z\to0,\infty$ in~$D_+$.
Finally, integration by parts shows that
\begin{equation}
\^f_n(z,t)= iz\,\big(\,{\rm e}^{i\omega(z)t}q_n(t)-q_n(0)\,\big) +O(z^2)
\label{e:fnasymp@z=0}
\end{equation}
as $z\to0$ in~$\partial D_+$ (i.e., along the real $z$-axis).
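As a consistency check on~\eref{e:Phijumps13}, evaluating the difference
$\phi_n\o1-\phi_n\o3$ at $(n,t)=(0,0)$ directly from~\eref{e:PhiIBVP} gives
\[
\phi_0\o1(z,0)-\phi_0\o3(z,0)=
i\mathop{\textstyle\trueint}\limits_0^T {\rm e}^{i\omega(z)t'}\big(q_0(t')-q_{-1}(t')/z\big)\,{\rm d} t'
= i\big(\^f_0(z,T)-\^f_{-1}(z,T)/z\big)= \^F(z,T)/z\,,
\]
which is the RHS of~\eref{e:Phijumps13} at $(n,t)=(0,0)$.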
As shown in \ref{e:ztransfinverse},
\eref{e:DLSztransforms} are inverted by
\begin{eqnarray}
q_n(t)= \frac1{2\pi i}\mathop{\textstyle\trueoint}\limits_{|z|=1}z^{n-1}\^q(z,t)\,{\rm d} z\,,
\qquad
q_n(t)= \frac1{2\pi} \mathop{\textstyle\trueint}\limits_{\partial D_{\!+\#out}}\!\!
\omega'(z){\rm e}^{-i\omega(z)t}\^f_n(z,T)\,{\rm d} z\,,
\nonumber\\[-1ex]
\label{e:LSinvztransf}
\end{eqnarray}
for all $0<t<T$, where $\omega'(z)=d\omega/dz$ and
$\partial D_{\!+\#out}$ is oriented so that $\mathop{\rm Re}\nolimits z$ is decreasing.
Note that
$\^F(z,T)/z$ remains bounded as $z\to0$
along the real $z$-axis [cf.\ \ref{s:asymptotics}].
Thus, the RHS of~\eref{e:Phijumps13} with $n=0$ does not have a pole at $z=0$.
The solution of the RHP defined by~\eref{e:Phijumps} is therefore
simply obtained using standard Cauchy projectors over the
unit circle:
\begin{eqnarray}
\phi_n(z,t)= \frac1{2\pi i}\, \mathop{\textstyle\trueoint}\limits_{|\zeta|=1}
\zeta^{n-1}{\rm e}^{-i\omega(\zeta)t}\, \frac{\^q(\zeta,0)}{\zeta-z}\,{\rm d}\zeta
- \frac1{2\pi i} \mathop{\textstyle\trueoint}\limits_{\partial D_{\!+\#in}}
\zeta^{n-1}{\rm e}^{-i\omega(\zeta)t}\,
\frac{\^F(\zeta,T)}
{\zeta-z}\,{\rm d}\zeta\,,
\nonumber\\[-1.4ex]
\label{e:IBVPRHPslnL}
\end{eqnarray}
where $|\zeta|=1$ is taken counterclockwise and
$\partial D_+$ is oriented so as to leave the domain to its left, as usual.
Inserting~\eref{e:IBVPRHPslnL} into the first of~\eref{e:LaxpairL}
then yields the reconstruction formula:
\begin{eqnarray}
q_n(t)= \frac1{2\pi i}\!\mathop{\textstyle\trueoint}\limits_{|z|=1}
z^{n-1}{\rm e}^{-i\omega(z)t}\,\^q(z,0)\,{\rm d} z
- \frac1{2\pi i}\!\mathop{\textstyle\trueint}\limits_{\partial D_{\!+\#in}} z^{n-1}{\rm e}^{-i\omega(z)t}\,
\^F(z,T)\,{\rm d} z\,.
\label{e:IBVPsoln}
\end{eqnarray}
Of course
the right-hand side of~\eref{e:IBVPsoln} still depends on
the undetermined value $q_{-1}(t)$ via its transform $\^f_{-1}(z,T)$.
We next show how to eliminate this unknown using the global relation.
\paragraph{Global relation and symmetries.}
The global relation, which couples all initial and boundary values,
is obtained in a similar way as in the continuum problem
by integrating~\eref{e:LaxpairL0}
around the edges of the domain $\mathbb{N}_0\times[0,T]$,
namely for $(n,t)$ from $(0,0)$ to $(0,T)$, from there to $(\infty,T)$,
and then to $(\infty,0)$ and back to $(0,0)$:
\begin{eqnarray}
i\mathop{\textstyle\trueint}\limits_0^t {\rm e}^{i\omega(z)t'}\big(q_0(t')-q_{-1}(t')/z\big)\,{\rm d} t'
+ {\rm e}^{i\omega(z)t}\mathop{\textstyle\truesum}\limits_{m=0}^\infty q_m(t)/z^{m+1}
= \mathop{\textstyle\truesum}\limits_{m=0}^\infty q_m(0)/z^{m+1}\,.
\nonumber\\[-1.6ex]
\label{e:global}
\end{eqnarray}
Equation~\eref{e:global} holds where all of its terms are defined,
that is, for all $|z|\ge1$.
In terms of the $z$-transforms:
\begin{eqnarray}
i\big[z\^f_0(z,t) - \^f_{-1}(z,t)\big] + {\rm e}^{i\omega(z)t}\^q(z,t)
= \^q(z,0)\,.
\label{e:g2}
\end{eqnarray}
Now note that $\omega(z)$ is invariant under the transformation $z\to1/z$,
and therefore so are the functions $\^f_n(z,t)$.
Moreover, $z\in D_{+\#out}$ implies $1/z\in D_{+\#in}$ and vice versa.
Hence, \eref{e:g2} with $z\to1/z$ gives, for all $0<|z|\le1$:
\begin{eqnarray}
i\big[(1/z)\^f_0(z,t) - \^f_{-1}(z,t)\big] + {\rm e}^{i\omega(z)t}\^q(1/z,t)
= \^q(1/z,0)\,.
\label{e:g3}
\end{eqnarray}
We can then solve for $\^f_{-1}(z,t)$, obtaining, for all $0<|z|\le1$:
\begin{equation}
\^f_{-1}(z,t) = \^f_0(z,t)/z
- i\big(\, {\rm e}^{i\omega(z)t}\^q(1/z,t)- \^q(1/z,0)\,\big)\,.
\label{e:fm1LS}
\end{equation}
\paragraph{Solution of the IBVP.}
Of course the RHS of~\eref{e:fm1LS} contains
${\rm e}^{i\omega(z)T}\^q(1/z,T)$, which is (apart from the changes
$t\to T$ and $z\to1/z$) the transform of the solution
we are trying to recover.
When this term is inserted in~\eref{e:IBVPsoln}, however,
the resulting integrand is $z^{n-1}{\rm e}^{i\omega(z)(T-t)}\^q(1/z,t)$,
which is analytic and bounded in $D_{+\#in}$,
and whose integral over $\partial D_{+\#in}$ is therefore zero.
[This is analogous to what happens in the continuum limit;
cf.\ section~\ref{s:continuum}.]\,\
Importantly, the result also holds for $n=0$, since ${\rm e}^{i\omega(z)(T-t)}$
decays exponentially for all $t<T$ as $z\to0$ in $D_{+\#in}$.
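Explicitly, substituting~\eref{e:fm1LS} into the definition of~$\^F$ gives
\[
\^F(z,T)= i\big(z\^f_0(z,T)-\^f_{-1}(z,T)\big)
= i(z-1/z)\^f_0(z,T)+\^q(1/z,0)-{\rm e}^{i\omega(z)T}\^q(1/z,T)\,,
\]
and discarding the last term, whose contribution to~\eref{e:IBVPsoln}
vanishes as just argued, the second integral in~\eref{e:IBVPsoln} becomes
the second term of~\eref{e:LSIBVPsoln0}.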
We then have
\begin{eqnarray}
\fl
q_n(t)= \frac1{2\pi i}\, \mathop{\textstyle\trueoint}\limits_{|z|=1}
z^{n-1}{\rm e}^{-i\omega(z)t}\,\^q(z,0)\,{\rm d} z
+ \frac1{2\pi} \mathop{\textstyle\trueint}\limits_{\partial D_{+\#in}} z^{n-1}{\rm e}^{-i\omega(z)t}\,
\big[i\^q(1/z,0)-(z-1/z)\^f_0(z,T)\big]\,{\rm d} z\,.
\nonumber\\[-2ex]
\label{e:LSIBVPsoln0}
\end{eqnarray}
Equation~\eref{e:LSIBVPsoln0} provides the solution of the IBVP in
Ehrenpreis form \cite{Ehrenpreis1970,Palamodov1970,Henkin1990},
since the only dependence of the RHS on~$n$ and~$t$ is
via the terms $z^n{\rm e}^{-i\omega(z)t}$, as in the IVP. Performing the
change of variable $z'=1/z$ we can write the second term in the RHS
of~\eref{e:LSIBVPsoln0} as an integral over $\partial D_{+\#out}$.
Then, since the resulting integrand,
${\rm e}^{-i\omega(z)t}\^q(z,0)/z^{n+1}$ is analytic on $D_{-\#out}$,
for that portion we can deform the contour $\partial D_{+\#out}$ onto the
circle $|z|=1$ and combine the result with the first integral
in~\eref{e:LSIBVPsoln0}, obtaining the equivalent representation
\begin{equation}
\fl
q_n(t)= \frac1{2\pi i}\, \mathop{\textstyle\trueoint}\limits_{|z|=1}
\big(z^n-z^{-n}\big)/z\,\,{\rm e}^{-i\omega(z)t}\,\^q(z,0)\,{\rm d} z
- \frac1{2\pi} \mathop{\textstyle\trueint}\limits_{\partial D_{+\#out}} (z-1/z)\,z^{-n-1}{\rm e}^{-i\omega(z)t}\,
\^f_0(z,T)\,{\rm d} z\,,
\label{e:LSIBVPsoln}
\end{equation}
where, as before, $\partial D_{+\#out}$ is oriented so that $\mathop{\rm Re}\nolimits z$ is decreasing.
\paragraph{Continuum limit.}
The representation~\eref{e:LSIBVPsoln} is the discrete analogue of
the solution in the continuum limit.
To see this, one can reinstate the lattice spacing $h$ and follow
the same steps as above.
When expressed in terms of~$k$, the solution of the IBVP then becomes:
\begin{eqnarray}
q_n(t)=
\frac2\pi\,\, \mathop{\textstyle\trueint}\limits_0^{\,\,\pi/h} {\rm e}^{-i\omega(k)t}\sin(nkh)\,\^q\o{s'}(k,0)\,{\rm d} k
+ \frac1\pi\,\,
\mathop{\textstyle\trueint}\limits_0^{\,\,\pi/h} {\rm e}^{-i\omega(k)t}\sin(nkh) \^g(k,t)\,{\rm d} k\,,
\nonumber
\\[-1ex]
\label{e:dLSIBVsolnk}
\\
\noalign{\noindent where $\omega(k)=2(1-\cos(kh))/h^2$, and with}
\^q\o{s'}(k,t)=h\mathop{\textstyle\truesum}\limits_{n=1}^{\infty} \sin(nkh)q_n(t)\,,
\qquad
\^g(k,t)= 2i\,{\sin(kh) \over h} \mathop{\textstyle\trueint}\limits_0^t
{\rm e}^{i\omega(k)t'}q_0(t')\,{\rm d} t'\,.
\nonumber
\end{eqnarray}
It is then trivial to show that, in the limit $h\to 0$,
\eref{e:dLSIBVsolnk} yield the solution of the continuum problem,
namely~\eref{e:LSIBVPsinetransform}.
\paragraph{Remarks.}
Assuming existence, one can now verify that
the RHS of~\eref{e:LSIBVPsoln0} and~\eref{e:LSIBVPsoln}
indeed satisfies the DDE as well as the initial and BCs.
That the function defined by~\eref{e:LSIBVPsoln0}
solves the DLS equation is a trivial consequence
of the fact that it is in Ehrenpreis form.
When $t=0$ the term proportional to $z^{-n}$ in the first
integral of~\eref{e:LSIBVPsoln}
gives zero contribution, since the corresponding integrand
is analytic, bounded for $|z|>1$, and $O(1/z^{n+1})$ as $z\to\infty$.
Similarly, the second integral vanishes for the same reasons.
The only piece left coincides with the RHS of the first of~\eref{e:LSinvztransf}
at $t=0$, which therefore yields the initial datum~$q_n(0)$.
Finally, for $n=0$ the first integral in~\eref{e:LSIBVPsoln} is obviously zero,
while the second becomes just the inversion integral in~\eref{e:LSinvztransf}.
Hence its result is simply~$q_0(t)$.
Even though $\^f_0(z,T)$ depends on values of the BC $q_0(t)$
at all times~$t$ from 0 to~$T$,
in practice~\eref{e:LSIBVPsoln} preserves causality, and the solution of
the IBVP at time~$t$ does not depend on future values of the BCs,
because one can replace $T$ with $t$ in~\eref{e:LSIBVPsoln}.
The reason is that the difference between the two terms is
\[
\frac1{2\pi}\mathop{\textstyle\trueint}\limits_{\partial D_{+\#out}}(z-1/z)\,z^{-n-1}
\mathop{\textstyle\trueint}\limits_t^T {\rm e}^{-i\omega(z)(t-t')}q_0(t')\,{\rm d} t'\,{\rm d} z\,,
\]
and $\forall n\ne0$
the integrand is analytic and bounded in~$D_{+\#out}$, and vanishes
as $z\to\infty$ in~$D_+$.
Hence, the integral is zero~$\forall n>0$.
For all $n\ne0$, the second integrand in~\eref{e:LSIBVPsoln}
is analytic and bounded in $D_{-\#out}$.
Hence we can deform the integration contour from
$\partial D_{+\#out}$ to $|z|=1$, and substitute $z\to1/z$
in half of the integral.
The resulting expression for the solution coincides with
the solution of the IBVP via sine series,
namely~\eref{e:dLSIBVPsoln}.
We reiterate however that~\eref{e:LSIBVPsoln} also holds for $n=0$,
unlike~\eref{e:dLSIBVPsoln}.
Unlike sine/cosine transforms, the present method works equally well
for more general BCs, as we show below.
Also, unlike sine/cosine transforms, the present method can solve
IBVPs for arbitrary linear discrete evolution equations.
Finally, the method can be generalized
to solve IBVPs for integrable nonlinear DDEs,
as we show in section~\ref{s:IDNLS}.
\paragraph{Other boundary conditions.}
We now consider an IBVP for the DLS equation~\eref{e:DLS}
in which the BCs are a linear combination
of $q_0(t)$ and $q_{-1}(t)$ with constant coefficients,
namely, when
\begin{equation}
q_{-1}(t) - \alpha q_0(t) = h(t)\,
\label{e:DLSRobinBC}
\end{equation}
is given,
$\alpha\in{\mathbb{C}}$ is a nonzero but otherwise arbitrary constant,
and where in this case the labeling of the lattice should be such that
$n=-1$, not $n=0$, is the first lattice site.
Such BCs are the discrete analogue of Robin-type BCs in IBVPs for PDEs,
and cannot be solved using sine/cosine series.
The present method however works equally well;
the only difference from the previous case being that one needs to solve
the global relation for a different unknown.
Indeed, in \ref{s:Robin} we show that the solution of this IBVP is given by
\begin{eqnarray}
\fl
q_n(t)= \frac1{2\pi i}\!\mathop{\textstyle\trueoint}\limits_{|z|=1}
z^{n-1}{\rm e}^{-i\omega(z)t}\,\^q(z,0)\,{\rm d} z
- \frac1{2\pi i}\!\mathop{\textstyle\trueint}\limits_{\partial D_{\!+\#in}}\!\!z^{n-1}{\rm e}^{-i\omega(z)t}\,
\frac{\^G(z,T)}{1/z-\alpha}\,{\rm d} z
-\nu_\alpha\alpha^{-n-1}{\rm e}^{-i\omega(\alpha)t}\^G(1/\alpha,t)\,,
\nonumber\\[-0.4ex]
\label{e:DLSRobinsoln}
\\
\noalign{\noindent where}
\^G(z,t) =
i(z-1/z)\^h(z,t) + (z-\alpha)\^q(1/z,0)\,,
\label{e:DLSRobinGdef}
\end{eqnarray}
and where $\nu_\alpha=1$ if $\alpha\in
D_{\!+\#out}$, $\nu_\alpha=1/2$ if $\alpha\in\partial D_{\!+\#out}$
and $\nu_\alpha=0$ otherwise, and where the integral along $\partial
D_{\!+\#in}$ is to be taken in the principal value sense when
$\alpha\in\partial D_{\!+\#out}$. As before, one can easily verify
that the expression in~\eref{e:DLSRobinsoln} indeed
solves~\eref{e:DLS} and satisfies the initial condition and the
BC~\eref{e:DLSRobinBC}. Moreover, one can also verify that, in the
limit $\alpha\to\infty$ with $h(t)/\alpha= h'(t)$ finite, the
solution of the IBVP with ``Dirichlet-type'' BCs
[namely~\eref{e:LSIBVPsoln0}], is recovered.
\section{Integrable discrete nonlinear Schr\"odinger equation}
\label{s:IDNLS}
We now turn our attention to IBVPs for the IDNLS
equation~\eref{e:IDNLS}. As before, we first review the IVP, which
serves to introduce some of the tools that will be used for the IBVP. We
require the same regularity conditions on the initial-boundary data
as in the linear case.
\subsection{The Ablowitz-Ladik system on the integers}
\label{s:AL}
Consider the AL system~\eref{e:AL} with $n\in{\mathbb{Z}}$ and
$t\in{\mathbb{R}}^+$, and with $q_n(0)$ given. A Lax pair for~\eref{e:AL}
is given by~\eref{e:ALLP}, where now we take $\Phi_n(z,t)$ to be a
$2\times2$ matrix, $\_Q_n(t)$ and $\_H_n(z,t)$ are defined
in~\eref{e:QH}, and $\omega(z)\equiv\omega_\mathrm{idnls}(z)=
\omega_\mathrm{dls}(z^2)/2$,
where $\omega_\mathrm{dls}(z)$ was defined in~\eref{e:DLSdisprel}.
As in the linear case, we assume
$q_n(0)\in\ell^1({\mathbb{Z}})$.
(As in the continuum limit, the IST with non-vanishing BCs at infinity
is significantly more involved, see Refs.~\cite{IP23p1711,IP8p889}.)
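Note the explicit identity
\[
\omega_\mathrm{idnls}(z)= \txtfrac12\,\omega_\mathrm{dls}(z^2)
= \txtfrac12\big(2-z^2-1/z^2\big)= -\txtfrac12(z-1/z)^2\,,
\]
consistent with the exponential factor ${\rm e}^{i(z-1/z)^2\sigma_3t/2}$
found in the linear limit of section~\ref{s:linearLaxpair}.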
\paragraph{Jost solutions.}
As customary, we remove the $n$-dependence of the eigenfunctions
as $n\to\pm\infty$
by introducing a modified eigenfunction as
\begin{equation}
\Phi_n(z,t)= \mu_n(z,t)\,\_Z^n{\rm e}^{-i\omega(z)t\sigma_3}\,.
\label{e:ALphimu}
\end{equation}
(This definition differs from the usual one
by the factor ${\rm e}^{-i\omega(z)t\sigma_3}$, which has been added
for consistency with the IBVP, discussed in section~\ref{s:ALIBVP}.
With this choice, the scattering matrix will be independent of time.)
Then $\mu_n(z,t)$ satisfies
the following modified Lax pair:
\begin{eqnarray} \mu_{n+1}-\^{\_Z}\mu_n= \_Q_n\mu_n\_Z^{-1}\,,
\qquad \.\mu_n +
i\omega(z)[\sigma_3,\mu_n] = \_H_n\mu_n\,, \label{e:ALLPm} \end{eqnarray}
where $\^{\_Z}\_A= \_Z\_A\_Z^{-1}$.
It is also useful to use the integrating factor
$\esh{i\theta}(\_A)={\rm e}^{i\theta\sigma_3}\_A\,{\rm e}^{-i\theta\sigma_3}$
(cf.~\ref{s:notations}). Then, the function
\begin{equation}
\Psi_n(z,t)= \^{\_Z}^{-n}\esh{i\omega(z)t}\mu_n(z,t)\,,
\label{e:PsiALdef}
\end{equation}
solves
\begin{eqnarray}
\Psi_{n+1}-\Psi_n= \_Z^{-1}\^{\_Z}^{-n}\esh{i\omega(z)t}(\_Q_n)\Psi_n\,,
\qquad
\.\Psi_n= \^{\_Z}^{-n}\esh{i\omega(z)t}(\_H_n)\Psi_n\,.
\label{e:ALmodifiedLPPsi}
\end{eqnarray}
One can now easily
``integrate''~\eref{e:ALmodifiedLPPsi} and thereby obtain the
solutions of~\eref{e:ALLPm} which reduce to the identity matrix as
$n\to\mp\infty$: \begin{equation} \fl \label{e:ALmusolns} \mu_n\o1(z,t)= \_I +
\_Z^{-1}\mathop{\textstyle\truesum}\limits_{m=-\infty}^{n-1}\^{\_Z}^{n-m}(\_Q_m\mu_m\o1)\,,\quad
\mu_n\o2(z,t)= \_I -
\_Z^{-1}\mathop{\textstyle\truesum}\limits_{m=n}^\infty\^{\_Z}^{n-m}(\_Q_m\mu_m\o2)\,. \end{equation} Of
course, unlike the linear case the eigenfunctions are now defined in
terms of summation equations (the discrete analogue of integral
equations).
As in the linear problem,
\eref{e:ALmusolns} imply certain analyticity properties
for the eigenfunctions.
More precisely, let $\mu_n\o{j}(z,t)=(\mu_n\o{j,L},\mu_n\o{j,R})$,
$j=1,2$,
where the column vectors $\mu_n\o{j,L}(z,t)$ and $\mu_n\o{j,R}(z,t)$
denote respectively the first and second column of $\mu_n\o{j}(z,t)$.
These columns are analytic in the following regions~\cite{APT2003}:
\[
\mu_n\o{1,L},~\mu_n\o{2,R}:\quad |z|>1\,,\qquad
\mu_n\o{1,R},~\mu_n\o{2,L}:\quad |z|<1\,.
\]
Moreover, these columns are continuous and bounded on the closure of
these domains. These properties immediately yield those of
$\Phi_n\o{j}(z,t)=\mu_n\o{j}(z,t)\,\_Z^n{\rm e}^{-i\omega(z)t\sigma_3}$
for $j=1,2$:\break $\Phi_n\o{1,L}(z,t)$ and $\Phi_n\o{2,R}(z,t)$ are
analytic for $|z|>1$, and $\Phi_n\o{1,R}(z,t)$ and
$\Phi_n\o{2,L}(z,t)$ for $|z|<1$.
\paragraph{Scattering matrix.}
Equation~\eref{e:ALLP1} implies\,
$\det\,\Phi_{n+1}=(1-q_np_n)\det\,\Phi_n$.
Therefore
\begin{equation}
\det\,\Phi_n\o1= \mathop{\textstyle\trueprod}\limits_{m=-\infty}^{n-1}(1-q_m p_m)\,,\quad
\det\,\Phi_n\o2= \mathop{\textstyle\trueprod}\limits_{m=n}^\infty(1-q_m p_m)^{-1}=:1/C_n\,.
\label{e:ALphidet}
\end{equation}
(Note $\det\Phi_n=\det\mu_n$.)
Equations~\eref{e:ALphidet} mark a significant difference of the discrete case
from the continuum case,
where such determinants are independent of both the potential and the
independent variable (cf.\ section~\ref{s:continuum}).
For the focusing IDNLS [namely, \eref{e:AL} with $p_n=\nu q_n^*$ and $\nu=-1$],
$1-q_np_n=1+|q_n|^2$, and therefore
$\det\mu_n\o{j}\ne0$ $\forall n\in{\mathbb{Z}}$ for $j=1,2$.
For the defocusing case ($\nu=1$), however, it is necessary to assume that
$|q_n|\ne1$ $\forall n\in{\mathbb{Z}}$ in order for $\det\mu_n\o{j}$ to be
guaranteed to be nonzero.
Hereafter we will assume that $q_np_n\ne1~\forall n\in{\mathbb{Z}}$.
Moreover, we will require that the product
\[
C_{-\infty}= \det\,\Phi_\infty\o1= 1/\det\,\Phi_{-\infty}\o2=
\mathop{\textstyle\trueprod}\limits_{n=-\infty}^\infty(1-q_np_n)
\]
be finite, which will simplify the study of the scattering
coefficients. Under these hypotheses, the matrices $\Phi_n\o1$ and
$\Phi_n\o2$ are both fundamental solutions of the scattering
problem~\eref{e:ALLP1}. Hence they must be proportional to each
other: $\Phi_n\o1(z,t)= \Phi_n\o2(z,t)\_A(z)$ on $|z|=1$, where
$\_A(z)=\big(a_{jj'}(z)\big)$ is the $2\times2$ scattering matrix.
In terms of the modified eigenfunctions:
\begin{equation} \mu_n\o1(z,t)=
\mu_n\o2(z,t)\^{\_Z}^n\,\esh{-i\omega(z)t}\_A(z)\,.
\label{e:ALmuscat0} \end{equation} Or, in component form, \numparts
\label{e:ALmuscat} \begin{eqnarray} \mu_n\o{1,L}(z,t)=
a_{11}(z)\,\mu_n\o{2,L}(z,t)+z^{-2n}{\rm e}^{2i\omega(z)t}a_{21}(z)\,\mu_n\o{2,R}(z,t)\,,
\\
\mu_n\o{1,R}(z,t)=
z^{2n}{\rm e}^{-2i\omega(z)t}a_{12}(z)\,\mu_n\o{2,L}(z,t)+a_{22}(z)\,\mu_n\o{2,R}(z,t)\,.
\end{eqnarray}
\endnumparts
The above relations imply
$\_A(z)=\lim_{n\to\infty}\^{\_Z}^{-n}\esh{i\omega(z)t}\mu_n\o1(z,t)=
\lim_{n\to\infty}\Psi_n\o1(z,t)$, that is,
\begin{equation}
\_A(z)= \_I + \_Z^{-1}\mathop{\textstyle\truesum}\limits_{n=-\infty}^\infty
\^{\_Z}^{-n}\esh{-i\omega(z)t}\big(\_Q_n(t)\mu_n\o1(z,t)\big)\,.
\label{e:SmatrixAL}
\end{equation}
The scattering matrix $\_A(z)$ is independent of time,
since $\_A(z)=\lim_{n\to\infty}\Psi_n\o1(z,t)$,
and
$\lim_{n\to\infty}\.\Psi_n(z,t)=0$.
Equation~\eref{e:ALmuscat0} also implies\,
$\det\,\_A(z)= \det\,\Phi_\infty\o1(z,t)=C_{-\infty}$,
as well as
\begin{eqnarray}
\_A(z)= C_n\begin{pmatrix}
\mathop{\rm Wr}\nolimits\big(\Phi_n\o{1,L},\Phi_n\o{2,R}\big)
&\mathop{\rm Wr}\nolimits\big(\Phi_n\o{1,R},\Phi_n\o{2,R}\big)
\\
- \mathop{\rm Wr}\nolimits\big(\Phi_n\o{1,L},\Phi_n\o{2,L}\big)
&- \mathop{\rm Wr}\nolimits\big(\Phi_n\o{1,R},\Phi_n\o{2,L}\big)
\end{pmatrix}\,.
\label{e:ALWronskian}
\end{eqnarray}
The analyticity of the eigenfunctions then implies that
$a_{11}(z)$ and $a_{22}(z)$ can be analytically continued off the unit circle,
respectively into the domains $|z|>1$ and $|z|<1$,
but $a_{12}(z)$ and $a_{21}(z)$ cannot.
It is also useful to introduce the reflection coefficients
\begin{equation}
\rho_1(z)= {a_{21}(z)}/{a_{11}(z)}\,,\qquad
\rho_2(z)= {a_{12}(z)}/{a_{22}(z)}\,.
\label{e:ALIVPreflection}
\end{equation}
\paragraph{Symmetries.}
When $p_n(t)=\nu q_n^*(t)$,
the scattering problem~\eref{e:ALLP1} admits an important involution,
which can be conveniently written introducing the matrix~$\sigma_\nu$
defined in~\eref{e:sigmanudef}.
Indeed, when $p_n=\nu q_n^*$, if $\Phi_n(z,t)$ is a solution of~\eref{e:ALLP1},
so is the matrix
\begin{equation}
\Phi_n'(z,t)= \sigma_\nu \Phi_n^*(1/z^*,t)\,.
\label{e:PhiALsymm}
\end{equation}
Then, comparing the asymptotic behavior of the first and second columns of
the Jost eigenfunctions as $n\to\pm\infty$
one obtains, for $j=1,2$,
\begin{eqnarray}
\Phi_n\o{j,L}(z,t)=\sigma_\nu\big(\Phi_n\o{j,R}(1/z^*,t)\big)^*,\quad
\Phi_n\o{j,R}(z,t)=\nu\sigma_\nu\big(\Phi_n\o{j,L}(1/z^*,t)\big)^*.
\label{e:PhiALsymmcol}
\end{eqnarray}
The above relations imply the following symmetries for the elements of the
scattering matrix:
\begin{equation}
a_{22}(z)= a_{11}^*(1/z^*)\,,\qquad
a_{21}(z)= \nu\,a_{12}^*(1/z^*)\,.
\label{e:ALscattsymm}
\end{equation}
In turn, these imply $\rho_2(z)=\nu \rho_1^*(1/z^*)$.
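Indeed, using $a_{22}(z)=a_{11}^*(1/z^*)$ together with
$a_{12}(z)=\nu\,a_{21}^*(1/z^*)$
[the second of~\eref{e:ALscattsymm} evaluated at $z\to1/z^*$], one has
\[
\rho_2(z)= \frac{a_{12}(z)}{a_{22}(z)}
= \nu\,\frac{a_{21}^*(1/z^*)}{a_{11}^*(1/z^*)}= \nu\,\rho_1^*(1/z^*)\,.
\]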
\paragraph{Discrete spectrum.}
The proper eigenvalues of the scattering problem~\eref{e:ALLP1}
are the values $z=z_j$ with $|z_j|<1$ and $z=\=z_j$ with $|\=z_j|>1$
for which there exist eigenfunctions bounded $\forall n\in{\mathbb{Z}}$.
From the asymptotic behavior of the Jost solutions one can see that
such eigenvalues occur whenever the appropriate left- and right-sided
Jost solutions are proportional, namely
$\Phi_n\o{1,L}(\=z_j,t)= \=b_j\o{o}\Phi_n\o{2,R}(\=z_j,t)$ and
$\Phi_n\o{1,R}(z_j,t)= b_j\o{o}\Phi_n\o{2,L}(z_j,t)$,
or equivalently:
\begin{eqnarray}
\fl
\mu_n\o{1,L}(\=z_j,t)= \=b_j\o{o} \=z_j^{-2n}{\rm e}^{2i\omega(\=z_j)t}\mu_n\o{2,R}(\=z_j,t)\,,
\quad
\mu_n\o{1,R}(z_j,t)= b_j\o{o} z_j^{2n}{\rm e}^{-2i\omega(z_j)t}\mu_n\o{2,L}(z_j,t)\,.
\label{e:ALIVPzeros}
\end{eqnarray}
The Wronskian representations~\eref{e:ALWronskian} then imply
that such eigenvalues are the zeros of the scattering coefficients:
$a_{11}(\=z_j)=0$ and $a_{22}(z_j)=0$, respectively.
(As in Ref.~\cite{APT2003} we assume that $a_{jj}(z)\ne0$ for all $|z|=1$.)
Since no accumulation points of such zeros can exist
(because of the sectional analyticity of the scattering coefficients),
it follows that there is a finite number of them.
As in Ref.~\cite{APT2003} we assume all of these zeros are simple.
(The case of multiple zeros can be studied as the coalescence of simple zeros,
by analogy with the continuum case~\cite{JETP34p62}.)
Since $a_{jj}(z)$ are even functions \cite{APT2003},
$z=z_j$ is a zero of $a_{22}(z)$ iff $z=-z_j$ is,
and similarly for $a_{11}(z)$.
Moreover, the symmetries~\eref{e:ALscattsymm} imply that
$z=z_j$ is a zero of $a_{22}(z)$ iff $\=z_j=1/z_j^*$ is a zero of $a_{11}(z)$.
Thus, discrete eigenvalues appear in quartets.
The inverse problem will involve the modified eigenfunctions
$\mu_n\o{1,L}(z,t)/a_{11}(z)$ and $\mu_n\o{1,R}(z,t)/a_{22}(z)$.
Equations~\eref{e:ALIVPzeros} imply \begin{eqnarray} \fl
\mathop{\rm Res}\limits_{z=\=z_j}\bigg[\frac{\mu_n\o{1,L}(z,t)}{a_{11}(z)}\bigg]
= \=b_j\=z_j^{-2n}{\rm e}^{2i\omega(\=z_j)t}\mu_n\o{2,R}(\=z_j,t)\,,
\quad
\mathop{\rm Res}\limits_{z=z_j}\bigg[\frac{\mu_n\o{1,R}(z,t)}{a_{22}(z)}\bigg]
= b_jz_j^{2n}{\rm e}^{-2i\omega(z_j)t}\mu_n\o{2,L}(z_j,t)\,,
\nonumber\\[-1ex]
\end{eqnarray}
where $b_j= b_j\o{o}/a'_{22}(z_j)$ and $\=b_j= \=b_j\o{o}/a'_{11}(\=z_j)$
are referred to as the norming constants.
The symmetries of the scattering problem imply
$\=b_j= -\nu(b_j/z_j^2)^*$.
\newpage
\paragraph{Asymptotics.}
The asymptotic behavior of the eigenfunctions as $z\to0$ or $z\to\infty$
can be obtained from~\eref{e:ALmusolns}.
For example, for $\mu_n\o1(z,t)$ it is
\begin{eqnarray}
\mu_n\o1(z,t)= \_I + \_Q_{n-1}\,\_Z^{-1}
+ O(\_Z^{-2})\,
\qquad\mathrm{as}~z\to(\infty,0)\,,
\label{e:ALasymp}
\end{eqnarray}
where $z\to(z_L,z_R)$ indicates $z\to z_L$ in the first
column and $z\to z_R$ in the second one, and the asymptotics
corresponding to $O(\_Z^m)$ is defined in~\ref{s:notations}.
Equation~\eref{e:ALasymp} will allow us
to reconstruct the potentials from the asymptotic behavior of
$\mu_n\o1$:
\[
\_Q_n(t)= \lim_{z\to(\infty,0)} (\,\mu_{n+1}\o1(z,t)-\_I\,)\,\_Z\,.
\]
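Indeed, by~\eref{e:ALasymp} one has
$\big(\mu_{n+1}\o1(z,t)-\_I\big)\_Z= \_Q_n(t)+O(\_Z^{-1})$,
so the limit $z\to(\infty,0)$ picks out exactly~$\_Q_n(t)$.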
The asymptotic behavior of $\mu_n\o2(z,t)$ is obtained
in a slightly different way from that of $\mu_n\o1(z,t)$,
and the result is also different.
More precisely, in~\ref{s:asymptotics} we show that
\begin{eqnarray}
C_n\mu_n\o2(z,t)= \_I - \_Q_n\,\_Z +O(\_Z^2)
\qquad\mathrm{as}~z\to(0,\infty)\,.
\label{e:ALasympmu2}
\end{eqnarray}
Also, inserting~\eref{e:ALasymp} into the diagonal elements
of~\eref{e:SmatrixAL}
one obtains the asymptotic behavior of the analytic scattering coefficients:
\begin{eqnarray}
a_{11}(z)=1+ \frac1{z^2}\mathop{\textstyle\truesum}\limits_{n=-\infty}^\infty q_n(t)p_n(t)+O(1/z^4)\,,
\qquad\mathrm{as}~z\to\infty\,,
\nonumber
\end{eqnarray}
which by symmetry also determines the behavior of $a_{22}(z)$ as $z\to0$.
\paragraph{Inverse problem.}
The inverse problem is the RHP defined by \eref{e:ALmuscat}
for $|z|=1$:
\numparts
\label{e:ALmu2scat}
\begin{eqnarray}
\frac{\mu_n\o{1,L}(z,t)}{a_{11}(z)} - \mu_n\o{2,L}(z,t)
= z^{-2n}{\rm e}^{2i\omega(z)t}\rho_1(z,t)\mu_n\o{2,R}(z,t)\,,
\\
\frac{\mu_n\o{1,R}(z,t)}{a_{22}(z)} - \mu_n\o{2,R}(z,t)
= z^{2n}{\rm e}^{-2i\omega(z)t}\rho_2(z,t)\mu_n\o{2,L}(z,t)\,,
\end{eqnarray}
\endnumparts
where $\rho_1(z)$ and $\rho_2(z)$ are as in~\eref{e:ALIVPreflection}.
Unlike the continuum case,
the asymptotics of $\mu_n\o{2,L}(z,t)$
as $z\to\infty$ depends on the values of the potentials
$q_m(t)$ and $p_m(t)$ for all $m\ge n$ through~$C_n$ [cf.~\eref{e:ALphidet}]\,.
This problem can be circumvented by introducing
the following renormalizations:
\begin{eqnarray}
\_M_n^-(z,t)= \begin{pmatrix}1&0\\0&C_n\end{pmatrix}
\bigg(\frac{\mu_n\o{1,L}(z,t)}{a_{11}(z)}\,,\,\mu_n\o{2,R}(z,t)\bigg)\,,
\nonumber
\\
\_M_n^+(z,t)= \begin{pmatrix}1&0\\0&C_n\end{pmatrix}
\bigg(\mu_n\o{2,L}(z,t),\,\frac{\mu_n\o{1,R}(z,t)}{a_{22}(z)}\bigg)\,.
\nonumber
\end{eqnarray}
The matrices $\_M_n^\pm(z,t)$ are sectionally meromorphic for
$|z|<1$ and $|z|>1$, respectively.
Moreover, \eref{e:ALmu2scat} yields
the following jump condition for the matrices~$\_M_n^\pm(z,t)$ on $|z|=1$:
\begin{eqnarray}
\_M_n^-(z,t)=\_M_n^+(z,t)\big(\_I-\_J_n(z,t)\big)\,,
\label{e:ALRHP}
\\
\noalign{\noindent where the jump matrix $\_J_n(z,t)$ is}
\_J_n(z,t)= \begin{pmatrix}\rho_1(z)\rho_2(z) &z^{2n}{\rm e}^{-2i\omega(z)t}\rho_2(z)\\
-z^{-2n}{\rm e}^{2i\omega(z)t}\rho_1(z) &0\end{pmatrix}.
\nonumber
\end{eqnarray}
Moreover, $\_M_n^\pm(z,t)$ have the following asymptotic behavior:
\numparts
\label{e:MasympAL}
\begin{eqnarray}
\_M_n^-(z,t)= \_I
+ \frac1z\begin{pmatrix}0 &-q_n/C_n\\ p_{n-1}C_n &0\end{pmatrix}
+ O(1/z^2)
\qquad\mathrm{as}~z\to\infty\,,
\label{e:M-asympAL}
\\
\_M_n^+(z,t)= \begin{pmatrix}1/C_n &0\\ 0 &C_n\end{pmatrix}
+ z \begin{pmatrix}0 &q_{n-1}\\ -p_n &0\end{pmatrix}
+ O(z^2)
\qquad\mathrm{as}~z\to0\,.
\label{e:M+asympAL}
\end{eqnarray}
\endnumparts
In the absence of a discrete spectrum [that is, if $a_{11}(z,t)\ne 0$
for $|z|>1$ and $a_{22}\ne 0$ for $|z|<1$] the matrix
functions $\_M_n^\pm(z,t)$ are analytic in their
respective domains.
In particular, \eref{e:M-asympAL} allows the RHP~\eref{e:ALRHP}
to be solved via the Cauchy projectors $P^\pm$ over the
unit circle, as in the linear case. Of course, unlike the linear case
the solution is now expressed in terms of a matrix integral equation:
\begin{equation}
\_M_n^+(z,t)= \_I +
\frac1{2\pi i}\mathop{\textstyle\trueint}\limits_{|\zeta|=1}\_M_n^+(\zeta,t)\frac{\_J_n(\zeta,t)}{\zeta-z}\,{\rm d}\zeta\,.
\label{e:ALRHPsoln}
\end{equation}
The asymptotic behavior of $\_M_n^+(z,t)$ as $z\to0$ is easily obtained
from~\eref{e:ALRHPsoln}:
\begin{eqnarray}
\_M_n^+(z,t)= \_I
+ \frac1{2\pi i}\!\mathop{\textstyle\trueint}\limits_{|\zeta|=1}\!\_M_n^+(\zeta,t)\_J_n(\zeta,t)\frac{{\rm d}\zeta}\zeta
+ \frac z{2\pi i}\!\mathop{\textstyle\trueint}\limits_{|\zeta|=1}\!\_M_n^+(\zeta,t)\_J_n(\zeta,t)\frac{{\rm d}\zeta}{\zeta^2}
+ O(z^2).
\nonumber\\[-2ex]
\label{e:ALasympRHP}
\end{eqnarray}
Comparing the limit as $z\to0$ of~\eref{e:ALasympRHP} with~\eref{e:M+asympAL},
we see that the off-diagonal portion of the first integral in~\eref{e:ALasympRHP}
is zero, a fact which is not entirely obvious otherwise.
(This integral is missing in the corresponding formula in Ref.~\cite{APT2003}.)
Then, comparing the $(1,2)$ components of~\eref{e:ALasympRHP}
and~\eref{e:M+asympAL} we obtain the reconstruction formula
for the solution of the IVP:
\[
q_n(t)= \frac1{2\pi i}\mathop{\textstyle\trueint}\limits_{|z|=1}z^{2n}{\rm e}^{-2i\omega(z)t}
\rho_2(z)\big(\mu_{n+1}\o2(z,t)\big)_{11}\,{\rm d} z\,.
\]
\paragraph{Linear limit.}
As in the continuum limit, the IST is the nonlinear analogue of
the linear transform pair.
Namely, if $\_Q_n= O(\epsilon)$, then $\mu_n\o1=\_I + O(\epsilon)$ and
\begin{eqnarray}
\_A(z)= \_I + \_Z^{-1}\mathop{\textstyle\truesum}\limits_{n=-\infty}^\infty\^{\_Z}^{-n}\esh{-i\omega(z)t}\_Q_n(t)
+ O(\epsilon^2)\,.
\nonumber
\\
\noalign{\noindent Thus}
\rho_2(\zeta)= \mathop{\textstyle\truesum}\limits_{n=-\infty}^\infty \zeta^{-2n-1}{\rm e}^{-2i\omega(\zeta)t}q_n(t)+O(\epsilon^2)
= \frac1\zeta \^q(\zeta^2,0) + O(\epsilon^2)\,,
\nonumber
\end{eqnarray}
where $\^q(z,t)$ is the linear $z$-transform defined in~\eref{e:Fourierpair}.
Similarly,
\begin{eqnarray}
\fl
q_n(t)= \frac1{2\pi i}\mathop{\textstyle\trueint}\limits_{|\zeta|=1}\zeta^{2n}{\rm e}^{-2i\omega(\zeta)t}\rho_2(\zeta)\,{\rm d}\zeta
+ O(\epsilon^2)
= \frac1{2\pi i}\mathop{\textstyle\trueint}\limits_{|z|=1}z^{n-1}{\rm e}^{-i\omega_\mathrm{dls}(z)t}\^q(z,0)\,{\rm d} z + O(\epsilon^2)\,,
\label{e:ALlinearlimit}
\end{eqnarray}
where the change of variable $\zeta^2= z$ was performed
in the RHS of~\eref{e:ALlinearlimit},
and where $\omega_\mathrm{idnls}(\zeta)=
{\textstyle\frac12}\omega_\mathrm{dls}(\zeta^2)$,
as discussed in section~\ref{s:linearLaxpair}.
\subsection{The Ablowitz-Ladik system on the naturals}
\label{s:ALIBVP}
We now consider the IBVP for the IDNLS.
That is, we solve~\eref{e:IDNLS} with $n\in{\mathbb{N}}$, $t\in{\mathbb{R}}^+$,
and with $q_n(0)$ and $q_0(t)$ given.
The approach we will follow is a combination of the method for the IVP
for the IDNLS
on the integers and that for the IBVP for the DLS on the naturals.
\paragraph{Eigenfunctions and analyticity.}
Making use of the modified eigenfunction
$\Psi_n(z,t)$ in~\eref{e:PsiALdef},
we define three eigenfunctions $\mu_n\o{j}(z,t)$ which reduce to the
identity matrix respectively when $(n,t)=(0,0)$,
as $(n,t)\to(\infty,t)$ and at $(n,t)=(0,T)$:
\numparts
\label{e:ALmuIBVPsolns}
\begin{eqnarray}
\fl
\mu_n\o1(z,t)= \_I
+ \_Z^{-1}\mathop{\textstyle\truesum}\limits_{m=0}^{n-1}\^{\_Z}^{n-m}(\_Q_m(t)\mu_m\o1(z,t))
+ \^{\_Z}^n\mathop{\textstyle\trueint}\limits_0^t
\esh{-i\omega(z)(t-t')}\big(\_H_0(z,t')\mu_0\o1(z,t')\big)\,{\rm d} t'\,,
\label{e:ALmu1IBVPsolns}
\\
\fl
\mu_n\o2(z,t)= \_I -
\_Z^{-1}\mathop{\textstyle\truesum}\limits_{m=n}^\infty\^{\_Z}^{n-m}(\_Q_m(t)\mu_m\o2(z,t))\,,
\\
\fl
\mu_n\o3(z,t)= \_I
+ \_Z^{-1}\mathop{\textstyle\truesum}\limits_{m=0}^{n-1}\^{\_Z}^{n-m}(\_Q_m(t)\mu_m\o3(z,t))
- \^{\_Z}^n\mathop{\textstyle\trueint}\limits_t^T
\esh{-i\omega(z)(t-t')}\big(\_H_0(z,t')\mu_0\o3(z,t')\big)\,{\rm d} t'\,.
\end{eqnarray} \endnumparts Note that $\mu_n\o2(z,t)$ coincides with the eigenfunction
in the IVP, defined in~\eref{e:ALmusolns}.
As in the linear case,
we partition the complex $z$-plane into the domains $D_\pm$ defined as
$D_\pm=\{z\in{\mathbb{C}}:\mathop{\rm Im}\nolimits\omega(z)\gl0\}$.
We then write $D_\pm= D_{\pm\#in}\cup D_{\pm\#out}$
where the subscripts ``in'' and ``out'' denote the portions of $D_\pm$
inside and outside the unit disk, respectively.
That is (cf.~Fig.~\ref{f:DpmAL}),
\vglue-1.4\medskipamount
\numparts
\begin{eqnarray}
D_{+\#in}= \{z\in{\mathbb{C}}: |z|<1\,\wedge\,\arg z\in(0,\pi/2)\cup(\pi,3\pi/2)\}\,,
\nonumber
\\
D_{-\#in}= \{z\in{\mathbb{C}}: |z|<1\,\wedge\,\arg z\in(\pi/2,\pi)\cup(3\pi/2,2\pi)\}\,,
\nonumber
\\
D_{+\#out}= \{z\in{\mathbb{C}}: |z|>1\,\wedge\,\arg z\in(\pi/2,\pi)\cup(3\pi/2,2\pi)\}\,,
\nonumber
\\
D_{-\#out}= \{z\in{\mathbb{C}}: |z|>1\,\wedge\,\arg z\in(0,\pi/2)\cup(\pi,3\pi/2)\}\,.
\nonumber
\end{eqnarray}
\endnumparts
\smallskip\noindent
Then, in a similar way as in the IVP on the whole line and the
IBVP in the linear problem,
we can obtain the regions of analyticity and boundedness
of the eigenfunctions.
More precisely, writing again $\mu_n\o{j}(z,t)=(\mu_n\o{j,L},\mu_n\o{j,R})$,
we have:
\begin{itemize}
\item
$\mu_n\o1(z,t)$ and $\mu_n\o3(z,t)$ are analytic in the punctured
complex $z$-plane~${\mathbb{C}}^{\,[\raise0.08ex\hbox{\scriptsize$\slash$}\kern-0.34em0]}$;
\item
$\mu_n\o{1,L}(z,t)$ is continuous and bounded in $\=D_{+\#out}$;
\item
the restriction of $\mu_n\o{1,R}(z,t)$ to $D_{-\#in}$ is continuous and
bounded in $\=D_{-\#in}$;
\item
$\mu_n\o{3,L}(z,t)$ is continuous and bounded in $\=D_{-\#out}$;
\item
the restriction of $\mu_n\o{3,R}(z,t)$ to $D_{+\#in}$ is continuous and
bounded in $\=D_{+\#in}$;
\item
$\mu_n\o{2,L}(z,t)$ is analytic for $|z|<1$
and continuous and bounded for $|z|\le1$;
\item
$\mu_n\o{2,R}(z,t)$ is analytic for $|z|>1$
and continuous and bounded for $|z|\ge1$.
\end{itemize}
The analyticity of the eigenfunctions is formally proven via
Neumann series as in the IVP~\cite{APT2003} and as in the
IVP for the continuum case~\cite{NLTY18p1771}.
However, showing the continuity of $\mu_n\o{1,R}(z,t)$ and
$\mu_n\o{3,R}(z,t)$ at $z=0$ is nontrivial,
and it requires studying the asymptotic behavior of the eigenfunctions
as $z\to 0$ (see~\ref{s:asymptotics}).
\begin{figure}[t!]
\rightline{\includegraphics[width=0.405\textwidth]{figs/dnlsregions2.eps}\quad
\includegraphics[width=0.405\textwidth]{figs/dnlscontours4.eps}}
\caption{(Left) The regions $D_+$ (shaded) and $D_-$ (white) of the
$z$-plane where $\mathop{\rm Im}\nolimits[\omega(z)]\protect\mathrel{\mathpalette\overl@ss>} 0$ in the nonlinear case,
with $D_\pm= D_{\pm\#in}\cup D_{\pm\#out}$\,.
(Right) The contours $L_1,\dots,L_4$ that define the Riemann-Hilbert problem
(see text for details).}
\label{f:DpmAL}
\end{figure}
\paragraph{Scattering matrices.}
The relation $\det\,\Phi_{n+1}= (1-q_np_n)\,\det\,\Phi_n$ still holds.
Therefore $\det\,\Phi_n\o1$ and $\det\,\Phi_n\o2$
are still given by~\eref{e:ALphidet}, and
$\det\,\Phi_n\o1= \det\,\Phi_n\o3$.
[Note that $\mu_t=\_L\mu+\mu\_R$ implies
$(\det\mu)_t=\mathop{\rm tr}(\_L+\_R)\,\det\mu$,
and in our case both $\_L$ and $\_R$ are traceless;
cf.~\eref{e:ALLP} and~\ref{s:notations}.]\,\
Hence, under the same regularity hypotheses as before,
$\Phi_n\o1$, $\Phi_n\o2$ and $\Phi_n\o3$ are each fundamental
solutions of the Lax pair~\eref{e:ALLP}.
We can therefore write the following relations
among the modified eigenfunctions:
\numparts
\label{e:ALIBVPjump}
\begin{eqnarray}
\mu_n\o2(z,t)= \mu_n\o1(z,t)\,\^{\_Z}^n\esh{-i\omega(z)t}\_s(z)\,,
\label{e:ALIBVPjump12}
\\
\mu_n\o3(z,t)= \mu_n\o1(z,t)\,\^{\_Z}^n\esh{-i\omega(z)t}\_S(z,T)\,,
\label{e:ALIBVPjump13}
\end{eqnarray}
\endnumparts
which hold wherever all terms are defined, namely:
the first column of~\eref{e:ALIBVPjump12} holds for
$0<|z|\le1$, the second column for $|z|\ge1$
and~\eref{e:ALIBVPjump13} holds $\forall z\ne0$.
Thus
\begin{equation}
\_s(z)= \mu_0\o2(z,0)\,,\qquad
\_S(z,T)= \big(\esh{i\omega(z)T}\mu_0\o1(z,T)\big)^{-1}\,.
\label{e:ALscattdef}
\end{equation}
Equation~\eref{e:ALscattdef} allows us to write
integral representations for the scattering matrices:
\numparts
\label{e:ALintegralscattering}
\begin{eqnarray}
\_s(z)= \_I - \_Z^{-1}\mathop{\textstyle\truesum}\limits_{n=0}^\infty
\^{\_Z}^{-n}\big(\_Q_n(t)\mu_n\o2(z,0)\big)\,,
\label{e:ALintegralscatterings}
\\
\_S^{-1}(z,T)= \_I + \mathop{\textstyle\trueint}\limits_0^T
\esh{i\omega(z)t}\big(\_H_0(z,t)\mu_0\o1(z,t)\big)\,{\rm d} t\,.
\label{e:ALintegralscatteringS}
\end{eqnarray}
\endnumparts
Note that $\_s(z)$ is again independent of time, since $\_s^{-1}(z)=
\lim_{n\to\infty}\^{\_Z}^{-n}\esh{i\omega(z)t}\mu_n\o1(z,t)=
\lim_{n\to\infty}\Psi_n\o1(z,t)$, as in the IVP.
Note also that~\eref{e:ALIBVPjump} implies
\begin{equation}
\det\,\_s(z)=1/C_0\,,\qquad
\det\,\_S(z,T)=1\,.
\label{e:IDNLSIBVPscattdet}
\end{equation}
The analyticity properties of $\mu_0\o2(z,t)$
are the same as those of $\mu_n\o2(z,t)$.
However, $\mu_0\o1(z,t)$ enjoys
larger domains of analyticity and boundedness than $\mu_n\o1(z,t)$.
The analyticity and boundedness regions of the scattering matrices
are determined correspondingly via~\eref{e:ALscattdef}:
\begin{itemize}
\item
$\_s_L(z)$ is analytic for $|z|<1$ and continuous and bounded for $|z|\le1$;
while $\_s_R(z)$ is analytic in $|z|>1$ and continuous and bounded for $|z|\ge1$;
\item
$\_S(z,T)$ is analytic in ${\mathbb{C}}^{\,[\raise0.08ex\hbox{\scriptsize$\slash$}\kern-0.34em0]}$;
moreover, $\_S_L(z,T)$ is continuous and bounded in $\=D_-$,
while $\_S_R(z,T)$ is continuous and bounded in $\=D_+$.
\end{itemize}
The above boundedness properties of~$\_S(z,T)$ can be obtained
as follows.
Let us write the matrix $\_S(z,T)$ as
\[
\_S(z,T)= \begin{pmatrix} A(z,T) &\~B(z,T)\\ B(z,T) &
\~A(z,T)\end{pmatrix}.
\]
As we show below, the symmetries of the problem imply that
$\~A(z,T)$ and $\~B(z,T)$ can be obtained respectively in terms of
$A(z,T)$ and $B(z,T)$. Hence, we only need to discuss the properties
of $A(z,T)$ and $B(z,T)$.
Recall that $\_S(z,T)$ is an entire function of~$z$,
and note that~\eref{e:IDNLSIBVPscattdet} implies
\[
\_S^{-1}(z,T)=
\begin{pmatrix} \~A(z,T) &-\~B(z,T)\\ -B(z,T) & A(z,T)\end{pmatrix}.
\]
Then \eref{e:ALscattdef} and the analyticity properties of
$\mu\o1_0(z,T)$ imply that $A(z,T)$ is bounded in $\=D_-$. Also,
\eref{e:ALintegralscatteringS} and the integral
representation~\eref{e:ALmu1IBVPsolns} with $n=0$ can be used to
write a Neumann series for $\_S^{-1}(z,T)$, which in turn can be
used to prove analyticity and boundedness of $B(z,T)$ in $\=D_-$.
The involution symmetry discussed when dealing with the~IVP is a
local property. Therefore, when $p_n(t)= \nu\,q_n^*(t)$, it also
applies for the~IBVP. That is, \eref{e:PhiALsymmcol} still holds, as
does~\eref{e:NLSsymmetriesPhij} for $j=1,2,3$. This implies \begin{equation}
\_s(z)= \begin{pmatrix} a(z) &\nu b^*(1/z^*)\\
b(z) &a^*(1/z^*)\end{pmatrix},
\quad
\_S(z,T)= \begin{pmatrix}A(z,T) & \nu B^*(1/z^*,T)\\
B(z,T) &A^*(1/z^*,T)\end{pmatrix}.
\label{e:ALIBVPscattdef}
\end{equation}
Note that \eref{e:IDNLSIBVPscattdet} imply
\begin{eqnarray}
a(z)a^*(1/z^*)-\nu b(z)b^*(1/z^*)=1/C_0 \,,
\nonumber\\
A(z,T)A^*(1/z^*,T)-\nu B(z,T)B^*(1/z^*,T)=1\,.
\nonumber
\end{eqnarray}
\paragraph{Asymptotics.}
Since $\mu_n\o2(z,t)$ coincides with~\eref{e:ALmusolns},
its asymptotics as $z\to (0,\infty)$ is still given
by~\eref{e:ALasympmu2}.
Also, in~\ref{s:asymptotics} we show that, even though the definition
of $\mu_n\o1(z,t)$ and $\mu_n\o3(z,t)$ involves time integrals,
we still have
$\mu_n\o{j}(z,t)=\_I+O(\_Z^{-1})$
as $z\to(\infty,0)$
for $j=1$ and $j=3$ in their respective domains of boundedness.
More precisely, for all $n>0$ we have
\numparts
\begin{eqnarray}
\mu_n\o{j}(z,t)=\_I+ \_Q_{n-1}(t)\_Z^{-1} + O(\_Z^{-2})\,,
\qquad {\rm as}~ z\to(\infty,0)\,
\label{e:ALIBVPmu13asymp}
\\
\noalign{\noindent for $j=1,3$, where the limits are restricted to the
appropriate regions of the complex plane in which the corresponding
columns are bounded. For $n=0$ we have instead}
\mu_0\o1(z,t)=
\_I + \big(\_Q_{-1}(t)-\esh{-i\omega(z)t}\_Q_{-1}(0)\big)\,\_Z^{-1}+O(\_Z^{-2})\,,
\label{e:mu1Oasymp}
\\
\mu_0\o3(z,t)=
\_I + \big(\_Q_{-1}(t)-\esh{-i\omega(z)(t-T)}\_Q_{-1}(T)\big)\,\_Z^{-1}+O(\_Z^{-2})\,,
\label{e:mu3Oasymp} \end{eqnarray} \endnumparts as $z\to (\infty,0)$. The above yield,
for all $n\ge0$, \begin{equation} \_Q_{n-1}(t)=
\lim_{z\to(\infty,0)}\big(\mu_n\o{j}(z,t)-\_I\big)\,\_Z\,,\qquad
{\rm for}~j=1,3\,.\label{e:ALQnasymp} \end{equation}
\noindent Also, the asymptotic behavior of the eigenfunctions
determines that of the scattering matrices. In particular, from the
second of~\eref{e:ALscattdef} we have \numparts
\label{e:ALIBVPscattcoeffasymp} \begin{eqnarray} A^*(1/z^*,T)= 1+O(1/z^2)\,,\,\,
B^*(1/z^*,T)= O(1/z)\,\,\,
\mathrm{as}~z\to\infty~\mathrm{in}~\=D_{+\#out},
\\
\noalign{\noindent while~\eref{e:ALIBVPjump13} implies}
A^*(1/z^*,T)= 1+O(z^2)\,,\quad B^*(1/z^*,T)= O(z)\,\quad
\mathrm{as}~z\to0~\mathrm{in}~\=D_{+\#in}. \end{eqnarray} Similarly,
\eref{e:ALasympmu2} and~\eref{e:ALIBVPjump12} yield \begin{eqnarray} a^*(1/z^*)=
1/C_0 + O(1/z^2)\,,\quad b^*(1/z^*)= O(1/z)\quad
\mathrm{as}~z\to\infty~\mathrm{in}~\=D_{+\#out}.
\nonumber\\[-1ex]
\end{eqnarray}
\endnumparts
\paragraph{Riemann-Hilbert problem, solution and reconstruction formula.}
We now formulate the RHP whose solution will enable us to obtain a
representation for the solution of the AL system on the naturals.
For later reference, we introduce the quantities \begin{eqnarray} \gamma(z)={\nu
b^*(z) \over a(z)} \,,\qquad R(z,t)= {B^*(1/z^*,t)\over
A^*(1/z^*,t)}\,,\qquad \Gamma(z)={ B(z,T) \over
a^*(1/z^*)d^*(1/z^*)} \,, \nonumber
\\
\noalign{\noindent with}
d(z)=
a(z)A^*(1/z^*,T)-\nu b(z)B^*(1/z^*,T) \,. \nonumber \end{eqnarray} Note that
$R(z,T)$ is defined $\forall z\in{\mathbb{C}}$ except where
$A^*(1/z^*,T)=0$, $\Gamma(z)$ is defined for $z\in L_3\cup L_4$,
$d(z)$ for $z\in\=D_{\pm\#in}$, and $\gamma(z)$ for $|z|=1$.
Moreover, $d^*(1/z^*)=1/C_0+O(1/z^2)$ as $z\to\infty$. In the
analysis of linearizable BCs, it will be useful to write
$\Gamma^*(1/z^*)$ in terms of only $a(z)$, $b(z)$ and $R(z,T)$ as
\[
\Gamma^*(1/z^*)={R(z,T) \over a(z)\big(a(z)-\nu b(z)R(z,T)\big)}\,.
\]
Finally,
we introduce the normalization matrix
$\_C_n= \mathop{\rm diag}\nolimits(1/C_0,C_n)\,$.
We are now ready to formulate the RHP, which we do
using~\eref{e:ALIBVPjump}. We introduce the matrix functions
$\_M_n^\pm(z,t)$ defined as: \numparts \label{e:IBVPALdefM} \begin{eqnarray}
\_M_n^+(z,t)= \left\{\!\begin{array}{l}\displaystyle
\_C_n\,\bigg(\mu_n\o{2,L}(z,t),\displaystyle{\mu_n\o{3,R}(z,t)\over
d(z)}\bigg),\qquad z \in D_{+\#in},
\\[1ex]\displaystyle
\_C_n\,\bigg( {\mu_n\o{1,L}(z,t)\over
a^*(1/z^*)},\mu_n\o{2,R}(z,t)\bigg),\qquad z \in D_{+\#out},
\end{array}\right.
\label{e:IBVPALdefM1}
\\
\_M_n^-(z,t)= \left\{\!\begin{array}{l}\displaystyle
\_C_n\,\bigg(\mu_n\o{2,L} , {\mu_n\o{1,R} \over a(z)} \bigg),\qquad
z \in D_{-\#in}\,\,,
\\[1ex]\displaystyle
\_C_n\,\bigg( {\mu_n\o{3,L} \over d^*(1/z^*)},
\mu_n\o{2,R}\bigg),\qquad z \in D_{-\#out}\,.
\end{array}\right.
\label{e:IBVPALdefM2}
\end{eqnarray}
\endnumparts
Note that $\_M_n^\pm(z,t)$ are sectionally meromorphic respectively
for $z\in D_+$ and $z\in D_-$.
Moreover, after some tedious but straightforward algebra,
equations~\eref{e:ALIBVPjump} yield the jump conditions as
\begin{equation}
\_M_n^-(z,t)=\_M_n^+(z,t)\,\big(\_I-\_J_n(z,t)\big), \qquad z \in L\,,
\label{e:IBVPALsystemRHP}
\end{equation}
where the contours $L=L_1\cup L_2\cup L_3 \cup L_4$ are (cf.~Fig.~\ref{f:DpmAL})
\[\fl
L_1=\=D_{+\#in}\cap\=D_{-\#in}\,,\quad
L_2=\=D_{-\#in}\cap\=D_{+\#out}\,,\quad
L_3=\=D_{+\#out}\cap\=D_{-\#out}\,,\quad
L_4=\=D_{+\#in}\cap\=D_{-\#out}\,,
\]
and the jump matrices $\_J_n\o1,\dots ,\_J_n\o4$ are defined by
\begin{eqnarray}
\_J\o1_n(z,t)=\begin{pmatrix}
0 &\nu z^{2n}{\rm e}^{-2i\omega(z)t}\Gamma^*(1/z^*)\\
0 & 0\end{pmatrix} \,, \qquad z \in L_1\,,
\nonumber
\\
\_J\o2_n(z,t)=\begin{pmatrix} 1-1/C_0 &z^{2n}{\rm e}^{-2i\omega(z)t}\gamma (z)\\
-\nu z^{-2n}{\rm e}^{2i\omega(z)t}\gamma^*(z) & 1-C_0\big(1-\nu |\gamma(z)|^2\big)
\end{pmatrix} \,,\qquad z \in L_2\,,
\nonumber
\\
\_J\o3_n(z,t)=\begin{pmatrix}0 & 0\\ -z^{-2n}{\rm e}^{2i\omega(z)t}\Gamma (z,T) &0
\end{pmatrix} \,,\qquad z\in L_3\,,
\nonumber
\\
\_J\o4_n(z,t)=\_I- (\_I-\_J\o1_n)(\_I-\_J\o2_n)^{-1}(\_I-\_J\o3_n)\,, \qquad z \in L_4\,.
\nonumber
\end{eqnarray}
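Note that the jump across $L_2$ is unimodular: a direct computation gives $\det\big(\_I-\_J\o2_n(z,t)\big)=1$, consistent with the determinant relations~\eref{e:IDNLSIBVPscattdet}. A quick numerical confirmation of this algebraic identity (with arbitrary illustrative parameter values, plain complex conjugation for $\gamma^*$ on $|z|=1$, and the same assumed dispersion as in the earlier sketch):
\begin{verbatim}
# Check that det(I - J^(2)_n) = 1 for any gamma, C0, nu = +/-1, z on |z|=1.
import numpy as np

nu, C0, gamma, n, t = -1, 0.8, 0.4 - 0.3j, 3, 0.7   # arbitrary values
z = np.exp(0.9j)                                    # a point on |z| = 1
omega = 1 - 0.5 * (z**2 + z**(-2))                  # assumed dispersion
E = z**(2 * n) * np.exp(-2j * omega * t)
J2 = np.array([[1 - 1 / C0, E * gamma],
               [-nu * np.conj(gamma) / E, 1 - C0 * (1 - nu * abs(gamma)**2)]])
print(np.linalg.det(np.eye(2) - J2))                # = 1 up to rounding
\end{verbatim}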
As in the IVP, we first consider the case in which no discrete spectrum is
present.
For the IBVP, this corresponds to assuming that $a(z)\ne0$ for
$z\in D_{-\#in}$ and $d(z)\ne0$ for $z\in D_{+\#in}$.
In this case,
the matrix functions $\_M_n^\pm(z,t)$ are analytic in their
respective domains. Also, $\_M_n(z,t)\to\_I$ as $z\to\infty$ thanks
to~\eref{e:ALasympmu2}, \eref{e:ALIBVPmu13asymp},
\eref{e:ALIBVPscattcoeffasymp} and \eref{e:IBVPALdefM}. Hence the
matrix RHP \eref{e:IBVPALsystemRHP} is solved by the Cauchy
projectors $P^\pm$ over the contour~$L$, namely
$P^\pm[f](k)= 1/(2\pi i)\,\mathop{\textstyle\trueint}\limits\nolimits_L [f(k')/(k'-k)]\,{\rm d} k'$,
with $k$ taken on the $\pm$ side of~$L$. That is, \begin{equation}
\_M_n^+(z,t)=\_I+ \frac1{2\pi i} \mathop{\textstyle\trueint}\limits_L \_M_n^+(\zeta,t)
{\_J_n(\zeta,t) \over \zeta-z} \,{\rm d}\zeta
\,.
\label{e:ALsystemRHPsoln}
\end{equation}
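The mechanism behind~\eref{e:ALsystemRHPsoln} is the classical Plemelj formula. As a self-contained scalar illustration (independent of the AL problem), take the jump $f(\zeta)=\zeta+1/\zeta$ on $|\zeta|=1$: the Cauchy integral produces $\mu^+(z)=z$ inside and $\mu^-(z)=-1/z$ outside, whose boundary values differ exactly by~$f$:
\begin{verbatim}
# Scalar Plemelj check: mu(z) = (1/2 pi i) oint f(zeta)/(zeta - z) dzeta
# with f(zeta) = zeta + 1/zeta gives mu+(z) = z inside, mu-(z) = -1/z
# outside, so that mu+ - mu- = z + 1/z = f on the unit circle.
import numpy as np

M = 2048
zeta = np.exp(2j * np.pi * np.arange(M) / M)
f = zeta + 1.0 / zeta

def cauchy(z):
    # trapezoidal rule; d zeta = i zeta d theta, so the 2 pi i cancels
    return np.mean(f * zeta / (zeta - z))

print(cauchy(0.3 + 0.2j) - (0.3 + 0.2j))        # ~0: mu+(z) = z
print(cauchy(2.0 - 1.0j) + 1.0 / (2.0 - 1.0j))  # ~0: mu-(z) = -1/z
\end{verbatim}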
Equation~\eref{e:ALsystemRHPsoln} also yields the asymptotic
expansion of $\_M_n^+(z,t)$ as $z \to 0$, namely,
\begin{eqnarray}
\_M_n^+(z,t)= \_I
+ \frac1{2\pi i}\mathop{\textstyle\trueint}\limits_L\_M_n^+(\zeta,t)\_J_n(\zeta,t)\,\frac{{\rm d}\zeta}\zeta
+ \frac z{2\pi i}\mathop{\textstyle\trueint}\limits_L\_M_n^+(\zeta,t)\_J_n(\zeta,t)\,\frac{{\rm d}\zeta}{\zeta^2}
+ O(z^2)\,.
\nonumber\\[-2ex]
\label{e:IBVPALasympRHPsoln}
\end{eqnarray}
Note that we can write~\eref{e:IBVPALasympRHPsoln} as
\begin{eqnarray}
\_M_n^+(z,t)=
\mathop{\rm diag}\nolimits[1/(C_0C_n),C_0C_n]
+ \frac z{2\pi i} \mathop{\textstyle\trueint}\limits_L \_M_n^+(\zeta,t)
\big(\_I-\_J_n(\zeta,t)\big) \,{{\rm d}\zeta\over \zeta^2} +O(z^2)\,.
\nonumber\\[-2ex]
\label{e:IBVPALasympMn} \end{eqnarray} Now note that the matrix
$\_C_n^{-1}\_M_n(z,t)$ satisfies the $n$-part of the Lax
pair~\eref{e:ALLP1}. Also, thanks to \eref{e:ALasympmu2},
\eref{e:ALIBVPmu13asymp} and~\eref{e:ALIBVPscattcoeffasymp}, we have
\[
\_C_n^{-1}\_M_n^+(z,t)= \mathop{\rm diag}\nolimits[1/C_n,C_0] + O(z)\qquad
\quad{\rm as}~z\to0\,.
\]
Hence, substituting the asymptotic expansion of $\_M_n(z,t)$
into~\eref{e:ALLP1} and comparing the $(1,2)$-components of the
$O(z)$ terms, we can recover the scattering potentials as \begin{equation}
q_n(t)=\lim_{z\to 0}\big(\_M_{n+1}^+(z,t)-\_I\big)_{12}/z
\label{e:ALreconstruction}\,. \end{equation} Taking the $(1,2)$-component
of~\eref{e:IBVPALasympMn} and comparing
with~\eref{e:ALreconstruction}, we then obtain the reconstruction
formula for the solution of the IDNLS equation on the natural
numbers: \begin{eqnarray} \fl q_n(t)=-\frac1{2\pi i}\mathop{\textstyle\trueint}\limits_{|z|=1}
z^{2n}{\rm e}^{-2i\omega(z)t}\gamma(z)\big(\_M_{n+1}^+(z,t)\big)_{11}\,{\rm d} z
+ \frac\nu{2\pi i} \mathop{\textstyle\trueint}\limits_{L_1}z^{2n}
{\rm e}^{-2i\omega(z)t}\Gamma^*(1/z^*)\big(\_M_{n+1}^+(z,t)\big)_{11}\,{\rm d} z
\nonumber\\\kern-2em{ }
+ \frac1{2\pi i}\mathop{\textstyle\trueint}\limits_{L_2}
\big(\nu C_0|\gamma(z)|^2-C_0+1\big)
\big(\_M_{n+1}^+(z,t)\big)_{12}\,{{\rm d} z \over z^2}
+ \frac1{2\pi i}\,\bigg(1-\frac1{C_0}\bigg)\mathop{\textstyle\trueint}\limits_{L_4}
\big(\_M_{n+1}^+(z,t)\big)_{12} \,{{\rm d} z \over z^2}
\nonumber\\\kern2em{ }
+ \frac1{2\pi i}{\nu\over C_0}\mathop{\textstyle\trueint}\limits_{L_4}z^{2n}
{\rm e}^{-2i\omega(z)t}\Gamma^*(1/z^*)\big(\_M_{n+1}^+(z,t)\big)_{11}\,{\rm d} z
\,.
\label{e:representationqn}
\end{eqnarray}
\paragraph{Global relation.}
As in the linear problem and the continuum limit,
the unknown boundary datum can be obtained in terms of the known
initial-boundary conditions using the global relation and the
symmetries of the system.
Integrating~\eref{e:ALmodifiedLPPsi} around the boundary of the region
${\mathbb{N}}_0 \times[0,t]$, one obtains
\begin{eqnarray}
\fl
\_Z\mathop{\textstyle\trueint}\limits_0^t \esh{i\omega(z)t'}\big(\_H_0(z,t')\mu_0(z,t')\big) \,{\rm d}
t'+\esh{i\omega(z)t}\mathop{\textstyle\truesum}\limits_{n=0}^{\infty} \^{\_Z}^{-n}
\big(\_Q_n(t)\mu_n(z,t)\big)=\mathop{\textstyle\truesum}\limits_{n=0}^{\infty} \^{\_Z}^{-n}
\big(\_Q_n(0)\mu_n(z,0)\big) \,.
\nonumber\\
\label{e:ALIBVPGR0} \end{eqnarray} When~\eref{e:ALIBVPGR0} is evaluated with
$\mu_n(z,t)\equiv\mu_n\o2(z,t)$ and $t=T$, the first and second
columns of the resulting equation are valid respectively for
$z\in\=D_{\pm \#in}$ and $z\in\=D_{\pm \#out}$. Moreover, the RHS
of~\eref{e:ALIBVPGR0} becomes $\_Z(\_I-\_s(z))$ thanks
to~\eref{e:ALintegralscatterings}. Finally,
using~\eref{e:ALIBVPjump}, we can write the first term and the
second term in~\eref{e:ALIBVPGR0} respectively as
$\_Z\,\big(\_S^{-1}(z,T)-\_I\big)\,\_s(z)$ and
$\_Z\,\big(\_I-{\rm e}^{i\omega(z)T\^\sigma_3}\mu_0\o2(z,T)\big)$. We
therefore have the following global relation in terms of the
scattering data:
\begin{eqnarray}
\_S^{-1}(z,T)\_s(z)=\_I-\esh{i\omega(z)T}\_G(z,T)\,,
\label{e:ALIBVPGR1}
\\
\noalign{\noindent where}
\_G(z,t)=\_Z^{-1}\mathop{\textstyle\truesum}\limits_{n=0}^{\infty} \^{\_Z}^{-n}
\big(\_Q_n\mu_n\o2(z,t)\big)\,.
\nonumber
\end{eqnarray}
As for~\eref{e:ALIBVPGR0}, the first and second columns
of~\eref{e:ALIBVPGR1} are respectively valid for $|z|\le1$ and
$|z|\ge1$. Also, from the analyticity domains of $\mu_n\o2(z,t)$ it
follows that $\_G_L(z,t)$ is analytic in $|z|<1$ and $\_G_R(z,t)$ is
analytic in $|z|>1$. Taking the $(1,2)$ component
of~\eref{e:ALIBVPGR1} we have
\begin{eqnarray}
A^*(1/z^*,T)b^*(1/z^*)-B^*(1/z^*,T)a^*(1/z^*)=
-\nu{\rm e}^{2i\omega(z)T}G(z,T)
\,,\,\, |z|>1,
\nonumber\\[-2ex]
\label{e:ALIBVPGR} \end{eqnarray} where \begin{eqnarray} G(z,T)=\mathop{\textstyle\truesum}\limits_{n=0}^{\infty}
z^{-2n-1}q_n(T)\big(\mu_n\o2(z,T)\big)_{22} \,. \nonumber \end{eqnarray} Also
note that the RHS of~\eref{e:ALIBVPGR} is bounded for $z\in
\=D_{+\#out}$. Then, for $z\in \=D_{+\#out}$ the RHS vanishes in the
limit $T\to\infty$, implying
\begin{equation}
A^*(1/z^*,T)b^*(1/z^*)-B^*(1/z^*,T)a^*(1/z^*)=0 \,,\quad z \in
\=D_{+\#out}\,. \label{e:ALGRTinfty} \end{equation}
\noindent For finite values of $T$, letting $r(z)=b(z)/a(z)$, the
global relation is now
\[
B^*(1/z^*,T)-r^*(1/z^*)A^*(1/z^*,T)=
\nu{\rm e}^{2i\omega(z)T}G(z,T)/a^*(1/z^*)\,.
\]
Since $G(z,t)=O(1/z)$ as $z\to\infty$ for $z\in D_{+\#out}$,
multiplying by $z{\rm e}^{-2i\omega(z)t}$ and integrating
over $\partial\~D_{+\#out}$
[where $\~D_{+\#out}=\{z\in D_{+\#out}:\mathop{\rm Im}\nolimits z>0\}$]
we obtain the integral relation:
\begin{equation}
\mathop{\textstyle\trueint}\limits_{\partial \~D_{+\#out}}
z\,{\rm e}^{-2i\omega(z)t}\,\big(B^*(1/z^*,T)-r^*(1/z^*)A^*(1/z^*,T)\big)\,{\rm d}
z =0\,.
\end{equation}
This is the discrete analogue of the relation that
in the continuum case is used to obtain the Dirichlet-to-Neumann map
\cite{CPAM58p639}.
In the discrete case, however, the unknown boundary datum can be obtained
using an alternative, simpler method,
as we will show in section~\ref{s:idnlsdata}.
\paragraph{Linear limit.}
The linear limit of the solution~\eref{e:representationqn} of the
IBVP for the IDNLS equation coincides with the solution of the IBVP
for the DLS equation, as we show next.
Suppose $\_Q_n(t)=O(\epsilon)$.
From~\eref{e:ALmuIBVPsolns} it follows that $\mu_n=\_I+O(\epsilon)$.
Recalling~\eref{e:ALintegralscatterings}, we obtain, to $O(\epsilon)$:
\begin{eqnarray}
\fl
\gamma(z)= -\frac 1z\,\^q(z^2,0)\,, \quad
d(z)= 1\,, \quad C_0=1\,,\quad
\Gamma^*(1/z^*)= i\nu\,\bigg(\frac 1z\^f_{-1}(z^2,T)-z\^f_0(z^2,T)\bigg)\,.
\nonumber
\end{eqnarray}
Thus~\eref{e:representationqn} yields, to $O(\epsilon)$,
\begin{eqnarray}
\fl
q_n(t)= \frac1{2\pi i} \mathop{\textstyle\trueint}\limits_{|\zeta|=1}
\zeta^{2n-1}{\rm e}^{-i\omega(\zeta^2)t}\^q(\zeta^2,0)\, {\rm d}\zeta
{ }+\frac1{2\pi}\mathop{\textstyle\trueint}\limits_{L_1+L_4}\zeta^{2n} {\rm e}^{-i\omega(\zeta^2)t}
\bigg( \frac1\zeta \^f_{-1}(\zeta^2,T)-\zeta\^f_0(\zeta^2,T) \bigg)\,{\rm d}\zeta\,,
\nonumber
\end{eqnarray}
where the integrals are taken in Cauchy's principal value sense.
Now note that the contour $L_1\cup L_4$ can be deformed to
$\partial D_{+\#in}$ by Cauchy's theorem.
Performing the change of variable $\zeta^2=z$,
we then obtain that the linear limit of~\eref{e:representationqn}
coincides with the solution of the IBVP for the DLS on the natural numbers,
namely~\eref{e:LSIBVPsoln0}.
\paragraph{Continuum limit.}
Reinstating the lattice spacing $h$, it is easy to show that the Lax
pair for the NLS is the continuum limit of that for the AL system as
$h\to 0$ \cite{JMP16p598,APT2003}.
The continuum limit is formally obtained by writing the solution of the
discrete case as $Q_n(t)=hq(nh,t)$ and $P_n(t)=hp(nh,t)$. Then for
$z={\rm e}^{ikh}$, the Lax pair~\eref{e:ALLPm} becomes
\begin{eqnarray}
{\mu_{n+1}-\mu_n \over h}-ik[\sigma_3,\mu_n]=\_Q_n(t)\,\mu_n+O(h^2)\,,
\nonumber
\\
\.\mu_n +i\omega(k)[\sigma_3,\mu_n]=\_H_n(t,k)\,\mu_n+O(h)\,,
\nonumber
\end{eqnarray}
where now $\omega(k)=(1-\cos 2kh)/h^2$, with $\mu_n=\mu(nh,t,k)$,
$q_n=q(nh,t)$ and $p_n=p(nh,t)$ for brevity, and where
\[
\fl
\_H_n(t,k)=i\,\begin{pmatrix} -q_n p_{n-1} &
(q_n-q_{n-1})/h+ik(q_n+q_{n-1})\\
-(p_n-p_{n-1})/h+ik(p_n+p_{n-1}) & q_{n-1}p_n
\end{pmatrix}.
\]
Correspondingly, the Jost solutions are obtained
from~\eref{e:ALmuIBVPsolns}, for example,
\begin{eqnarray} \fl
\mu_n\o1(k,t)=\_I+h\mathop{\textstyle\truesum}\limits_{m=0}^{n-1}\esh{ikh(n-m)}(\_Q_m(t)\mu_m\o1(k,t))
+\mathop{\textstyle\trueint}\limits_0^t \esh{i[nkh-\omega(k)(t-t')]}(\_H(0,t',k)\mu_0\o1(k,t')) \,
{\rm d} t'\,. \nonumber
\end{eqnarray}
As $h\to 0$ with $x=nh$ fixed, we have $\omega(k)\to2k^2$, together with
$\_H_n(t,k)\to\_H(x,t,k)$ and
$\mu_n\o{j}(k,t)\to\mu\o{j}(x,t,k)$, $j=1,2,3$, where $\mu\o{j}(x,t,k)$
are the Jost solutions for the IBVP of the NLS, namely~\eref{e:NLSLPsolutions}.
Note also that $C_n\to 1$ as $h\to0$.
Hence, in the continuum limit, the solution of the IBVP for the IDNLS
becomes exactly that of the IBVP for NLS.
The result can also be verified directly via the continuum limit of the
solution~\eref{e:representationqn}.
Explicitly, since $C_0=1+O(h^2)$,
as $h\to 0$ we have
\begin{eqnarray}
\fl
Q_n(t)=-\frac1{2\pi i}\mathop{\textstyle\trueint}\limits_{|\zeta|=1}
\zeta^{2n}{\rm e}^{-2i\omega(\zeta)t}\gamma(\zeta)\big(\_M_{n+1}^+(\zeta,t)\big)_{11}
\,{\rm d}\zeta
\nonumber\\\fl\kern2em{ }
+\frac \nu{2\pi i} \mathop{\textstyle\trueint}\limits_{L_1+L_4} \zeta^{2n}
{\rm e}^{-2i\omega(\zeta)t}\Gamma^*(1/\zeta^*)\big(\_M_{n+1}^+(\zeta,t)\big)_{11}
\,{\rm d}\zeta
+ \frac \nu{2\pi i}\mathop{\textstyle\trueint}\limits_{L_2} |\gamma(\zeta)|^2
\big(\_M_{n+1}^+(\zeta,t)\big)_{12} \,{{\rm d}\zeta \over\zeta^2}+O(h^2)\,.
\nonumber
\end{eqnarray}
The oriented contour $L_1\cup L_4$ can be deformed onto $|\zeta|=1$
since the corresponding integrand is analytic in~$D_{-\#in}$.
In terms of $q(nh,t)=Q_n(t)/h$, and performing the substitution
$\zeta={\rm e}^{ikh}$, we then have
\begin{eqnarray}
\fl
q(nh,t)= -\frac h{\pi}\,\, \mathop{\textstyle\trueint}\limits_{\!\!-\pi/h}^{\,\,\pi/h}
{\rm e}^{2i(nkh-\omega(k)t)}\gamma(k)\big(\_M_{n+1}^+(k,t)\big)_{11}\,{\rm e}^{ikh}\,{\rm d} k
\nonumber\\\fl\kern3em
+ \frac h{\pi}\,\, \mathop{\textstyle\trueint}\limits_{\!\!-\pi/h}^{\,\,\pi/h}
\nu{\rm e}^{2i(nkh-\omega(k)t)}\Gamma^*(k^*)\big(\_M_{n+1}^+(k,t)\big)_{11}
\,{\rm e}^{ikh}\,{\rm d} k
-\frac h{\pi}\,\, \mathop{\textstyle\trueint}\limits_{\!\!\pi/2h}^{\,\,\pi/h} \nu
|\gamma(k)|^2 \big(\_M_{n+1}^+(k,t)\big)_{12}\,{\rm e}^{-ikh}\, {\rm d} k
\nonumber\\\fl\kern16em{ }
-\frac h{\pi}\,\, \mathop{\textstyle\trueint}\limits_0^{\,\,\pi/2h} \nu |\gamma(-k)|^2
\big(\_M_{n+1}^+(-k,t)\big)_{12}\,{\rm e}^{ikh}\, {\rm d} k\,.
\label{e:continuumlimit} \end{eqnarray} Now note that, since
${\rm e}^{-4ik^2t}\Gamma^*(k^*)(\_M_{n+1}^+(k,t))_{11}$ is analytic and
bounded for $(\mathop{\rm Re}\nolimits\,k\in[-\pi/h,0])\wedge(\mathop{\rm Im}\nolimits\,k>0)$ [which becomes
${\mathbb{C}}_\mathrm{II}$ in the limit $h\to0$], the portion of the
corresponding integral on the negative real axis can be deformed
onto the positive imaginary axis. Then, taking the continuum limit
of all the integrals in~\eref{e:continuumlimit} we obtain that
$q(nh,t)$ coincides with the solution of the IBVP for the NLS,
namely~\eref{e:NLSIBVPrepresentationq}, in the limit $h\to 0$.
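The expansion $\omega(k)=(1-\cos2kh)/h^2=2k^2+O(h^2)$ underlying this limit can also be verified symbolically in one line:
\begin{verbatim}
# Symbolic check of the continuum limit of the dispersion relation.
import sympy as sp

k, h = sp.symbols('k h', positive=True)
print(sp.series((1 - sp.cos(2*k*h)) / h**2, h, 0, 3))
# output: 2*k**2 - 2*h**2*k**4/3 + O(h**3)
\end{verbatim}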
\paragraph{Remarks.}
A few comments are now in order:
\begin{itemize}
\item
Equation~\eref{e:representationqn} provides
the Ehrenpreis~\cite{Ehrenpreis1970,Palamodov1970,Henkin1990}
representation for the solution of the IBVP for the IDNLS,
in analogy with Ref.~\cite{JNLMP10p47} in the continuum limit.
\item
One can now use~\eref{e:representationqn} as a starting point
to formally prove that the function $q(x,t)$ given by the reconstruction
formula satisfies~\eref{e:IDNLS} as well as the initial-boundary conditions,
using the dressing method, as in Ref.~\cite{NLTY18p1771} in the continuum limit.
\item
In the continuum problem, the location of the jumps is
the union of the jumps for the scattering problem in the
linear case and those of its adjoint. In the discrete problem,
however, this is not the case. Indeed, the extra jump along the
imaginary axis arises as a consequence of the rescaling $z\to z^2$
when going from the linear to the nonlinear case.
\item
The scattering matrix $\_S(z,T)$ involves $T$ explicitly.
In~\ref{s:IndependenceT}, however, we show that the solution of the IBVP
for the AL system on the naturals does not depend on future values of
the boundary datum.
\item
With the due modifications, the method presented here can also be used
to solve the IBVP for all members of the Ablowitz-Ladik hierarchy.
Moreover, the method can be generalized to any integrable
differential-difference evolution equation.
\end{itemize}
\section{Elimination of the unknown boundary datum, linearizable BCs and soliton solutions}
\label{s:idnlsdata}
\subsection{Elimination of the unknown boundary datum}
The scattering matrix $\_S(z,T)$ depends on both the known and
the unknown boundary data.
In the linear problem, it was possible to
overcome this difficulty by making use of the fact that the
transformation $z\to1/z$ leaves the transforms of the boundary data
unchanged. In the nonlinear problem, however, the matrix $\_S(z,T)$
is \textit{not} invariant under this transformation, because it is
defined in terms of the eigenfunction~$\mu_n\o1(z,t)$, which is not
invariant under $z\to1/z$.
As in the continuum case~\cite{CPAM58p639}, the determination of the
unknown boundary datum in terms of the known initial-boundary conditions
is in general a nontrivial issue.
For linearizable BCs it is possible to
express the RHP only in terms of the initial data,
as we show in section~\ref{s:ALlinearizable}.
This is not possible for generic BCs, however.
In this case one must solve a coupled system of nonlinear
ordinary differential equations (ODEs)
to obtain simultaneously the unknown boundary datum $q_{-1}(t)$
as well as the scattering coefficients $A(z,T)$ and $B(z,T)$,
as we show next.
The boundary data enter the RHP only via the ratio
$R(z,T)= B^*(1/z^*,T)/A^*(1/z^*,T)$ appearing in $\Gamma(z)$.
Recalling~\eref{e:ALphimu} and~\eref{e:ALscattdef}, we have
$\_S(z,t)= {\~\Phi}^{-1}(z,t)\,{\rm e}^{-i\omega(z)t\sigma_3}$, where
the matrix
\begin{equation}
\label{e:ALPhitilde}
\~\Phi(z,t)=
\Phi_0\o1(z,t)=\begin{pmatrix}
{\rm e}^{-i\omega(z)t}A^*(1/z^*,t) & -\nu\,{\rm e}^{-i\omega(z)t}B^*(1/z^*,t)\\
-{\rm e}^{i\omega(z)t}B(z,t) & {\rm e}^{i\omega(z)t}A(z,t) \end{pmatrix}\,,
\end{equation}
satisfies the $t$-part of the Lax pair~\eref{e:ALLP} for $n=0$,
namely:
\begin{eqnarray}
\.{\~\Phi}= \big(-i\omega(z)\sigma_3 + \_H_0(z,t)\big)\,\~\Phi\,,
\label{e:ALtLP0}
\end{eqnarray}
together with the initial condition $\~\Phi(z,0)=\_I\,$.
The term $\_H_0(z,t)$ in~\eref{e:defHn}
contains $q_{-1}(t)$, of course.
Note however that using~\eref{e:ALQnasymp} with $n=0$, we can express
$q_{-1}(t)$ in terms of $\mu_0\o1(z,t)$:
\begin{equation}
q_{-1}(t)=
\lim_{z\to 0} \big(\mu_0\o1(z,t)\big)_{12}/z\,,\qquad z\in D_{-\#in}\,.
\label{e:ALQnm1}
\end{equation}
The simultaneous solution of~\eref{e:ALtLP0}
and~\eref{e:ALQnm1} provides the unknown boundary datum as well as
the auxiliary spectral functions $A(z,t)$ and $B(z,t)$, allowing one
to completely define the RHP and thereby obtain the
solution of the inverse problem.
Note that this procedure is significantly simpler
than that required to obtain the generalized Dirichlet-to-Neumann map
in the continuum case~\cite{CPAM58p639}.
\subsection{Linearizable boundary conditions}
\label{s:ALlinearizable}
As in the continuum case,
there is a class of BCs, called \textit{linearizable}, for which
it is possible to obtain the unknown boundary datum via only
algebraic manipulations of the global relation.
Recall that $A(z,t)$ and $B(z,t)$ are given in terms of
$\~\Phi(z,t)=\mu_0\o1(z,t)\,{\rm e}^{-i\omega(z)t\sigma_3}$ by~\eref{e:ALPhitilde},
which solves the ODE~\eref{e:ALtLP0}
together with the initial condition $\~\Phi(z,0)=\_I$.
Since $\omega(1/z)=\omega(z)$, the matrix~$\~\Phi(1/z,t)$
satisfies an equation identical to~\eref{e:ALtLP0}
except that $\_H_0(z,t)$ is replaced by~$\_H_0(1/z,t)$.
If there exists a time-independent matrix
$\_N(z)$ such that
\begin{equation}
\_N(z)\,\big(-i\omega(z)\sigma_3 + \_H_0(z,t)\big)\,
= \big(-i\omega(z)\sigma_3 + \_H_0(1/z,t)\big)\,\_N(z)\,,
\label{e:ALlinearizcond} \end{equation} it is then easy to show that \begin{equation}
\~\Phi(1/z,t)= \_N(z)\,\~\Phi(z,t)\,\_N(z)^{-1}\,.
\label{e:ALIBVPphi1zinv} \end{equation} A necessary condition
for~\eref{e:ALlinearizcond} to be satisfied is obviously that
$\det[-i\omega(z)\sigma_3+\_H_0(z,t)]=
[(z^2-1/z^2)(q_0p_{-1}-q_{-1}p_0)]^2$
be invariant under the transformation~$z\to1/z$.
In turn, for this condition to be satisfied one needs
\begin{equation} q_0p_{-1}-q_{-1}p_0= 0\,. \label{e:ALlinearizableBC} \end{equation} In the
reduction $p_n(t)=\nu q_n^*(t)$ to IDNLS, \eref{e:ALlinearizableBC}
is satisfied by the discrete analogue of homogeneous Robin BCs: \begin{equation}
q_{-1}-\chi q_0=0 \,,\quad \chi\in{\mathbb{R}} \,. \label{e:ALIBVPRobinBC}
\end{equation} These BCs had been previously identified via algebraic methods
\cite{PLA207p263}. For the BCs~\eref{e:ALIBVPRobinBC}, we can solve
the system~\eref{e:ALlinearizcond} for $\_N(z)$, obtaining $N_{12}=
N_{21}=0$ and $N_{11}=f(z)N_{22}$, where
\[
f(z)=
{1-\chi z^2 \over z^2-\chi} \,.
\]
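Note that $f(z)f(1/z)=1$, as must be the case for the symmetry~\eref{e:ALsymmetryAB} below to be consistent under a second application of $z\to1/z$. A one-line symbolic verification:
\begin{verbatim}
# Consistency of the Robin symmetry factor: f(z) f(1/z) = 1.
import sympy as sp

z, chi = sp.symbols('z chi')
f = (1 - chi * z**2) / (z**2 - chi)
print(sp.simplify(f * f.subs(z, 1/z)))   # prints 1
\end{verbatim}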
Recalling~\eref{e:ALIBVPphi1zinv}, we then find the following
symmetries for the scattering data: \begin{equation} A^*(z^*,T)=A^*(1/z^*,T)\,,
\quad B^*(z^*,T)=f(z)B^*(1/z^*,T) \label{e:ALsymmetryAB}\,. \end{equation} Note
that $\_N(z)$ is not invertible for
$z=\pm\chi^{1/2},\pm\chi^{-1/2}$. However, \eref{e:ALsymmetryAB} is
still valid at such values of~$z$. Indeed, since $\~\Phi(z,t)$
solves~\eref{e:ALtLP0}, writing a Neumann series for $\~\Phi(z,t)$
one finds $\~\Phi(\pm\chi^{1/2},t)_{12}=0$, which implies that
$B^*(\pm\chi^{-1/2},t)=0$. As a consequence, since $A^*(1/z^*,T)$
and $B^*(1/z^*,T)$ are analytic for $z\in{\mathbb{C}}^{\,[\raise0.08ex\hbox{\scriptsize$\slash$}\kern-0.34em0]}$, we can
conclude that the limit as $z\to\pm\chi^{1/2}$ of the product
$f(z)\,B^*(1/z^*,T)$ exists and is finite.
The above properties now allow $\Gamma^*(1/z^*)$ to be expressed in
terms of the known functions, $a(z)$ and $b(z)$.
For simplicity, we consider the case in which no discrete spectrum is present.
Consider first the case $T=\infty$. The global relation in this case
is simply given by~\eref{e:ALGRTinfty}. Replacing $1/z$ by $z$ and
using~\eref{e:ALsymmetryAB},
we obtain
\numparts
\[
A^*(1/z^*)={a^*(z^*)d(z)\over \Delta (1/z)}\,,\quad
B^*(1/z^*)={f(1/z)b^*(z^*)d(z)\over \Delta (1/z)}\,,\quad z\in
D_{+\#in}\,,
\label{e:ALABLin}
\]
\eject\noindent
where
\[
\Delta(z)=a(1/z)a^*(1/z^*)-\nu f(z)b(1/z)b^*(1/z^*)\,.
\label{e:ALDeltadef}
\]
\endnumparts As a result, we can express the ratio $R(z,T)=
B^*(1/z^*,T)/A^*(1/z^*,T)$ as
\begin{equation}
R(z,T)=f(1/z)\,{b^*(z^*)\over
a^*(z^*)}\,,\qquad z\in D_{+\#in}\,, \label{e:ALratioAB} \end{equation} and we
therefore obtain $\Gamma^*(1/z^*)$ only in terms of known spectral
functions.
Now consider the case $T<\infty$. The global relation in this case
is~\eref{e:ALIBVPGR}. Replacing $1/z$ by $z$ in~\eref{e:ALIBVPGR}
and using the symmetry~\eref{e:ALsymmetryAB} as before, we obtain
\begin{equation}
R(z,T)=
f(1/z){b^*(z^*)\over a^*(z^*)}+\nu{\rm e}^{2i\omega(z)T}\,
{f(1/z)G(1/z,T)\over a^*(z^*)A^*(z^*,T)}\,,\quad z\in D_{+\#in} \,.
\label{e:ALratioAB1} \end{equation} We therefore see that the difference from
the case $T=\infty$ is simply the appearance of an additional term
in the RHS of~\eref{e:ALratioAB}. In~\ref{s:IndependenceT}, however,
we show that the second term in the RHS of~\eref{e:ALratioAB1} does
not affect the solution of the IBVP for the IDNLS. Hence, even in
the case $T<\infty$, we can use~\eref{e:ALratioAB} in the
RHP~\eref{e:IBVPALsystemRHP}.
\subsection{Discrete spectrum and soliton solutions}
Equations~\eref{e:IBVPALdefM} imply that
when the functions $a(z)$ and $d(z)$ possess zeros,
the matrices $\_M_n^{\pm}(z,t)$ are only meromorphic
functions in $D_+$ and $D_-$, respectively.
As a consequence, the RHP~\eref{e:IBVPALsystemRHP} as formulated above
becomes singular.
As in the IVP, however, it can be converted to a regular RHP
by taking into account the appropriate residue relations.
We assume that these discrete eigenvalues are all simple.
More precisely, we assume that:
\begin{itemize}
\item
$a(z)$ has simple zeros in $D_{-\#in}$.
We label such zeros $\pm z_j$ for $j=1,\,\dots\,,J$;
\item
$d(z)$ has simple zeros in $D_{+\#in}$.
We label such zeros $\pm\lambda_j$ for $j=1,\,\dots\,,J'$.
\end{itemize}
We also assume that there are no zeros on the boundaries of these
domains and that there are no common zeros of $a(z)$ and $d(z)$
in $D_{+\#in}$\,.
The fact that the zeros of $a(z)$ and $d(z)$ always appear in opposite pairs
is a trivial consequence of $a(z)$ and $d(z)$ both being
even functions of~$z$ [cf.~\ref{s:asymptotics}].
Also, the symmetry $p_n(t)= \nu q_n^*(t)$ of the potentials
implies that, corresponding to these zeros,
there is an equal number of zeros of $a^*(1/z^*)$ and $d^*(1/z^*)$
in $D_{+\#out}$ and $D_{-\#out}$, respectively,
which we denote respectively by
$\=z_j=1/z_j^*$ and $\=\lambda_j=1/\lambda_j^*$.
Thus, discrete eigenvalues in the IBVP can appear in two different
kinds of quartets, namely,
\[
\{\pm z_j,\,\pm\=z_j\}_{j=1}^{J}\,,\quad
\{\pm \lambda_j,\,\pm\=\lambda_j\}_{j=1}^{J'}\,.
\]
Similarly to the IVP,
from~\eref{e:ALIBVPjump} and~\eref{e:IBVPALsystemRHP}
we find the following residue relations:
\numparts
\label{e:ResRelation}
\begin{eqnarray}
\mathop{\rm Res}\limits_{z=z_j}\big[\_M_n\o{-,R}\big]=a_j\,\_M_n\o{-,L}(z_j)\,,\quad
\mathop{\rm Res}\limits_{z=\=z_j}\big[\_M_n\o{+,L}\big]=\=a_j\, \_M_n\o{+,R}(\=z_j) \,,\\
\mathop{\rm Res}\limits_{z=\lambda_j}\big[\_M_n\o{+,R}\big]=d_j\,\_M_n\o{+,L}(\lambda_j)\,,\quad
\mathop{\rm Res}\limits_{z=\=\lambda_j}\big[\_M_n\o{-,L}\big]=\=d_j\,\_M_n\o{-,R}(\=\lambda_j)
\label{e:ResRelationlambda} \,,
\end{eqnarray}
\endnumparts
where
\begin{eqnarray}
\fl
a_j=K_j z_j^{2n}{\rm e}^{-2i\omega(z_j)t}\,,\quad
\=a_j=\=K_j\=z_j^{\,-2n}{\rm e}^{2i\omega(\=z_j)t}\,,\quad
d_j=\Lambda_j \lambda_j^{2n}{\rm e}^{-2i\omega(\lambda_j)t}\,, \quad
\=d_j=\=\Lambda_j \=\lambda_j^{-2n}{\rm e}^{2i\omega(\=\lambda_j)t}\,,
\nonumber\\
\fl
K_j=1/(\. a(z_j)\,b(z_j))\,,\quad
\Lambda_j= \nu B^*(\=\lambda_j)/(a(\lambda_j)\,\. d(\lambda_j))\,,\quad
\=K_j=(-z_j^*)^{-2}\nu K_j\,,\quad
\=\Lambda_j=(-\lambda_j^*)^{-2}\nu \Lambda_j\,.
\nonumber
\end{eqnarray}
As customary, $K_j$, $\Lambda_j$, $\=K_j$ and $\=\Lambda_j$
are referred to as norming constants.
Note that since $b(z)$ and $B^*(1/z^*)$ are odd
functions of~$z$ [cf.~\ref{s:asymptotics}],
the norming constants $K_j$ at $z=\pm z_j$ are identical,
and similarly for $\Lambda_j$ at $z=\pm \lambda_j$.
The RHP is now solved by removing the singularities,
which is done by subtracting the residue contributions at the poles.
As usual, the solution of the RHP then has additional terms compared to
the case of no poles~\eref{e:ALsystemRHPsoln}, and is given by
\begin{eqnarray}
\fl
\_M_n(z,t)= \_I
+ \frac1{2\pi i} \mathop{\textstyle\trueint}\limits_L \_M_n^+(\zeta,t){\_J_n(\zeta,t) \over \zeta-z} \,{\rm d}\zeta
+ \mathop{\textstyle\truesum}\limits_{j=1}^{2J}\bigg( \frac1{z-z_j}\,\mathop{\rm Res}\limits_{z=z_j}[\_M_n^-(z)]
+ \frac1{z-\=z_j}\,\mathop{\rm Res}\limits_{z=\=z_j}[\_M_n^+(z)] \bigg)
\nonumber \\
+ \mathop{\textstyle\truesum}\limits_{j=1}^{2J'} \bigg(\frac1{z-\lambda_j}\,\mathop{\rm Res}\limits_{z=\lambda_j}[\_M_n^+(z)]
+\frac1{z-\=\lambda_j}\,\mathop{\rm Res}\limits_{z=\=\lambda_j}[\_M_n^-(z)] \bigg)\,,
\label{e:ALRHPsolnRes}
\end{eqnarray}
where we defined $z_{j+J}= -z_j$ for $j=1,\dots,J$
and $\lambda_{j+J'}= -\lambda_j$ for $j=1,\dots,J'$.
From the asymptotic expansion of~\eref{e:ALRHPsolnRes}
and the symmetries~\eref{e:symmMn}, we then obtain
the reconstruction formula:
\begin{eqnarray}
\fl
q_n(t)=-2\mathop{\textstyle\truesum}\limits_{j=1}^J
z_j^{2n}{\rm e}^{-2i\omega(z_j)t}K_j\_M_{n+1,11}^-(z_j)-2\mathop{\textstyle\truesum}\limits_{j=1}^{J'}
\lambda_j^{2n}{\rm e}^{-2i\omega(\lambda_j)t}\Lambda_j\_M_{n+1,11}^+(\lambda_j)
+\~q_n(t)\,,
\label{e:represnqnRes}
\end{eqnarray}
where $\~q_n(t)$ is given by~\eref{e:representationqn}.
In the reflectionless case with $\nu=-1$, we obtain the soliton
solutions by solving the following algebraic system of equations for
$\_M_{n+1,11}^-(z_j)$ and $\_M_{n+1,11}^+(\lambda_j)$:
\begin{eqnarray}
\fl
\_M_{n,11}^-(z_l)=1+\mathop{\textstyle\truesum}\limits_{j=1}^{J} \=a_j\,\bigg({1\over
z_l-\=z_j}-{1\over
z_l+\=z_j}\bigg)\,\_M_{n,12}^+(\=z_j)+\mathop{\textstyle\truesum}\limits_{j=1}^{J'}
\=d_j\,\bigg({1\over z_l-\=\lambda_j}-{1\over
z_l+\=\lambda_j}\bigg)\,\_M_{n,12}^-(\=\lambda_j)
\nonumber\\
\fl
\_M_{n,12}^+(\=z_l)=\mathop{\textstyle\truesum}\limits_{j=1}^{J} a_j\,\bigg({1\over
\=z_l-z_j}+{1\over
\=z_l+z_j}\bigg)\,\_M_{n,11}^-(z_j)+\mathop{\textstyle\truesum}\limits_{j=1}^{J'}
d_j\,\bigg({1\over \=z_l-\lambda_j}+{1\over
\=z_l+\lambda_j}\bigg)\,\_M_{n,11}^+(\lambda_j)\,,
\nonumber\\
\fl
\_M_{n,11}^+(\lambda_l)=1+\mathop{\textstyle\truesum}\limits_{j=1}^{J} \=a_j\,\bigg({1\over
\lambda_l-\=z_j}-{1\over
\lambda_l+\=z_j}\bigg)\,\_M_{n,12}^+(\=z_j)+\mathop{\textstyle\truesum}\limits_{j=1}^{J'}
\=d_j\,\bigg({1\over \lambda_l-\=\lambda_j}-{1\over
\lambda_l+\=\lambda_j}\bigg)\,\_M_{n,12}^-(\=\lambda_j)\,,
\nonumber\\
\fl
\_M_{n,12}^-(\=\lambda_l)=\mathop{\textstyle\truesum}\limits_{j=1}^{J} a_j\,\bigg({1\over
\=\lambda_l-z_j}+{1\over
\=\lambda_l+z_j}\bigg)\,\_M_{n,11}^-(z_j)+\mathop{\textstyle\truesum}\limits_{j=1}^{J'}
d_j\,\bigg({1\over \=\lambda_l-\lambda_j}+{1\over
\=\lambda_l+\lambda_j}\bigg)\,\_M_{n,11}^+(\lambda_j)\,.
\nonumber
\end{eqnarray}
For a single quartet $\{\pm z_1,\,\pm\=z_1\}$,
the solution of the above system with $J=1$ and $J'=0$
yields the one-soliton solution of the IDNLS as
\begin{equation}
q_n(t)= {\rm e}^{2i[(n+1)\beta+2wt+\phi]}\sinh(2\alpha)
\mathop{\rm sech}\nolimits[2((n+1)\alpha-v t-\delta)]\,,
\end{equation}
where $z_1={\rm e}^{\alpha+i\beta}$ and
\begin{eqnarray}
w=\cosh(2\alpha)\cos(2\beta)-1\,,\quad
v=\sinh(2\alpha)\sin(2\beta)\,,
\nonumber\\
\delta=\frac12\log\big(\sinh(2\alpha)\big)-\frac12\log|K_1|+\log|z_1|\,,
\quad \phi=\frac\pi2-\arg z_1 + \frac12\arg K_1\,.
\nonumber
\end{eqnarray}
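Note that $w$ and $v$ are just the real and imaginary parts of the dispersion relation evaluated at the eigenvalue: with the dispersion $\omega(z)=1-(z^2+z^{-2})/2$ assumed in the earlier sketches (the normalization used in the paper may differ by sign and scaling), one finds $w=-\mathop{\rm Re}\nolimits\omega(z_1)$ and $v=\mathop{\rm Im}\nolimits\omega(z_1)$. A quick numerical check:
\begin{verbatim}
# Soliton parameters as the dispersion relation at the eigenvalue:
# w = -Re omega(z1), v = Im omega(z1), omega(z) = 1 - (z^2 + z^{-2})/2.
import numpy as np

alpha, beta = 0.7, 0.3
z1 = np.exp(alpha + 1j * beta)
omega = 1 - 0.5 * (z1**2 + z1**(-2))
w = np.cosh(2 * alpha) * np.cos(2 * beta) - 1
v = np.sinh(2 * alpha) * np.sin(2 * beta)
print(w + omega.real, v - omega.imag)    # both ~0
\end{verbatim}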
The soliton solution corresponding to a single quartet
$\{\pm\lambda_1,\,\pm\=\lambda_1\}$ has an identical
functional representation, which also coincides
with the well-known one-soliton solution in the IVP.
Note that the norming constants $\Lambda_j$
contain the unknown boundary datum $q_{-1}(t)$ through
the spectral functions $A(z,t)$ and $B(z,t)$.
In general, this datum must be obtained by solving a nonlinear system
of ODEs, as explained previously.
In the case of linearizable BCs, however,
$\Lambda_j$ can be expressed only in terms of known scattering data.
In particular, with $T=\infty$, the global relation implies
\begin{equation}
\Lambda_j= f(1/\lambda_j)b^*(\lambda_j^*)\big/
\big[a(\lambda_j)\,\.\Delta(1/\lambda_j)\big]\,,
\label{e:Lambdaj}
\end{equation}
where $\Delta(z)$ was defined in~\eref{e:ALDeltadef}.
This result can then be used in the
residue relations~\eref{e:ResRelationlambda}.
Equation~\eref{e:Lambdaj}
is a consequence of the fact that $d(z)$ and $\Delta(1/z)$
have the same set of zeros in $D_{+\#in}$,
which in turn can be easily proved considering the analyticity of
$A^*(1/z^*)$ and $B^*(1/z^*)$ with~\eref{e:ALABLin}.
\section{Continuum: linear and nonlinear Schr\"odinger equations}
\label{s:continuum}
In order to compare the solution of the IBVP in the discrete case
to its continuum limit, and to appreciate the differences between
the method for discrete problems and its continuum counterpart,
here we briefly review the solution of IBVPs
for the linear Schr\"odinger (LS) equation and the
nonlinear Schr\"odinger (NLS) equation:
\begin{eqnarray}
i\partialderiv qt + \partialderiv[2]qx - 2\nu |q|^2q=0\,
\label{e:NLS}
\end{eqnarray}
(with $\nu=0,\pm1$ denoting respectively the linear, defocusing and focusing
cases),
to which~\eref{e:DLS} and~\eref{e:IDNLS}
reduce in the limit $h\to0$.
Note that,
even though the IVP for~\eref{e:NLS} was solved in the early days
of integrable systems for both vanishing~\cite{JETP34p62}
and nonzero~\cite{JETP37p823} BCs,
the IBVP on the half line was solved only recently~\cite{NLTY18p1771}.
Similarly, even though the IVP for the
vector generalization of~\eref{e:NLS} was solved
early on in the case of vanishing BCs~\cite{JETP38p248},
the analogous problem with nonzero BCs was also
solved only recently~\cite{JMP47p63508}.
\paragraph{Linear Schr\"odinger equation: IVP and IBVP via Fourier methods.}
Consider first the initial value problem for the LS equation
with $x\in{\mathbb{R}}$, $t>0$ and $q(x,0)$ given.
For simplicity we assume that $q(x,0)$ belongs to the Schwartz class,
which we denote by ${\cal S}({\mathbb{R}})$.
The IVP is trivially solved using the Fourier transform pair, defined as
\begin{eqnarray}
\^q(k,t)= \mathop{\textstyle\trueint}\limits_{\!\!-\infty}^{\,\,\infty} {\rm e}^{-ikx} q(x,t)\,{\rm d} x\,,
\qquad
q(x,t)= \frac1{2\pi}\,\,\mathop{\textstyle\trueint}\limits_{\!\!-\infty}^{\,\,\infty} {\rm e}^{ikx}\^q(k,t)\,{\rm d} k\,.
\label{e:FTpair}
\end{eqnarray}
Use of~\eref{e:FTpair} yields the solution of the IVP as
\begin{equation}
q(x,t)=
\frac1{2\pi}\,\, \mathop{\textstyle\trueint}\limits_{\!\!-\infty}^{\,\,\infty} {\rm e}^{i(kx - k^2t)}
\^q(k,0)\,{\rm d} k\,.
\label{e:LSIVPsoln}
\end{equation}
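A minimal numerical illustration of~\eref{e:LSIVPsoln}: the Python sketch below propagates a Gaussian by applying the Fourier multiplier ${\rm e}^{-ik^2t}$ on a periodic grid (an illustrative discretization, adequate while the solution stays negligible at the grid boundaries) and compares with the closed-form evolution $q(x,t)=(1+4it)^{-1/2}{\rm e}^{-x^2/(1+4it)}$ of $q(x,0)={\rm e}^{-x^2}$:
\begin{verbatim}
# Solve i q_t + q_xx = 0 via qhat(k,t) = e^{-i k^2 t} qhat(k,0).
import numpy as np

Nx, Lbox, t = 1024, 40.0, 0.8
x = (np.arange(Nx) - Nx // 2) * (Lbox / Nx)
k = 2 * np.pi * np.fft.fftfreq(Nx, d=Lbox / Nx)

q0 = np.exp(-x**2)
qt = np.fft.ifft(np.exp(-1j * k**2 * t) * np.fft.fft(q0))
q_exact = np.exp(-x**2 / (1 + 4j * t)) / np.sqrt(1 + 4j * t)
print(np.max(np.abs(qt - q_exact)))      # small (spectrally accurate)
\end{verbatim}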
Now consider the IBVP for the LS equation
on the half line with Dirichlet BCs;
i.e., $x>0$, $t>0$ and
with $q(x,0)\in{\cal S}({\mathbb{R}}^+)$ and $q(0,t)\in{\cal C}({\mathbb{R}}^+)$ given.
Employing the sine transform pair
\begin{eqnarray}
\^q\o{s}(k,t)= \mathop{\textstyle\trueint}\limits_0^{\,\,\infty} \sin(kx) q(x,t)\,{\rm d} x\,,
\qquad
q(x,t)= \frac2\pi\,\,\mathop{\textstyle\trueint}\limits_0^{\,\,\infty} \sin(kx) \^q\o{s}(k,t)\,{\rm d} k\,,
\label{e:FST}
\end{eqnarray}
yields the solution of the IBVP as
\begin{eqnarray}
q(x,t)=
\frac2\pi\,\, \mathop{\textstyle\trueint}\limits_0^{\,\,\infty} {\rm e}^{-ik^2t}\sin(kx)\,\^q\o{s}(k,0)\,{\rm d} k
+ \frac1\pi\,\,
\mathop{\textstyle\trueint}\limits_0^{\,\,\infty} {\rm e}^{-ik^2t}\sin(kx) \^g(k,t)\,{\rm d} k\,,
\label{e:LSIBVPsinetransform}
\\
\noalign{\noindent where}
\^g(k,t)=
2ik \mathop{\textstyle\trueint}\limits_0^t {\rm e}^{ik^2t'}q(0,t')\,{\rm d} t'\,.
\end{eqnarray}
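For homogeneous Dirichlet data, $q(0,t)=0$, the term containing $\^g$ drops out and~\eref{e:LSIBVPsinetransform} is equivalent to evolving the odd extension of $q(x,0)$ on the whole line. A minimal sketch (with an illustrative grid and datum) confirming that the Dirichlet condition is then preserved:
\begin{verbatim}
# Homogeneous Dirichlet IBVP for the LS equation: odd extension + FFT.
import numpy as np

Nx, Lbox, t = 2048, 80.0, 0.5
x = (np.arange(Nx) - Nx // 2) * (Lbox / Nx)
k = 2 * np.pi * np.fft.fftfreq(Nx, d=Lbox / Nx)

def f(y):                            # initial datum, supported on y > 0
    return np.where(y > 0, y * np.exp(-(y - 3.0)**2), 0.0)

q0 = f(x) - f(-x)                    # odd extension to the whole line
qt = np.fft.ifft(np.exp(-1j * k**2 * t) * np.fft.fft(q0))
print(abs(qt[Nx // 2]))              # ~0: q(0,t) = 0 is preserved
\end{verbatim}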
\subsection{Linear Schr\"odinger equation: IVP and IBVPs via spectral methods}
\label{s:LSIBVP}
An algorithmic method to obtain the Lax pair of
linear PDEs was given in Ref.~\cite{IMA67p559}.
However, one can also obtain the Lax pair for the LS equation
via the linear limit of the Lax pair of the NLS equation,
namely~\eref{e:NLSLP}.
Let $\_Q=O(\epsilon)$
and take $\Phi(x,t,k)=\@v(x,t,k)$ to be a two-component vector.
To leading order, $\@v(x,t,k)= \esh{i(kx-2k^2t)}\@v_o$,
where $\@v_o=(v_{1,o},v_{2,o})^t$ is an arbitrary constant vector.
Choosing $v_{2,o}=1$ and substituting into the RHS
of~\eref{e:NLSLP1} then yields the following equations
for $\mu(x,t,k)= {\rm e}^{i(kx-2k^2t)}v_1(x,t,k)$ up to $O(\epsilon^2)$ terms:
\begin{eqnarray}
\mu_x - ik'\mu = q\,,
\qquad
\mu_t + ik'^2\mu = iq_x-k'q\,,
\label{e:LSLP}
\end{eqnarray}
where $k'=2k$.
One can now verify that enforcing the compatibility
of~\eref{e:LSLP} yields the LS equation.
Hereafter, for convenience, we will omit the primes.
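This compatibility computation can also be carried out symbolically; the following sympy sketch cross-differentiates the two halves of~\eref{e:LSLP} (primes already dropped) and verifies that eliminating $\mu$ forces $q_t=iq_{xx}$, i.e.\ the LS equation:
\begin{verbatim}
# Compatibility of the LS Lax pair:
#   mu_x = q + i k mu,   mu_t = i q_x - k q - i k^2 mu.
import sympy as sp

x, t, k = sp.symbols('x t k')
q = sp.Function('q')(x, t)
mu = sp.Function('mu')(x, t)

mu_x = q + sp.I * k * mu
mu_t = sp.I * sp.diff(q, x) - k * q - sp.I * k**2 * mu

# (mu_x)_t - (mu_t)_x, with mu_t and mu_x substituted back in:
comp = (sp.diff(mu_x, t).subs(sp.Derivative(mu, t), mu_t)
        - sp.diff(mu_t, x).subs(sp.Derivative(mu, x), mu_x))
print(sp.simplify(comp - (sp.diff(q, t) - sp.I * sp.diff(q, x, 2))))  # 0
\end{verbatim}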
\subsubsection*{Initial value problem.}
Introduce a modified eigenfunction
$\psi(x,t,k)={\rm e}^{-i(kx-k^2t)}\mu(x,t,k)$,
which satisfies the simplified Lax pair
\[
\psi_x= {\rm e}^{-i(kx-k^2t)}q\,,\qquad
\psi_t= {\rm e}^{-i(kx-k^2t)}\big(iq_x - k q\big)\,.
\]
It is then easy to obtain the solutions of~\eref{e:LSLP}
which decay as $x\to\pm\infty$ respectively as:
\begin{eqnarray}
\mu\o1(x,t,k)= \mathop{\textstyle\trueint}\limits_{\!\!-\infty}^x {\rm e}^{ik(x-x')}q(x',t)\,{\rm d} x'\,,
\qquad
\mu\o2(x,t,k)= -\mathop{\textstyle\trueint}\limits_x^\infty {\rm e}^{ik(x-x')}q(x',t)\,{\rm d} x'\,.
\nonumber\\[-1ex]
\label{e:LSIVPmusoln2}
\end{eqnarray}
Note that $\mu\o{1,2}(x,t,k)$ are analytic for $\mathop{\rm Im}\nolimits k\gl0$, respectively.
Also, on $\mathop{\rm Im}\nolimits k=0$ we have
\begin{eqnarray}
\mu\o1(x,t,k) - \mu\o2(x,t,k)=
{\rm e}^{ikx}\^q(k,t)= {\rm e}^{i(kx-k^2t)}\^q(k,0)\,,
\label{e:RHP0}
\\
\noalign{\noindent where $\^q(k,t)$ is the Fourier transform of
$q(x,t)$:} \^q(k,t)= \mathop{\textstyle\trueint}\limits_{\!\!-\infty}^{\infty\!\!}
{\rm e}^{-ikx'}q(x',t)\,{\rm d} x'\,. \end{eqnarray} Also, $\mu\o{1,2}(x,t,k)=O(1/k)$ as
$k\to\infty$ in their respective half planes. Thus~\eref{e:RHP0}
defines a scalar RHP which is trivially solved via the standard
Cauchy projectors $P^\pm$ over the real line: \begin{equation} \mu(x,t,k)=
\frac1{2\pi i}\, \mathop{\textstyle\trueint}\limits_{\!\!-\infty}^{\infty\!\!}
{\rm e}^{i(k'x-k'^2t)}\,\frac{\^q(k',0)}{k'-k}\,{\rm d} k'\,.
\label{e:LSIVPmusoln}
\end{equation}
Inserting~\eref{e:LSIVPmusoln} into~\eref{e:LSLP} then
yields~\eref{e:LSIVPsoln} as the solution of the IVP.
\subsubsection*{Initial-boundary value problems.}
We now consider the IBVP for the LS equation on the half line.
Define simultaneous solutions of both the $x$-part and the
$t$-part of the Lax pair:
\[
\mu\o{j}(x,t,k)= \mathop{\textstyle\trueint}\limits_{(x_j,t_j)}^{(x,t)}{\rm e}^{ik(x-x')-ik^2(t-t')}
\big[q(x',t')\,{\rm d} x' + \big(iq_{x'}(x',t')-k q(x',t')\big)\,{\rm d} t'\big]\,.
\]
In particular,
consider the three eigenfunctions $\mu\o{j}(x,t,k)$, $j=1,2,3$,
defined by the choices $(x_1,t_1)=(0,0)$,\, $(x_2,t_2)=(\infty,t)$ and
$(x_3,t_3)=(0,T)$:
\numparts
\label{e:LSLPsolutions}
\begin{eqnarray}
\mu\o1(x,t,k)= \mathop{\textstyle\trueint}\limits_0^x {\rm e}^{ik(x-x')}q(x',t)\,{\rm d} x'
+ \mathop{\textstyle\trueint}\limits_0^t {\rm e}^{ikx-ik^2(t-t')} \big(iq_x(0,t')-k q(0,t')\big)\,{\rm d} t'\,,
\nonumber\\[-2ex]
\label{e:LSLPsolutions1}
\\
\mu\o2(x,t,k)= - \mathop{\textstyle\trueint}\limits_x^\infty{\rm e}^{ik(x-x')}\, q(x',t)\,{\rm d} x'\,,
\label{e:LSLPsolutions2}
\\
\mu\o3(x,t,k)= \mathop{\textstyle\trueint}\limits_0^x {\rm e}^{ik(x-x')}q(x',t)\,{\rm d} x'
- \mathop{\textstyle\trueint}\limits_t^T {\rm e}^{ikx-ik^2(t-t')} \big(iq_x(0,t')-k q(0,t')\big)\,{\rm d} t'\,.
\nonumber\\[-2ex]
\label{e:LSLPsolutions3}
\end{eqnarray}
\endnumparts
Note that
$\mu\o2$ coincides with the eigenfunction in the IVP.
As for $\mu\o1$ and $\mu\o3$, they are entire functions of~$k$.
These eigenfunctions have the following domains of analyticity and boundedness:
\begin{eqnarray}
\mu\o1\!:~ k\in{\mathbb{C}}_\mathrm{II}\,,
\qquad
\mu\o2\!:~ k\in{\mathbb{C}}_{\mathrm{III}+\mathrm{IV}}\,,
\qquad
\mu\o3\!:~ k\in{\mathbb{C}}_\mathrm{I}\,,
\label{e:muregions}
\end{eqnarray}
where ${\mathbb{C}}_{\mathrm{III}+\mathrm{IV}}$ is the lower-half plane.
The two jumps on $\mathop{\rm Im}\nolimits\,k=0$ and the jump on
$\mathop{\rm Re}\nolimits\,k=0$ (with $\mathop{\rm Im}\nolimits\,k\ge0$) then define a scalar RHP:
\numparts
\label{e:LSIBVPmujumps}
\begin{eqnarray}
\mu\o1(x,t,k) - \mu\o3(x,t,k)= {\rm e}^{ikx-ik^2t}\,\^F(k,T)
&\mathop{\rm Re}\nolimits\,k=0~\wedge~\mathop{\rm Im}\nolimits\,k\ge0\,,
\nonumber\\[-1ex]
\label{e:mujumps13}
\\
\mu\o1(x,t,k) - \mu\o2(x,t,k) = {\rm e}^{ikx-ik^2t}\,\^q(k,0)\,,
&\mathop{\rm Im}\nolimits\,k=0~\wedge~\mathop{\rm Re}\nolimits\,k\le0\,,
\nonumber\\[-1ex]
\label{e:mujumps12}
\\
\mu\o3(x,t,k) - \mu\o2(x,t,k) = {\rm e}^{ikx-ik^2t}\,\big(\^q(k,0) - \^F(k,T)\big)\,,~
&\mathop{\rm Im}\nolimits\,k=0~\wedge~\mathop{\rm Re}\nolimits\,k\ge0\,,
\nonumber\\[-1ex]
\label{e:mujumps23}
\end{eqnarray}
\endnumparts
where $\^F(k,t)= i\^f_1(k,t)-k\^f_0(k,t)$, and with
\begin{equation}
\^q(k,t)= \mathop{\textstyle\trueint}\limits_0^{\infty\!\!} {\rm e}^{-ikx}q(x,t)\,{\rm d} x\,,
\qquad
\^f_n(k,t)= \mathop{\textstyle\trueint}\limits_0^t {\rm e}^{ik^2t'} \,\partial^n_x q(x,t')|_{x=0}\,{\rm d} t'\,.
\label{e:LSktransforms}
\end{equation}
The one-sided Fourier transform $\^q(k,t)$
is analytic and bounded for $\mathop{\rm Im}\nolimits\,k<0$,
while the transforms $\^f_n(k,t)$ of the boundary data are
entire, and are bounded for $\mathop{\rm Im}\nolimits\,k^2\ge0$.
Moreover, $\^q(k,t)\to0$ as $k\to\infty$ with $\mathop{\rm Im}\nolimits\,k<0$,
and $\^f_n(k,t)\to0$ as $k\to\infty$ with $\mathop{\rm Im}\nolimits\,k^2>0$.
The solution of the RHP defined by~\eref{e:LSIBVPmujumps} is thus given by
\begin{eqnarray}
\mu(x,t,k)= \frac1{2\pi i}\, \mathop{\textstyle\trueint}\limits_{\!\!-\infty}^{\infty\!\!}
{\rm e}^{ik'x-ik'^2t}\,\frac{\^q(k',0)}{k'-k}\,{\rm d} k'
- \frac1{2\pi i} \mathop{\textstyle\trueint}\limits_{\partial{\mathbb{C}}_\mathrm{I}} {\rm e}^{ik'x-ik'^2t}
\frac{\^F(k',T)}
{k'-k}\,{\rm d} k'\,.
\nonumber\\[-1.2ex]
\label{e:IBVPRHPsolution}
\end{eqnarray}
Inserting~\eref{e:IBVPRHPsolution} into the first of~\eref{e:LSLP}
then yields the reconstruction formula:
\begin{equation}
q(x,t)= \frac1{2\pi}\, \mathop{\textstyle\trueint}\limits_{\!\!-\infty}^{\infty\!\!}
{\rm e}^{ikx-ik^2t}\^q(k,0)\,{\rm d} k
- \frac1{2\pi} \mathop{\textstyle\trueint}\limits_{\partial{\mathbb{C}}_\mathrm{I}} {\rm e}^{ikx-ik^2t}
\^F(k,T) \,{\rm d} k\,.
\label{e:IBVPreconstruction}
\end{equation}
As in the discrete case, \eref{e:IBVPreconstruction} still depends
on the unknown boundary datum $q_x(0,t)$ via its transform in $F(k,t)$.
Integrating~\eref{e:LSLP} from $(0,0)$ to $(0,T)$, $(\infty,T)$, $(\infty,0)$
and back yields the global relation as
\[
\mathop{\textstyle\trueint}\limits_0^T {\rm e}^{ik^2t} \big(iq_x(0,t)-k\,q(0,t)\big){\rm d} t
+ {\rm e}^{ik^2T}\mathop{\textstyle\trueint}\limits_0^{\infty\!\!} {\rm e}^{-ikx}q(x,T)\,{\rm d} x =
\mathop{\textstyle\trueint}\limits_0^{\infty\!\!} {\rm e}^{-ikx}q(x,0)\,{\rm d} x\,,
\]
which holds for $\mathop{\rm Im}\nolimits\,k\le0\,\wedge\,\mathop{\rm Im}\nolimits\,k^2\le0$,
i.e., $k\in{\mathbb{C}}_\mathrm{III}$.
In terms of the spectral data:
\numparts
\begin{equation}
i\^f_1(k,T)-k\^f_0(k,T) + {\rm e}^{ik^2T}\^q(k,T)= \^q(k,0)\,,
\qquad\forall k\in\={\mathbb{C}}_\mathrm{III}\,.
\label{e:LSGR4}
\end{equation}
Using the transformation $k\to-k$, which leaves $\^f_n(k,t)$
invariant, from~\eref{e:LSGR4} we obtain
\begin{equation}
i\^f_1(k,T)+k\^f_0(k,T) + {\rm e}^{ik^2T}\^q(-k,T)= \^q(-k,0)\,,
\qquad\forall k\in\={\mathbb{C}}_\mathrm{I}\,.
\label{e:LSGR4a}
\end{equation}
\endnumparts
We then solve for $\^f_1(k,T)$ and insert the result
in~\eref{e:IBVPreconstruction}.
[The term ${\rm e}^{ik^2T}\^q(-k,T)$ in~\eref{e:LSGR4a}
yields a zero contribution to the solution.]
Thus, the solution of the IBVP is given by
\begin{eqnarray}
q(x,t)= \frac1{2\pi}\, \mathop{\textstyle\trueint}\limits_{\!\!-\infty}^{\infty\!\!}
{\rm e}^{ikx-ik^2t}\^q(k,0)\,{\rm d} k
- \frac1{2\pi} \mathop{\textstyle\trueint}\limits_{\partial{\mathbb{C}}_\mathrm{I}} {\rm e}^{ikx-ik^2t}
\big[\^q(-k,0)-2k\^f_0(k,T)\big]\,{\rm d} k\,.
\nonumber\\[-1ex]
\label{e:IBVPLSreconst} \end{eqnarray}
Note that one can replace $\^f_0(k,T)$
with $\^f_0(k,t)$. Also, the second integrand
in~\eref{e:IBVPLSreconst} is analytic and bounded for
$\mathop{\rm Im}\nolimits\,k\ge0\,\wedge\,\mathop{\rm Im}\nolimits\,k^2\le0$. Thus, one can deform the
integration contour on the second integral onto the real $k$-axis
and recover the sine transform
solution~\eref{e:LSIBVPsinetransform}. Unlike sine/cosine transform
approaches, however, the present method can be applied to solve
IBVPs with more complicated BCs, as we show next.
\paragraph{Robin BCs.}
Consider the IBVP for LS equation with Robin BCs:
\begin{equation}
\alpha q(0,t)+q_x(0,t)= h(t)\,,
\label{e:LSRobinBC}
\end{equation}
with $h(t)$ given and
where $\alpha\in{\mathbb{C}}$ is a nonzero but otherwise arbitrary constant.
In a similar way as shown in~\ref{s:Robin} for the discrete case,
one obtains \cite{PRSLA453p1411,IMA67p559}
\numparts
\begin{eqnarray}
\^F(k,t)= \frac{\^G(k,t)}{k-i\alpha}
+ \frac{k+i\alpha}{k-i\alpha}{\rm e}^{ik^2t}\^q(-k,t)\,,
\label{e:LSRobinF}
\\
\noalign{\noindent where} \^G(k,t)= 2ik\^h(k,t)-(k+i\alpha)\^q(-k,0)
\end{eqnarray} \endnumparts contains the known portion of~\eref{e:LSRobinF} and where
$\^h(k,t)$ is defined according to~\eref{e:LSktransforms}. Then,
again following similar steps as in the discrete case, one obtains
the solution of the IBVP as:
\begin{equation} \fl
q(x,t)=
\frac1{2\pi}\, \mathop{\textstyle\trueint}\limits_{\!\!-\infty}^{\infty\!\!} {\rm e}^{ikx-ik^2t}\^q(k,0)\,{\rm d} k
- \frac1{2\pi} \mathop{\textstyle\trueint}\limits_{\partial {\mathbb{C}}_\mathrm{I}}{\rm e}^{ikx-ik^2t} {\^G(k,t) \over k-i\alpha} \,{\rm d} k
+ i\nu_\alpha {\rm e}^{-\alpha x+i\alpha^2t} \^G(i\alpha,t) \,,
\label{e:LSRobinBCsoln}
\end{equation}
where $\nu_\alpha=1$ for
$-\pi/2<\arg\alpha<0$, $\nu_\alpha=1/2$ for $\arg\alpha=0,-\pi/2$
and $\nu_\alpha=0$ for $0<\arg\alpha<3\pi/2$, and where the integral
along $\partial {\mathbb{C}}_\mathrm{I}$ is to be taken in the principal value
sense when $\arg\alpha=0,-\pi/2$. (The last term in the RHS
of~\eref{e:LSRobinBCsoln} is missing in
Refs.~\cite{PRSLA453p1411,IMA67p559}. One can easily show, however,
that without this term $q(x,t)$ does \textit{not} satisfy the BC at
$x=0$.)
\subsection{Nonlinear Schr\"odinger equation: initial value problem}
As in the linear case, we assume that $q(x,0)\in{\cal S}({\mathbb{R}})$.
Recall that the Lax pair for the NLS equation~\eref{e:NLS}
is given by~\eref{e:NLSLP}
with $p(x,t)=\nu q^*(x,t)$.
For the present purposes, we consider $\Phi(x,t,k)$ to be a $2\times2$ matrix.
\paragraph{Analyticity.}
Introduce a modified eigenfunction
which has a well-defined limit as $x\to\pm\infty$:
\begin{equation}
\mu(x,t,k)=\Phi(x,t,k)\,{\rm e}^{-i\theta(x,t,k)\sigma_3},
\label{e:NLSmudef}
\end{equation}
with $\theta(x,t,k)=kx-2k^2t$.
Note $\mu(x,t,k)$ satisfies the following modified Lax pair:
\begin{eqnarray}
\mu_x - ik[\sigma_3,\mu] = \_Q\mu\,,
\qquad
\mu_t + 2ik^2[\sigma_3,\mu]= \_H\mu\,.
\label{e:NLSLPm}
\end{eqnarray}
Then, letting $\mu(x,t,k)= \esh{i\theta}\Psi(x,t,k)$,
we obtain the simplified Lax pair:
$\Psi_x= \esh{-i\theta}(\_Q)\,\Psi$
and
$\Psi_t= \esh{-i\theta}(\_H)\,\Psi\,$.
We then define the Jost eigenfunctions as the solutions of~\eref{e:NLSLPm}
that reduce to the identity as $x\to\pm\infty$:
\numparts
\label{e:JostNLS}
\begin{eqnarray}
\mu\o1(x,t,k)=
\_I + \mathop{\textstyle\trueint}\limits_{-\infty}^x \esh{ik(x-x')}\big(\_Q(x',t)\mu\o1(x',t,k)\big)\,{\rm d} x'\,,
\\[-0.4ex]
\mu\o2(x,t,k)=
\_I - \mathop{\textstyle\trueint}\limits_x^\infty \esh{ik(x-x')}\big(\_Q(x',t)\mu\o2(x',t,k)\big)\,{\rm d} x'\,.
\label{e:JostNLSb}
\end{eqnarray}
\endnumparts
We have the following regions of analyticity and boundedness~\cite{APT2003}:
\begin{eqnarray}
\mu\o{1,L},~\mu\o{2,R}\!:\quad &\mathop{\rm Im}\nolimits k<0\,,\qquad
\mu\o{1,R},~\mu\o{2,L}\!:\quad &\mathop{\rm Im}\nolimits k>0\,,
\nonumber
\end{eqnarray}
where $\mu\o{j}(x,t,k)=\big(\mu\o{j,L}\,,\mu\o{j,R}\big)$,
as before.
The analyticity properties of
$\Phi\o{j}(x,t,k)=\mu\o{j}(x,t,k)\,{\rm e}^{i\theta\sigma_3}$, $j=1,2$,
follow trivially.
\paragraph{Scattering matrix.}
Note $\det\Phi\o{j}=\det\mu\o{j}=1$ for $j=1,2$.
Thus $\Phi\o1$ and $\Phi\o2$ are
both fundamental solutions of~\eref{e:NLSLP} $\forall k\in{\mathbb{R}}$.
Hence $\Phi\o1(x,t,k)=\Phi\o2(x,t,k)\,\_A(k)$,
where $\_A(k)$ is the scattering matrix.
Equivalently,
\begin{equation}
\mu\o1(x,t,k)= \mu\o2(x,t,k)\,\esh{i\theta}\_A(k)\,.
\label{e:scatteringNLS}
\end{equation}
Note that $\_A(k)$ is indeed independent of time,
and $\det \_A(k)=1$.
Moreover,
\begin{equation}
\_A(k)= \_I + \mathop{\textstyle\trueint}\limits_{-\infty}^\infty \esh{-i(kx-2k^2t)}\big(\_Q(x,t)\mu\o1(x,t,k)\big)\,{\rm d} x\,,
\label{e:NLFT}
\end{equation}
and
\numparts
\label{e:NLSwronskians}
\begin{eqnarray}
a_{11}(k)=\mathop{\rm Wr}\nolimits(\Phi\o{1,L},\Phi\o{2,R})\,,\quad
a_{12}(k)=\mathop{\rm Wr}\nolimits(\Phi\o{1,R},\Phi\o{2,R})\,,
\\
a_{21}(k)= -\mathop{\rm Wr}\nolimits(\Phi\o{1,L},\Phi\o{2,L})\,,\quad
a_{22}(k)=-\mathop{\rm Wr}\nolimits(\Phi\o{1,R},\Phi\o{2,L})\,.
\end{eqnarray} \endnumparts
Thus,
$a_{11}(k)$ and $a_{22}(k)$ can be analytically continued
respectively on $\mathop{\rm Im}\nolimits k<0$ and $\mathop{\rm Im}\nolimits k>0$, but $a_{12}(k)$ and
$a_{21}(k)$ are nowhere analytic, in general.
\paragraph{Symmetries.}
When $p(x,t)=\nu q^*(x,t)$, with $\nu=\pm1$, the scattering
problem~\eref{e:NLSLP} admits an involution expressed via the matrix
$\sigma_\nu$ in~\eref{e:sigmanudef}: if $\Phi(x,t,k)$ is a solution
of~\eref{e:NLSLP1}, so is
\begin{equation} \Phi'(x,t,k)= \sigma_\nu
\Phi^*(x,t,k^*)\,. \label{e:NLSsymmetriesPhi} \end{equation}
Comparing the
behavior of the Jost eigenfunctions as $x\to\pm\infty$ we then have
\begin{eqnarray}
\Phi\o{j,L}(x,t,k)=\sigma_\nu\big(\Phi\o{j,R}(x,t,k^*)\big)^*\,,\quad
\Phi\o{j,R}(x,t,k)=\nu\sigma_\nu\big(\Phi\o{j,L}(x,t,k^*)\big)^*\,
\nonumber
\\[-1ex]
\label{e:NLSsymmetriesPhij}
\end{eqnarray}
for $j=1,2$\,.
Hence the following relations hold for the elements of the scattering matrix
$\_A(k)$:
\begin{equation}
a_{22}(k)= a_{11}^*(k^*)\,,\qquad
a_{21}(k)= \nu\,a_{12}^*(k^*)\,.
\label{e:NLSsymmetries}
\end{equation}
Note that, since $\det\_A(k)=1$, \eref{e:NLSsymmetries} imply
$|a_{11}(k)|^2-\nu|a_{12}(k)|^2=1$ $\forall k\in{\mathbb{R}}$.
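As a concrete illustration (not part of the analysis), the following minimal Python sketch integrates the $x$-part of~\eref{e:NLSLP} at $t=0$ for an illustrative potential $q(x)=1.2\mathop{\rm sech}\nolimits x$ with $\nu=-1$, reads off $\_A(k)$ from the large-$x$ limit of $\mu\o1$, and checks $\det\_A(k)=1$ and $|a_{11}(k)|^2-\nu|a_{12}(k)|^2=1$ for real $k$; the potential, the truncation $[-L,L]$ and the value of $k$ are arbitrary choices.
\begin{verbatim}
# Minimal sketch (illustrative choices: q(x) = 1.2 sech x, nu = -1,
# truncation [-L, L], k = 0.7).  Integrate mu_x = ik[sigma3, mu] + Q mu
# from x = -L with mu = I; then A(k) ~ e^{-ikL sigma3} mu(L) e^{ikL sigma3}.
import numpy as np
from scipy.integrate import solve_ivp

s3 = np.diag([1.0, -1.0])
nu = -1.0                                # reduction p = nu q^*
q = lambda x: 1.2 / np.cosh(x)           # sample potential in S(R)

def rhs(x, y, k):
    mu = y.reshape(2, 2)
    Q = np.array([[0.0, q(x)], [nu * np.conj(q(x)), 0.0]])
    return (1j * k * (s3 @ mu - mu @ s3) + Q @ mu).ravel()

L, k = 25.0, 0.7
sol = solve_ivp(rhs, [-L, L], np.eye(2, dtype=complex).ravel(),
                args=(k,), rtol=1e-10, atol=1e-12)
E = np.diag([np.exp(-1j * k * L), np.exp(1j * k * L)])   # e^{-ikL sigma3}
A = E @ sol.y[:, -1].reshape(2, 2) @ np.linalg.inv(E)
print(abs(np.linalg.det(A) - 1))                       # ~ 0 (up to tolerance)
print(abs(abs(A[0, 0])**2 - nu * abs(A[0, 1])**2 - 1))  # ~ 0 for real k
\end{verbatim}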
\paragraph{Asymptotics.}
The asymptotics of the Jost solutions as $k\to\infty$
in their half planes is:
\numparts
\label{e:asympNLS}
\begin{eqnarray}
\mu\o1(x,t,k)= \_I - \frac1{2ik}\sigma_3\_Q
+ \frac1{2ik}\,\sigma_3\mathop{\textstyle\trueint}\limits_{-\infty}^x q(x',t)p(x',t)\,{\rm d} x' + O(1/k^2)\,,\\
\mu\o2(x,t,k)= \_I - \frac1{2ik}\sigma_3\_Q
- \frac1{2ik}\,\sigma_3\mathop{\textstyle\trueint}\limits_x^\infty q(x',t)p(x',t)\,{\rm d} x' + O(1/k^2)\,.
\label{e:asympNLSb}
\end{eqnarray}
\endnumparts
Moreover, from~\eref{e:NLSwronskians} and
\eref{e:asympNLS} one also obtains
\begin{eqnarray}
a_{22}(k)= 1 - \frac1{2ik}\mathop{\textstyle\trueint}\limits_{-\infty}^\infty q(x,t)p(x,t)\,{\rm d} x + O(1/k^2)\,.
\label{e:asympNLSa}
\end{eqnarray}
\paragraph{Inverse problem.}
The inverse problem is the RHP defined by~\eref{e:scatteringNLS} for $k\in{\mathbb{R}}$:%
\begin{eqnarray}
\_M^-(x,t,k)= \_M^+(x,t,k)(\_I-\_J(k,t))\,,
\label{e:NLSRHP}
\\[0.2ex]
\noalign{\noindent where the matrix-valued sectionally meromorphic functions are}
\nonumber
\\[-2ex]
\fl
\_M^+(x,t,k)= \bigg(\mu\o{2,L}(x,t,k)\,,\frac{\mu\o{1,R}(x,t,k)}{a_{22}(k)}\bigg)\,,
\qquad
\_M^-(x,t,k)= \bigg(\frac{\mu\o{1,L}(x,t,k)}{a_{11}(k)}\,,\,\mu\o{2,R}(x,t,k)\bigg)\,,
\nonumber
\\
\noalign{\noindent the jump matrix is}
\_J(k,t)= \begin{pmatrix}\rho_1(k)\rho_2(k)&{\rm e}^{2i\theta}\rho_2(k)\\
-{\rm e}^{-2i\theta}\rho_1(k)&0\end{pmatrix},
\nonumber
\\[0.2ex]
\noalign{\noindent and the reflection coefficients, defined $\forall k\in{\mathbb{R}}$, are}
\rho_1(k)= {a_{21}(k)}/{a_{11}(k)} \,, \qquad
\rho_2(k)= {a_{12}(k)}/{a_{22}(k)}\,.
\nonumber
\end{eqnarray}
Of course \eref{e:NLSsymmetries} imply $\rho_1(k)=
\nu\rho_2^*(k^*)$ when $p(x,t)= \nu q^*(x,t)$.
In the absence of a discrete spectrum [i.e., if $a_{11}(k)\ne0$
$\forall \mathop{\rm Im}\nolimits k<0$ and $a_{22}(k)\ne0$ $\forall\mathop{\rm Im}\nolimits k>0$]
the matrix functions $\_M^\pm(x,t,k)-\_I$ are sectionally analytic
in their respective half planes, and they vanish as $k\to\infty$.
Therefore the RHP~\eref{e:NLSRHP} is solved via the Cauchy
projectors~$P^\pm$, as for the linear case:
\begin{equation}
\_M^+(x,t,k)= \_I + \frac1{2\pi i}\mathop{\textstyle\trueint}\limits_{-\infty}^\infty \_M^+(x,t,k')\frac{\_J(k',t)}{k'-k}{\rm d} k'\,.
\label{e:NLSRHPsoln}
\end{equation}
The asymptotic behavior of $\_M(x,t,k)$ as $k\to\infty$ is easily obtained
from~\eref{e:NLSRHPsoln}: for $\mathop{\rm Im}\nolimits\,k>0$,
\begin{equation}
\_M^+(x,t,k)= \_I - \frac1{2\pi ik}\mathop{\textstyle\trueint}\limits_{-\infty}^\infty \_M^+(x,t,k')\_J(k',t){\rm d} k' + O(1/k^2)\,.
\label{e:RHPasympNLS}
\end{equation}
Comparing the $(1,2)$-components of~\eref{e:RHPasympNLS}
and~\eref{e:asympNLS} then yields the
reconstruction formula:
\begin{equation}
q(x,t)= \frac1\pi \mathop{\textstyle\trueint}\limits_{-\infty}^\infty {\rm e}^{2i(kx-2k^2t)}\rho_2(k)
\big(\mu\o2(x,t,k)\big)_{11}\,{\rm d} k\,.
\label{e:NLSreconstruction}
\end{equation}
\paragraph{Linear limit.}
If $\_Q(x,t)=O(\epsilon)$
one has
$\mu(x,t,k)=\_I+O(\epsilon)$ and, to $O(\epsilon)$,
\[
\_A(k)= \_I + \mathop{\textstyle\trueint}\limits_{-\infty}^\infty \esh{-i(kx-2k^2t)}\_Q(x,t)\,{\rm d}
x\,.
\]
From here and~\eref{e:NLSreconstruction} one then obtains, to $O(\epsilon)$,
\begin{eqnarray}
q(x,t)= \frac1\pi \mathop{\textstyle\trueint}\limits_{-\infty}^\infty
{\rm e}^{2i(kx-2k^2t)}\rho_2(k)\,{\rm d} k\,,
\qquad
\rho_2(k)= \mathop{\textstyle\trueint}\limits_{-\infty}^\infty {\rm e}^{-2ikx}q(x,0)\,{\rm d} x\,,
\nonumber
\end{eqnarray}
which, with the familiar rescaling $k'=2k$, coincide with the Fourier
transform pair~\eref{e:FTpair}.
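This correspondence is easily checked numerically; the sketch below (reusing rhs, E, L and k from the previous snippet, with the potential scaled by a small eps, an illustrative value) compares $\rho_2(k)=a_{12}(k)/a_{22}(k)$ at $t=0$ with $\int{\rm e}^{-2ikx}q(x,0)\,{\rm d} x$.
\begin{verbatim}
# Linear-limit sketch: reuses rhs, E, L, k from the previous snippet,
# with the potential scaled by a small eps (illustrative value).
eps = 1e-3
q = lambda x: eps * 1.2 / np.cosh(x)
sol = solve_ivp(rhs, [-L, L], np.eye(2, dtype=complex).ravel(),
                args=(k,), rtol=1e-12, atol=1e-14)
A = E @ sol.y[:, -1].reshape(2, 2) @ np.linalg.inv(E)
rho2 = A[0, 1] / A[1, 1]
xg = np.linspace(-L, L, 4001)
qhat = np.trapz(np.exp(-2j * k * xg) * q(xg), xg)  # Fourier transform at 2k
print(abs(rho2 - qhat))                            # ~ 0 (higher order in eps)
\end{verbatim}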
\subsection{Nonlinear Schr\"odinger equation: initial-boundary value problem}
\label{s:IBVPNLS}
We now discuss the IBVP for the NLS equation~\eref{e:NLS} on the half line.
As in the linear case, we assume $q(x,0)\in{\cal S}({\mathbb{R}}^+)$ and
$q(0,t)\in{\cal C}({\mathbb{R}}^+)$.
\paragraph{Eigenfunctions and analyticity.}
Introduce three Jost eigenfunctions as the solutions of
\eref{e:NLSLPm} that reduce to the identity respectively at
$(x,t)=(0,0)$, $(x,t)\to(\infty,t)$ and $(x,t)=(0,T)$:
\par\kern-2\medskipamount
\numparts
\label{e:NLSLPsolutions}
\begin{eqnarray}
\fl
\mu\o1(x,t,k)= \_I + \mathop{\textstyle\trueint}\limits_0^x \esh{ik(x-x')}\big(\_Q(x',t)\mu\o1(x',t,k)\big)\,{\rm d} x'
\nonumber\\[-2ex]\kern8em{ }
+ \mathop{\textstyle\trueint}\limits_0^t \esh{i[kx-2k^2(t-t')]}\big(\_H(0,t',k)\mu\o1(0,t',k)\big)\,{\rm d} t'\,,
\label{e:NLSIBVPmu1}
\\
\fl
\mu\o2(x,t,k)= \_I - \mathop{\textstyle\trueint}\limits_x^\infty\esh{ik(x-x')}\big(\_Q(x',t)\mu\o2(x',t,k)\big)\,{\rm d} x'\,,
\\
\fl
\mu\o3(x,t,k)= \_I + \mathop{\textstyle\trueint}\limits_0^x \esh{ik(x-x')}\big(\_Q(x',t)\mu\o3(x',t,k)\big)\,{\rm d} x'
\nonumber\\[-2ex]\kern8em{ }
- \mathop{\textstyle\trueint}\limits_t^T \esh{i[kx-2k^2(t-t')]}\big(\_H(0,t',k)\mu\o3(0,t',k)\big)\,{\rm d} t'\,.
\end{eqnarray}
\endnumparts
Note that $\mu\o1(x,t,k)$ and $\mu\o3(x,t,k)$ are entire functions
of~$k$, while $\mu\o2(x,t,k)$ coincides with~\eref{e:JostNLSb}.
Moreover,
\eref{e:NLSLPsolutions} imply the
following domains of analyticity and boundedness:
\begin{eqnarray}
\mu\o{1,L}:~~ {\mathbb{C}}_\mathrm{III}\,,\qquad
\mu\o{1,R}:~~ {\mathbb{C}}_\mathrm{II}\,,\qquad
\mu\o{3,L}:~~ {\mathbb{C}}_\mathrm{IV}\,,\qquad
\mu\o{3,R}:~~ {\mathbb{C}}_\mathrm{I}\,,
\nonumber
\\
\mu\o{2,L}:~~ {\mathbb{C}}_{\mathrm{I}+\mathrm{II}}\,,\qquad\!\!
\mu\o{2,R}:~~ {\mathbb{C}}_{\mathrm{III}+\mathrm{IV}}\,.
\nonumber
\end{eqnarray}
\paragraph{Scattering matrices.}
We still have $\det\Phi\o{j}(x,t,k)=1$ for
all $x,t\in{\mathbb{R}}^+$ and for all $j=1,2,3$. Hence the matrices
$\Phi\o{j}(x,t,k)$, $j=1,2,3$ are three fundamental solutions of the
Lax pair~\eref{e:NLSLP}, and they must be proportional to each
other. In terms of the modified eigenfunctions:
\numparts
\label{e:NLSIBVPscattering}
\begin{eqnarray}
\mu\o2(x,t,k)= \mu\o1(x,t,k)\,\esh{i(kx-2k^2t)}\_s(k)\,,
\label{e:NLSIBVPscattering1}
\\
\mu\o3(x,t,k)= \mu\o1(x,t,k)\,\esh{i(kx-2k^2t)}\_S(k,T)\,.
\label{e:NLSIBVPscattering2}
\end{eqnarray}
\endnumparts
Note that the first column of~\eref{e:NLSIBVPscattering1} is
defined $\forall k\in\={\mathbb{C}}_{\mathrm{I}+\mathrm{II}}$, the second column
$\forall k\in\={\mathbb{C}}_{\mathrm{III}+\mathrm{IV}}$ and~\eref{e:NLSIBVPscattering2}
holds $\forall k\in{\mathbb{C}}$.
Also, $\det\,\_s(k)=\det\,\_S(k,T)=1$.
The scattering matrices $\_s(k)$
and~$\_S(k,T)$ are obtained from the boundary values of the
eigenfunctions, namely, $\forall k\in{\mathbb{C}}$,
\begin{equation}
\_s(k)= \mu\o2(0,0,k)\,,\qquad \_S(k,T)=
\big(\esh{2ik^2T}\mu\o1(0,T,k)\big)^{-1}\,.
\label{e:NLSIBVPscattering3}
\end{equation}
Then, from~\eref{e:NLSLPsolutions} we have the following integral
representations of the scattering matrices:
\numparts
\label{e:scattdata}
\begin{eqnarray}
\_s(k)= \_I - \mathop{\textstyle\trueint}\limits_0^\infty \esh{-ikx}\big(\_Q(x,0)\mu\o2(x,0,k)\big)\,{\rm d} x\,,
\\[-1ex]
\_S^{-1}(k,T)= \_I + \mathop{\textstyle\trueint}\limits_0^T \esh{2ik^2t}\big(\_H(0,t,k)\mu\o1(0,t,k)\big)\,{\rm d} t\,.
\label{e:scattdataS}
\end{eqnarray}
\endnumparts
These imply that
$\_s_L(k)$ and $\_s_R(k)$ are analytic respectively
for $k\in{\mathbb{C}}_{\mathrm{I}+\mathrm{II}}$ and $k\in{\mathbb{C}}_{\mathrm{III}+\mathrm{IV}}$,
and that their restrictions to these domains
are continuous and bounded up to the boundary;
$\_S(k,T)$ is entire,
and $\_S_L(k,T)$ and $\_S_R(k,T)$ are bounded
respectively for $k\in\={\mathbb{C}}_{\mathrm{II}+\mathrm{IV}}$ and
$k\in\={\mathbb{C}}_{\mathrm{I}+\mathrm{III}}$.
\paragraph{Symmetries, discrete spectrum and asymptotics.}
When $p(x,t)=\nu q^*(x,t)$, \eref{e:NLSsymmetriesPhi} still holds,
as does~\eref{e:NLSsymmetriesPhij} for $j=1,2,3$.
This implies that the scattering matrices can be expressed as
\[
\_s(k)= \begin{pmatrix} a(k) &\nu b^*(k^*)\\ b(k)
&a^*(k^*)\end{pmatrix},\qquad \_S(k,T)= \begin{pmatrix} A(k,T) &\nu
B^*(k^*,T)\\ B(k,T) &A^*(k^*,T)\end{pmatrix}.
\]
The properties of $a(k)$, $b(k)$, $A(k,T)$ and $B(k,T)$
follow trivially from those of $\_s(k)$ and $\_S(k,T)$.
Also, one can show that $\mu\o{j}(x,t,k)= \_I+ O(1/k)$ for $j=1,2,3$ as
$k\to\infty$ in the respective domains of boundedness of their
columns. The asymptotics of the eigenfunctions then determines that
of the scattering matrices. In particular, $a(k)=1+O(1/k)$ and
$b(k)=O(1/k)$ as $k\to\infty$ in $\={\mathbb{C}}_{\mathrm{I}+\mathrm{II}}$, and
$A(k,T)=1+O(1/k)$ and $B(k,T)=O(1/k)$ as $k\to\infty$ in
$\={\mathbb{C}}_{\mathrm{III}+\mathrm{IV}}$.
\paragraph{Riemann-Hilbert problem, solution and reconstruction formula.}
Equations~\eref{e:NLSIBVPscattering} allow us to formulate the
following RHP:
\begin{equation}
\_M^-(x,t,k)=\_M^+(x,t,k)\,(\_I-\_J(k,t))\,, \qquad k\in L\,,
\label{e:NLSIBVPRHP}
\end{equation}
with
$L=\partial{\mathbb{C}}_\mathrm{I}\cup\partial{\mathbb{C}}_\mathrm{III}=L_1\cup L_2\cup L_3\cup L_4$,
where
\[
L_1=\={\mathbb{C}}_\mathrm{I}\cap\={\mathbb{C}}_\mathrm{II}\,,\quad
L_2=\={\mathbb{C}}_\mathrm{II}\cap\={\mathbb{C}}_\mathrm{III}\,,\quad
L_3=\={\mathbb{C}}_\mathrm{III}\cap\={\mathbb{C}}_\mathrm{IV} \,,\quad
L_4=\={\mathbb{C}}_\mathrm{I}\cap\={\mathbb{C}}_\mathrm{IV}\,,
\]
and where \begin{eqnarray} \fl \_M^+(x,t,k)= \left\{\!
\begin{array}{l}\displaystyle \left(\mu\o{2,L}, {\mu\o{3,R}\over
d(k)}\right)\,,\quad
k \in {\mathbb{C}}_\mathrm{I}\,,
\\[0.6ex]\displaystyle
\left({\mu\o{1,L}\over a^*(k^*)}, \mu\o{2,R}\right)\,,\quad
k \in {\mathbb{C}}_\mathrm{III}\,,
\end{array}\right.
\qquad
\_M^-(x,t,k)= \left\{\! \begin{array}{l}\displaystyle
\left(\mu\o{2,L}, {\mu\o{1,R} \over a(k)}\right)\,,\quad
k \in {\mathbb{C}}_\mathrm{II}\,\,,
\\[0.6ex]\displaystyle
\left({\mu\o{3,L} \over d^*(k^*)}, \mu\o{2,R}\right)\,,\quad
k \in {\mathbb{C}}_\mathrm{IV}\,.
\end{array}\right.
\nonumber
\end{eqnarray}
The jump matrices $\_J_{\!j}(k,t)$, each defined for $k\in L_j$, are:
\begin{eqnarray}
\_J_1(k,t)=\begin{pmatrix} 0 &\nu {\rm e}^{2i\theta} \Gamma^* (k^*)
\\ 0 & 0 \end{pmatrix} \,,
\qquad
\_J_2(k,t)=\begin{pmatrix} 0 & {\rm e}^{2i\theta}\gamma (k)
\\ -\nu {\rm e}^{-2i\theta}\gamma^*(k) & \nu |\gamma (k)|^2
\end{pmatrix}\,,
\nonumber
\\
\_J_3(k,t)=\begin{pmatrix} 0 & 0
\\ -{\rm e}^{-2i\theta}\Gamma (k) & 0 \end{pmatrix} \,,
\qquad
\_J_4(k,t)= \_I- (\_I-\_J_1)(\_I-\_J_2)^{-1}(\_I-\_J_3)\,,
\nonumber
\end{eqnarray}
and the reflection coefficients are
\[\fl
\gamma(k)={\nu b^*(k) \over a(k)} \,,\quad
d(k)=a(k)A^*(k^*,T)-\nu b(k)B^*(k^*,T) \,, \quad \Gamma(k)={ B(k,T)
\over a^*(k^*)d^*(k^*)}\,.
\]
Note that $d(k)$ is defined $\forall k\in\={\mathbb{C}}_{\mathrm{I}+\mathrm{II}}$,
$\Gamma(k)$ for $k\in L_3\cup L_4$ and $\gamma(k)$ $\forall
k\in{\mathbb{R}}$. Their asymptotics as $k\to\infty$ follow trivially from
those of $\_s(k)$ and $\_S(k,T)$. As a result, $\_M(x,t,k)\to\_I$ as
$k\to\infty$. Hence, in the absence of a discrete spectrum [that is,
assuming that $a(k)$ and $d(k)$ have no zero respectively for
$k\in{\mathbb{C}}_\mathrm{II}$ and $k\in{\mathbb{C}}_\mathrm{I}$], the
RHP~\eref{e:NLSIBVPRHP} is solved by Cauchy projectors: \begin{equation}
\_M^+(x,t,k)=
\_I+ \frac1{2\pi i} \mathop{\textstyle\trueint}\limits_L \_M^+(x,t,k'){\_J(k',t) \over k'-k} \,{\rm d} k'\,.
\label{e:NLSIBVPRHPsoln}
\end{equation}
Substituting the asymptotic expansion for $\_M(x,t,k)$ into the
$x$-part of the Lax pair and comparing the $(1,2)$ components,
we have
\begin{equation}
q(x,t)=-2i\lim_{k\to\infty}
k\big(\_M(x,t,k)-\_I\big)_{12}\,.
\label{e:NLSIBVPreconstructionq}
\end{equation}
Using the asymptotic expansion for $\_M(x,t,k)$ as $k\to\infty$,
from~\eref{e:NLSIBVPRHPsoln} and comparing the $(1,2)$
components, we obtain the solution of the IBVP for the NLS equation as
\begin{eqnarray}
\fl
q(x,t)=\frac1{\pi}\mathop{\textstyle\trueint}\limits_{\partial {\mathbb{C}}_\mathrm{I}} \nu
{\rm e}^{2i\theta(x,t,k')} \Gamma^*(k'^*)\_M^+_{11}(x,t,k') \,{\rm d} k'
\nonumber
\\[-1ex]
- \frac1{\pi}\mathop{\textstyle\trueint}\limits_{\!\!-\infty}^{\infty\!\!}{\rm e}^{2i\theta(x,t,k')}\gamma (k')
\_M^+_{11}(x,t,k') \,{\rm d} k'
- \frac1{\pi}\mathop{\textstyle\trueint}\limits_0^{\infty \!\!} \nu
|\gamma (-k')|^2\_M^+_{12}(x,t,-k') \,{\rm d} k'.
\label{e:NLSIBVPrepresentationq}
\end{eqnarray}
\paragraph{Linear limit.}
Suppose that $\_Q=O(\epsilon)$.
From~\eref{e:NLSLPsolutions} and~\eref{e:NLSIBVPrepresentationq}
we have
$\mu=\_I+O(\epsilon)$ and $\_M=\_I+O(\epsilon)$.
Also, \eref{e:scattdata} imply
$\gamma(k)=-\^q(2k,0)+O(\epsilon^2)$ and
$d(k)=1+O(\epsilon^2)$, as well as
\[
\Gamma^*(k^*)=\nu\big(2k\^f_0(2k,T)-i\^f_1(2k,T)\big)
+O(\epsilon^2) \,.
\]
Thus~\eref{e:NLSIBVPrepresentationq} yields, to $O(\epsilon)$,
\begin{eqnarray}
\fl
q(x,t)=\frac1{\pi} \mathop{\textstyle\trueint}\limits_{\partial {\mathbb{C}}_\mathrm{I}}
{\rm e}^{2i(k'x-2k'^2t)}\big(2k'\^f_0(2k',T)-i\^f_1(2k',T)\big) \,{\rm d} k'
+\frac1{\pi} \mathop{\textstyle\trueint}\limits_{\!\!-\infty}^{\infty\!\!}
{\rm e}^{2i(k'x-2k'^2t)}\^q(2k',0) \,{\rm d} k'\,.
\nonumber
\end{eqnarray}
Performing the change of variable $2k'=k$, we then see that,
to leading order, this expression yields exactly the
solution of the linear Schr\"odinger equation on the half line,
namely~\eref{e:IBVPreconstruction}.
\paragraph{Global relation and Dirichlet-to-Neumann map.}
Equations~\eref{e:scattdata} involve all initial and boundary data for
$\_Q(x,t)$.
These values are not all independent, however, since they satisfy
the global relation
\begin{eqnarray}
\fl
\mathop{\textstyle\trueint}\limits_0^T \esh{2ik^2t}\big(\_H(0,t,k)\mu(0,t,k)\big)\,{\rm d} t
+ \esh{2ik^2T}\mathop{\textstyle\trueint}\limits_0^\infty \esh{-ikx}\big(\_Q(x,T)\mu(x,T,k)\big)\,{\rm d} x
\nonumber\\[-2ex]\kern14em
= \mathop{\textstyle\trueint}\limits_0^\infty \esh{-ikx}\big(\_Q(x,0)\mu(x,0,k)\big)\,{\rm d} x\,.
\label{e:NLSglobal}
\end{eqnarray}
When~\eref{e:NLSglobal} is evaluated with $\mu\equiv\mu\o2(x,t,k)$,
its first column is defined $\forall k\in\={\mathbb{C}}_{\mathrm{I}+\mathrm{II}}$,
its second column $\forall k\in\={\mathbb{C}}_{\mathrm{III}+\mathrm{IV}}$.
Moreover, when $\mu(x,t,k)=\mu\o2(x,t,k)$,
the RHS of~\eref{e:NLSglobal} equals~$\_I-\_s(k)$.
Using~\eref{e:NLSIBVPscattering2} in the LHS,
one then obtains a relation between the
scattering matrices:
\begin{eqnarray}
\_S^{-1}(k,T)\_s(k)= \_I - \esh{2ik^2T}\_G(k,T)\,,
\label{e:NLSglobal2}
\\
\noalign{\noindent where} \_G(k,t)= \mathop{\textstyle\trueint}\limits_0^\infty
\esh{-ikx}\big(\_Q(x,t)\mu\o2(x,t,k)\big)\,{\rm d} x\,, \nonumber \end{eqnarray}
and $\_G_L(k,t)$ and $\_G_R(k,t)$ are analytic respectively
for $k\in{\mathbb{C}}_{\mathrm{I}+\mathrm{II}}$ and $k\in{\mathbb{C}}_{\mathrm{III}+\mathrm{IV}}$,
and continuous and bounded on the boundary of these domains.
In particular,
for $k\in{\mathbb{C}}_{\mathrm{III}+\mathrm{IV}}$ we have
\[
A^*(k^*,T)b^*(k^*)-B^*(k^*,T)a^*(k^*)=
-\nu{\rm e}^{4ik^2T}\mathop{\textstyle\trueint}\limits_0^\infty {\rm e}^{-2ikx}q(x,T)\,\mu\o2_{22}(x,T,k)\,{\rm d} x\,.
\]
Since the integral term in the RHS is $O(1/k)$
as $k\to\infty$ in ${\mathbb{C}}_\mathrm{III}$, integrating along $\partial{\mathbb{C}}_\mathrm{III}$
we obtain the following integral relation:
\begin{equation}
\mathop{\textstyle\trueint}\limits_{\partial{\mathbb{C}}_{\mathrm{III}}}
k\,{\rm e}^{-4ik^2t}\big( B^*(k^*)-r(k^*)A^*(k^*)\big) \,{\rm d} k =0\,,
\label{e:NLSGRintegral}
\end{equation}
where $r(k)=b(k)/a(k)$.
As shown in \cite{CPAM58p639}, this relation can be solved to obtain the
Dirichlet-to-Neumann map, which expresses the unknown boundary datum
$q_x(0,t)$ in terms of the known one, $q(0,t)$.
\paragraph{Linearizable BCs and soliton solutions.}
One can write $\_S(k,t)=
\~\Phi^{-1}(k,t)\,{\rm e}^{-2ik^2t\sigma_3}$,
where $\~\Phi(k,t)= \Phi\o1(0,t,k)$ solves the $t$-part of the Lax
pair~\eref{e:NLSLP2} for $x=0$, namely
\begin{equation}
\~\Phi_t + 2ik^2\sigma_3\~\Phi= \_H(0,t,k)\,\~\Phi\,,
\label{e:Phitilde}
\end{equation}
with $\~\Phi(k,0)=\_I$.
The matrix $\~\Phi(-k,t)$ solves an equation identical to~\eref{e:Phitilde}
except that $\_H(0,t,k)$ is replaced by $\_H(0,t,-k)$.
If there is an invertible time-independent matrix $\_N(k)$ such that
\begin{equation}
\_N(k)\,\big(2ik^2\sigma_3-\_H(0,t,k)\big)=
\big(2ik^2\sigma_3-\_H(0,t,-k)\big)\,\_N(k)\,,
\label{e:NLSNdef}
\end{equation}
it then is easy to see that
$\~\Phi(-k,t)= \_N(k)\,\~\Phi(k,t)\,\_N^{-1}(k)\,$.
One can show that
a suitable matrix $\_N(k)$ only exists for homogeneous Robin BCs,
namely,
\[
q_x(0,t)-\chi q(0,t)=0\,,
\]
with $\chi\in{\mathbb{R}}$ arbitrary.
In that case, \eref{e:NLSNdef} implies $N_{12}=N_{21}=0$ and
$N_{11}=f(k)\,N_{22}$, where
$f(k)=-(2ik-\chi)/(2ik+\chi)\,$, which in turn imply
$A^*(k^*,T)=A^*(-k^*,T)$ and $B^*(k^*,T)=f(k)B^*(-k^*,T)$. From
here, similar arguments to those used in the discrete problem can be
applied to the analysis of linearizable BCs.
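The algebra behind $f(k)$ can be verified symbolically. The following sketch (sympy; $q_0$ stands for $q(0,t)$, and the Robin BC is imposed through $q_x(0,t)=\chi q(0,t)$) checks that $\_N(k)={\rm diag}(f(k),1)$ satisfies~\eref{e:NLSNdef}:
\begin{verbatim}
# Symbolic check (sympy) that N(k) = diag(f(k), 1) satisfies the
# intertwining relation above when q_x(0,t) = chi q(0,t); q0 stands
# for q(0,t) and nu is kept symbolic.
import sympy as sp

k, chi, nu = sp.symbols('k chi nu', real=True)
q0 = sp.symbols('q0')                        # q(0,t), complex
p0 = nu * sp.conjugate(q0)
qx, px = chi * q0, chi * p0                  # Robin BC at x = 0
s3 = sp.diag(1, -1)
H = sp.Matrix([[-sp.I * q0 * p0, sp.I * qx - 2 * k * q0],
               [-sp.I * px - 2 * k * p0, sp.I * q0 * p0]])
f = -(2 * sp.I * k - chi) / (2 * sp.I * k + chi)
N = sp.diag(f, 1)
lhs = N * (2 * sp.I * k**2 * s3 - H)
rhs = (2 * sp.I * k**2 * s3 - H.subs(k, -k)) * N
print(sp.simplify(lhs - rhs))                # zero matrix
\end{verbatim}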
As in the discrete case, the poles for the IBVP occur at the zeros of
$a(k)$ in ${\mathbb{C}}_\mathrm{II}$ and those of $d(k)$ in ${\mathbb{C}}_\mathrm{I}$, plus
their complex conjugates in ${\mathbb{C}}_\mathrm{III}$ and ${\mathbb{C}}_\mathrm{IV}$
\cite{NLTY18p1771}.
Each of these pairs of zeros, by itself, generates the well-known
one-soliton solution of NLS:
\begin{equation}
q(x,t)= 2\eta{\rm e}^{2i\xi x-4i(\xi^2-\eta^2)t +i(\phi-\pi/2)}
\mathop{\rm sech}\nolimits(2\eta x-8\xi\eta t-2\delta)\,,
\end{equation}
where $k_1=\xi+i\eta$ is the zero of $a(k)$ or of $d(k)$, and
$C_1= 2\eta\,{\rm e}^{2\delta+i\phi}$ is the norming constant
(see \cite{NLTY18p1771} for further details).
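As a sanity check, the sketch below verifies by central finite differences (illustrative parameter values) that this expression satisfies $iq_t+q_{xx}+2|q|^2q=0$, i.e.\ the $\nu=1$ reduction of~\eref{e:NLSsystem}:
\begin{verbatim}
# Finite-difference residual check of the one-soliton formula above
# (illustrative parameters; nu = 1 reduction, i q_t + q_xx + 2|q|^2 q = 0).
import numpy as np

xi, eta, delta, phi = 0.3, 0.8, 0.1, 0.4

def q(x, t):
    Z = 2*eta*x - 8*xi*eta*t - 2*delta
    th = 2*xi*x - 4*(xi**2 - eta**2)*t + (phi - np.pi/2)
    return 2*eta*np.exp(1j*th) / np.cosh(Z)

x0, t0, h = 0.37, 0.21, 1e-4
q_t  = (q(x0, t0 + h) - q(x0, t0 - h)) / (2*h)
q_xx = (q(x0 + h, t0) - 2*q(x0, t0) + q(x0 - h, t0)) / h**2
res = 1j*q_t + q_xx + 2*abs(q(x0, t0))**2 * q(x0, t0)
print(abs(res))        # small (~1e-7, finite-difference error)
\end{verbatim}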
\section{Conclusion}
\label{s:conclusion}
In conclusion, we have demonstrated a method to solve initial-boundary
value problems for linear and integrable nonlinear discrete evolution
equations.
We have done so by solving the IBVP for the discrete linear Schr\"odinger
(DLS) and integrable discrete nonlinear Schr\"odinger (IDNLS) equations
on the natural numbers.
Moreover, we have illustrated the similarities and differences between
the method for differential-difference equations and PDEs
by showing explicitly the correspondence between the discrete problem
and its continuum limit.
While the differential-form representation of the continuum case is lost,
the essential ideas of the method carry over to the discrete case,
although the actual implementation of the method presents some
additional difficulties.
In particular,
the jump location in the nonlinear case differs because of the rescaling
$z'=z^2$ in the dispersion relation $\omega(z)$ when going from the
linear to the nonlinear case.
This is a significant difference from the continuum limit, where the jumps
in the nonlinear case are given by the union of those for the linear
problem and its adjoint
(cf.\ sections~\ref{s:IDNLS} and~\ref{s:continuum}).
Also, the limit $k\to\infty$ in the continuum becomes
$z\to0$ (for $\mathop{\rm Im}\nolimits k>0$) and $z\to\infty$ (for $\mathop{\rm Im}\nolimits k<0$) in the
discrete problem.
As a consequence, the behavior of the eigenfunctions and spectral
data as $z\to0$ in the discrete problem must also be studied
in addition to that as $z\to\infty$.
This is why the point $z=0$ plays such a special role
in the discrete problem, similarly to Ref.~\cite{IP23p1711},
and is one of the reasons why discrete problems are more complicated
than their continuum counterparts.
For the DLS, in addition to solving the IBVP with Dirichlet-type
BCs we have shown that, contrary to Fourier series approaches,
the method can deal with more complicated kinds of BCs
just as effectively.
For the IDNLS, in addition to solving the IBVP (showing explicitly
how to eliminate the unknown boundary datum),
we have characterized the linear limit, the linearizable BCs
(showing how they fit within the IST framework),
and we have obtained explicitly the soliton solutions.
It should be clear that, similarly to the continuum,
the method can be generalized to solve IBVPs for both the DLS and
IDNLS equations defined on a finite set of integers.
It would also be straightforward to generalize
this method to any discrete linear evolution equation and to
other integrable discrete nonlinear evolution equations.
Several interesting questions can now be effectively addressed
using the present method.
For example, one can use the expression for the solution to study its
long-time asymptotics,
using the Deift-Zhou method \cite{BAMS26p119},
or to study the ``small dispersion'' or ``anti-continuum'' limit
(i.e., the limit $h\to\infty$),
e.g., using the Deift-Venakides-Zhou method~\cite{IMRN6p286}.
Doing so is a nontrivial task, however,
which is beyond the scope of this work.
\section*{Acknowledgements}
It is a pleasure to thank Mark Ablowitz, Athanassios Fokas,
Beatrice Pelloni and Barbara Prinari for many insightful discussions.
This work was partially supported by the National Science Foundation
under grant number DMS-0506101.
\setcounter{section}0
\def\thesection{Appendix~\Alph{section}}
\def\thesubsection{\Alph{section}.\arabic{subsection}}
\def\numparts{\refstepcounter{equation}%
\setcounter{eqnval}{\value{equation}}%
\setcounter{equation}{0}%
\def\theequation{\Alph{section}.\arabic{eqnval}{\it\alph{equation}}}}
\def\endnumparts{\def\theequation{\Alph{section}.\arabic{equation}}%
\setcounter{equation}{\value{eqnval}}}
\def\theequation{\Alph{section}.\arabic{equation}}
\section{Notation and frequently used formulae}
\label{s:notations}
We denote the closure, interior and boundary of
a domain $D$ respectively by $\=D$, $D^o$ and $\partial D$,
where as usual $\partial D$ is oriented so as to leave $D$ to its left.
We also occasionally refer to punctured regions of the complex plane,
which we denote as $R^{\,[\raise0.08ex\hbox{\scriptsize$\slash$}\kern-0.34em0]}=R{-}\{0\}$.
As usual, $[\_A,\_B]=\_A\_B-\_B\_A$ is the commutator of two matrices
$\_A$ and $\_B$.
We use a superscript asterisk to denote
the complex conjugate $z^*$ of a complex number~$z$,
and $|z|^2=z^*z$.
Throughout, ${\mathbb{R}}^+=\{x\in{\mathbb{R}}:x>0\}$ and ${\mathbb{R}}^+_0={\mathbb{R}}^+\cup\{0\}$.
Similarly,
${\mathbb{N}}=\{1,2,3,\dots\}$ and ${\mathbb{N}}_0={\mathbb{N}}\cup\{0\}$.
Finally, we denote by ${\mathbb{C}}_\mathrm{I},\dots,{\mathbb{C}}_\mathrm{IV}$ the first,
second, third and fourth quadrants of the complex plane:
${\mathbb{C}}_\mathrm{I}=\{k\in{\mathbb{C}}:\mathop{\rm Re}\nolimits k>0\,\wedge\,\mathop{\rm Im}\nolimits k>0\}$, etc.
Similarly,
we denote by ${\mathbb{C}}_{\mathrm{I}+\mathrm{II}}=\{k\in{\mathbb{C}}:\mathop{\rm Im}\nolimits k>0\}$ and
${\mathbb{C}}_{\mathrm{III}+\mathrm{IV}}=\{k\in{\mathbb{C}}:\mathop{\rm Im}\nolimits k<0\}$
the upper-half and lower-half planes, respectively.
The nonlinear Schr\"odinger (NLS) equation~\eref{e:NLS} is a reduction
of the system
\numparts
\label{e:NLSsystem}
\begin{eqnarray}
iq_t + q_{xx} + 2q^2p=0\,,
\\
-ip_t + p_{xx} + 2p^2q=0\,.
\end{eqnarray}
\endnumparts
That is, \eref{e:NLS} follows by imposing $p(x,0)=\nu q^*(x,0)$
in~\eref{e:NLSsystem}, which then implies that $p(x,t)=\nu q^*(x,t)$
$\forall t>0$ and $q(x,t)$ is a solution of~\eref{e:NLS}.
A Lax pair for~\eref{e:NLSsystem} is given by:
\numparts
\label{e:NLSLP}
\begin{eqnarray}
\Phi_x - ik\sigma_3\Phi = \_Q\,\Phi\,,
\label{e:NLSLP1}
\\
\Phi_t + 2ik^2\sigma_3\Phi= \_H\,\Phi\,,
\label{e:NLSLP2}
\end{eqnarray}
\endnumparts
where $\Phi(x,t,k)$ is either a 2-component vector or a $2\times2$ matrix,
and where
\numparts
\label{e:NLSLPpotentials}
\begin{eqnarray}
\sigma_3= \begin{pmatrix}1 &0\\0&-1\end{pmatrix},\qquad
\_Q(x,t)= \begin{pmatrix}0 &q\\p&0\end{pmatrix},
\label{e:sigma3Q}
\\
\_H(x,t,k)=
i\sigma_3(\_Q_x - \_Q^2)-2k\_Q=
\begin{pmatrix} -iqp& \!\!\!iq_x-2kq\\ -ip_x-2kp &iqp\end{pmatrix}.
\end{eqnarray}
\endnumparts
(The present pair differs from that in Ref.~\cite{CMP230p1} by
the rescaling $k\to-k$,
and from that in Ref.~\cite{APT2003} by $k\to-k$ and $t\to-t$.)
Similarly, the integrable discrete NLS equation~\eref{e:IDNLS} is a
reduction of the system of differential-difference equations
\numparts
\label{e:AL}
\begin{eqnarray}
i\.q_n+ (q_{n+1}-2q_n+q_{n-1})-q_np_n(q_{n+1}+q_{n-1})=0\,,
\\
i\.p_n+ (p_{n+1}-2p_n+p_{n-1})-p_nq_n(p_{n+1}+p_{n-1})=0\,.
\end{eqnarray}
\endnumparts
That is, imposing $p_n(0)=\nu\,q_n^*(0)$ on~\eref{e:AL}
yields~$p_n(t)=\nu\,q_n^*(t)$ $\forall t>0$,
with $q_n(t)$ satisfying~\eref{e:IDNLS}.
In the literature,
the name Ablowitz-Ladik (AL) is associated to both~\eref{e:IDNLS}
and \eref{e:AL}.
To avoid confusion, here we
will simply refer to~\eref{e:IDNLS} as the IDNLS equation,
reserving the name AL for the more general system~\eref{e:AL}.
A Lax pair for the AL system~\eref{e:AL} is:
\numparts
\label{e:ALLP}
\begin{eqnarray}
\Phi_{n+1} - \_Z \Phi_n = \_Q_n\,\Phi_n\,,
\label{e:ALLP1}
\\
\.\Phi_n - \txtfrac i2(z-1/z)^2\sigma_3\,\Phi_n = \_H_n\,\Phi_n\,,
\label{e:ALLP2}
\end{eqnarray}
\endnumparts
where $\Phi_n(z,t)$ is either a two-component column vector
or a $2\times2$ matrix,
and where
\numparts
\label{e:QH}
\begin{eqnarray}
\_Z= {\rm e}^{\sigma_3\,\log z}= \begin{pmatrix}z &0\\ 0 &1/z\end{pmatrix},
\qquad
\_Q_n(t)= \begin{pmatrix} 0 & q_n \\ p_n & 0 \end{pmatrix},
\label{e:ZQn}
\\
\_H_n(z,t)=
i\sigma_3\big( \_Q_n\_Z^{-1}\! - \_Q_{n-1}\_Z - \_Q_n\_Q_{n-1}\big)
= i\begin{pmatrix} -q_np_{n-1} &zq_n-q_{n-1}/z\\
zp_{n-1}-p_n/z &p_nq_{n-1} \end{pmatrix}. \label{e:defHn}
\nonumber\\[0ex]\kern12em{ }
\end{eqnarray}
\endnumparts
In sections~\ref{s:IDNLS} and~\ref{s:continuum}
we make frequent use of the integrating factors
\begin{eqnarray}
\^{\_Z}(\_A)=\_Z\,\_A\,\_Z^{-1}=
\begin{pmatrix}a_{11}&z^2a_{12}\\a_{21}/z^2&a_{22}\end{pmatrix},
\qquad
\^\sigma_3\_A=
\begin{pmatrix}a_{11}&-a_{12}\\-a_{21}&a_{22}\end{pmatrix},
\\
\esh{i\theta}(\_A)={\rm e}^{i\theta\sigma_3}\_A\,{\rm e}^{-i\theta\sigma_3}=
\begin{pmatrix}a_{11}&{\rm e}^{2i\theta}a_{12}\\{\rm e}^{-2i\theta}a_{21}&a_{22}\end{pmatrix}\,.
\label{e:ehs}
\end{eqnarray}
For any matrix $\_A$, we write $\_A=(\_A\o{L},\_A\o{R})$,
where the superscripts $L$~and~$R$ (left and right)
denote respectively the first and second column of $\_A$.
We also write $\_A= \_A_D+\_A_O$, where $\_A_D$ and $\_A_O$
denote respectively the diagonal and off-diagonal part of~$\_A$.
Note that
\numparts
\label{e:AOD}
\begin{eqnarray}
(\_A\mu)_D=\_A_D\mu_D + \_A_O\mu_O \,,\quad
(\_A\mu)_O=\_A_O\mu_D+\_A_D\mu_O \,,
\\
(\_Q\mu)_D=\_Q\mu_O \,, \qquad (\_Q\mu)_O=\_Q\mu_D \,,
\end{eqnarray}
\endnumparts
and in particular
\begin{eqnarray}
\_H_{n,D}(z,t)= -i\sigma_3\_Q_n\_Q_{n-1}\,,\quad
\_H_{n,O}(z,t)= i\,\big(\_Z\sigma_3\_Q_n+\_Q_{n-1}\_Z\sigma_3\big)\,.
\label{e:HOD}
\end{eqnarray}
Note also that $\_Z\_A_O=\_A_O\_Z^{-1}$ and $\sigma_3\_A_O= -\_A_O\sigma_3$.
The ``involution symmetry'' of the scattering problems of
NLS and IDNLS is expressed through the matrix
\numparts
\begin{eqnarray}
\sigma_\nu= \begin{pmatrix}0 &1\\\nu&0\end{pmatrix}.
\label{e:sigmanudef}
\end{eqnarray}
That is, when $p(x,t)=\nu q^*(x,t)$ in~\eref{e:sigma3Q},
or $p_n(t)=\nu q_n^*(t)$ in~\eref{e:ZQn}, it is, respectively:
\begin{eqnarray}
\sigma_\nu \_Q^*= \_Q\sigma_\nu\,,\qquad
\sigma_\nu \_Q_n^*= \_Q_n\sigma_\nu\,.
\end{eqnarray}
\endnumparts
Note also that
$\sigma_\nu\_Z= \_Z^{-1}\sigma_\nu$,
$\sigma_\nu\sigma_3= -\sigma_3\sigma_\nu$, and
$\sigma_\nu^{-1}=\sigma_\nu^t=\nu\sigma_\nu$.
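Since these identities are used repeatedly, a quick symbolic check may be helpful (a sketch using sympy; $z$ and $\nu$ are symbolic, and $\nu=\pm1$ is substituted where $\nu^2=1$ is needed):
\begin{verbatim}
# Quick symbolic check (sympy) of the sigma_nu identities above.
import sympy as sp

z, nu = sp.symbols('z nu', nonzero=True)
Z = sp.diag(z, 1/z)
s3 = sp.diag(1, -1)
sv = sp.Matrix([[0, 1], [nu, 0]])
print(sp.simplify(sv*Z - Z.inv()*sv))      # zero matrix
print(sp.simplify(sv*s3 + s3*sv))          # zero matrix
for val in (1, -1):                        # nu = +1 or -1
    svn = sv.subs(nu, val)
    print(svn.inv() - val*svn, svn.T - val*svn)   # zero matrices
\end{verbatim}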
When discussing the asymptotic behavior of the eigenfunctions,
the behavior of the matrix product $\_A\_Z$
motivates the following definitions:
for any matrix $\_A=(\_A\o L,\_A\o R)$, we write
$\_A=O(\_Z^m)$ as $z\to(0,\infty)$ if $\_A\o L=O(z^m)$ as $z\to 0$
and $\_A\o R=O(1/z^m)$ as $z\to \infty$. Similarly, we write
$\_A=O(\_Z^m)$ as $z\to(\infty,0)$ if $\_A\o L=O(z^m)$ as $z\to
\infty$ and $\_A\o R=O(1/z^m)$ as $z\to 0$.
\section{Spectral analysis of the $t$-part of the Lax pair of the DLS}
\label{e:ztransfinverse}
The inversion formulae for the spectral functions~\eref{e:DLSztransforms}
in the linear problem can be obtained by performing
spectral analysis of the individual parts of the Lax
pair~\eref{e:LaxpairL}.
The first of~\eref{e:LSinvztransf} can be
derived from similar steps as in section~\ref{s:1.3}.
As for the second of~\eref{e:LSinvztransf},
consider the following spectral problem
\begin{equation}
\mu_t+i\omega(z)\mu = f(t)\,,
\label{e:simpleLPt}
\end{equation}
where $\omega(z)=2-(z+1/z)$.
The Jost solutions are easily obtained,
and are:
\begin{eqnarray}
\mu\o1(z,t)=\mathop{\textstyle\trueint}\limits_0^t {\rm e}^{-i\omega(z)(t-t')}f(t')\,{\rm d} t'\,,
\qquad
\mu\o2(z,t)=-\mathop{\textstyle\trueint}\limits_t^T {\rm e}^{-i\omega(z)(t-t')}f(t')\,{\rm d} t'\,.
\nonumber
\end{eqnarray}
Note that $\mu\o1$ and $\mu\o2$ are analytic for $z\notin D_+$ and
$z\in D_+$, respectively, where $D_+$ is the same as in
section~\ref{s:1.4}. Also, the jump condition is
\begin{eqnarray} \mu\o1-\mu\o2={\rm e}^{-i\omega(z)t}\^f(z,T)\,,\quad z\in \partial
D_+\,, \label{e:RHPsimpleLPt}
\\
\noalign{\noindent where}
\^f(z,t)=\mathop{\textstyle\trueint}\limits_0^t {\rm e}^{i\omega(z)t'}f(t')\,
{\rm d} t'\,.
\nonumber
\end{eqnarray}
Using integration by parts, one can show that $\mu\o1$ and $\mu\o2$ are $O(1/z)$ as
$z\to\infty$ in their corresponding domains. Hence the solution of
the RHP~\eref{e:RHPsimpleLPt} is given by
\[
\mu(z,t)=\frac1{2\pi i} \mathop{\textstyle\trueint}\limits_{\partial D_+}
{{\rm e}^{-i\omega(\zeta)t}\^f(\zeta,T) \over \zeta -z}\,{\rm d}\zeta\,.
\]
Substituting this
into~\eref{e:simpleLPt},
we then find the reconstruction formula
\[
f(t)=-\frac1{2\pi}\mathop{\textstyle\trueint}\limits_{\partial D_+}{\omega(\zeta)-\omega(z)
\over \zeta -z} {\rm e}^{-i\omega(\zeta)t}\^f(\zeta,T)\,{\rm d}\zeta\,.
\]
Recall that $\partial D_+=\partial D_{+\#in}\cup\partial D_{+\#out}$.
Also note that $\partial D_{+\#in}$ can be deformed to $\partial D_{+\#out}$
by letting $z\to 1/z$,
and $\omega(z)$ and $\^f(z,t)$ are invariant under this transformation.
After some algebra, we then obtain
\[
f(t)=\frac1{2\pi}\mathop{\textstyle\trueint}\limits_{\partial D_{+\#out}}\bigg( \frac1{z^2}-1\bigg)\,{\rm e}^{-i\omega(z)t}\^f(z,T)\,{\rm d} z\,.
\]
Replacing $f(t)$ by $q_n(t)$, we finally obtain
the second of~\eref{e:LSinvztransf}.
Both of~\eref{e:LSinvztransf} could also be obtained by more
direct methods.
The first of~\eref{e:LSinvztransf} of course just defines
the coefficients of the principal part in the Laurent expansion
of $\^q(z,t)$.
As for the second of~\eref{e:LSinvztransf}, it can be obtained
as follows. Define $\~q_n(t)$ to be the function which equals $q_n(t)$
for $0\le t\le T$ and is 0 otherwise. Also, let\, $\~Q(\omega)=
\mathop{\textstyle\trueint}\limits\nolimits_{-\infty}^\infty {\rm e}^{i\omega t}\~q_n(t)\,{\rm d} t$\, be
its Fourier transform. Then, for all $0<t<T$ it is $q_n(t)=
(1/2\pi)\mathop{\textstyle\trueint}\limits\nolimits_{-\infty}^\infty {\rm e}^{-i\omega
t}\~Q(\omega)\,{\rm d}\omega$. Note however that the transformation
$z\to\omega(z)$ maps $\partial D_{+\#out}$ onto the real $\omega$-axis,
with $\omega(z)$ decreasing monotonically as $\mathop{\rm Re}\nolimits\,z$ increases.
Moreover, $\~Q(\omega(z))= \^f_n(z,T)$. Hence we can rewrite the
previous integral as $q_n(t)= (1/2\pi)\mathop{\textstyle\trueint}\limits\nolimits_{\partial D_{+\#out}}
\omega'(z)\,{\rm e}^{-i\omega(z)t}\^f_n(z,T)\,{\rm d} z$.
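This inversion is easy to test numerically. The sketch below (illustrative datum $f(t)=\sin(3t)\,{\rm e}^{-t}$ and $T=2$; the transform is evaluated in closed form) verifies the real-axis form $(1/2\pi)\int_{-\infty}^\infty{\rm e}^{-i\omega t}\~Q(\omega)\,{\rm d}\omega=f(t)$ for $0<t<T$ by quadrature.
\begin{verbatim}
# Numerical check of the Fourier inversion used above (illustrative
# datum f(t) = sin(3t) e^{-t}, T = 2; fhat is computed in closed form).
import numpy as np

T, t0 = 2.0, 0.7
f = lambda t: np.sin(3*t) * np.exp(-t)

def fhat(w):                       # int_0^T e^{i w t} f(t) dt
    a1, a2 = 1j*w - 1 + 3j, 1j*w - 1 - 3j
    return ((np.exp(a1*T) - 1)/a1 - (np.exp(a2*T) - 1)/a2) / 2j

w = np.linspace(-500.0, 500.0, 200001)
rec = np.trapz(np.exp(-1j*w*t0) * fhat(w), w) / (2*np.pi)
print(abs(rec - f(t0)))            # small (quadrature/truncation error)
\end{verbatim}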
\section{IBVPs for DLS with Robin-type boundary conditions}
\label{s:Robin}
Consider the DLS equation~\eref{e:DLS} for $n\in{\mathbb{N}}_0$ and
$t\in{\mathbb{R}}^+$ with mixed BCs.
The spectral transform of~\eref{e:DLSRobinBC} yields,
$\forall z\in{\mathbb{C}}^{\,[\raise0.08ex\hbox{\scriptsize$\slash$}\kern-0.34em0]}$,
\begin{equation}
\^f_{-1}(z,t) - \alpha\^f_0(z,t)= \^h(z,t)\,,
\label{e:DLSRobinBCtransf}
\end{equation}
where the $\^f_j(z,t)$ are given by~\eref{e:DLSztransforms},
and $\^h(z,t)$ is defined similarly.
Recall that the reconstruction formula~\eref{e:IBVPsoln}
contains the quantity $\^F(z,t)= i(z\^f_0(z,t)-\^f_{-1}(z,t))$.
Use of~\eref{e:DLSRobinBCtransf} and the transformed
global relation~\eref{e:fm1LS} allows one to eliminate $\^f_0(z,t)$ and
$\^f_{-1}(z,t)$ and express $\^F(z,t)$, for all $0<|z|\le1$, as
\begin{equation}
\^F(z,t)= \frac{\^G(z,t)}{1/z-\alpha}
- \frac{z-\alpha}{1/z-\alpha}{\rm e}^{i\omega(z)t}\^q(1/z,t)\,,
\label{e:DLSRobinF} \end{equation} where $\^G(z,t)$, which contains the known
portion of the RHS, was given in~\eref{e:DLSRobinGdef}. Now recall
that, in~\eref{e:IBVPsoln}, $\^F(z,t)$ is integrated along $\partial
D_{\!+\#in}$. Three possible situations can arise: (i)~$\alpha\in
D_{\!+\#out}$, (ii)~$\alpha\in\partial D_{\!+\#out}$,
(iii)~$\alpha\notin\=D_{\!+\#out}$. We discuss each of these cases
in turn, starting with the last.
If $\alpha\notin\=D_{\!+\#out}$, the denominator
of~\eref{e:DLSRobinF} never vanishes in $\=D_{\!+\#in}$. Thus the
second part of the RHS of~\eref{e:DLSRobinF}, when inserted
in~\eref{e:IBVPsoln}, gives rise to an integrand that is analytic
and bounded in $\=D_{\!+\#in}$. Hence, that part of the integral is
zero. As a result, the solution of the IBVP is simply \begin{eqnarray} \fl
q_n(t)= \frac1{2\pi i}\!\mathop{\textstyle\trueoint}\limits_{|z|=1}
z^{n-1}{\rm e}^{-i\omega(z)t}\,\^q(z,0)\,{\rm d} z
- \frac1{2\pi i}\!\mathop{\textstyle\trueint}\limits_{\partial D_{\!+\#in}} z^{n-1}{\rm e}^{-i\omega(z)t}\,
\frac{\^G(z,T)}{1/z-\alpha}\,{\rm d} z\,,
\label{e:IBVPsolnRobin1} \end{eqnarray} with $\^G(z,t)$ again given
by~\eref{e:DLSRobinGdef}. Now suppose $\alpha\in D_{\!+\#out}$. In
this case $1/z-\alpha$ vanishes at $z=1/\alpha\in D_{\!+\#in}$. Even
though each of the two terms in the RHS of~\eref{e:DLSRobinF} has a
simple pole at this point, their sum is finite there, since
$\^F(z,t)$ is analytic in ${\mathbb{C}}^{\,[\raise0.08ex\hbox{\scriptsize$\slash$}\kern-0.34em0]}$. Thus, \begin{eqnarray} \fl
\frac1{2\pi i}\!\mathop{\textstyle\trueint}\limits_{\partial D_{\!+\#in}}
\!z^{n-1}\,\frac{z-\alpha}{1/z-\alpha}\^q(1/z,t)\,{\rm d} z=
\mathop{\rm Res}\limits_{z=1/\alpha}\bigg[
z^{n-1}\frac{z-\alpha}{1/z-\alpha}\^q(1/z,t)\bigg]
\nonumber\\[-1ex]\kern7.6em{ }
= \mathop{\rm Res}\limits_{z=1/\alpha}\bigg[z^{n-1}{\rm e}^{-i\omega(z)t}\frac{\^G(z,t)}{1/z-\alpha}\bigg]
= -\alpha^{-n-1}{\rm e}^{-i\omega(\alpha)t}\^G(1/\alpha,t)\,,
\nonumber
\end{eqnarray}
which yields the solution of the IBVP as
\begin{eqnarray}
\fl
q_n(t)= \frac1{2\pi i}\!\mathop{\textstyle\trueoint}\limits_{|z|=1}
z^{n-1}{\rm e}^{-i\omega(z)t}\,\^q(z,0)\,{\rm d} z
- \frac1{2\pi i}\!\mathop{\textstyle\trueint}\limits_{\partial D_{\!+\#in}} z^{n-1}{\rm e}^{-i\omega(z)t}\,
\frac{\^G(z,T)}{1/z-\alpha}\,{\rm d} z
-\alpha^{-n-1}{\rm e}^{-i\omega(\alpha)t}\^G(1/\alpha,t)\,.
\nonumber\\[-2ex]
\label{e:IBVPsolnRobin2}
\end{eqnarray}
Finally, if $\alpha\in\partial D_{\!+\#out}$, the pole is along
the integration contour.
In this case one should go back to the RHP and subtract the pole contribution.
In this way, the solution of the IBVP can be obtained as
\begin{eqnarray}
\fl
q_n(t)= \frac1{2\pi i}\!\mathop{\textstyle\trueoint}\limits_{|z|=1}
z^{n-1}{\rm e}^{-i\omega(z)t}\,\^q(z,0)\,{\rm d} z
- \frac1{2\pi i}\!\mathop{\textstyle\trueint}\limits_{\partial D_{\!+\#in}} z^{n-1}{\rm e}^{-i\omega(z)t}\,
\frac{\^G(z,T)}{1/z-\alpha}\,{\rm d} z
-\frac12\alpha^{-n-1}{\rm e}^{-i\omega(\alpha)t}\^G(1/\alpha,t)\,.
\nonumber\\[-2ex]
\label{e:IBVPsolnRobin3}
\end{eqnarray}
Combining \eref{e:IBVPsolnRobin1}, \eref{e:IBVPsolnRobin2}
and \eref{e:IBVPsolnRobin3} one then obtains~\eref{e:DLSRobinsoln}.
\section{Asymptotic behavior of the eigenfunctions of the IBVP}
\label{s:asymptotics}
\paragraph{DLS.}
We first compute the asymptotics for $n=0$
(where no summation is present),
then consider the case $n\ge1$.
Note that $\omega(z)= -1/z+2+O(z)$ as $z\to0$.
Integration by parts yields, as $z\to0$
with $\mathop{\rm Im}\nolimits z\le0$,
\begin{eqnarray}
\phi_0\o1(z,t)
= q_{-1}(t)-{\rm e}^{-i\omega(z)t}q_{-1}(0)+O(z)\,,
\nonumber
\\
\noalign{\noindent while as $z\to0$ with $\mathop{\rm Im}\nolimits z\ge0$ it is}
\phi_0\o3(z,t)=
q_{-1}(t)-{\rm e}^{i\omega(z)(t-T)}q_{-1}(T)+O(z)\,.
\nonumber
\end{eqnarray}
Using these in~\eref{e:PhiIBVP} with $n\ge1$ we then have immediately
$\phi_n\o{j}(z,t)= q_{n-1}(t)+O(z)$ as $z\to0$ with $\mathop{\rm Im}\nolimits z\le0$ for $j=1$
and $\mathop{\rm Im}\nolimits z\ge0$ for $j=3$.
Note also that
$\phi_0\o1(z,t)-\phi_0\o3(z,t)= -{\rm e}^{-i\omega(z)t}
\big(q_{-1}(0)-{\rm e}^{i\omega(z)T}q_{-1}(T)\big)+O(z)$ as $z\to0$,
implying that the ratio $\^F(z,T)/z$ in~\eref{e:Phijumps13} remains
bounded as $z\to0$ along the real axis.
As for $\phi_n\o2(z,t)$,
\eref{e:PhiIVPL} implies immediately $\phi_n\o2(z,t)= O(1/z)$
as $z\to\infty$.
\paragraph{IDNLS.}
\label{s:asymptoticsdiscrete}
The determination of the asymptotic behavior in the nonlinear case
is considerably more involved, and requires the use of a Neumann
series approach:
\begin{equation}
\mu_n\o{j}(z,t)=\mathop{\textstyle\truesum}\limits_{m=0}^{\infty}\mu_n\o{j,m}(z,t)\,.
\label{e:Nseriesmunj}
\end{equation}
We now show that, $\forall n\in{\mathbb{N}}_0$, $m\ge 0$ and $j=1,3$,
as $z\to (\infty, 0)$ it is
\numparts
\label{e:NeumannAsympmunj}\begin{eqnarray}
\mu_{n,D}\o{j,2m-1}(z,t)=O(\_Z^{-2m})\,,\qquad
\mu_{n,O}\o{j,2m-1}(z,t)=O(\_Z^{-2m+1})\,,
\\
\mu_{n,D}\o{j,2m}(z,t)=O(\_Z^{-2m})\,,\qquad
\mu_{n,O}\o{j,2m}(z,t)=O(\_Z^{-2m-1})\,.
\end{eqnarray}
\endnumparts
The proof
proceeds by induction. Consider $\mu_n\o1(z,t)$ first.
Separating~\eref{e:ALmu1IBVPsolns} into its diagonal and
off-diagonal components then yields $\mu_{n,D}\o{1,0}(z,t)=\_I$ and
$\mu_{n,O}\o{1,0}(z,t)=\_O$, as well as
\numparts
\label{e:ALIBVPmu1}
\begin{eqnarray} \fl
\mu_{n,D}\o{1,m+1}(z,t)=
\mathop{\textstyle\truesum}\limits_{n'=0}^{n-1}\_Q_{n'}(t)\mu_{n',O}\o{1,m+1}(z,t)\_Z^{-1}
+ \mathop{\textstyle\trueint}\limits_0^t(\_H_{0,D}\mu_{0,D}\o{1,m}+\_H_{0,O}\mu_{0,O}\o{1,m+1})(z,t') \,{\rm d} t' \,,
\label{e:ALIBVPmu1D}
\\
\fl
\mu_{n,O}\o{1,m+1}(z,t)=
\mathop{\textstyle\truesum}\limits_{n'=0}^{n-1}\_Q_{n'}(t)\mu_{n',D}\o{1,m}(z,t)\_Z^{-2(n-n')+1}
\nonumber\\[-1ex]\kern4em{ }
+ \mathop{\textstyle\trueint}\limits_0^t{\rm e}^{-i\omega(z)(t-t')\^\sigma_3}\big(\_H_{0,O}\mu_{0,D}\o{1,m}+\_H_{0,D}\mu_{0,O}\o{1,m}\big)(z,t')\,\_Z^{-2n}\,{\rm d} t' \,.
\label{e:ALIBVPmu1O}
\end{eqnarray}
\endnumparts
Note that
\[
\frac1{2\omega(z)}\,\_I=-\_Z^{-2}+O(\_Z^{-4})\,,\quad{\rm as}~z\to (\infty,0).
\]
First consider the case $n=0$. Using integration by parts
in~\eref{e:ALIBVPmu1O}, we obtain, as $z\to(\infty,0)$,
\numparts
\label{e:estimatemun1}
\begin{eqnarray}
\fl
\mu_{0,O}\o{1,m+1}(z,t)=\big\{\_Q_{-1}(t)\mu_{0,D}\o{1,m}(z,t)
-{\rm e}^{-i\omega(z)t\^\sigma_3}\big[\_Q_{-1}(0)\mu_{0,D}\o{1,m}(z,0)\big]\big\}\,\_Z^{-1}
\nonumber\\{ }
+ \big\{(\_Q_0\_Q_{-1})(t)\mu_{0,O}\o{1,m}(z,t)-{\rm e}^{-i\omega(z)t\^\sigma_3}
\big[(\_Q_0\_Q_{-1})(0)\mu_{0,O}\o{1,m}(z,0)\big]
\big\}\,\_Z^{-2}\,,
\label{e:estimatemun1O}
\end{eqnarray}
plus higher order terms.
Substituting~\eref{e:estimatemun1O} into~\eref{e:ALIBVPmu1D} with $n=0$,
one finds
\begin{eqnarray}
\fl
\mu_{0,D}\o{1,m+1}(z,t)=-i\mathop{\textstyle\trueint}\limits_0^t
\sigma_3(\_Q_0\_Q_{-1})(t')\mu_{0,D}\o{1,m}(z,t') \,{\rm d} t' +i\mathop{\textstyle\trueint}\limits_0^t
\sigma_3\_Q_0(t')\mu_{0,O}\o{1,m+1}(z,t')\_Z \,{\rm d} t'
\nonumber\\{ }
- i\mathop{\textstyle\trueint}\limits_0^t \sigma_3\_Q_{-1}(t')\mu_{0,O}\o{1,m+1}(z,t')\_Z^{-1} \,{\rm d} t' \,.
\label{e:estimatemun1D}
\end{eqnarray}
\endnumparts
Using~\eref{e:estimatemun1}
one can then obtain~\eref{e:NeumannAsympmunj} for $n=0$
and all $m\in{\mathbb{N}}_0$ inductively.
Note also that, for $m=0$, \eref{e:estimatemun1O} yields
\eref{e:mu1Oasymp}. Similarly, repeating the same arguments, one
obtains~\eref{e:mu3Oasymp}.
Next consider the case $n\ge 1$. The integrals in~\eref{e:ALIBVPmu1}
are exactly the same as when $n=0$, except that the one
in~\eref{e:ALIBVPmu1O} is multiplied by $\_Z^{-2n}$ on the right. Using the same
arguments as before, we obtain \numparts \label{e:estmun1sum}\begin{eqnarray}
\mu_{n,O}\o{1,m+1}(z,t)=\_Q_{n-1}(t)\mu_{n-1,D}\o{1,m}(z,t)\_Z^{-1}
+ \mu_{0,O}\o{1,m+1}(z,t)\_Z^{-2n} \,+\,\cdots
\,,\label{e:estmun1Osum}
\\
\mu_{n,D}\o{1,m+1}(z,t)=\mathop{\textstyle\truesum}\limits_{l=0}^{n-1}\_Q_l(t)\mu_{l,O}\o{1,m+1}(z,t)\_Z^{-1}
+\mu_{0,D}\o{1,m+1}(z,t)\label{e:estmun1Dsum}\,. \end{eqnarray} \endnumparts Then, by
induction using~\eref{e:estmun1sum}, one can
derive~\eref{e:NeumannAsympmunj} for $n\ge 1$. Similarly, one
obtains~\eref{e:NeumannAsympmunj} for $\mu_n\o3$. This completes the
proof of~\eref{e:NeumannAsympmunj}.
The above results imply that $\mu_n\o1(z,t)=\_I+O(\_Z^{-1})$ as
$z\to (\infty,0)$.
In particular,
computing the $O(\_Z^{-1})$ terms explicitly one obtains the first
of~\eref{e:ALIBVPmu13asymp}. Similarly, using the same arguments,
one can show that $\mu_n\o3(z,t)=\_I+O(\_Z^{-1})$ as $z\to
(\infty,0)$ and verify the second of~\eref{e:ALIBVPmu13asymp}. In
the IVP, the integrals in the RHS of~\eref{e:ALIBVPmu1D}
and~\eref{e:ALIBVPmu1O} are absent, and the summation starts from
$n'=-\infty$. Hence in this case one simply obtains
\eref{e:ALasymp}.
The determination of the asymptotic behavior of $\mu_n\o2(z,t)$
requires a slightly different approach,
since following the above steps for $\mu_n\o2(z,t)$
yields an $O(1)$ term involving the summation of $\_Q_n$ in the RHS.
To circumvent this difficulty, note that~\eref{e:ALLP1} implies
$\mu_n\o2=\big(\_Z+\_Q_n(t)\big)^{-1}\mu_{n+1}\o2\,$.
For $\~\mu_n(z,t)=C_n\mu_n\o2(z,t)$ we have
\begin{equation}
\~\mu_n-\^{\_Z}^{-1}\~\mu_{n+1}=-\_Q_n\~\mu_n\_Z\,,
\label{e:modifiedmu2}
\end{equation}
with $\~\mu_n(z,t)\to\_I$ as $n\to\infty$
thanks to \eref{e:ALmu1IBVPsolns} and \eref{e:ALphidet}. Introducing
the auxiliary function $\Psi_n(z,t)=\^{\_Z}^{-n}\~\mu_n(z,t)$, it is
easy to check that $\Psi_n(z,t)$ satisfies the equation
$\Psi_{n+1}-\Psi_n= \_Z\,\^{\_Z}^{-(n+1)}(\_Q_n)\Psi_{n+1}\,$, which
can be integrated to obtain the modified Jost solution as
\begin{equation}
\~\mu_n(z,t)=
\_I-\_Z\mathop{\textstyle\truesum}\limits_{n'=n+1}^\infty\^{\_Z}^{n-n'}\big(\_Q_{n'-1}(t)\~\mu_{n'}(z,t)\big)\,.
\label{e:mutilde}
\end{equation}
Then, applying the same Neumann series
approach as described above to~\eref{e:mutilde}, one finds the
asymptotic expansion for $\mu_n\o2$ as \eref{e:ALasympmu2}.
Since $\mu_n\o2(z,t)$ is the same in the IVP and in the IBVP,
this asymptotic behavior applies to both problems.
Note that the above results also imply that $a(z)$ and $d(z)$ are
even functions in $D_{\pm\#in}$, as well as the following symmetries of
$\_M_n^{\pm}$: \numparts \label{e:symmMn} \begin{eqnarray}
\_M_{n,11}^+(-z,t)=\_M_{n,11}^+(z,t)\,,\quad
\_M_{n,12}^+(-z,t)=-\_M_{n,12}^+(z,t)\,, \\
\_M_{n,11}^-(-z,t)=\_M_{n,11}^-(z,t)\,,\quad
\_M_{n,12}^-(-z,t)=-\_M_{n,12}^-(z,t)\,.
\end{eqnarray}
\endnumparts
\section{Independence of the solution on $T$}
\label{s:IndependenceT}
\let\phi=\varphi
The solution of a DDE does not depend on future values of the BCs.
Hence, for any $T_0<T$ the solution of the IBVP resulting from the RHP
obtained by replacing $T$ with~$T_0$ must be equivalent for all $0<t<T_0$
to the solution of the IBVP obtained from the original RHP.
We show next that this is indeed the case, because the RHPs obtained from $T_0$ and
$T$ are related.
Let $\_M_n(z,t)$ satisfy the RHP \eref{e:IBVPALsystemRHP}, and let
$\_M_n^{\pm\#in}(z,t)$ and $\_M_n^{\pm\#out}(z,t)$ denote the
restrictions of $\_M_n(z,t)$ to the domains $D_{\pm\#in}$ and
$D_{\pm\#out}$, respectively. Moreover, let $A(z,T_0)$ and
$B(z,T_0)$ be the spectral coefficients obtained by replacing $T$
with $T_0$ in~\eref{e:ALdefinitionAB}, and let
$\~{\_J\!}_n\o1(z,t),\dots,\~{\_J\!}_n\o4(z,t)$ denote the jump
matrices obtained by replacing $A(z,T)$ and $B(z,T)$ with $A(z,T_0)$
and $B(z,T_0)$. Finally, let $\~{\_M}_n(z,t)$ satisfy the RHP with
the jump matrices $J_n\o1,\dots,J_n\o4$ replaced by
$\~{\_J}_n\o1,\dots,\~{\_J}_n\o4$. It is straightforward to see the
relations \begin{eqnarray} \_M_n^{+\#in}=\~{\_M}_n^{+\#in}\,(
\_I-\~{\_J}_n\o1)\,\big(\_I-\_J_n\o1\big)^{-1}\,,\qquad
\_M_n^{-\#in}=\~{\_M}_n^{-\#in}\,,
\nonumber
\\
\_M_n^{+\#out}=\~{\_M}_n^{+\#out}\,,\qquad
\_M_n^{-\#out}=\~{\_M}_n^{-\#out}\,
\big(\_I-\~{\_J}_n\o3\big)^{-1}\,(\_I-\_J_n\o3)\,.
\nonumber \end{eqnarray} Now recall that $q_n(t)$ can be obtained from the
eigenfunctions via~\eref{e:ALreconstruction} or
\eref{e:ALIBVPmu13asymp} with $j=1$.
Note also that
$\mu_n\o{1,R}(z,t)$ enters $\_M_n^{-\#in}$ via~\eref{e:IBVPALdefM2}.
Below, we show that the matrices
$(\_I-\~{\_J}_n\o1)\,\big(\_I-\_J_n\o1\big)^{-1}$ and
$\big(\_I-\~{\_J}_n\o3\big)^{-1}(\_I-\_J_n\o3)$ are analytic and
bounded for $z\in D_{+\#in}$ and $z\in D_{+\#out}$, respectively.
Since $\_M_n(z,t)=\~{\_M}_n(z,t)$ for $z\in D_{-\#in}$, it then
follows that the solutions $q_n(t)$ obtained from $\_M_n$ and
$\~{\_M}_n$ coincide.
To show that $(\_I-\~{\_J}_n\o1)\,\big(\_I-\_J_n\o1\big)^{-1}$
is analytic and bounded for $z\in D_{+\#in}$,
note first that
\begin{equation}
(\_I-\~{\_J_n\o1\!\!})\, \big(\_I-\_J_n\o1\big)^{-1}=
\begin{pmatrix} 1 &
\nu z^{2n}{\rm e}^{-2i\omega(z)t}\big(\Gamma^*(1/z^*,T)-\Gamma^*(1/z^*,T_0)\big)
\\ 0 & 1 \end{pmatrix} \,,
\label{e:jumpmatrixY1}
\end{equation}
and the $(1,2)$ component of \eref{e:jumpmatrixY1} can be written as
\begin{equation}
X_n(z)=
\nu z^{2n}{\rm e}^{-2i\omega(z)t}{A^*(1/z^*,T_0)B^*(1/z^*,T)-A^*(1/z^*,T)B^*(1/z^*,T_0)
\over d(z,T)d(z,T_0)}\,.
\label{e:ALjumpdiff12comp}
\end{equation}
Now note that~\eref{e:ALscattdef} and~\eref{e:ALIBVPscattdef}
define the scattering data $A(z,T)$ and $B(z,T)$ as
\begin{equation}
\mu_0\o{1,R}(z,T)=\begin{pmatrix} -\nu{\rm e}^{-2i\omega(z)T}B^*(1/z^*,T) \\
A(z,T) \end{pmatrix} =:\begin{pmatrix} \mu_1(z,T) \\
\mu_2(z,T) \end{pmatrix}\,.
\label{e:ALdefinitionAB}
\end{equation}
Hence
\begin{eqnarray}
X_n(z)= z^{2n}{\rm e}^{2i\omega(z)(T_0-t)}
{\mu_2^*(1/z^*,T)\mu_1(z,T_0) -\mu_2^*(1/z^*,T_0)\mu_1(z,T){\rm e}^{2i\omega(z)(T-T_0)}
\over d(z,T)d(z,T_0)}\,.
\nonumber
\end{eqnarray}
Also, $\mu_0\o{1,R}(z,t)$
satisfies the second column of the $t$-part of the Lax pair~\eref{e:ALLP}
at $n=0$:
\numparts \label{e:equationsmu} \begin{eqnarray} \.\mu_1(z,t)+2i\omega(z)\mu_1(z,t) =
H_{0,11}(z,t)\mu_1(z,t)+H_{0,12}(z,t)\mu_2(z,t)\,,\\
\.\mu_2(z,t)= H_{0,21}(z,t)\mu_1(z,t)+H_{0,22}(z,t)\mu_2(z,t)\,.
\end{eqnarray}
\endnumparts
Then, introducing
\numparts
\begin{eqnarray}
\phi_1(z,t)=\mu_2^*(1/z^*,T)\mu_1(z,t)
-\mu_1(z,T)\mu_2^*(1/z^*,t)\,{\rm e}^{2i\omega(z)(T-t)}\,,\\
\phi_2(z,t)=\mu_2^*(1/z^*,T)\mu_2(z,t)
-\nu\mu_1(z,T)\mu_1^*(1/z^*,t)\,{\rm e}^{2i\omega(z)(T-t)}\,,
\end{eqnarray}
\endnumparts
we can rewrite the $(1,2)$ component of
$(\_I-\~{\_J}_n\o{1})\,\big(\_I-\_J_n\o1\big)^{-1}$ as
\begin{equation}
X_n(z)= {z^{2n}{\rm e}^{2i\omega(z)(T_0-t)}
\over d(z,T)d(z,T_0)} \,\phi_1(z,T_0)\,.
\label{e:ALequation12comp} \end{equation} It is therefore enough to show that
$\phi_1(z,t)$ is analytic and bounded for $z\in D_+$. The symmetries
of $\_H_0(z,t)$ [namely, $\_H_{0,12}(z,t)=\nu\_H_{0,21}^*(1/z^*,t)$
and $\_H_{0,11}(z,t)=\_H_{0,22}^*(1/z^*,t)$] imply that
$(\phi_1,\phi_2)^t$ satisfies the $t$-part of the Lax
pair~\eref{e:ALLP2} with $n=0$. Since $\phi_1(z,T)=0$ and
$\phi_2(z,T)=1$, we then have the following linear integral
equations \numparts \begin{eqnarray} \phi_1(z,t)=-\mathop{\textstyle\trueint}\limits_t^T {\rm e}^{2i\omega(z)(t'-t)}
\big(\_H_{0,11}\phi_1+\_H_{0,12}\phi_2\big)(z,t') \,{\rm d} t' \,,\\
\phi_2(z,t)=1-\mathop{\textstyle\trueint}\limits_t^T
\big(\_H_{0,21}\phi_1+\_H_{0,22}\phi_2\big)(z,t') \,{\rm d} t'\,, \end{eqnarray}
\endnumparts From here one can show that $\phi_1$ and $\phi_2$ are analytic
and bounded for $z\in D_+$. As a result, the RHS
of~\eref{e:ALjumpdiff12comp} is analytic and bounded for $z\in D_+$.
Thus $(\_I-\~{\_J}_n\o1)\,\,\big(\_I-\_J_n\o1\big)^{-1}$ is analytic
and bounded for $z\in D_{+\#in}$. The result for
$\big(\_I-\~{\_J}_n\o3\,\,\big)^{-1}\,(\_I-\_J_n\o3)$ follows from
symmetry considerations.
\section{Linearizable BCs for $T<\infty$}
Here we verify that~\eref{e:ALratioAB} can be used to express
$\Gamma^*(1/z^*)$ also when $T<\infty$.
To do so, we use the same approach that we used to show that
the solution of the IDNLS equation does not depend on $T$.
Denote by $X_n(z)$
the difference between the contributions to the RHP obtained
from $T=\infty$ and $T<\infty$, namely:
\begin{equation}
X_n(z)=
\nu z^{2n}{\rm e}^{-2i\omega(z)t}\big(\Gamma^*(1/z^*)-\Gamma_o^*(1/z^*)\big)\,,
\label{e:ALjump12diff}
\end{equation}
where $\Gamma_o^*(1/z^*)$ is obtained by neglecting the second term
in the RHS of~\eref{e:ALratioAB1}. We can write~\eref{e:ALjump12diff} as
\[
X_n(z)=
\nu z^{2n} {\rm e}^{-2i\omega(z)t}\,{R(z,T)-R_o(z,T)\over d(z)d_o(z)/A^*(1/z^*,T)A_o^*(1/z^*,T)} \,,
\]
with $R(z,T)= B^*(1/z^*,T)/A^*(1/z^*,T)$ as before, and where
$R_o(z)= B_o^*(1/z^*)/A_o^*(1/z^*)$ is computed using only the first
term in the RHS of~\eref{e:ALratioAB1} and
$d_o(z)=a(z)A_o^*(1/z^*,T)-\nu b(z)B_o^*(1/z^*,T)$. Also,
$A_o^*(1/z^*,T)$ and $B_o^*(1/z^*,T)$ are defined by~\eref{e:ALABLin}.
Now, using~\eref{e:ALratioAB1}, we find
\begin{equation}
X_n(z)=
z^{2n}{\rm e}^{2i\omega(z)(T-t)}\,f(1/z)\,{G(1/z,T) \over d(z)\Delta(1/z)}\,.
\label{e:differenceGammas}
\end{equation}
In the solitonless case, we
can assume that $d(z)$ and $\Delta(1/z)$ never vanish in
$\=D_{+\#in}$.
Then the RHS of~\eref{e:differenceGammas} is analytic and bounded in
$D_{+\#in}$ owing to the exponential term, and it follows that the
additional term in~\eref{e:ALratioAB1} does not affect the solution
of the RHP. Note that $f(1/z)$ has poles at $z=\pm 1/\chi^{1/2}$.
When $\chi>1$ or $\chi<-1$, these points belong to $D_{+\#in}$.
Note, however, that since $a(z)$ and $b(z)$ are bounded in
$\=D_{\pm\#in}$, if $f(1/z)$ has a pole, $\Delta(1/z)$ does too, and
hence the terms causing the poles in~\eref{e:differenceGammas}
cancel out.
\catcode`\@ 11
\def\journal#1,#2,#3 (#4){\begingroup \let\journal=\d@mmyjournal {\frenchspacing\sl #1\/\unskip\,} {\bf\ignorespaces #2}\rm, #3 (#4)\endgroup}
\def\d@mmyjournal{\errmessage{Reference foul up: nested \journal macros}}
\def\title#1{{``#1''}}
\def\@biblabel#1{#1.}
\section*{References} |
1909.03302 | \section{Introduction}
Tests for goodness-of-fit, homogeneity and independence are central to statistical inference. Numerous techniques have been developed for these tasks and are routinely used in practice. In recent years, there has been renewed interest in them from both statistics and other related fields, as they arise naturally in many modern applications where the performance of the classical methods is less than satisfactory. In particular, nonparametric inference via the embedding of distributions into a reproducing kernel Hilbert space (RKHS) has emerged as a popular and powerful technique to tackle these challenges. The approach immediately allows for easy access to the rich machinery for RKHS and has found great success in a wide range of applications, from causal discovery to deep learning. See, e.g., \cite{muandet2017kernel} for a recent review.
More specifically, let $K(\cdot,\cdot)$ be a symmetric and positive definite function defined over $\mathcal X\times\mathcal X$, that is $K(x,y)=K(y,x)$ for all $x,y\in \mathcal X$, and the Gram matrix $[K(x_i,x_j)]_{1\le i,j\le n}$ is positive definite
for any distinct $x_1,\ldots,x_n\in \mathcal X$. The Moore-Aronszajn Theorem indicates that such a function, referred to as a kernel, can always be uniquely identified with a RKHS $\mathcal{H}_K$ of functions over $\mathcal X$. The embedding
$$
\mu_{\mathbb P}(\cdot) :=\int_{\mathcal X} K(x,\cdot)\mathbb P(dx),
$$
maps a probability distribution $\mathbb P$ into $\mathcal{H}_K$. The difference between two probability distributions $\mathbb P$ and $\mathbb Q$ can then be conveniently measured by
$$
\gamma_K(\mathbb P,\mathbb Q):=\|\mu_\mathbb P-\mu_\mathbb Q\|_{\mathcal{H}_K}.
$$
Under mild regularity conditions, it can be shown that $\gamma_K(\mathbb P,\mathbb Q)$ is an integral probability metric so that it is zero if and only if $\mathbb P=\mathbb Q$, and
$$
\gamma_K(\mathbb P,\mathbb Q)=\sup_{f\in \mathcal H_K: \|f\|_{\mathcal{H}_K}\le 1} \int_{\mathcal X} fd\left(\mathbb P-\mathbb Q\right).
$$
As such, $\gamma_K(\mathbb P,\mathbb Q)$ is often referred to as the \emph{maximum mean discrepancy} (MMD) between $\mathbb P$ and $\mathbb Q$. See, \textit{e.g.}, \cite{sriperumbudur2010hilbert} or \cite{gretton2012kernel} for details. In what follows, we shall drop the subscript $K$ whenever its choice is clear from the context. It was noted recently that MMD is also closely related to the so-called energy distance between random variables \citep{szekely2007measuring, szekely2009brownian} commonly used to measure independence. See, e.g., \cite{sejdinovic2012equivalence, lyons2013distance}.
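Numerically, the squared MMD expands as $\gamma_K^2(\mathbb P,\mathbb Q)=\mathbb E K(X,X')+\mathbb E K(Y,Y')-2\,\mathbb E K(X,Y)$, so replacing expectations by empirical means gives a plug-in estimate. A minimal Python sketch with the Gaussian kernel $K(x,y)=\exp(-\|x-y\|^2/(2\sigma^2))$ is given below; the bandwidth $\sigma$ and the synthetic data are illustrative choices, not part of our analysis.
\begin{verbatim}
# Sketch of the plug-in (V-statistic) estimate of the squared MMD with a
# Gaussian kernel; the bandwidth sigma and the data are illustrative.
import numpy as np

def gram(X, Y, sigma):
    d2 = ((X[:, None, :] - Y[None, :, :])**2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def mmd2(X, Y, sigma):
    return (gram(X, X, sigma).mean() + gram(Y, Y, sigma).mean()
            - 2 * gram(X, Y, sigma).mean())

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 2))
Y = rng.normal(0.5, 1.0, size=(200, 2))
print(mmd2(X, Y, sigma=1.0))       # > 0; ~ 0 when P = Q
\end{verbatim}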
Given a sample from $\mathbb P$ and/or $\mathbb Q$, estimates of $\gamma(\mathbb P,\mathbb Q)$ can be derived by replacing $\mathbb P$ and $\mathbb Q$ with their respective empirical distributions. These estimates can subsequently be used for various statistical inferences. Here are several notable examples that we shall focus on in this work.
\paragraph{Goodness-of-fit tests.} The goal of goodness-of-fit tests is to check if a sample comes from a pre-specified distribution. Let $X_1,\cdots,X_n$ be $n$ independent $\mathcal X$-valued samples from a certain distribution $\mathbb P$. We are interested in testing if the hypothesis $H_0^{\rm GOF}:\ \mathbb P=\mathbb P_0$ holds for a fixed $\mathbb P_0$. Deviation from $\mathbb P_0$ can be conveniently measured by $\gamma(\mathbb P,\mathbb P_0)$ which can be readily estimated by:
\begin{equation*}
\gamma(\widehat{\mathbb P}_n,\mathbb P_0):=\sup_{f\in \mathcal H_K: \|f\|_{\mathcal H_K}\le 1} \int_\mathcal X fd\left(\widehat{\mathbb P}_n-\mathbb P_0\right),
\end{equation*}
where $\widehat{\mathbb P}_n$ is the empirical distribution of $X_1,\cdots,X_n$. A natural procedure is to reject $H_0$ if the estimate exceeds a threshold calibrated to ensure a certain significance level, say $\alpha$ ($0<\alpha<1$).
\paragraph{Homogeneity tests.} Homogeneity tests check if two independent samples come from a common population. Given two independent samples $X_1,\cdots,X_n\sim_{\rm iid}\mathbb P$ and $Y_1,\cdots, Y_m\sim_{\rm iid}\mathbb Q$, we are interested in testing if the null hypothesis $H_0^{\rm HOM}: \mathbb P=\mathbb Q$ holds. Discrepancy between $\mathbb P$ and $\mathbb Q$ can be measured by $\gamma(\mathbb P, \mathbb Q)$, and similar to before, it can be estimated by the MMD between $\widehat{\mathbb P}_n$ and $\widehat{\mathbb Q}_m$:
\begin{equation*}
\gamma(\widehat{\mathbb P}_n,\widehat{\mathbb Q}_m):=\sup_{f\in \mathcal H_K: \|f\|_{\mathcal H_K}\le 1} \int_\mathcal X fd\left(\widehat{\mathbb P}_n-\widehat{\mathbb Q}_m\right).
\end{equation*}
Again we reject $H_0$ if the estimate exceeds a threshold calibrated to ensure a certain significance level.
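In practice this threshold is commonly calibrated by a permutation procedure (a standard device, not specific to the present analysis); a sketch, reusing gram and mmd2 from the snippet above, with an illustrative number of permutations:
\begin{verbatim}
# Permutation calibration of the homogeneity test (reuses gram/mmd2 and
# the samples X, Y from the previous snippet; B and alpha illustrative).
def perm_test(X, Y, sigma, B=500, alpha=0.05,
              rng=np.random.default_rng(1)):
    Z, n = np.vstack([X, Y]), len(X)
    stat = mmd2(X, Y, sigma)
    null = [mmd2(*np.split(Z[rng.permutation(len(Z))], [n]), sigma)
            for _ in range(B)]
    return stat > np.quantile(null, 1 - alpha)

print(perm_test(X, Y, sigma=1.0))  # True: reject H0: P = Q
\end{verbatim}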
\paragraph{Independence tests.} How to measure or test independence among a set of random variables is another classical problem in statistics. Let $X=(X^1,\ldots, X^k)^\top \in \mathcal X_1\times\cdots\times\mathcal X_k$ be a random vector. If the random vectors $X^1,\ldots,X^k$ are jointly independent, then the distribution of $X$ can be factorized:
$$H_0^{\rm IND}:\qquad \mathbb P^{X}=\mathbb P^{X^1}\otimes \cdots\otimes \mathbb P^{X^k}.$$
Dependence among $X^1,\ldots, X^k$ can be naturally measured by the difference between the joint distribution and the product distribution evaluated under MMD:
$$
\gamma(\mathbb P^{X},\mathbb P^{X^1}\otimes \cdots\otimes \mathbb P^{X^k})=\|\mu_{\mathbb P^{X}}-\mu_{\mathbb P^{X^1}\otimes \cdots\otimes \mathbb P^{X^k}}\|_{\mathcal{H}_K}.
$$
When $k=2$, $\gamma^2(\mathbb P^{X},\mathbb P^{X^1}\otimes \mathbb P^{X^2})$ can be expressed as the squared Hilbert-Schmidt norm of the cross-covariance operator associated with $X^1$ and $X^2$ and is therefore referred to as the Hilbert-Schmidt independence criterion \citep[HSIC;][]{gretton2005measuring}. The more general case as given above is sometimes referred to as dHSIC \citep[see, e.g.,][]{pfister2018kernel}. As before, we proceed to reject the independence assumption when $\gamma(\widehat{\mathbb P}^{X}_n,\widehat{\mathbb P}^{X^1}_{n}\otimes \cdots\otimes \widehat{\mathbb P}^{X^k}_n)$ exceeds a certain threshold, where $\widehat{\mathbb P}_n^{X}$ and $\widehat{\mathbb P}^{X^j}_{n}$ are the empirical distributions of $X$ and $X^j$, respectively.
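For $k=2$ the plug-in statistic can be written (up to the normalization convention) as $\mathrm{tr}(KHLH)/n^2$, where $K$ and $L$ are the Gram matrices of the two components and $H=I-\mathbf 1\mathbf 1^\top/n$ is the centering matrix; the sketch below (illustrative bandwidths and data, reusing gram from the snippet above) computes it directly.
\begin{verbatim}
# Sketch of the k = 2 plug-in statistic, tr(K H L H)/n^2 (HSIC), with
# Gaussian kernels on each component; bandwidths and data illustrative.
def hsic(X1, X2, s1=1.0, s2=1.0):
    n = len(X1)
    K, L = gram(X1, X1, s1), gram(X2, X2, s2)
    H = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    return np.trace(K @ H @ L @ H) / n**2

X1 = rng.normal(size=(300, 1))
X2 = X1 + 0.3 * rng.normal(size=(300, 1))    # dependent pair
print(hsic(X1, X2))                          # away from 0 under dependence
\end{verbatim}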
\vskip 20pt
In all these cases the test statistic, namely $\gamma^2(\widehat{\mathbb P}_n,\mathbb P_0)$, $\gamma^2(\widehat{\mathbb P}_n,\widehat{\mathbb Q}_m)$ or $\gamma^2(\widehat{\mathbb P}_n,\widehat{\mathbb P}_n^{X^1}\otimes\cdots\otimes\widehat{\mathbb P}_n^{X^k})$, is a V-statistic. Following standard asymptotic theory for V-statistics \citep[see, e.g.,][]{serfling2009approximation}, it can be shown that under mild regularity conditions, when appropriately scaled by the sample size, they converge to a mixture of $\chi^2_1$ distributions with weights determined jointly by the underlying probability distribution and the choice of kernel $K$. In contrast, it can also be derived that for a fixed alternative, $\gamma^2(\widehat{\mathbb P}_n,\mathbb P_0)\to_p \gamma^2(\mathbb P,\mathbb P_0)$, $\gamma^2(\widehat{\mathbb P}_n,\widehat{\mathbb Q}_m)\to_p \gamma^2(\mathbb P,\mathbb Q)$ and $\gamma^2(\widehat{\mathbb P}_n,\widehat{\mathbb P}_n^{X^1}\otimes\cdots\otimes\widehat{\mathbb P}_n^{X^k})\to_p\gamma^2(\mathbb P^{X},\mathbb P^{X^1}\otimes\cdots\otimes \mathbb P^{X^k})$. This immediately suggests that all aforementioned tests are consistent against fixed alternatives in that their power tends to one as sample sizes increase. Although useful, such consistency results do not tell the full story about the power of these tests, or whether there are yet more powerful methods.
For example, as recently shown by \cite{balasubramanian2017optimality}, any goodness-of-fit test based on the statistic $\gamma^2_K(\widehat{\mathbb P}_n,\mathbb P_0)$ with a \emph{fixed} kernel $K$ is necessarily suboptimal. \cite{balasubramanian2017optimality} also argued that much more powerful tests can be constructed by \emph{regularized embedding}. The appropriate regularization they employed, however, relies on knowledge of $\mathbb P_0$ and is therefore specialized to goodness-of-fit tests. While it is plausible that MMD based tests for homogeneity or independence suffer from similar deficiencies, it remains unclear how to construct more powerful tests in these settings. The goal of the current work is specifically to address this question. In particular, we show that embedding using a Gaussian kernel with an appropriately chosen scaling parameter provides a unified treatment of all three testing problems.
When data are continuous, e.g. $\mathcal X={\mathbb R}^d$, Gaussian kernels are arguably the most popular and successful choice in practice. On the one hand, we show that this choice of kernel is justified: in all three scenarios, MMD based tests can be optimal for testing against smooth alternatives provided that an appropriate scaling parameter is selected. On the other hand, we argue that existing ways of selecting the scaling parameter may not exploit the full potential of Gaussian kernel based approaches and that yet more powerful tests can be constructed.
In particular, we investigate how the power of these tests increases with the sample size by characterizing the asymptotic behavior of the smallest amount of departure from the null hypothesis that can be consistently detected. More specifically, we adopt the minimax hypothesis testing framework pioneered by \cite{burnashev1979minimax, ingster1987minimax, ingster1993asymptotically}. See also \cite{ermakov1991minimax, spokoiny1996adaptive, lepski1999minimax, ingster2000minimax, ingster2000adaptive, baraud2002non, fromont2006adaptive, fromont2012kernels, fromont2013two}, and references therein. Within this framework, we consider testing against alternatives getting closer and closer to the null hypothesis as the sample size increases. The smallest departure from the null hypothesis that can be detected consistently, in a minimax sense, is referred to as the optimal detection boundary. In all three settings, goodness-of-fit, homogeneity and independence testing, we show that Gaussian kernels with an appropriately chosen scaling parameter yield tests that are rate optimal in detecting smooth departures from the null hypotheses. Our results not only provide rigorous justifications for the practical successes of Gaussian kernel based testing procedures but also offer guidelines on how to choose the scaling parameter in a principled way.
The critical importance of selecting an appropriate scaling parameter is widely recognized in practice. Yet, the way it is done is usually ad hoc and how to do so in a more principled way remains one of the chief practical challenges. See, e.g., \cite{gretton2008kernel, fukumizu2009kernel, gretton2012optimal, sutherland2016generative}. Our analysis shows that it is essential that we take a diverging scaling parameter as the sample size increases, and the choice of the scaling parameter may determine against which types of deviation from the null hypothesis the resulting test is most powerful.
This also naturally brings about the issue of adaptation and whether or not there is an agnostic approach to testing the aforementioned null hypotheses without the need to specify a scaling parameter. To address this challenge, we introduce a simple testing procedure by maximizing a studentized MMD over a pre-specified range of scaling parameters. A similar idea of maximizing the MMD over a class of kernels was first introduced by \cite{sriperumbudur2009kernel}. Our analysis, however, suggests that it is more desirable to maximize the \emph{normalized} MMD instead. More specifically, we show that the proposed procedure can attain the optimal rate, up to an iterated logarithmic factor, simultaneously over the collection of parameter spaces corresponding to different levels of smoothness.
The rest of this paper is organized as follows. In the next three sections, we shall investigate the statistical properties of Gaussian kernel based tests for goodness-of-fit, homogeneity and independence respectively, and show that with appropriate choice of the scaling parameter, these tests are minimax optimal if the underlying densities are smooth. Since the optimal choice of scaling parameter requires the knowledge of smoothness which is rarely available, in Section \ref{sec:adapt}, we introduce new tests that do not require such knowledge yet attain optimal power, up to an iterated logarithmic factor, for a wide range of smooth alternatives. Numerical experiments presented in Section \ref{sec:sim} further illustrate the practical merits of our method and theoretical developments. We conclude with some summary discussion in Section \ref{sec:disc} and all proofs are relegated to Section \ref{sec:proof}.
\section{Test for Goodness-of-fit}
\label{sec:gof}
Among the three testing problems that we consider, it is instructive to begin with the case of goodness-of-fit. Obviously, the choice of kernel $K$ plays an essential role in kernel embedding of distributions. In particular, when data are continuous, Gaussian kernels are commonly used. More specifically, a Gaussian kernel with a scaling parameter $\nu>0$ is given by
$$
G_{d,\nu}(x,y)=\exp\left(-\nu\|x-y\|_d^2\right),\qquad \forall x,y\in {\mathbb R}^d.
$$
Hereafter $\|\cdot\|_d$ stands for the usual Euclidean norm in ${\mathbb R}^d$. For brevity, we shall suppress the subscript $d$ in both $\|\cdot\|$ and $G$ when the dimensionality is clear from the context. When $\mathbb P$ and $\mathbb Q$ are probability distributions defined over $\mathcal X={\mathbb R}^d$, we shall write the MMD between them with a Gaussian kernel and scaling parameter $\nu$ as $\gamma_\nu(\mathbb P,\mathbb Q)$ where the subscript signifies the specific value of the scaling parameter.
We shall restrict our attention to distributions with smooth densities. Denote by $\mathcal W^{s,2}_d$ the $s$th order Sobolev space in ${\mathbb R}^d$, that is
$$
\mathcal W^{s,2}_d=\left\{f:{\mathbb R}^d\to {\mathbb R}\ \big|\ f\ \text{is almost surely continuous and} \int (1+\|\omega\|^2)^{s} |\mathcal F(f)(\omega)|^2d\omega<\infty\right\},
$$
where $\mathcal F(f)$ is the Fourier transform of $f$:
$$
\mathcal F(f)(\omega)=\frac{1}{(2\pi)^{d/2}}\int_{{\mathbb R}^d} f(x)e^{-i x^\top\omega}dx.
$$
In what follows, we shall again suppress the subscript $d$ in $\mathcal W^{s,2}_d$ when it is clear from the context. For any $f\in \mathcal W^{s,2}$, we shall write
$$
\|f\|_{\mathcal W^{s,2}}^2=\int_{{\mathbb R}^d} (1+\|\omega\|^2)^s |\mathcal F(f)(\omega)|^2d\omega.
$$
Let $p$ and $p_0$ be the density functions of $\mathbb P$ and $\mathbb P_0$ respectively. We are interested in the case when both $p$ and $p_0$ are elements from $\mathcal W^{s,2}$.
Note that we can rewrite the null hypothesis $H_0^{\rm GOF}$ in terms of density functions: $H_0^{\rm GOF}: p=p_0$ for some prespecified density $p_0\in \mathcal W^{s,2}$. To better quantify the power of a test, we shall consider testing against an alternative that is increasingly closer to the null as the sample size $n$ increases:
$$
H_1^{\rm GOF}(\Delta_n;s): p\in \mathcal W^{s,2}(M), \quad \|p-p_0\|_{L_2}\ge \Delta_n,
$$
where
$$\mathcal W^{s,2}(M)=\left\{f\in \mathcal W^{s,2}: \|f\|_{\mathcal W^{s,2}}\le M\right\}$$
and
$$
\|f\|_{L_2}^2=\int_{{\mathbb R}^d} f^2(x)dx.
$$
The alternative hypothesis $H_1^{\rm GOF}(\Delta_n; s)$ is composite and the power of a test $\Phi$ based on $X_1,\ldots, X_n\sim p$ is therefore defined as
$$
{\rm power}(\Phi; H_1^{\rm GOF}(\Delta_n;s)):=\inf_{p\in \mathcal W^{s,2}(M), \|p-p_0\|_{L_2}\ge \Delta_n}\mathbb P\{\Phi {\rm \ rejects\ } H_0^{\rm GOF}\}.
$$
Of particular interest here is the smallest $\Delta_n$ so that a test is consistent in that the above quantity converges to one.
Consider embedding with a Gaussian kernel and a fixed scaling parameter $\nu>0$. Following standard asymptotic theory for V-statistics \citep[see, e.g.,][]{serfling2009approximation}, it can be shown that under $H_0^{\rm GOF}$ and certain regularity conditions,
$$
n\gamma^2_\nu(\widehat{\mathbb P}_n,\mathbb P_0)\to_d \sum_{k\ge 1}\lambda_k^2 Z_k^2
$$
where $\lambda_1\ge\lambda_2\ge\cdots$ are the singular values of the linear operator:
$$
({\mathcal L}_\nu f)(x)=\int_{{\mathbb R}^d} \bar{G}_\nu(x,x')f(x')dx',\qquad \forall f\in L_2({\mathbb R}^d)
$$
and
$$
\bar{G}_\nu(x,y;\mathbb P_0)=G_\nu(x,y)-{\mathbb E}_{X\sim \mathbb P_0}G_\nu(X,y)-{\mathbb E}_{X\sim \mathbb P_0}G_\nu(x,X)+{\mathbb E}_{X,X'\sim_{\rm iid}\mathbb P_0}G_\nu(X,X'),
$$
and the $Z_k$'s are independent standard normal random variables. Hereafter, for brevity, we shall omit the last argument of $\bar{G}$ when it is clear from the context. As such, we may proceed to reject $H_0^{\rm GOF}$ if and only if $n\gamma_\nu^2(\widehat{\mathbb P}_n, \mathbb P_0)$ exceeds the upper $\alpha$ quantile of its asymptotic distribution, which yields an (asymptotic) $\alpha$-level test. Following the same argument as that of \cite{balasubramanian2017optimality}, we can show that under mild regularity conditions such a test has power tending to one if and only if $\Delta_n\gg n^{-1/4}$. In addition, as shown by \cite{balasubramanian2017optimality}, much more powerful tests exist when the underlying densities are assumed to be compactly supported and bounded away from zero and infinity. Here we show that the same is true for broader classes of distributions using Gaussian kernel embedding with a diverging scaling parameter.
Recall that
$$
\gamma^2_\nu(\widehat{\mathbb P}_n,\mathbb P_0)={1\over n^2}\sum_{i,j=1}^n \bar{G}_\nu(X_i,X_j).
$$
It is not hard to see that this is a biased estimate of $\gamma^2_\nu(\mathbb P,\mathbb P_0)$ due to the oversized influence of the summands with $i=j$. It is common to correct for the bias and use instead the following $U$-statistic:
$$
\widehat{\gamma_\nu^2}(\mathbb P,\mathbb P_0):={1\over n(n-1)}\sum_{1\le i\neq j\le n} \bar{G}_\nu(X_i,X_j),
$$
which we shall focus on in what follows.
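As an illustration, this $U$-statistic is readily computed from Gram matrices once the $\mathbb P_0$-expectations entering $\bar G_\nu$ are available. The sketch below approximates them with an auxiliary Monte Carlo sample drawn from $\mathbb P_0$; this is an assumption of the sketch, as in many examples these expectations also admit closed forms:
\begin{verbatim}
import numpy as np

def gauss_gram(X, Y, nu):
    return np.exp(-nu * ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1))

def gof_mmd_u(X, X0, nu):
    # U-statistic estimate of gamma_nu^2(P, P0); X is the data,
    # X0 an independent Monte Carlo sample from the null P0
    n, m = X.shape[0], X0.shape[0]
    Kxx = gauss_gram(X, X, nu)
    Kx0 = gauss_gram(X, X0, nu)
    K00 = gauss_gram(X0, X0, nu)
    row = Kx0.mean(axis=1)        # approximates E_{P0} G(x, X) at x = X_i
    c = (K00.sum() - np.trace(K00)) / (m * (m - 1))  # E G(X, X'), X, X' iid P0
    Gbar = Kxx - row[:, None] - row[None, :] + c     # centered kernel G-bar
    np.fill_diagonal(Gbar, 0.0)   # exclude the i = j terms
    return Gbar.sum() / (n * (n - 1))
\end{verbatim}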
The choice of the scaling parameter $\nu$ is essential when using RKHS embedding for goodness-of-fit test. While the importance of data-driven choice of $\nu$ is widely recognized in practice, almost all existing theoretical studies assume that a fixed kernel, therefore a fixed scaling parameter, is used. Here we shall demonstrate the benefit of using a data-driven scaling parameter, and especially choosing a scaling parameter that diverges with the sample size.
More specifically, we argue that, with appropriate scaling, $\widehat{\gamma_\nu^2}(\mathbb P,\mathbb P_0)$ can be viewed as an estimate of $\|p-p_0\|_{L_2}^2$ when $\nu\to\infty$ as $n\to\infty$. Note that
$$
\int(p-p_0)^2=\int p^2 -2\int p\cdot p_0+\int p_0^2.
$$
The first term can be estimated by
$$
\int p^2\approx {1\over n}\sum_{i=1}^n p(X_i)\approx{1\over n}\sum_{i=1}^n \widehat{p}_{h,-i}(X_i)
$$
where $\widehat{p}_{h,-i}$ is a kernel density estimate of $p$ with the $i$th observation removed and bandwidth $h$:
$$
\widehat{p}_{h,-i}(x)={1\over (n-1)(2\pi h^2)^{d/2}}\sum_{j\neq i} G_{(2h^2)^{-1}}(x,X_j).
$$
Thus, we can estimate $\int p^2$ by
$$
{1\over n(n-1)(2\pi h^2)^{d/2}}\sum_{1\le i\neq j\le n} G_{(2h^2)^{-1}}(X_i,X_j).
$$
Similarly, the cross-product term can be estimated by
$$
\int p\cdot p_0\approx \int \widehat{p}_h(x)p_0(x)dx={1\over n(2\pi h^2)^{d/2}}\sum_{i=1}^n \int G_{(2h^2)^{-1}}(x,X_i)p_0(x)dx.
$$
Together, we can view
$$
{1\over n(n-1)(2\pi h^2)^{d/2}}\sum_{1\le i\neq j\le n} \bar{G}_{(2h^2)^{-1}}(X_i,X_j)
$$
as an estimate of $\int (p-p_0)^2$. Following standard asymptotic properties of the kernel density estimator \citep[see, e.g.,][]{tsybakov2008introduction}, we know that
$$
(\pi/\nu)^{-d/2} \widehat{\gamma_\nu^2}(\mathbb P,\mathbb P_0)\to_p \|p-p_0\|_{L_2}^2
$$
if $\nu\to \infty$ in such a fashion that $\nu=o(n^{4/d})$. Motivated by this observation, we shall now consider testing $H_0^{\rm GOF}$ using $\widehat{\gamma_\nu^2}(\mathbb P,\mathbb P_0)$ with a diverging $\nu$. To signify the dependence of $\nu$ on the sample size, we shall add a subscript $n$ in what follows.
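The scaling relation above can be checked numerically. The following simulation sketch, reusing \verb+gof_mmd_u+ from the earlier sketch with hypothetical parameter values of our own choosing (here $d=1$), compares the rescaled statistic to the closed-form $L_2$ distance between two unit-variance normal densities:
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(1)

n, nu, mu = 3000, 25.0, 0.5
X  = rng.normal(mu, 1.0, size=(n, 1))    # sample from P  = N(0.5, 1)
X0 = rng.normal(0.0, 1.0, size=(n, 1))   # Monte Carlo sample from P0 = N(0, 1)
scaled = (np.pi / nu) ** (-1 / 2) * gof_mmd_u(X, X0, nu)   # d = 1 here
# closed-form L2 distance between two unit-variance normal densities:
# ||p - p0||_{L2}^2 = (1 - exp(-mu^2 / 4)) / sqrt(pi)
truth = (1.0 - np.exp(-mu ** 2 / 4.0)) / np.sqrt(np.pi)
print(scaled, truth)   # roughly equal, up to Monte Carlo and smoothing error
\end{verbatim}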
Under $H_0^{\rm GOF}$, it is clear that ${\mathbb E}\widehat{\gamma_{\nu_n}^2}(\mathbb P,\mathbb P_0)=0$. Note also that
\begin{align}
&{\rm var}(\widehat{\gamma_{\nu_n}^2}(\mathbb P,\mathbb P_0))\notag\\=&{2\over n(n-1)}{\mathbb E}\left[\bar{G}_{\nu_n}(X_1,X_2)\right]^2 \nonumber\\
=&{2\over n(n-1)}\left[{\mathbb E}\left[G_{\nu_n}(X_1,X_2)\right]^2-2{\mathbb E}[G_{\nu_n}(X_1,X_2)G_{\nu_n}(X_1,X_3)]+\left({\mathbb E}\left[G_{\nu_n}(X_1,X_2)\right]\right)^2\right] \nonumber\\
=&{2\over n(n-1)}\left[{\mathbb E} G_{2\nu_n}(X_1,X_2)-2{\mathbb E}[G_{\nu_n}(X_1,X_2)G_{\nu_n}(X_1,X_3)]+\left({\mathbb E}\left[G_{\nu_n}(X_1,X_2)\right]\right)^2\right]. \label{eq:var}
\end{align}
Simple calculations yield:
$$
{\rm var}(\widehat{\gamma_{\nu_n}^2}(\mathbb P,\mathbb P_0))={2(\pi/(2\nu_n))^{d/2}\over n^2}\cdot \|p_0\|_{L_2}^2\cdot(1+o(1)),
$$
assuming that $\nu_n\to\infty$. We shall show that
$$
\frac{n}{\sqrt{2}}\left(2\nu_n\over \pi\right)^{d/4}\widehat{\gamma_{\nu_n}^2}(\mathbb P,\mathbb P_0)\to_d N\left(0,\|p_0\|_{L_2}^2\right).
$$
To use this as a test statistic, however, we need to estimate ${\rm var}(\widehat{\gamma_{\nu_n}^2}(\mathbb P,\mathbb P_0))$. To this end, it is natural to estimate each of the three terms in the last line of \eqref{eq:var} by a $U$-statistic:
\begin{align*}
\tilde{s}^2_{n,\nu_n}=&\frac{1}{n(n-1)}\sum\limits_{1\leq i\neq j\leq n}G_{2\nu_n}(X_i,X_j)\\&-\frac{2(n-3)!}{n!}\sum\limits_{\substack{1\le i,j_1,j_2\le n\\ |\{i,j_1,j_2\}|=3}}G_{\nu_n}(X_i,X_{j_1})G_{\nu_n}(X_i,X_{j_2})\\&+\frac{(n-4)!}{n!}\sum\limits_{\substack{1\le i_1,i_2,j_1,j_2\le n\\ |\{i_1,i_2,j_1,j_2\}|=4}}G_{\nu_n}(X_{i_1},X_{j_1})G_{\nu_n}(X_{i_2},X_{j_2}).
\end{align*}
Note that $\tilde{s}^2_{n,\nu_n}$ is not always positive. To avoid a negative estimate of the variance, we can replace it with a sufficiently small value, say $1/n^2$, whenever it is negative or too small. Namely, let
$$
\widehat{s}^2_{n,\nu_n}=\max\left\{\tilde{s}^2_{n,\nu_n},1/n^2\right\},$$
and consider a test statistic:
$$
T_{n,\nu_n}^{\rm GOF}:={n\over\sqrt{2}}\widehat{s}_{n,\nu_n}^{-1}\widehat{\gamma_{\nu_n}^2}(\mathbb P,\mathbb P_0).
$$
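For completeness, the following direct translation of $\tilde{s}^2_{n,\nu_n}$ and $T_{n,\nu_n}^{\rm GOF}$ loops over distinct index tuples exactly as in the displayed sums; it is a readable but unoptimized sketch (cost $O(n^4)$) and assumes \verb+gof_mmd_u+ from the earlier sketch is in scope:
\begin{verbatim}
import numpy as np
from math import factorial
from itertools import permutations

def tilde_s2(X, nu):
    # the three U-statistics defining the variance estimate tilde-s^2
    n = X.shape[0]
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K1, K2 = np.exp(-nu * sq), np.exp(-2 * nu * sq)   # G_nu and G_{2 nu}
    t1 = sum(K2[i, j] for i, j in permutations(range(n), 2)) / (n * (n - 1))
    t2 = 2 * factorial(n - 3) / factorial(n) * sum(
        K1[i, j1] * K1[i, j2] for i, j1, j2 in permutations(range(n), 3))
    t3 = factorial(n - 4) / factorial(n) * sum(
        K1[i1, j1] * K1[i2, j2]
        for i1, i2, j1, j2 in permutations(range(n), 4))
    return t1 - t2 + t3

def T_gof(X, X0, nu):
    n = X.shape[0]
    s2 = max(tilde_s2(X, nu), 1.0 / n ** 2)   # floor against negative values
    return n / np.sqrt(2.0) * gof_mmd_u(X, X0, nu) / np.sqrt(s2)
\end{verbatim}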
We have
\begin{theorem}
\label{th:gofnull}
Let $\nu_n\to \infty$ as $n\to\infty$ in such a fashion that $\nu_n=o(n^{4/d})$. Then, under $H_0^{\rm GOF}$,
\begin{equation}
\label{eq:gofnull1}
\frac{n}{\sqrt{2}}\left(2\nu_n\over \pi\right)^{d/4}\widehat{\gamma_{\nu_n}^2}(\mathbb P,\mathbb P_0)\to_d N(0,\|p_0\|_{L_2}^2).
\end{equation}
Moreover,
\begin{equation}
\label{eq:gofnull2}
T_{n,\nu_n}^{\rm GOF}\to_d N(0,1).
\end{equation}
\end{theorem}
Theorem \ref{th:gofnull} immediately implies that the test, denoted by $\Phi^{\rm GOF}_{n,\nu_n,\alpha}$ $(\alpha\in(0,1))$, which rejects $H_0^{\rm GOF}$ if and only if $T_{n,\nu_n}^{\rm GOF}$ exceeds $z_\alpha$, the upper $\alpha$ quantile of the standard normal distribution, is an asymptotic $\alpha$-level test.
We now proceed to study its power against a smooth alternative. Following the same argument as before, it can be shown that
$$
{1\over n(n-1)(\pi/\nu_n)^{d/2}}\sum_{1\le i\neq j\le n} \bar{G}_{\nu_n}(X_i,X_j)\to_p \|p-p_0\|_{L_2}^2,
$$
and
$$
(2\nu_n/\pi)^{d/2}\widehat{s}^2_{n,\nu_n}\to_p \|p\|_{L_2}^2,
$$
so that
$$
n^{-1}(\nu_n/(2\pi))^{d/4}T_{n,\nu_n}^{\rm GOF}\to_p \|p-p_0\|_{L_2}^2/\|p\|_{L_2}.
$$
This immediately implies that, if $\nu_n\to\infty$ in such a manner that $\nu_n=o(n^{4/d})$, then $\Phi_{n,\nu_n,\alpha}^{\rm GOF}$ is consistent for a fixed $p\neq p_0$ in that its power converges to one. In fact, as $n$ increases, more and more subtle deviations from $p_0$ can be detected by $\Phi_{n,\nu_n,\alpha}^{\rm GOF}$. A refined analysis of the asymptotic behavior of $T_{n,\nu_n}^{\rm GOF}$ yields the following.
\begin{theorem}
\label{th:gofpower}
Assume that $n^{2s/(d+4s)}\Delta_n\to \infty$. Then for any $\alpha\in (0,1)$,
$$
\lim_{n\to\infty}{\rm power}\{\Phi^{\rm GOF}_{n,\nu_n,\alpha}; H_1^{\rm GOF}(\Delta_n; s)\}= 1,
$$
provided that $\nu_n\asymp n^{{4}/(d+4s)}$.
\end{theorem}
In other words, $\Phi_{n,\nu_n,\alpha}^{\rm GOF}$ has a detection boundary of the order $O(n^{-2s/(d+4s)})$, which turns out to be minimax optimal in that no test can attain a detection boundary with a faster rate of convergence. More precisely, we have
\begin{theorem}
\label{th:goflower}
Assume that $p_0$ is a density such that $\|p_0\|_{\mathcal W^{s,2}}<M$, and $\liminf_{n\to\infty}n^{2s/(d+4s)}\Delta_n<\infty$. Then there exists some $\alpha\in(0,1)$ such that for any test $\Phi_n$ of level $\alpha$ (asymptotically) based on $X_1,\ldots,X_n\sim p$,
$$
\liminf_{n\to\infty}{\rm power}\{\Phi_n; H_1^{\rm GOF}(\Delta_n; s)\}<1.
$$
\end{theorem}
Together, Theorems \ref{th:gofpower} and \ref{th:goflower} suggest that Gaussian kernel embedding of distributions is especially suitable for testing against smooth alternatives, and it yields a test that can consistently detect the smallest departures, in terms of rate of convergence, from the null distribution. The idea can also be readily applied to testing of homogeneity and independence, which we shall examine next.
\section{Test for Homogeneity}
\label{sec:hom}
As in the case of the goodness-of-fit test, we shall consider the case when the underlying distributions have smooth densities so that we can rewrite the null hypothesis as $H_0^{\rm HOM}: p=q\in \mathcal W^{s,2}(M)$, and the alternative hypothesis as
$$
H_1^{\rm HOM}(\Delta_n; s): p, q\in \mathcal W^{s,2}(M),\quad \|p-q\|_{L_2}\ge \Delta_n.
$$
The power of a test $\Phi$ based on $X_1,\ldots, X_n\sim p$ and $Y_1,\ldots,Y_m\sim q$ is given by
$$
{\rm power}(\Phi; H_1^{\rm HOM}(\Delta_n; s)):=\inf_{p,q\in \mathcal W^{s,2}(M), \|p-q\|_{L_2}\ge \Delta_n}\mathbb P\{\Phi {\rm \ rejects\ } H_0^{\rm HOM}\}.
$$
To fix ideas, we shall also assume that $c\le m/n\le C$ for some constants $0<c\le C<\infty$. In addition, we shall express explicitly only the dependence on $n$ and not $m$, for brevity. Our treatment, however, can be straightforwardly extended to more general situations.
Recall that
\begin{eqnarray*}
\gamma_{\nu_n}^2(\widehat{\mathbb P}_n,\widehat{\mathbb Q}_m)={1\over n^2}\sum_{1\le i,j\le n}G_{\nu_n}(X_i,X_j)+{1\over m^2}\sum_{1\le i,j\le m}G_{\nu_n}(Y_i,Y_j)\\
-{2\over mn}\sum_{i=1}^n\sum_{j=1}^mG_{\nu_n}(X_i,Y_j).
\end{eqnarray*}
As before, to reduce bias, we shall focus instead on a closely related estimate of $\gamma_{\nu_n}^2(\mathbb P,\mathbb Q)$:
\begin{eqnarray*}
\widehat{\gamma_{\nu_n}^2}(\mathbb P,\mathbb Q)={1\over n(n-1)}\sum_{1\le i\neq j\le n}G_{\nu_n}(X_i,X_j)+{1\over m(m-1)}\sum_{1\le i\neq j\le m}G_{\nu_n}(Y_i,Y_j)\\
-{2\over mn}\sum_{i=1}^n\sum_{j=1}^mG_{\nu_n}(X_i,Y_j).
\end{eqnarray*}
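In matrix form, this bias correction only changes the within-sample averages; a minimal sketch, under the same conventions as the earlier sketches:
\begin{verbatim}
import numpy as np

def gauss_gram(X, Y, nu):
    return np.exp(-nu * ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1))

def hom_mmd_u(X, Y, nu):
    # bias-corrected two-sample MMD^2: off-diagonal means for the
    # within-sample terms, full mean for the cross term
    n, m = X.shape[0], Y.shape[0]
    Kxx = gauss_gram(X, X, nu); np.fill_diagonal(Kxx, 0.0)
    Kyy = gauss_gram(Y, Y, nu); np.fill_diagonal(Kyy, 0.0)
    Kxy = gauss_gram(X, Y, nu)
    return (Kxx.sum() / (n * (n - 1)) + Kyy.sum() / (m * (m - 1))
            - 2.0 * Kxy.mean())
\end{verbatim}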
It is easy to see that under $H_0^{\rm HOM}$,
$$
{\mathbb E}\widehat{\gamma_{\nu_n}^2}(\mathbb P,\mathbb Q)=0,
$$
and
$$
{\rm var}\left(\widehat{\gamma_{\nu_n}^2}(\mathbb P,\mathbb Q)\right)=2\left(\frac{1}{n(n-1)}+\frac{2}{mn}+\frac{1}{m(m-1)}\right){\mathbb E}_{(X,Y)\sim \mathbb P\otimes\mathbb Q} \bar{G}_{\nu_n}^2(X,Y),
$$
where
$$
\bar{G}_{\nu_n}(x,y)=G_{\nu_n}(x,y)-{\mathbb E}_{X\sim \mathbb P}G_{\nu_n}(X,y)-{\mathbb E}_{Y\sim \mathbb Q}G_{\nu_n}(x,Y)+{\mathbb E}_{(X,Y)\sim \mathbb P\otimes\mathbb Q}G_{\nu_n}(X,Y).
$$
It is therefore natural to consider estimating the variance by $\widehat{s}_{n,m,\nu_n}^2=\max\left\{\tilde{s}_{n,m,\nu_n}^2,1/n^2\right\}$ where
\begin{align*}
\tilde{s}_{n,m,\nu_n}^2=&\frac{1}{N(N-1)}\sum\limits_{1\leq i\neq j\leq N}G_{2\nu_n}(Z_i,Z_j)\\&-\frac{2(N-3)!}{N!}\sum\limits_{\substack{1\le i,j_1,j_2\le N\\ |\{i,j_1,j_2\}|=3}}G_{\nu_n}(Z_i,Z_{j_1})G_{\nu_n}(Z_i,Z_{j_2})\\&+\frac{(N-4)!}{N!}\sum\limits_{\substack{1\le i_1,i_2,j_1,j_2\le N\\ |\{i_1,i_2,j_1,j_2\}|=4}}G_{\nu_n}(Z_{i_1},Z_{j_1})G_{\nu_n}(Z_{i_2},Z_{j_2}),
\end{align*}
where $N=n+m$, and $Z_i=X_i$ if $i\le n$ and $Z_i=Y_{i-n}$ if $i>n$. This leads to the following test statistic
\begin{eqnarray*}
T_{n,\nu_n}^{\rm HOM}={nm\over \sqrt{2}(n+m)}\cdot \widehat{s}_{n,m,\nu_n}^{-1}\cdot \widehat{\gamma_{\nu_n}^2}(\mathbb P,\mathbb Q).
\end{eqnarray*}
As before, we can show
\begin{theorem}
\label{th:homnull}
Let $\nu_n\to \infty$ as $n\to \infty$ in such a fashion that $\nu_n=o(n^{4/d})$. Then under $H_0^{\rm HOM}: p=q\in \mathcal W^{s,2}(M)$,
$$
T_{n,\nu_n}^{\rm HOM}\to_d N(0,1),\qquad {\rm as\ }n\to\infty.
$$
\end{theorem}
Motivated by Theorem \ref{th:homnull}, we can consider a test, denoted by $\Phi_{n,\nu_n,\alpha}^{\rm HOM}$, that rejects $H_0^{\rm HOM}$ if and only if $T_{n,\nu_n}^{\rm HOM}$ exceeds $z_{\alpha}$. By construction, $\Phi^{\rm HOM}_{n,\nu_n,\alpha}$ is an asymptotic $\alpha$-level test. We now turn to studying its power against $H_1^{\rm HOM}$. As in the case of the goodness-of-fit test, we can prove that $\Phi^{\rm HOM}_{n,\nu_n,\alpha}$ is minimax optimal in that it can detect the smallest difference between $p$ and $q$ in terms of rate of convergence. More precisely, we have
\begin{theorem}
\label{th:hompower}
\begin{enumerate}[(i)]
\item Assume that $n^{2s/(d+4s)}\Delta_n\to \infty$. Then for any $\alpha\in(0,1)$,
$$
\lim_{n\to\infty}{\rm power}\{\Phi^{\rm HOM}_{n,\nu_n,\alpha}; H_1^{\rm HOM}(\Delta_n; s)\}= 1,
$$
provided that $\nu_n\asymp n^{4/(d+4s)}$.
\item Conversely, if $\liminf_{n\to\infty}n^{2s/(d+4s)}\Delta_n<\infty$, then there exists some $\alpha\in(0,1)$ such that for any test $\Phi_n$ of level $\alpha$ (asymptotically) based on $X_1,\ldots,X_n\sim p$ and $Y_1,\ldots, Y_m\sim q$,
$$
\liminf_{n\to\infty}{\rm power}\{\Phi_n; H_1^{\rm HOM}(\Delta_n; s)\}<1.
$$
\end{enumerate}
\end{theorem}
\section{Test for Independence}
\label{sec:ind}
Similarly, we can also use Gaussian kernel embedding to construct minimax optimal tests of independence. Let $X=(X^1,\ldots, X^k)^\top \in {\mathbb R}^{d}$ be a random vector with subvectors $X^j\in {\mathbb R}^{d_j}$, $j=1,\ldots, k$, so that $d_1+\cdots+d_k=d$. Denote by $p$ the joint density function of $X$, and by $p_j$ the marginal density of $X^j$. We assume that both the joint density and the marginal densities are smooth. Specifically, we shall consider testing
$$
H_0^{\rm IND}: p=p_1\otimes\cdots\otimes p_k,\ p_j\in \mathcal W^{s,2}(M_j),\ 1\leq j\leq k
$$
against a smooth departure from independence:
$$
H_1^{\rm IND}(\Delta_n; s): p\in \mathcal W^{s,2}(M),\ p_j\in\mathcal W^{s,2}(M_j),\ 1\leq j\leq k {\rm\ and\ } \|p-p_1\otimes\cdots\otimes p_k\|_{L_2}\ge \Delta_n,
$$
where $M=\prod\limits_{j=1}^k M_j$ so that $p_1\otimes\cdots\otimes p_k\in\mathcal W^{s,2}(M)$ under both null and alternative hypotheses.
Given a sample $\{X_1,\ldots, X_n\}$ of independent copies of $X$, we can naturally estimate the so-called dHSIC $\gamma_{\nu_n}^2(\mathbb P,\mathbb P^{X^1}\otimes\cdots\otimes \mathbb P^{X^k})$ by
\begin{eqnarray*}
\gamma_{\nu_n}^2(\widehat{\mathbb P}_n,\widehat{\mathbb P}_n^{X^1}\otimes\cdots\otimes \widehat{\mathbb P}_n^{X^k})&=&\frac{1}{n^2}\sum_{1\le i, j\le n}G_{\nu_n}(X_i,X_j)\\
&&+{1\over n^{2k}}\sum_{1\le i_1,\ldots,i_k, j_1\ldots, j_k\le n}G_{\nu_n}((X^1_{i_1},\ldots,X^k_{i_k}),(X^1_{j_1},\ldots,X^k_{j_k}))\\
&&-{2\over n^{k+1}}\sum_{1\le i, j_1,\ldots, j_k\le n}G_{\nu_n}(X_i,(X^1_{j_1},\ldots,X^k_{j_k})).
\end{eqnarray*}
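Although this display involves multiple sums over up to $2k$ indices, the Gaussian kernel on ${\mathbb R}^d$ factorizes across the blocks $X^1,\ldots,X^k$, so each sum collapses to Gram-matrix means. The sketch below exploits this; it is our own vectorized rendering, not code from any particular package:
\begin{verbatim}
import numpy as np

def gauss_gram(Z, nu):
    return np.exp(-nu * ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1))

def dhsic_v(blocks, nu):
    # plug-in (V-statistic) dHSIC; blocks[j] is the (n, d_j) array of X^j
    K = [gauss_gram(B, nu) for B in blocks]   # one Gram matrix per block
    joint = np.ones_like(K[0])
    for Kj in K:
        joint *= Kj                           # G_nu on the full vectors
    term1 = joint.mean()                                      # joint term
    term2 = np.prod([Kj.mean() for Kj in K])                  # product term
    term3 = np.prod([Kj.mean(axis=1) for Kj in K], axis=0).mean()  # cross term
    return term1 + term2 - 2.0 * term3
\end{verbatim}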
To correct for the bias, we shall consider the following estimate of $\gamma_{\nu_n}^2(\mathbb P,\mathbb P^{X^1}\otimes\cdots\otimes \mathbb P^{X^k})$ instead:
\begin{eqnarray*}
\widehat{\gamma_{\nu_n}^2}(\mathbb P,\mathbb P^{X^1}\otimes\cdots\otimes \mathbb P^{X^k})&=&\frac{1}{n(n-1)}\sum_{1\leq i\neq j\leq n}G_{\nu_n}(X_i,X_j)\\
&&+{(n-2k)!\over n!}\sum_{\substack{1\leq i_1,\cdots,i_k,j_1,\cdots,j_k\leq n\\ |\{i_1,\cdots,i_k,j_1,\cdots,j_k\}|=2k}}G_{\nu_n}((X^1_{i_1},\ldots,X^k_{i_k}),(X^1_{j_1},\ldots,X^k_{j_k}))\\
&&-{2(n-k-1)!\over n!}\sum_{\substack{1\le i,j_1,\cdots,j_k\le n\\ |\{i,j_1,\cdots,j_k\}|=k+1}}G_{\nu_n}(X_i,(X^1_{j_1},\ldots,X^k_{j_k})).
\end{eqnarray*}
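For $k=2$, a direct (unoptimized, $O(n^4)$) translation of this bias-corrected estimate reads as follows; the loops mirror the three sums above term by term:
\begin{verbatim}
import numpy as np
from math import factorial
from itertools import permutations

def ind_mmd_u(X1, X2, nu):
    # bias-corrected dHSIC for k = 2 blocks X^1 and X^2 (requires n >= 4)
    n = X1.shape[0]
    K1 = np.exp(-nu * ((X1[:, None, :] - X1[None, :, :]) ** 2).sum(-1))
    K2 = np.exp(-nu * ((X2[:, None, :] - X2[None, :, :]) ** 2).sum(-1))
    t1 = sum(K1[i, j] * K2[i, j]
             for i, j in permutations(range(n), 2)) / (n * (n - 1))
    t2 = factorial(n - 4) / factorial(n) * sum(
        K1[i1, j1] * K2[i2, j2]
        for i1, i2, j1, j2 in permutations(range(n), 4))
    t3 = 2 * factorial(n - 3) / factorial(n) * sum(
        K1[i, j1] * K2[i, j2] for i, j1, j2 in permutations(range(n), 3))
    return t1 + t2 - t3
\end{verbatim}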
Under $H_0^{\rm IND}$, we have
$$
{\mathbb E}\widehat{\gamma_{\nu_n}^2}(\mathbb P,\mathbb P^{X^1}\otimes\cdots\otimes \mathbb P^{X^k})=0.
$$
Deriving its variance, however, requires a bit more work. Write
$$
h_j(x^j,y)={\mathbb E}_{X\sim \mathbb P^{X^1}\otimes\cdots\otimes\mathbb P^{X^k}} G_{\nu_n}((X^1,\ldots, X^{j-1},x^j,X^{j+1},\ldots, X^k), y)
$$
and
$$
g_j(x^j,y)=h_j(x^j,y)-{\mathbb E}_{X^j\sim \mathbb P^{X^j}}h_j(X^j, y)-{\mathbb E}_{Y\sim \mathbb P}h_j(x^j, Y)+{\mathbb E}_{(X^j,Y)\sim \mathbb P^{X^j}\otimes \mathbb P}h_j(X^j, Y).
$$
With slight abuse of notation, also define
\begin{align*}
h_{j_1,j_2}(x^{j_1},y^{j_2})={\mathbb E}_{X,Y\sim_{\rm iid} \mathbb P^{X^1}\otimes\cdots\otimes\mathbb P^{X^k}} G_{\nu_n}(&(X^1,\ldots, X^{j_1-1},x^{j_1},X^{j_1+1},\ldots, X^k),\\
&(Y^1,\ldots, Y^{j_2-1},y^{j_2},Y^{j_2+1},\ldots, Y^k))
\end{align*}
and
\begin{align*}
g_{j_1,j_2}(x^{j_1},y^{j_2})=&h_{j_1,j_2}(x^{j_1},y^{j_2})-{\mathbb E}_{X^{j_1}\sim \mathbb P^{X^{j_1}}}h_{j_1,j_2}(X^{j_1}, y^{j_2})\\
&-{\mathbb E}_{X^{j_2}\sim \mathbb P^{X^{j_2}}}h_{j_1,j_2}(x^{j_1}, X^{j_2})+{\mathbb E}_{(X^{j_1},Y^{j_2})\sim \mathbb P^{X^{j_1}}\otimes \mathbb P^{X^{j_2}}}h_{j_1,j_2}(X^{j_1}, Y^{j_2}).
\end{align*}
Then we have
\begin{lemma}\label{le:var}
Under $H_0^{\rm IND}$,
\begin{align}
{\rm var}\left(\widehat{\gamma^2_{\nu_n}}(\mathbb P,\mathbb P^{X^1}\otimes\cdots\otimes\mathbb P^{X^k})\right)\nonumber=&\frac{2}{n(n-1)}\bigg({\mathbb E} \bar{G}_{\nu_n}^2(X,Y)-2\sum\limits_{1\leq j\leq k}{\mathbb E}\left(g_j(X^{j},Y)\right)^2\nonumber\\
&+\sum\limits_{1\leq j_1,j_2\leq k}{\mathbb E} \left(g_{j_1,j_2}(X^{j_1},Y^{j_2})\right)^2\bigg)+O({\mathbb E} G_{2\nu_n}(X,Y)/n^3).\label{eq:var1}
\end{align}
\end{lemma}
In light of Lemma \ref{le:var}, a variance estimator can be derived by estimating the leading term on the righthand side of \eqref{eq:var1} term by term using $U$-statistics. Formulae for estimating the variance for general $k$ are tedious and we defer them to the appendix for space considerations. In the special case when $k=2$, the leading term on the righthand side of \eqref{eq:var1} takes a much simpler form:
$$
\frac{2}{n(n-1)}{\mathbb E}\left[\bar{G}_{\nu_n}(X^1,Y^1)\right]^2\cdot{\mathbb E}\left[\bar{G}_{\nu_n}(X^2,Y^2)\right]^2,
$$
where $X^j,Y^j\sim_{\rm iid} \mathbb P^{X^j}$ for $j=1,2$. Thus, we can estimate ${\mathbb E}[\bar{G}_{\nu_n}(X^j,Y^j)]^2$ by
\begin{align*}
\tilde{s}^2_{n,j,\nu_n}=&\frac{1}{n(n-1)}\sum\limits_{1\leq i_1\neq i_2\leq n}G_{2\nu_n}(X_{i_1}^j,X^j_{i_2})\\&-\frac{2(n-3)!}{n!}\sum\limits_{\substack{1\le i,l_1,l_2\le n\\ |\{i,l_1,l_2\}|=3}}G_{\nu_n}(X_i^j,X_{l_1}^j)G_{\nu_n}(X_i^j,X_{l_2}^j)\\&+\frac{(n-4)!}{n!}\sum\limits_{\substack{1\le i_1,i_2,l_1,l_2\le n\\ |\{i_1,i_2,l_1,l_2\}|=4}}G_{\nu_n}(X_{i_1}^j,X_{l_1}^j)G_{\nu_n}(X_{i_2}^j,X_{l_2}^j)
\end{align*}
and ${\rm var}(\widehat{\gamma^2_{\nu_n}}(\mathbb P,\mathbb P^{X^1}\otimes\mathbb P^{X^2}))$ by $2\widehat{s}^2_{n,\nu_n}/[n(n-1)]$, where
$$
\widehat{s}^2_{n,\nu_n}:=\max\left\{\tilde{s}^2_{n,1,\nu_n}\tilde{s}^2_{n,2,\nu_n}, 1/n^2\right\},
$$
so that a test statistic for $H_0^{\rm IND}$ is
$$
T_{n,\nu_n}^{\rm IND}:={n\over\sqrt{2}}\widehat{s}^{-1}_{n,\nu_n}\widehat{\gamma_{\nu_n}^2}(\mathbb P,\mathbb P^{X^1}\otimes\mathbb P^{X^2}).
$$
Test statistics for general $k>2$ can be defined accordingly. Again, we have
\begin{theorem}
\label{th:indnull}
Let $\nu_n\to \infty$ as $n\to \infty$ in such a fashion that $\nu_n=o(n^{4/d})$. Then under $H_0^{\rm IND}$,
$$
T_{n,\nu_n}^{\rm IND}\to_d N(0,1),\qquad {\rm as\ }n\to\infty.
$$
\end{theorem}
Motivated by Theorem \ref{th:indnull}, we can consider a test, denoted by $\Phi_{n,\nu_n,\alpha}^{\rm IND}$, that rejects $H_0^{\rm IND}$ if and only if $T_{n,\nu_n}^{\rm IND}$ exceeds $z_{\alpha}$. By construction, $\Phi^{\rm IND}_{n,\nu_n,\alpha}$ is an asymptotic $\alpha$-level test. We now turn to studying its power against $H_1^{\rm IND}$. As in the case of the goodness-of-fit test, we can prove that $\Phi^{\rm IND}_{n,\nu_n,\alpha}$ is minimax optimal in that it can detect the smallest departure from independence in terms of rate of convergence. More precisely, we have
\begin{theorem}
\label{th:indpower}
\begin{enumerate}[(i)]
\item Assume that $n^{2s/(d+4s)}\Delta_n\to \infty$. Then for any $\alpha\in(0,1)$,
$$
\lim_{n\to\infty}{\rm power}\{\Phi^{\rm IND}_{n,\nu_n,\alpha}; H_1^{\rm IND}(\Delta_n; s)\}= 1,
$$ provided that $\nu_n\asymp n^{4/(d+4s)}$.
\item Conversely, if $\liminf_{n\to\infty}n^{2s/(d+4s)}\Delta_n<\infty$, then there exists some $\alpha\in(0,1)$ such that for any test $\Phi_n$ of level $\alpha$ (asymptotically) based on $X_1,\ldots,X_n\sim p$,
$$
\liminf_{n\to\infty}{\rm power}\{\Phi_n; H_1^{\rm IND}(\Delta_n; s)\}<1.
$$
\end{enumerate}
\end{theorem}
\section{Adaptation}
\label{sec:adapt}
The results presented in the previous sections not only suggest that Gaussian kernel embedding of distributions is especially suitable for testing against smooth alternatives, but also indicate the importance of choosing an appropriate scaling parameter in order to detect small deviations from the null hypothesis. To achieve maximum power, the scaling parameter should be chosen according to the smoothness of the underlying density functions. This, however, presents a practical challenge because the level of smoothness is rarely known a priori. This naturally brings about the question of adaptation: can we devise an agnostic testing procedure that does not require such knowledge but still attains similar performance? We shall show in this section that this is possible, at least for sufficiently smooth densities.
\subsection{Test for Goodness-of-fit}
We again begin with the test for goodness-of-fit. As we show in Section \ref{sec:gof}, under $H_0^{\rm GOF}$, $T_{n,\nu_n}^{\rm GOF}\to_d N(0,1)$ if $1\ll\nu_n\ll n^{4/d}$; whereas for any $p\in \mathcal W^{s,2}$ such that $\|p-p_0\|_{L_2}\gg n^{-2s/(d+4s)}$, $T_{n,\nu_n}^{\rm GOF}\to\infty$ provided that $\nu_n\asymp n^{4/(d+4s)}$. This motivates us to consider the following test statistic:
$$
T_n^{\rm GOF (adapt)}=\max_{1\le \nu_n\le n^{2/d}} T_{n,\nu_n}^{\rm GOF}.
$$
In light of the earlier discussion, it is plausible that such a statistic could be used to detect any smooth departure from the null provided that the level of smoothness $s\ge d/4$. We now argue that this is indeed the case. More specifically, we shall proceed to reject $H_0^{\rm GOF}$ if and only if $T_n^{\rm GOF (adapt)}$ exceeds the upper $\alpha$ quantile, denoted by $q_{n,\alpha}^{\rm GOF}$, of its null distribution. In what follows, we shall call this test $\Phi^{\rm GOF (adapt)}$. Note that, even though it is hard to derive an analytic form for $q_{n,\alpha}^{\rm GOF}$, it can be readily evaluated via Monte Carlo methods.
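A sketch of the resulting procedure: maximize the studentized statistic over a grid of scaling parameters and calibrate the maximum by Monte Carlo under $\mathbb P_0$. The grid size and the reuse of \verb+T_gof+ from the earlier sketch are our own implementation choices:
\begin{verbatim}
import numpy as np

def T_gof_adapt(X, X0, grid):
    # maximum of the studentized statistic over candidate nu values
    return max(T_gof(X, X0, nu) for nu in grid)

def gof_adapt_test(X, sample_p0, alpha=0.05, B=200):
    # sample_p0(n) must draw n observations from the null P0
    n, d = X.shape
    grid = np.geomspace(1.0, n ** (2.0 / d), num=20)  # covers [1, n^{2/d}]
    null_stats = [T_gof_adapt(sample_p0(n), sample_p0(n), grid)
                  for _ in range(B)]
    q = np.quantile(null_stats, 1.0 - alpha)          # Monte Carlo q_{n,alpha}
    return T_gof_adapt(X, sample_p0(n), grid) > q     # True: reject H0
\end{verbatim}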
To study the power of $\Phi^{\rm GOF (adapt)}$ against $H_1^{\rm GOF}$ with different levels of smoothness, we shall consider the following alternative hypothesis
$$
H_1^{\rm GOF(adapt)}(\Delta_{n,s}: s\ge d/4): p\in \bigcup_{s\ge d/4} \{p\in \mathcal W^{s,2}(M): \|p-p_0\|_{L_2}\ge \Delta_{n,s}\}.
$$
The following theorem characterizes the power of $\Phi^{\rm GOF (adapt)}$ against $H_1^{\rm GOF(adapt)}(\Delta_{n,s}: s\ge d/4)$.
\begin{theorem}
\label{th:gofadapt}
There exists a constant $c>0$ such that if
$$\liminf_{n\to\infty} \Delta_{n,s}(n/\log\log n)^{2s/(d+4s)}>c,$$
then
$$
{\rm power}\{\Phi^{\rm GOF (adapt)}; H_1^{\rm GOF(adapt)}(\Delta_{n,s}: s\ge d/4)\}\to 1.
$$
\end{theorem}
Theorem \ref{th:gofadapt} shows that $\Phi^{\rm GOF (adapt)}$ has a detection boundary of the order $(\log\log n/n)^{\frac{2s}{d+4s}}$ when $p\in \mathcal W^{s,2}$ for any $s\ge d/4$. If $s$ is known in advance, as we show in Section \ref{sec:gof}, the optimal test is based on $T_{n,\nu_n}^{\rm GOF}$ with $\nu_n\asymp n^{4/(d+4s)}$ and has a detection boundary of the order $O(n^{-2s/(d+4s)})$. The extra iterated logarithmic factor $(\log\log n)^{2s/(d+4s)}$ is the price we pay to ensure that no knowledge of $s$ is required and that $\Phi^{\rm GOF (adapt)}$ is powerful against smooth alternatives for all $s\ge d/4$.
\subsection{Test for Homogeneity}
The treatment for homogeneity tests is similar. Instead of $T_{n,\nu_n}^{\rm HOM}$, we now consider a test based on
$$
T_n^{\rm HOM (adapt)}=\max_{1\le \nu_n\le n^{2/d}} T_{n,\nu_n}^{\rm HOM}.
$$
If $T_n^{\rm HOM (adapt)}$ exceeds the upper $\alpha$ quantile, denoted by $q_{n,\alpha}^{\rm HOM}$, of its null distribution, then we reject $H_0^{\rm HOM}$. In what follows, we shall refer to this test as $\Phi^{\rm HOM (adapt)}$. As before, we do not have a closed form expression for $q_{n,\alpha}^{\rm HOM}$, and it needs to be evaluated via Monte Carlo methods. In particular, in the case of the homogeneity test, we can approximate $q_{n,\alpha}^{\rm HOM}$ by permutation, where we randomly shuffle $\{X_1,\ldots,X_n, Y_1,\ldots,Y_m\}$ and compute the test statistic as if the first $n$ shuffled observations were from the first population and the other $m$ from the second population. This is repeated multiple times in order to approximate the critical value $q_{n,\alpha}^{\rm HOM}$.
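A sketch of this permutation calibration, applicable to any two-sample statistic including the adaptive maximum (the function name and defaults are ours):
\begin{verbatim}
import numpy as np

def hom_perm_quantile(X, Y, stat, alpha=0.05, B=100, seed=0):
    # approximate q_{n,alpha}^{HOM} by reshuffling the pooled sample;
    # stat(X, Y) is any two-sample statistic, e.g. the adaptive maximum
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    Z = np.concatenate([X, Y], axis=0)
    null_stats = []
    for _ in range(B):
        idx = rng.permutation(Z.shape[0])
        null_stats.append(stat(Z[idx[:n]], Z[idx[n:]]))
    return np.quantile(null_stats, 1.0 - alpha)
\end{verbatim}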
The following theorem characterizes the power of $\Phi^{\rm HOM (adapt)}$ against an alternative with different levels of smoothness:
$$
H_1^{\rm HOM(adapt)}(\Delta_{n,s}: s\ge d/4): (p,q)\in \bigcup_{s\ge d/4} \{(p,q): p,q\in \mathcal W^{s,2}(M), \|p-q\|_{L_2}\ge \Delta_{n,s}\}.
$$
\begin{theorem}
\label{th:homadapt}
There exists a constant $c>0$ such that if
$$\liminf_{n\to\infty} \Delta_{n,s}(n/\log\log n)^{2s/(d+4s)}>c,$$
then
$$
{\rm power}\{\Phi^{\rm HOM (adapt)}; H_1^{\rm HOM(adapt)}(\Delta_{n,s}: s\ge d/4)\}\to 1.
$$
\end{theorem}
Similar to the case of the goodness-of-fit test, Theorem \ref{th:homadapt} shows that $\Phi^{\rm HOM (adapt)}$ has a detection boundary of the order $O((n/\log\log n)^{-2s/(d+4s)})$ when $p\neq q\in \mathcal W^{s,2}$ for any $s\ge d/4$. In light of the results from Section \ref{sec:hom}, this is optimal up to an iterated logarithmic factor. The main advantage is that $\Phi^{\rm HOM (adapt)}$ is powerful against smooth alternatives simultaneously for all $s\ge d/4$.
\subsection{Test for Independence}
Similarly, for the independence test, we shall adopt the following test statistic
$$
T_n^{\rm IND (adapt)}=\max_{1\le \nu_n\le n^{2/d}} T_{n,\nu_n}^{\rm IND}
$$
and reject $H_0^{\rm IND}$ if and only if $T_n^{\rm IND (adapt)}$ exceeds the upper $\alpha$ quantile, denoted by $q_{n,\alpha}^{\rm IND}$, of its null distribution. In what follows, we shall refer to this test as $\Phi^{\rm IND (adapt)}$. The critical value, $q_{n,\alpha}^{\rm IND}$, can also be evaluated via a permutation test. See, e.g., \cite{pfister2018kernel} for detailed discussions.
We now show that $\Phi^{\rm IND (adapt)}$ is powerful in testing against the alternative with different levels of smoothness
\begin{align*}
H_1^{\rm IND(adapt)}(\Delta_{n,s}: s\ge d/4): p\in \bigcup_{s\ge d/4} \Big\{p\in \mathcal W^{s,2}(M), p_j\in\mathcal W^{s,2}(M_j),1\leq j\leq k,\\ \|p-p_1\otimes\cdots\otimes p_k\|_{L_2}\ge \Delta_{n,s}\Big\}.
\end{align*}
More specifically, we have
\begin{theorem}
\label{th:indadapt}
There exists a constant $c>0$ such that if
$$\liminf_{n\to\infty} \Delta_{n,s}(n/\log\log n)^{2s/(d+4s)}>c,$$
then
$$
{\rm power}\{\Phi^{\rm IND (adapt)}; H_1^{\rm IND(adapt)}(\Delta_{n,s}: s\ge d/4)\}\to 1.
$$
\end{theorem}
Similar to before, Theorem \ref{th:indadapt} shows that $\Phi^{\rm IND (adapt)}$ is optimal up to an iterated logarithmic factor for detecting smooth departures from independence simultaneously for all $s\ge d/4$.
\section{Numerical Experiments}
\label{sec:sim}
To further complement our theoretical development and demonstrate the practical merits of the proposed methodology, we conducted several sets of numerical experiments.
\subsection{Effect of Scaling Parameter}
Our first set of experiments was designed to illustrate the importance of the scaling parameter and highlight the potential room for improvement over the ``median'' heuristic -- one of the most common data-driven choices of the scaling parameter in practice \citep[see, \textit{e.g.},][]{gretton2008kernel,pfister2018kernel}.
\begin{itemize}
\item \textit{Experiment \uppercase\expandafter{\romannumeral1}}: the homogeneity test with the underlying distributions being a normal distribution and a mixture of several normal distributions. Specifically, $$
p(x)=f(x;0,1),\quad q(x)=0.5\times f(x;0,1)+0.1\times\sum_{\mu\in\bm{\mu}}f(x;\mu,0.05)
$$
where $f(x;\mu,\sigma)$ denotes the density of $N(\mu,\sigma^2)$ and $\bm{\mu}=\{-1,-0.5,0,0.5,1\}$.
\item \textit{Experiment \uppercase\expandafter{\romannumeral2}:} the joint independence test of $X^1,\cdots,X^5$ where $$X^1,\cdots,X^{4},(X^5)'\sim_{\rm iid} N(0,1),\quad
X^5=\left|(X^5)'\right|\times \mathrm{sign}\left(\prod\limits_{l=1}^{4}X^l\right).
$$
Clearly $X^1,\cdots,X^5$ are jointly dependent since $\prod_{l=1}^5X^l\geq 0$.
\end{itemize}
In both experiments, our primary goal is to investigate how the power of the Gaussian MMD based test is influenced by a pre-fixed scaling parameter. These tests are also compared with those whose scaling parameter is selected via the ``median'' heuristic. In order to evaluate tests with different scaling parameters under a unified framework, we determined the critical values for each test via permutation.
For Experiment \uppercase\expandafter{\romannumeral1}\ we fixed the sample size at $n=m=200$; and for Experiment \uppercase\expandafter{\romannumeral2}\ at $n=400$. The number of permutations was set at $100$, and the significance level at $\alpha=0.05$. We first repeated the experiments $100$ times under the null to verify that the permutation tests indeed yield the correct size, up to Monte Carlo error. Each experiment was then repeated $100$ times and the observed power ($\pm$ one standard error) was recorded for different choices of the scaling parameter. The results are summarized in Figure \ref{Fg:single}. It is perhaps not surprising that the scaling parameter selected via the ``median'' heuristic has little variation across simulation runs, and we represent its performance by a single value.
\begin{figure*}[!htbp]
\begin{minipage}{0.5\textwidth}
\centering
\begin{tikzpicture}
\begin{axis}[
height=6.5cm,
width=8cm,
grid=major,
xlabel = $\log(\nu)$,
ylabel=Power,
xmax = 4,
xmin = -1,
ymax = 1,
ymin = 0,
xtick={-1,0,1,2,3,4},
legend style={at={(0.03,0.95)},anchor=north west
}
]
\addplot[mark=*,cyan,error bars/.cd,
y dir=both,y explicit]
coordinates {
(-1,0.23)+-(0,0.03) (-0.75,0.23)+-(0,0.03) (-0.5,0.24)+-(0,0.03) (-0.25,0.22)+-(0,0.029) (0,0.22)+-(0,0.029) (0.25,0.2)+-(0,0.028) (0.5,0.2)+-(0,0.028) (0.75,0.2)+-(0,0.028) (1,0.22)+-(0,0.029) (1.25,0.22)+-(0,0.029) (1.5,0.23)+-(0,0.03) (1.75,0.23)+-(0,0.03) (2,0.23)+-(0,0.03) (2.125,0.25)+-(0,0.031) (2.25,0.27)+-(0,0.031) (2.375,0.33)+-(0,0.033) (2.5,0.43)+-(0,0.035) (2.625,0.54)+-(0,0.035) (2.75,0.67)+-(0,0.033) (2.875,0.75)+-(0,0.031) (3,0.82)+-(0,0.027) (3.125,0.91)+-(0,0.02) (3.25,0.94)+-(0,0.017) (3.375,0.98)+-(0,0.01) (3.5,0.99)+-(0,0.007) (3.625,0.99)+-(0,0.007) (3.75,0.99)+-(0,0.007) (3.875,0.99)+-(0,0.007) (4,1)+-(0,0)
}; \addlegendentry{Single fixed $\nu$} ;
\addplot[mark size=3,mark=*,red,only marks,error bars/.cd,
y dir=both,y explicit]
coordinates {
(0.2,0.21)+-(0,0.04)
}; \addlegendentry{Median} ;
\end{axis}
\end{tikzpicture}
\end{minipage}
\begin{minipage}{0.5\textwidth}
\centering
\begin{tikzpicture}
\begin{axis}[
height=6.5cm,
width=8cm,
grid=major,
xlabel = $\log(\nu)$,
xmax = 2,
xmin = -3,
ymax = 1,
ymin = 0,
xtick={-3,-2,-1,0,1,2},
legend style={at={(0.03,0.95)},anchor=north west
}
]
\addplot[mark=*,cyan,error bars/.cd,
y dir=both,y explicit]
coordinates {
(-3,0.08)+-(0,0.014) (-2.75,0.08)+-(0,0.014) (-2.5,0.08)+-(0,0.014) (-2.25,0.07)+-(0,0.013) (-2,0.05)+-(0,0.011) (-1.75,0.06)+-(0,0.012) (-1.5,0.09)+-(0,0.014) (-1.25,0.11)+-(0,0.016) (-1,0.21)+-(0,0.02) (-0.75,0.27)+-(0,0.022) (-0.625,0.34)+-(0,0.024) (-0.5,0.45)+-(0,0.025) (-0.375,0.55)+-(0,0.025) (-0.25,0.62)+-(0,0.024) (-0.125,0.71)+-(0,0.023) (0,0.75)+-(0,0.022) (0.125,0.82)+-(0,0.019) (0.25,0.84)+-(0,0.018) (0.375,0.86)+-(0,0.017) (0.5,0.92)+-(0,0.014) (0.625,0.92)+-(0,0.014) (0.75,0.93)+-(0,0.013) (0.875,0.93)+-(0,0.013) (1,0.93)+-(0,0.013) (1.25,0.9)+-(0,0.015) (1.375,0.87)+-(0,0.017) (1.5,0.86)+-(0,0.017) (1.625,0.85)+-(0,0.018) (1.75,0.82)+-(0,0.019) (1.875,0.81)+-(0,0.02) (2,0.8)+-(0,0.02)
};
\addplot[mark size=3,mark=*,red,only marks,error bars/.cd,
y dir=both,y explicit]
coordinates {
(-2.15,0.07)+-(0,0.013)
};
\end{axis}
\end{tikzpicture}
\end{minipage}
\caption{Observed power against $\log(\nu)$ in Experiment \uppercase\expandafter{\romannumeral1}\ (left) and Experiment \uppercase\expandafter{\romannumeral2} (right).}\label{Fg:single}
\end{figure*}
The importance of the scaling parameter is evident from Figure \ref{Fg:single}, with the observed power varying quite significantly across different choices. It is also of interest to note that in these settings the ``median'' heuristic typically does not yield a scaling parameter with great power. More specifically, in Experiment \uppercase\expandafter{\romannumeral1}, $\log(\nu_{\rm median})\approx 0.2$ and maximum power is attained at $\log(\nu)=4$; in Experiment \uppercase\expandafter{\romannumeral2}, $\log(\nu_{\rm median})\approx-2.15$ and maximum power is attained at $\log(\nu)=1$. This suggests that a more appropriate choice of the scaling parameter may lead to much improved performance.
\subsection{Efficacy of Adaptation}\label{sec:sim_adapt}
Our second set of experiments aims to illustrate that the adaptive procedures we proposed in Section \ref{sec:adapt} indeed yield more powerful tests when compared with other alternatives that are commonly used in practice. In particular, we compare the proposed self-normalized adaptive test (\verb+S.A.+) with two data-driven approaches, namely the ``median'' heuristic (\verb+Median+) and the unnormalized adaptive test (\verb+U.A.+) proposed in \cite{sriperumbudur2009kernel}. When computing both self-normalized and unnormalized test statistics, we first rescaled the squared distance $\|X_i-X_j\|^2$ by the dimensionality $d$ before taking the maximum within a certain range of the scaling parameter. We considered two experimental setups:
\begin{itemize}
\item \textit{Experiment \uppercase\expandafter{\romannumeral3}}: the homogeneity test with the underlying distributions being
$$
P\sim N(\mathbf{0},I_d),\quad Q\sim N\left(\mathbf{0},\left(1+2d^{-1/2}\right)I_d\right).
$$
As the `signal strength', the ratio between the variances of $Q$ and $P$ in each direction is set to approach $1$ at the rate $1/\sqrt{d}$ as $d$ grows, which is the rate at which a variance ratio remains detectable by the classical $F$-test.
\item \textit{Experiment \uppercase\expandafter{\romannumeral4}}: the independence test of $X^1,X^2\in{\mathbb R}^{d/2}$, where $X=(X^1,X^2)$ follows a mixture of $$N\left(\mathbf{0},I_d\right)\quad \text{and}\quad N\left(\mathbf{0},(1+6d^{-3/5})I_d\right)$$
with mixture probability $0.5$. Similarly, the ratio between the variances in each direction is set to approach $1$ as $d$ grows, but at a slightly faster rate.
\end{itemize}
To better compare the different methods, we considered different combinations of sample size and dimensionality for each experiment. More specifically, for Experiment \uppercase\expandafter{\romannumeral3}, the sample sizes were set to be $m=n=25,50,75,\cdots,200$ and the dimension $d=1,10,100,1000$; for Experiment \uppercase\expandafter{\romannumeral4}, the sample sizes were $n=100,200,\cdots,600$ and the dimension $d=2,10,100,1000$. In both experiments, we fixed the significance level at $\alpha=0.05$ and used $100$ permutations to calibrate the critical values, as before. Again we simulated under $H_0$ to verify that the resulting tests have the targeted size, up to Monte Carlo error. The power of each method, estimated from $100$ such experiments, is reported in Figures \ref{Fg: adapt1} and \ref{Fg: adapt2}.
\begin{figure}[!htbp]
\begin{minipage}{0.285\textwidth}
\centering
\begin{tikzpicture}
\begin{axis}[
height=6.5cm,
width=4.75cm,
grid=major,
xlabel = $n$,
ylabel=Power,
xmax = 200,
xmin = 25,
ymax = 1,
ymin = 0,
xtick={50,100,150,200},
legend style={at={(0.98,0.02)},anchor=south east
}
]
\addplot[mark=triangle*,red,error bars/.cd,
y dir=both,y explicit]
coordinates {
(25,0.3674)+-(0,0.027)
(50,0.6828)+-(0,0.025)
(75,0.8882)+-(0,0.011)
(100,0.96)+-(0,0.0098)
(125,0.9858)+-(0,0.0059)
(150,0.9976)+-(0,0.0018)
(175,0.9994)+-(0,0.00068)
(200,1)+-(0,0)
}; \addlegendentry{Median} ;
\addplot[mark=diamond*,blue,error bars/.cd,
y dir=both,y explicit]
coordinates {
(25,0.3256)+-(0,0.026)
(50,0.6252)+-(0,0.025)
(75,0.8522)+-(0,0.014)
(100,0.9454)+-(0,0.0093)
(125,0.98)+-(0,0.0055)
(150,0.9954)+-(0,0.0023)
(175,0.9994)+-(0,0.00068)
(200,1)+-(0,0)
}; \addlegendentry{U.A.} ;
\addplot[mark=*,cyan,error bars/.cd,
y dir=both,y explicit]
coordinates {
(25,0.2732)+-(0,0.023)
(50,0.564)+-(0,0.027)
(75,0.8112)+-(0,0.015)
(100,0.9316)+-(0,0.0097)
(125,0.9766)+-(0,0.0068)
(150,0.9954)+-(0,0.0023)
(175,0.9988)+-(0,0.00093)
(200,1)+-(0,0)
}; \addlegendentry{S.A.} ;
\end{axis}
\end{tikzpicture}
\end{minipage}%
\begin{minipage}{0.235\textwidth}
\centering
\begin{tikzpicture}
\begin{axis}[
xlabel=$n$,
height=6.5cm,
width=4.75cm,
grid=major,
xmax = 200,
xmin = 25,
ymax = 1,
ymin = 0,
xtick={50,100,150,200},
legend style={at={(1,0)},anchor=south east,
nodes={scale=0.7, transform shape}
}
]
\addplot[mark=triangle*,red,error bars/.cd,
y dir=both,y explicit]
coordinates {
(25,0.3974)+-(0,0.028)
(50,0.801)+-(0,0.02)
(75,0.9672)+-(0,0.0075)
(100,0.996)+-(0,0.0023)
(125,0.9996)+-(0,0.00056)
(150,1)+-(0,0)
(175,1)+-(0,0)
(200,1)+-(0,0)
};
\addplot[mark=diamond*,blue,error bars/.cd,
y dir=both,y explicit]
coordinates {
(25,0.5234)+-(0,0.028)
(50,0.8954)+-(0,0.014)
(75,0.99)+-(0,0.0038)
(100,0.9994)+-(0,0.00068)
(125,1)+-(0,0)
(150,1)+-(0,0)
(175,1)+-(0,0)
(200,1)+-(0,0)
};
\addplot[mark=*,cyan,error bars/.cd,
y dir=both,y explicit]
coordinates {
(25,0.5608)+-(0,0.024)
(50,0.9088)+-(0,0.012)
(75,0.9926)+-(0,0.0029)
(100,0.9996)+-(0,0.00056)
(125,1)+-(0,0)
(150,1)+-(0,0)
(175,1)+-(0,0)
(200,1)+-(0,0)
};
\end{axis}
\end{tikzpicture}
\end{minipage}%
\begin{minipage}{0.235\textwidth}
\centering
\begin{tikzpicture}
\begin{axis}[
height=6.5cm,
width=4.75cm,
grid=major,
xlabel=$n$,
xmax = 200,
xmin = 25,
ymax = 1,
ymin = 0,
xtick={50,100,150,200},
legend style={at={(1,0)},anchor=south east
}
]
\addplot[mark=triangle*,red,error bars/.cd,
y dir=both,y explicit]
coordinates {
(25,0.149)+-(0,0.015)
(50,0.3764)+-(0,0.024)
(75,0.658)+-(0,0.024)
(100,0.8316)+-(0,0.02)
(125,0.9346)+-(0,0.01)
(150,0.9828)+-(0,0.0044)
(175,0.996)+-(0,0.002)
(200,0.9992)+-(0,0.00096)
};
\addplot[mark=diamond*,blue,error bars/.cd,
y dir=both,y explicit]
coordinates {
(25,0.2178)+-(0,0.018)
(50,0.6132)+-(0,0.032)
(75,0.915)+-(0,0.013)
(100,0.9862)+-(0,0.0043)
(125,0.999)+-(0,0.00086)
(150,1)+-(0,0)
(175,1)+-(0,0)
(200,1)+-(0,0)
};
\addplot[mark=*,cyan,error bars/.cd,
y dir=both,y explicit]
coordinates {
(25,0.7086)+-(0,0.021)
(50,0.9804)+-(0,0.0041)
(75,0.9992)+-(0,0.00078)
(100,1)+-(0,0)
(125,1)+-(0,0)
(150,1)+-(0,0)
(175,1)+-(0,0)
(200,1)+-(0,0)
};
\end{axis}
\end{tikzpicture}
\end{minipage}%
\begin{minipage}{0.235\textwidth}
\centering
\begin{tikzpicture}
\begin{axis}[
height=6.5cm,
width=4.75cm,
grid=major,
xlabel=$n$,
xmax = 200,
xmin = 25,
ymax = 1,
ymin = 0,
xtick={50,100,150,200},
legend style={at={(1,0)},anchor=south east,
nodes={scale=0.7, transform shape}
}
]
\addplot[mark=triangle*,red,error bars/.cd,
y dir=both,y explicit]
coordinates {
(25,0.0926)+-(0,0.014)
(50,0.147)+-(0,0.017)
(75,0.1886)+-(0,0.018)
(100,0.257)+-(0,0.022)
(125,0.347)+-(0,0.023)
(150,0.4508)+-(0,0.032)
(175,0.5416)+-(0,0.032)
(200,0.647)+-(0,0.022)
};
\addplot[mark=diamond*,blue,error bars/.cd,
y dir=both,y explicit]
coordinates {
(25,0.0984)+-(0,0.015)
(50,0.1818)+-(0,0.02)
(75,0.2666)+-(0,0.025)
(100,0.4168)+-(0,0.029)
(125,0.607)+-(0,0.029)
(150,0.7782)+-(0,0.033)
(175,0.89)+-(0,0.024)
(200,0.9618)+-(0,0.0076)
};
\addplot[mark=*,cyan,error bars/.cd,
y dir=both,y explicit]
coordinates {
(25,0.8128)+-(0,0.017)
(50,0.9924)+-(0,0.0027)
(75,1)+-(0,0)
(100,1)+-(0,0)
(125,1)+-(0,0)
(150,1)+-(0,0)
(175,1)+-(0,0)
(200,1)+-(0,0)
};
\end{axis}
\end{tikzpicture}
\end{minipage}%
\caption{Observed power versus sample size in Experiment \uppercase\expandafter{\romannumeral3}\ for $d=1,10,100,1000$ from left to right.}\label{Fg: adapt1}
\end{figure}
\begin{figure}[!htbp]
\begin{minipage}{0.285\textwidth}
\centering
\begin{tikzpicture}
\begin{axis}[
height=6.5cm,
width=4.75cm,
grid=major,
xlabel = $n$,
ylabel=Power,
xmax = 600,
xmin = 100,
ymax = 1,
ymin = 0,
xtick={200,400,600},
legend style={at={(0.98,0.02)},anchor=south east
}
]
\addplot[mark=triangle*,red,error bars/.cd,
y dir=both,y explicit]
coordinates {
(100,0.184)+-(0,0.039) (200,0.386)+-(0,0.034) (300,0.518)+-(0,0.029) (400,0.698)+-(0,0.023)
(500,0.83)+-(0,0.017) (600,0.89)+-(0,0.013)
}; \addlegendentry{Median} ;
\addplot[mark=diamond*,blue,error bars/.cd,
y dir=both,y explicit]
coordinates {
(100,0.164)+-(0,0.037) (200,0.352)+-(0,0.034) (300,0.484)+-(0,0.029) (400,0.634)+-(0,0.024)
(500,0.8)+-(0,0.018) (600,0.87)+-(0,0.014)
}; \addlegendentry{U.A.} ;
\addplot[mark=*,cyan,error bars/.cd,
y dir=both,y explicit]
coordinates {
(100,0.178)+-(0,0.038) (200,0.34)+-(0,0.033) (300,0.456)+-(0,0.029) (400,0.64)+-(0,0.024) (500,0.78)+-(0,0.019) (600,0.87)+-(0,0.014)
}; \addlegendentry{S.A.} ;
\end{axis}
\end{tikzpicture}
\end{minipage}%
\begin{minipage}{0.235\textwidth}
\centering
\begin{tikzpicture}
\begin{axis}[
xlabel=$n$,
height=6.5cm,
width=4.75cm,
grid=major,
xmax = 600,
xmin = 100,
ymax = 1,
ymin = 0,
xtick={200,400,600},
legend style={at={(1,0)},anchor=south east,
nodes={scale=0.7, transform shape}
}
]
\addplot[mark=triangle*,red,error bars/.cd,
y dir=both,y explicit]
coordinates {
(100,0.138)+-(0,0.034) (200,0.218)+-(0,0.029) (300,0.306)+-(0,0.027) (400,0.402)+-(0,0.025)
(500,0.531)+-(0,0.022) (600,0.688)+-(0,0.019)
};
\addplot[mark=diamond*,blue,error bars/.cd,
y dir=both,y explicit]
coordinates {
(100,0.356)+-(0,0.048) (200,0.662)+-(0,0.033) (300,0.852)+-(0,0.019) (400,0.968)+-(0,0.009) (500,1)+-(0,0) (600,1)+-(0,0)
};
\addplot[mark=*,cyan,error bars/.cd,
y dir=both,y explicit]
coordinates {
(100,0.256)+-(0,0.044) (200,0.556)+-(0,0.035) (300,0.796)+-(0,0.025) (400,0.904)+-(0,0.015)
(500,0.969)+-(0,0.008) (600,1)+-(0,0)
};
\end{axis}
\end{tikzpicture}
\end{minipage}%
\begin{minipage}{0.235\textwidth}
\centering
\begin{tikzpicture}
\begin{axis}[
height=6.5cm,
width=4.75cm,
grid=major,
xlabel=$n$,
xmax = 600,
xmin = 100,
ymax = 1,
ymin = 0,
xtick={200,400,600},
legend style={at={(1,0)},anchor=south east
}
]
\addplot[mark=triangle*,red,error bars/.cd,
y dir=both,y explicit]
coordinates {
(100,0.09)+-(0,0.029) (200,0.116)+-(0,0.023) (300,0.132)+-(0,0.02) (400,0.144)+-(0,0.018) (500,0.174)+-(0,0.017) (600,0.185)+-(0,0.016)
};
\addplot[mark=diamond*,blue,error bars/.cd,
y dir=both,y explicit]
coordinates {
(100,0.126)+-(0,0.033) (200,0.196)+-(0,0.028) (300,0.278)+-(0,0.026) (400,0.364)+-(0,0.024)
(500,0.457)+-(0,0.022) (600,0.63)+-(0,0.02)
};
\addplot[mark=*,cyan,error bars/.cd,
y dir=both,y explicit]
coordinates {
(100,0.264)+-(0,0.044) (200,0.564)+-(0,0.035) (300,0.794)+-(0,0.023) (400,0.916)+-(0,0.014) (500,0.978)+-(0,0.007) (600,1)+-(0,0)
};
\end{axis}
\end{tikzpicture}
\end{minipage}%
\begin{minipage}{0.235\textwidth}
\centering
\begin{tikzpicture}
\begin{axis}[
height=6.5cm,
width=4.75cm,
grid=major,
xlabel=$n$,
xmax = 600,
xmin = 100,
ymax = 1,
ymin = 0,
xtick={200,400,600},
legend style={at={(1,0)},anchor=south east,
nodes={scale=0.7, transform shape}
}
]
\addplot[mark=triangle*,red,error bars/.cd,
y dir=both,y explicit]
coordinates {
(100,0.106)+-(0,0.031) (200,0.108)+-(0,0.022) (300,0.086)+-(0,0.016) (400,0.096)+-(0,0.015) (500,0.12)+-(0,0.015) (600,0.14)+-(0,0.014)
};
\addplot[mark=diamond*,blue,error bars/.cd,
y dir=both,y explicit]
coordinates {
(100,0.116)+-(0,0.032) (200,0.116)+-(0,0.023) (300,0.086)+-(0,0.016) (400,0.102)+-(0,0.015) (500,0.12)+-(0,0.015) (600,0.16)+-(0,0.015)
};
\addplot[mark=*,cyan,error bars/.cd,
y dir=both,y explicit]
coordinates {
(100,0.218)+-(0,0.041) (200,0.356)+-(0,0.034) (300,0.55)+-(0,0.029) (400,0.668)+-(0,0.024) (500,0.79)+-(0,0.018) (600,0.91)+-(0,0.012)
};
\end{axis}
\end{tikzpicture}
\end{minipage}%
\caption{Observed power versus sample size in Experiment \uppercase\expandafter{\romannumeral4}\ for $d=2,10,100,1000$ from left to right.}\label{Fg: adapt2}
\end{figure}
As Figures \ref{Fg: adapt1} and \ref{Fg: adapt2} show, for both experiments, these tests are comparable in low-dimensional settings. But as $d$ increases, the proposed self-normalized adaptive test becomes more and more preferable to the two alternatives. For example, for Experiment \uppercase\expandafter{\romannumeral4}, when $d=1000$, the observed power of the proposed self-normalized adaptive test is about $90\%$ when $n=600$, while the other two tests have power around only $15\%$.
\subsection{Data Example}
Finally, we applied the proposed self-normalized adaptive test to a data example from \cite{mooij2016distinguishing}. The dataset consists of three variables, altitude (Alt), average temperature (Temp) and average duration of sunshine (Sun), recorded at different weather stations. One goal of interest is to infer the causal relationship among the three variables by identifying a suitable directed acyclic graph (DAG). Following \cite{peters2014causal}, if a set of random variables $X^1,\cdots,X^d$ follows a DAG $\mathcal G_0$, then we assume that they obey a system of additive models:
$$
X^l=\sum\limits_{r\in \mathrm{PA}^l}f_{l,r}(X^{r})+N^l,\quad\forall\ 1\leq l\leq d,
$$
where $N^l$'s are independent Gaussian noises and $\mathrm{PA}^l$ denotes the collection of parent nodes of node $l$ specified by $\mathcal G_0$. As shown by \citep{peters2014causal}, $\mathcal G_0$ is identifiable from the joint distribution of $X^1,\cdots,X^d$ under the assumption of $f_{l,r}$'s being non-linear. Therefore a natural method of deciding a specific DAG underlying a set of random variables is by testing the independence of the regression residuals after fitting the DAG induced additive models. In our case, there are totally $25$ possible DAGs for the three variables. We can apply independence tests for the residuals for each of the 25 DAGs and choose the one with the largest $p$-value as the most plausible underlying DAG. See \cite{peters2014causal} for more details.
As before, we considered three different tests of independence: the proposed self-normalized adaptive test (\verb+S.A.+), the Gaussian kernel embedding based independence test with the scaling parameter determined by the ``median'' heuristic (\verb+Median+), and the unnormalized adaptive test from \cite{sriperumbudur2009kernel} (\verb+U.A.+). Note that the three variables have different scales, and we standardized them before applying the tests of independence.
The overall sample size of the dataset is $349$. Each time we randomly selected $150$ samples and computed the $p$-value associated with each DAG. The $p$-value was again computed based on $100$ permutations. We repeated the experiment $1000$ times and recorded, for each test, the DAG with the largest $p$-value. All three tests agree on the three most frequently selected DAGs, which are shown in Figure \ref{fig:DAG}.
\begin{figure}[!htbp]
\begin{subfigure}[b]{.32\linewidth}
\resizebox{\linewidth}{!}{
\begin{tikzpicture}
\node[draw,circle,fill=blue!40,minimum size=1.5cm] (A) at (0,0) {Alt};
\node[draw,circle,fill=blue!40,minimum size=1.5cm] (T) at (-2,-4) {Temp};
\node[draw,circle,fill=blue!40,minimum size=1.5cm] (S) at (2,-4) {Sun};
\draw [-{Stealth[length=2mm]}]
(A) edge (T) (A) edge (S) (T) edge (S);
\end{tikzpicture}
}
\caption*{\textbf{DAG \uppercase\expandafter{\romannumeral1}}}
\end{subfigure}\hfill%
\begin{subfigure}[b]{.32\linewidth}
\resizebox{\linewidth}{!}{
\begin{tikzpicture}
\node[draw,circle,fill=blue!40,minimum size=1.5cm] (A) at (0,0) {Alt};
\node[draw,circle,fill=blue!40,minimum size=1.5cm] (T) at (-2,-4) {Temp};
\node[draw,circle,fill=blue!40,minimum size=1.5cm] (S) at (2,-4) {Sun};
\draw [-{Stealth[length=2mm]}]
(A) edge (T) (A) edge (S) (S) edge (T);
\end{tikzpicture}
}
\caption*{DAG \uppercase\expandafter{\romannumeral2}}
\end{subfigure}\hfill%
\begin{subfigure}[b]{.32\linewidth}
\resizebox{\linewidth}{!}{
\begin{tikzpicture}
\node[draw,circle,fill=blue!40,minimum size=1.5cm] (A) at (0,0) {Alt};
\node[draw,circle,fill=blue!40,minimum size=1.5cm] (T) at (-2,-4) {Temp};
\node[draw,circle,fill=blue!40,minimum size=1.5cm] (S) at (2,-4) {Sun};
\draw [-{Stealth[length=2mm]}]
(A) edge (T) (S) edge (A) (S) edge (T);
\end{tikzpicture}
}
\caption*{DAG \uppercase\expandafter{\romannumeral3}}
\end{subfigure}
\caption{The three DAGs with the highest probabilities of being selected.}
\label{fig:DAG}
\end{figure}
In addition, we report in Table \ref{tab:DAG} the frequencies with which these three DAGs were selected by each test. The results are generally comparable, with the proposed method selecting DAG I, the one heavily favored by all three methods, most consistently.
\begin{table}[!htbp]
\centering
\begin{tabular}{?c?C{3cm}C{3cm}C{3cm}?}
\thickhline
\diagbox{Test}{Prob($\%$)}{DAG} & I & II & III \\
\thickhline
Median & 78.5 &4.7& 14.5 \\
U.A. & 81.4& 8.1&8.5 \\
S.A. &83.4 &9.8&4.7\\
\thickhline
\end{tabular}
\caption{Frequency ($\%$) with which each DAG in Figure \ref{fig:DAG} was selected by the three tests.}\label{tab:DAG}
\end{table}
\section{Concluding Remarks}
\label{sec:disc}
In this paper, we provide a systematic investigation of the statistical properties of Gaussian kernel embedding based nonparametric tests. Our contribution is twofold.
First of all, we provide theoretical justification for this popular class of methods by showing that they are capable of detecting the smallest possible deviation from the null hypothesis in the context of goodness-of-fit, homogeneity, and independence testing. Our analyses also suggest that the existing theoretical studies do not fully explain the practical success of these methods: they assume a fixed kernel, or a fixed scaling parameter for the Gaussian kernel, whereas these methods, as we argue, are most powerful with a scaling parameter that varies with the sample size.
From a more practical viewpoint, we offer general guidelines on choosing the scaling parameter for Gaussian kernels: our results highlight the importance of using a larger scaling parameter for a larger sample size, and establish the relationship between the smoothness of the underlying densities and the appropriate scaling parameter. Furthermore, we introduce new adaptive testing procedures for goodness-of-fit, homogeneity, and independence, respectively, that are optimal, up to a polynomial of the iterated logarithm, for a wide range of smooth densities without requiring knowledge of the level of smoothness.
RKHS embedding has emerged as a powerful tool for nonparametric inference and has found success in numerous applications. Our work here provides insight into the operating characteristics of these methods and leads to improved testing procedures within the framework.
\section{Proofs}
\label{sec:proof}
Throughout this section, we shall write $a_n\lesssim b_n$ if there exists a universal constant $C>0$ such that $a_n\leq Cb_n$. Similarly, we write $a_n\gtrsim b_n$ if $b_n\lesssim a_n$, and $a_n\asymp b_n$ if $a_n\lesssim b_n$ and $a_n\gtrsim b_n$. When the constant depends on another quantity $D$, we shall write $a_n\lesssim_D b_n$. The relations $\gtrsim_D$ and $\asymp_D$ are defined accordingly.
\begin{proof}[Proof of Theorem \ref{th:gofnull}]
We begin with \eqref{eq:gofnull1}. Note that $\widehat{\gamma_{\nu_n}^2}(\mathbb P,\mathbb P_0)$ is a U-statistic. We can apply the general techniques for U-statistics to establish its asymptotic normality. In particular, as shown in \cite{hall1984central}, it suffices to verify the following four conditions:
\begin{align}
&\left(2\nu_n\over \pi\right)^{d/2}{\mathbb E} \bar{G}_{\nu_n}^2(X_1,X_2)\to\|p_0\|_{L_2}^2\label{lc0},\\
&{{\mathbb E} \bar{G}_{\nu_n}^4(X_1,X_2)\over n^2[{\mathbb E}\bar{G}_{\nu_n}^2(X_1,X_2)]^2}\rightarrow 0\label{lc1},\\
&{{\mathbb E} [\bar{G}_{\nu_n}^2(X_1,X_2)\bar{G}_{\nu_n}^2(X_1,X_3)]\over n[{\mathbb E}\bar{G}_{\nu_n}^2(X_1,X_2)]^2}\rightarrow 0\label{lc2},\\
&{{\mathbb E} H_{\nu_n}^2(X_1,X_2)\over [{\mathbb E}\bar{G}_{\nu_n}^2(X_1,X_2)]^2}\rightarrow 0\label{lc3},
\end{align}
as $n\rightarrow \infty$, where
$$
H_{\nu_n}(x,y)={\mathbb E}\bar{G}_{\nu_n}(x,X_3)\bar{G}_{\nu_n}(y,X_3),\quad\forall\ x,y\in {\mathbb R}^d.
$$
\paragraph{Verifying Condition \eqref{lc0}.} Note that
\begin{align*}
{\mathbb E}\bar{G}_{\nu_n}^2(X_1,X_2)={\mathbb E} G_{\nu_n}^2(X_1,X_2)-2{\mathbb E} \{{\mathbb E} [G_{\nu_n}(X_1,X_2)|X_1]\}^2+[{\mathbb E} G_{\nu_n}(X_1,X_2)]^2.
\end{align*}
By Lemma \ref{le:gausskernel},
\begin{align*}
{\mathbb E} G_{\nu_n}(X_1,X_2)=\left(\frac{\pi}{\nu_n}\right)^{\frac{d}{2}}\int\exp\left(-\frac{\|\omega\|^2}{4\nu_n}\right)\left\|\mathcal F{p_0}(\omega)\right\|^2d \omega,
\end{align*}
which immediately yields
$$
\left(\frac{\nu_n}{\pi}\right)^{\frac{d}{2}}{\mathbb E} G_{\nu_n}(X_1,X_2)\to \|p_0\|_{L_2}^2
$$
and
$$
\left(\frac{2\nu_n}{\pi}\right)^{\frac{d}{2}}{\mathbb E} G^2_{\nu_n}(X_1,X_2)=\left(\frac{2\nu_n}{\pi}\right)^{\frac{d}{2}}{\mathbb E} G_{2\nu_n}(X_1,X_2)\to \|p_0\|_{L_2}^2,
$$
as $\nu_n\to\infty$.
On the other hand,
\begin{align*}
&{\mathbb E} \{{\mathbb E} [G_{\nu_n}(X_1,X_2)|X_1]\}^2\\=&\int \left(\int G_{\nu_n}(x,x')G_{\nu_n}(x,x'')p_0(x)d x\right)p_0(x')p_0(x'')d x'd x''\\
=&\int \left(\int G_{2\nu_n}(x,(x'+x'')/2)p_0(x)d x\right)G_{\nu_n/2}(x',x'')p_0(x')p_0(x'')d x'd x''.
\end{align*}
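For the reader's convenience, we record the two elementary Gaussian-kernel identities used in the last two displays (and repeatedly below). Assuming the normalization $G_\nu(x,y)=\exp(-\nu\|x-y\|^2)$, which is consistent with the factors $(\pi/\nu)^{d/2}$ appearing throughout, completing the square gives
$$
G_{\nu}^2(x,y)=G_{2\nu}(x,y),\qquad
G_{\nu}(x,x')\,G_{\nu}(x,x'')=G_{2\nu}\Big(x,\frac{x'+x''}{2}\Big)\,G_{\nu/2}(x',x''),
$$
the latter because $\|x-x'\|^2+\|x-x''\|^2=2\big\|x-\frac{x'+x''}{2}\big\|^2+\frac{1}{2}\|x'-x''\|^2$.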
Let $Z\sim N(0,4\nu_nI_d)$. Then
\begin{align*}
\int G_{2\nu_n}(x,(x'+x'')/2)p_0(x)d x&=(2\pi)^{d/2}{\mathbb E}\left[\mathcal F{p}_0(Z)\exp\left(i\left\langle\frac{x'+x''}{2},Z\right\rangle\right)\right]\\
&\leq (2\pi)^{d/2}\sqrt{{\mathbb E}\left\|\mathcal F{p}_0(Z)\right\|^2}\\
&\lesssim_d \|p_0\|_{L_2}/\nu_n^{d/4}.
\end{align*}
Thus
$$
{\mathbb E} \{{\mathbb E} [G_{\nu_n}(X_1,X_2)|X_1]\}^2\lesssim_d \|p_0\|_{L_2}^3/\nu_n^{3d/4}.
$$
Condition \eqref{lc0} then follows.
\paragraph{Verifying Conditions \eqref{lc1} and \eqref{lc2}.} Since
$$
{\mathbb E} \bar{G}_{\nu_n}^2(X_1,X_2)\asymp_{d,p_0} \nu_n^{-d/2}
$$
and
$$
{\mathbb E}\bar{G}_{\nu_n}^4(X_1,X_2)\lesssim {\mathbb E} G_{\nu_n}^4(X_1,X_2)\lesssim_d \nu_n^{-d/2},
$$
we obtain
$$
n^{-2}{\mathbb E} \bar{G}_{\nu_n}^4(X_1,X_2)/({\mathbb E}\bar{G}_{\nu_n}^2(X_1,X_2))^2\lesssim_{d,p_0} \nu_n^{d/2}/n^{2}\rightarrow 0.
$$
Similarly,
\begin{align*}
{\mathbb E} \bar{G}_{\nu_n}^2(X_1,X_2)\bar{G}_{\nu_n}^2(X_1,X_3)&\lesssim {\mathbb E} G_{\nu_n}^2(X_1,X_2)G_{\nu_n}^2(X_1,X_3)\\
&={\mathbb E} G_{2\nu_n}(X_1,X_2)G_{2\nu_n}(X_1,X_3)\\
&\lesssim_{d,p_0} \nu_n^{-3d/4}.
\end{align*}
This implies
$$
n^{-1}{\mathbb E}\bar{G}_{\nu_n}^2(X_1,X_2)\bar{G}_{\nu_n}^2(X_1,X_3)/({\mathbb E}\bar{G}_{\nu_n}^2(X_1,X_2))^2\lesssim_{d,p_0} \nu_n^{d/4}/n\rightarrow 0,
$$
which verifies \eqref{lc2}.
\paragraph{Verifying Condition \eqref{lc3}.} It suffices to show
$$
\nu_n^{d}{\mathbb E}({\mathbb E}(\bar{G}_{\nu_n}(X_1,X_2)\bar{G}_{\nu_n}(X_1,X_3)|X_2,X_3))^2\rightarrow 0
$$
as $n\rightarrow \infty$.
Note that
\begin{align*}
&{\mathbb E}({\mathbb E}(\bar{G}_{\nu_n}(X_1,X_2)\bar{G}_{\nu_n}(X_1,X_3)|X_2,X_3))^2\\\lesssim &{\mathbb E}({\mathbb E}(G_{\nu_n}(X_1,X_2)G_{\nu_n}(X_1,X_3)|X_2,X_3))^2\\=& {\mathbb E} G_{\nu_n}(X_1,X_2)G_{\nu_n}(X_1,X_3)G_{\nu_n}(X_4,X_2)G_{\nu_n}(X_4,X_3)\\
=&{\mathbb E}(G_{\nu_n}(X_1,X_4)G_{\nu_n}(X_2,X_3){\mathbb E}(G_{\nu_n}(X_1+X_4,X_2+X_3)|X_1-X_4,X_2-X_3)).
\end{align*}
Since for any $\delta>0$,
\begin{eqnarray*}
\nu_n^d{\mathbb E}(G_{\nu_n}(X_1,X_4)G_{\nu_n}(X_2,X_3){\mathbb E}(G_{\nu_n}(X_1+X_4,X_2+X_3)|X_1-X_4,X_2-X_3)\\
(\mathds{1}_{\{\|X_1-X_4\|>\delta\}}+\mathds{1}_{\{\|X_2-X_3\|>\delta\}}))\rightarrow 0,
\end{eqnarray*}
it remains to show that
\begin{eqnarray*}
\nu_n^d{\mathbb E}(G_{\nu_n}(X_1,X_4)G_{\nu_n}(X_2,X_3){\mathbb E}(G_{\nu_n}(X_1+X_4,X_2+X_3)|X_1-X_4,X_2-X_3)\\
\mathds{1}_{\{\|X_1-X_4\|\leq\delta,\|X_2-X_3\|\leq \delta\}})\rightarrow 0
\end{eqnarray*}
for some $\delta>0$, which holds as long as
\begin{align}
{\mathbb E}(G_{\nu_n}(X_1+X_4,X_2+X_3)|X_1-X_4,X_2-X_3)\rightarrow 0\label{uc}
\end{align}
uniformly on $\{\|X_1-X_4\|\leq \delta,\|X_2-X_3\|\leq \delta\}$.
Let
$$
Y_1=X_1-X_4,\quad Y_2=X_2-X_3,\quad Y_3=X_1+X_4,\quad Y_4=X_2+X_3.
$$
Then
\begin{align*}
&{\mathbb E}(G_{\nu_n}(X_1+X_4,X_2+X_3)|X_1-X_4,X_2-X_3)\\= &\left(\frac{\pi}{\nu_n}\right)^{\frac{d}{2}}\int\exp\left(-\frac{\|\omega\|^2}{4\nu_n}\right)\mathcal F{p_{Y_1}}(\omega)\overline{\mathcal F{p_{Y_2}}}(\omega)d \omega\\
\leq&\sqrt{\left(\frac{\pi}{\nu_n}\right)^{\frac{d}{2}}\int\exp\left(-\frac{\|\omega\|^2}{4\nu_n}\right)\left\|\mathcal F{p_{Y_1}}(\omega)\right\|^2d \omega}\sqrt{\left(\frac{\pi}{\nu_n}\right)^{\frac{d}{2}}\int\exp\left(-\frac{\|\omega\|^2}{4\nu_n}\right)\left\|\mathcal F{p_{Y_2}}(\omega)\right\|^2d \omega}
\end{align*}
where
\begin{align*}
p_{y}(y')=\frac{p(Y_1=y,Y_3=y')}{p(Y_1=y)}=\frac{p_0\left(\frac{y+y'}{2}\right)p_0\left(\frac{y'-y}{2}\right)}{\int p_0\left(\frac{y+y'}{2}\right)p_0\left(\frac{y'-y}{2}\right)d y'}
\end{align*}
is the conditional density of $Y_3$ given $Y_1=y$. Thus to prove (\ref{uc}), it suffices to show
\begin{align*}
h_n(y)&:=\left(\frac{\pi}{\nu_n}\right)^{\frac{d}{2}}\int\exp\left(-\frac{\|\omega\|^2}{4\nu_n}\right)\left\|\mathcal F{p_y}(\omega)\right\|^2d \omega\\
&=\pi^{\frac{d}{2}}\int\exp\left(-\frac{\|\omega\|^2}{4}\right)\left\|\mathcal F{p_y}(\sqrt{\nu_n}\omega)\right\|^2d \omega\\
&\rightarrow 0
\end{align*}
uniformly over $\{y:\ \|y\|\leq \delta\}$.
Note that
$$
h_n(y)={\mathbb E} G_{\nu_n}(X,X')
$$
where $X,X'\sim_{\rm iid} p_y$, which suggests
$
h_n(y)\rightarrow 0
$
pointwise. To prove the uniform convergence of $h_n(y)$, it suffices to show
$$
\lim\limits_{y_1\rightarrow y}\sup\limits_{n}|h_n(y_1)-h_n(y)|=0
$$
for any $y$.
Since $p_0\in L_2$, the marginal density of $Y_1$ is continuous. Therefore, the almost sure continuity of $p_0$ immediately implies that for every $y$,
$
p_{y_1}(\cdot)\rightarrow p_y(\cdot)
$
almost everywhere as $y_1\rightarrow y$. Since $p_{y_1}$ and $p_y$ are both densities, Scheff\'e's lemma then yields
$$
|\mathcal F{p_{y_1}}(\omega)-\mathcal F{p_y}(\omega)|\leq (2\pi)^{-d/2}\int |p_{y_1}(y')-p_y(y')|d y'\rightarrow 0,
$$
\textit{i.e.}, $\mathcal F{p_{y_1}}\rightarrow \mathcal F{p_y}$ uniformly as $y_1\rightarrow y$. Therefore we have
$$
\sup_{n\geq 1}|h_n(y_1)-h_n(y)|\lesssim\left\|\mathcal F{p_{y_1}}-\mathcal F{p_y}\right\|_{L_\infty}\rightarrow 0,
$$
which ensures the uniform convergence of $h_n(y)$ to $0$ over $\{y:\ \|y\|\leq \delta\}$, and hence (\ref{lc3}).
In fact, we have shown that
$$
\frac{n\widehat{\gamma_{\nu_n}^2}(\mathbb P,\mathbb P_0)}{\sqrt{2{\mathbb E}[\bar{G}_{\nu_n}(X_1,X_2)]^2}}\to_d N(0,1).
$$
By Slutsky's theorem, in order to prove (\ref{eq:gofnull2}), it suffices to show
$$
\widehat{s}_{n,\nu_n}^2/{\mathbb E}[\bar{G}_{\nu_n}(X_1,X_2)]^2\to_p1,
$$
which is equivalent to
\begin{align}\label{consistent-est}
\tilde{s}_{n,\nu_n}^2/{\mathbb E}[\bar{G}_{\nu_n}(X_1,X_2)]^2\to_p 1
\end{align}
since $1/n^2=o({\mathbb E}[\bar{G}_{\nu_n}(X_1,X_2)]^2)$.
It follows from
$$
{\mathbb E} \left(\tilde{s}_{n,\nu_n}^2\right)={\mathbb E}[\bar{G}_{\nu_n}(X_1,X_2)]^2
$$
and
\begin{align*}
&{\rm var}\left(\tilde{s}_{n,\nu_n}^2\right)\\\lesssim &n^{-4}{\rm var}\left(\sum\limits_{1\leq i\neq j\leq n}G_{2\nu_n}(X_i,X_j)\right)+n^{-6}{\rm var}\left(\sum\limits_{\substack{1\le i,j_1,j_2\le n\\ |\{i,j_1,j_2\}|=3}}G_{\nu_n}(X_i,X_{j_1})G_{\nu_n}(X_i,X_{j_2})\right)\\&+n^{-8}{\rm var}\left(\sum\limits_{\substack{1\le i_1,i_2,j_1,j_2\le n\\ |\{i_1,i_2,j_1,j_2\}|=4}}G_{\nu_n}(X_{i_1},X_{j_1})G_{\nu_n}(X_{i_2},X_{j_2})\right)\\
\lesssim &n^{-2}{\mathbb E} G_{4\nu_n}(X_1,X_2)+n^{-1}{\mathbb E} G_{2\nu_n}(X_1,X_2)G_{2\nu_n}(X_1,X_3)+n^{-1}({\mathbb E} G_{2\nu_n}(X_1,X_2))^2\\
=\ &o(({\mathbb E}[\bar{G}_{\nu_n}(X_1,X_2)]^2)^2)
\end{align*}
that (\ref{consistent-est}) holds.
\end{proof}
\vskip 25pt
\begin{proof}[Proof of Theorem \ref{th:gofpower}] Recall that
\begin{align*}
\widehat{\gamma_{\nu_n}^2}(\mathbb P,\mathbb P_0)=&\frac{1}{n(n-1)}\sum_{i\neq j}\bar{G}_{\nu_n}(X_i,X_j;\mathbb P_0)\\
=&\gamma_{\nu_n}^2(\mathbb P,\mathbb P_0)+\frac{1}{n(n-1)}\sum_{i\neq j}\bar{G}_{\nu_n}(X_i,X_j;\mathbb P)\\
&+{2\over n}\sum_{i=1}^n\biggl({\mathbb E}_{X\sim \mathbb P} [G_{\nu_n}(X_i,X)|X_i]-{\mathbb E}_{X\sim \mathbb P_0} [G_{\nu_n}(X_i,X)|X_i]\\
&-{\mathbb E}_{X,X'\sim_{\rm iid} \mathbb P} G_{\nu_n}(X,X')+{\mathbb E}_{(X,Y)\sim \mathbb P\otimes\mathbb P_0} G_{\nu_n}(X,Y)\biggr).
\end{align*}
Denote the last two terms on the right-hand side by $V_{\nu_n}^{(1)}$ and $V_{\nu_n}^{(2)}$, respectively. It is clear that ${\mathbb E} V_{\nu_n}^{(1)}={\mathbb E} V_{\nu_n}^{(2)}=0$. Then it suffices to show that
\begin{equation}
\sup_{\substack{p\in \mathcal W^{s,2}(M)\\ \|p-p_0\|\ge \Delta_n}}\frac{{\mathbb E}\left( V_{\nu_n}^{(1)}\right)^2+{\mathbb E} \left(V_{\nu_n}^{(2)}\right)^2}{\gamma_{\nu_n}^4(\mathbb P,\mathbb P_0)}\to 0\label{lc4}
\end{equation}
and
\begin{equation}
\inf_{\substack{p\in \mathcal W^{s,2}(M)\\ \|p-p_0\|\ge \Delta_n}}\frac{n\gamma^2_{\nu_n}(\mathbb P,\mathbb P_0)}{\sqrt{
{\mathbb E}\left(\widehat{s}_{n,\nu_n}^2\right)
}}\to\infty\label{lc5}
\end{equation}
as $n\to\infty$.
We first prove (\ref{lc4}). Write $f=p-p_0$ and note that $\|p\|_{L_2}\le \|p\|_{\mathcal W^{s,2}}\le M$. Following arguments similar to those in the proof of Theorem \ref{th:gofnull}, we get
$$
{\mathbb E}\left( V_{\nu_n}^{(1)}\right)^2\lesssim n^{-2}{\mathbb E} G_{\nu_n}^2(X_1,X_2)\lesssim_d M^2 n^{-2}\nu_n^{-d/2},
$$
and
\begin{align*}
{\mathbb E}\left( V_{\nu_n}^{(2)}\right)^2&\leq {4\over n}{\mathbb E} \left[{\mathbb E}_{X\sim \mathbb P} [G_{\nu_n}(X_i,X)|X_i]-{\mathbb E}_{X\sim \mathbb P_0} [G_{\nu_n}(X_i,X)|X_i]\right]^2\\
&={4\over n}\int \left(\int G_{2\nu_n}(x,(x'+x'')/2)p(x)d x\right)G_{\nu_n/2}(x',x'')f(x')f(x'')d x'd x''\\
&\lesssim_d {4M\over n\nu_n^{d/4}} \int G_{\nu_n/2}(x',x'')|f(x')||f(x'')|d x'd x''\\
&\lesssim_d {4M\over n\nu_n^{3d/4}}\|f\|_{L_2}^2.
\end{align*}
By Lemma \ref{le:gaussmmd}, there exists a constant $C>0$ depending on $s$ and $M$ only such that for $f\in \mathcal W^{s,2}(M)$,
\begin{align*}
\int\exp\left(-\frac{\|\omega\|^2}{4\nu_n}\right)\left\|\mathcal F{f}(\omega)\right\|^2 d \omega\geq \frac{1}{4}\|f\|_{L_2}^2
\end{align*}
given that $\nu_n\geq C\|f\|_{L_2}^{-2/s}$. Because $\nu_n\Delta_n^{2/s}\rightarrow \infty$, we obtain
$$
\gamma_{\nu_n}^2(\mathbb P,\mathbb P_0)\gtrsim_d \nu_n^{-d/2}\|f\|_{L_2}^2,
$$
for sufficiently large $n$. Thus
$$
\sup_{\substack{p\in \mathcal W^{s,2}(M)\\ \|p-p_0\|\ge \Delta_n}}\frac{{\mathbb E}\left( V_{\nu_n}^{(1)}\right)^2}{\gamma_{\nu_n}^4(\mathbb P,\mathbb P_0)}\lesssim_d M^2(n^2\nu_n^{-d/2}\Delta_n^4)^{-1}\rightarrow 0
$$
and
$$
\sup_{\substack{p\in \mathcal W^{s,2}(M)\\ \|p-p_0\|\ge \Delta_n}}\frac{{\mathbb E} \left(V_{\nu_n}^{(2)}\right)^2}{\gamma_{\nu_n}^4(\mathbb P,\mathbb P_0)}\lesssim_d M(n\nu_n^{-d/4}\Delta_n^2)^{-1}\rightarrow 0,
$$
as $n\rightarrow \infty$.
Next we prove (\ref{lc5}). It follows from
$$
{\mathbb E}\left(\widehat{s}_{n,\nu_n}^2\right)\leq {\mathbb E}\max\left\{\left|\tilde{s}_{n,\nu_n}^2\right|,1/n^2\right\}\lesssim{\mathbb E} G_{2\nu_n}(X_1,X_2)+1/n^2\lesssim_d M^2\nu_n^{-d/2}+1/n^2
$$
that (\ref{lc5}) holds.
\end{proof}
\vskip 25pt
\begin{proof}[Proof of Theorem \ref{th:goflower}]
This, in a certain sense, can be viewed as an extension of results from \cite{ingster1987minimax}, and the proof proceeds in a similar fashion. While \cite{ingster1987minimax} considered the case when $p_0$ is the uniform distribution on $[0,1]$, we shall show that similar bounds hold for a wider class of $p_0$.
For any $M>0$ and $p_0$ such that $\|p_0\|_{\mathcal W^{s,2}}<M$, let
\begin{align*}
H_1^{\rm GOF}(\Delta_n;s,M-\|p_0\|_{\mathcal W^{s,2}})^*&\\
:=\{p\in\mathcal W^{s,2}:&\ \|p-p_0\|_{\mathcal W^{s,2}}\leq M-\|p_0\|_{\mathcal W^{s,2}},\ \|p-p_0\|_{L_2}\geq \Delta_n\}.
\end{align*}
It is clear that $H_1^{\rm GOF}(\Delta_n;s)\supset H_1^{\rm GOF}(\Delta_n;s,M-\|p_0\|_{\mathcal W^{s,2}})^*$. Hence it suffices to prove Theorem \ref{th:goflower} with $H_1^{\rm GOF}(\Delta_n;s)$ replaced by $H_1^{\rm GOF}(\Delta_n;s,M)^*$ for an arbitrary $M>0$. We shall abbreviate $H_1^{\rm GOF}(\Delta_n;s,M)^*$ as $H_1^{\rm GOF}(\Delta_n;s)^*$ in the rest of the proof.
Since $p_0$ is almost surely continuous, there exists $x_0\in{\mathbb R}^d$ and $\delta,c>0$ such that
$$
p_0(x)\geq c>0,\quad\forall\ \|x-x_0\|\leq \delta.
$$
In light of this, we shall assume without loss of generality that $p_0(x)\geq c>0$ for all $x\in[0,1]^d$.
Let $\bm{a}_n$ be a multivariate random index. As proved in \cite{ingster1987minimax}, in order to prove the existence of $\alpha\in(0,1)$ such that no asymptotic $\alpha$-level test can be consistent, it suffices to identify $p_{n,\bm{a}_n} \in H_1^{\rm GOF}(\Delta_n;s)^*$ for all possible values of $\bm{a}_n$
such that \begin{align}\label{l2b}
{\mathbb E}_{p_0}\left(\frac{p_n(X_1,\cdots,X_n)}{\prod_{i=1}^n p_0(X_i)}\right)^2=O(1),
\end{align}
where
$$
p_n(x_1,\cdots,x_n)={\mathbb E}_{\bm{a}_n}\left(\prod\limits_{i=1}^np_{n,\bm{a}_n}(x_i)\right),\ \forall\ x_1,\cdots,x_n,
$$
\textit{i.e.}, $p_n$ is the mixture of all the $p_{n,\bm{a}_n}$'s.
Let $\mathds{1}_{\{x\in [0,1]^d\}},\phi_{n,1},\cdots,\phi_{n,B_n}$ be an orthonormal set of functions in $L_2(\mathbb{R}^d)$ such that the supports of $\phi_{n,1},\cdots,\phi_{n,B_n}$ are disjoint and all included in $[0,1]^d$. Let $\bm{a}_n=(a_{n,1},\cdots,a_{n,B_n})$ be such that $a_{n,1},\cdots,a_{n,B_n}$ are independent and
$$
P(a_{n,k}=1)=P(a_{n,k}=-1)=\frac{1}{2},\quad \forall\ 1\leq k\leq B_n.
$$
Define
$$
p_{n,\bm{a}_n}=p_0+r_n\sum\limits_{k=1}^{B_n}a_{n,k}\phi_{n,k}.
$$
Then
$$
\frac{p_{n,\bm{a}_n}}{p_0}=1+r_n\sum\limits_{k=1}^{B_n}a_{n,k}\frac{\phi_{n,k}}{p_0},
$$
where $1,\frac{\phi_{n,1}}{p_0},\cdots,\frac{\phi_{n,B_n}}{p_0}$ are orthogonal in $L_2(P_0)$.
By arguments similar to those in \cite{ingster1987minimax}, we find
\begin{align*}
{\mathbb E}_{p_0}\left(\frac{p_n(X_1,\cdots,X_n)}{\prod_{i=1}^n p_0(X_i)}\right)^2&\leq \exp\left(\frac{1}{2}B_nn^2r_n^4\max_{1\leq k\leq B_n}\left(\int \phi_{n,k}^2/p_0d x\right)^2\right)\\
&\leq \exp\left(\frac{1}{2c^2}B_nn^2r_n^4\right).
\end{align*}
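The second inequality deserves a word: each $\phi_{n,k}$ is supported in $[0,1]^d$, where $p_0\geq c$ by the reduction above, and has unit $L_2$ norm, so
$$
\int \frac{\phi_{n,k}^2}{p_0}\,d x\leq \frac{1}{c}\int \phi_{n,k}^2\,d x=\frac{1}{c},\quad\forall\ 1\leq k\leq B_n.
$$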
In order to ensure (\ref{l2b}), it suffices to have
\begin{equation}\label{condition}
B_n^{1/2}nr_n^2=O(1).
\end{equation}
Therefore, given $\Delta_n=O\left(n^{-\frac{2s}{4s+d}}\right)$, once we can find proper $r_n$, $B_n$ and $\phi_{n,1},\cdots,\phi_{n,B_n}$ such that $p_{n,\bm{a}_n}\in H_1^{\rm GOF}(\Delta_n;s)^*$ for all $\bm{a}_n$ and (\ref{condition}) holds, the proof is finished.
Let $b_n=B_n^{1/d}$, let $\phi$ be an infinitely differentiable function supported on $[0,1]^d$ that is orthogonal to $\mathds{1}_{\{x\in [0,1]^d\}}$ in $L_2$, and for each $x_{n,k}\in\{0,1,\cdots,b_n-1\}^{d}$, let
$$
\phi_{n,k}(x)=\frac{b_n^{d/2}}{\|\phi\|_{L_2}}\phi(b_nx-x_{n,k}),\quad \forall\ x\in\mathbb{R}^d.
$$
Then all $\phi_{n,k}$'s are supported on $[0,1]^d$ and
\begin{align*}
&\langle \phi_{n,k},1\rangle_{L_2}=\frac{b_n^{d/2}}{\|\phi\|_{L_2}}\int_{\mathbb{R}^d}\phi(b_nx-x_{n,k})dx=\frac{1}{b_n^{d/2}\|\phi\|_{L_2}}\int_{\mathbb{R}^d}\phi(x)dx=0,\\
&\|\phi_{n,k}\|_{L_2}^2=\frac{b_n^d}{\|\phi\|_{L_2}^2}\int_{[0,1/b_n]^d}\phi^2(b_nx)d x=1,\\
&\|\phi_{n,k}\|_{\mathcal W^{s,2}}^2\leq b_n^{2s}\frac{\|\phi\|_{\mathcal W^{s,2}}^2}{\|\phi\|_{L_2}^2}.
\end{align*}
Since for $k\neq k'$, the supports of $\phi_{n,k}$ and $\phi_{n,k'}$ are disjoint,
$$
\|p_{n,\bm{a}_n}-p_0\|_{\infty}=r_nb_n^{d/2}\frac{\|\phi\|_{\infty}}{\|\phi\|_{L_2}},
$$
and
$$
\langle \phi_{n,k},\phi_{n,k'}\rangle_{L_2}=0,\qquad \langle \phi_{n,k},\phi_{n,k'}\rangle_{\mathcal W^{s,2}}=0,
$$
from which we immediately obtain
\begin{align*}
&\|p_{n,\bm{a}_n}-p_0\|_{L_2}^2=r_n^2b_n^d\\
&\|p_{n,\bm{a}_n}-p_0\|_{\mathcal W^{s,2}}^2\leq r_n^2b_n^{d+2s}\frac{\|\phi\|_{\mathcal W^{s,2}}^2}{\|\phi\|_{L_2}^2}.
\end{align*}
To ensure $p_{n,\bm{a}_n}\in H_1^{\rm GOF}(\Delta_n;s)^*$, it suffices to make
\begin{align}
&r_nb_n^{d/2}\frac{\|\phi\|_{\infty}}{\|\phi\|_{L_2}}\rightarrow 0\ \text{as}\ n\rightarrow \infty,\label{condition2}\\& r_n^2b_n^d=\Delta_n^2,\label{condition3}\\ &r_n^2b_n^{d+2s}\frac{\|\phi\|_{\mathcal W^{s,2}}^2}{\|\phi\|_{L_2}^2}\leq M^2.\label{condition4}
\end{align}
Let
$$
b_n=\left\lfloor\left(\frac{M\|\phi\|_{L_2}^2}{\|\phi\|_{\mathcal W^{s,2}}}\right)^{1/s}\Delta_n^{-1/s}\right\rfloor,\quad r_n=\frac{\Delta_n}{b_n^{d/2}}.
$$
Then (\ref{condition3}) and (\ref{condition4}) are satisfied. Moreover, given $\Delta_n=O\left(n^{-\frac{2s}{4s+d}}\right)$,
$$
B_n^{1/2}nr_n^2=b_n^{-d/2}n\Delta_n^{2}\lesssim_{d,\phi,M}n\Delta_n^{\frac{4s+d}{2s}}=O(1),
$$
and
$$
r_nb_n^{d/2}\frac{\|\phi\|_{\infty}}{\|\phi\|_{L_2}}\lesssim_{\phi}\Delta_n=o(1)
$$
ensuring both (\ref{condition}) and (\ref{condition2}).
Finally, we show the existence of such $\phi$. Let
$$
\phi_0(x_1)=\begin{cases}
\exp\left(-\frac{1}{1-(4x_1-1)^2}\right) &0<x_1<\frac{1}{2}\\
-\exp\left(-\frac{1}{1-(4x_1-3)^2}\right) &\frac{1}{2}<x_1<1\\
0&\text{otherwise}
\end{cases}.
$$
Then $\phi_0$ is supported on $[0,1]$, infinitely differentiable and orthogonal to the indicator function of $[0,1]$.
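The orthogonality follows from a one-line symmetry computation: substituting $x_1\mapsto 1-x_1$ in the definition shows that $\phi_0$ is antisymmetric about $x_1=\frac12$, so
$$
\phi_0(1-x_1)=-\phi_0(x_1),\qquad \langle \phi_0,\mathds{1}_{[0,1]}\rangle_{L_2}=\int_0^1\phi_0(x_1)\,d x_1=0.
$$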
Let
$$
\phi(x)=\prod\limits_{l=1}^{d}\phi_0(x_l),\quad \forall\ x=(x_1,\cdots,x_d)\in\mathbb{R}^d.
$$
Then $\phi$ is supported on $[0,1]^d$, infinitely differentiable and
$
\langle \phi, 1\rangle_{L_2}=\langle \phi_0,1\rangle_{L_2[0,1]}^d=0.
$
\end{proof}
\vskip 25pt
\begin{proof}[Proof of Theorem \ref{th:homnull}]
Let $N=m+n$ denote the total sample size. It suffices to prove the result under the assumption that $n/N\rightarrow r\in(0,1)$.
Note that under $H_0$,
\begin{align*}
\widehat{\gamma_{\nu_n}^2}(\mathbb P,\mathbb Q)=&{1\over n(n-1)}\sum_{1\leq i\neq j\leq n} \bar{G}_{\nu_n}(X_i,X_j)+{1\over m(m-1)}\sum_{1\leq i\neq j\leq m} \bar{G}_{\nu_n}(Y_i,Y_j)\\&-{2\over nm}\sum_{1\leq i\leq n}\sum\limits_{1\leq j\leq m} \bar{G}_{\nu_n}(X_i,Y_j).
\end{align*}
Let $n/N=r_n$. Then we have
\begin{align*}
&\widehat{\gamma_{\nu_n}^2}(\mathbb P,\mathbb Q)\\=&N^{-2}\left({1\over r_n(r_n-N^{-1})}\sum_{1\leq i\neq j\leq n} \bar{G}_{\nu_n}(X_i,X_j)\right.+\\&\left.{1\over (1-r_n)(1-r_n-N^{-1})}\sum_{1\leq i\neq j\leq m} \bar{G}_{\nu_n}(Y_i,Y_j)-{2\over r_n(1-r_n)}\sum\limits_{1\leq i\leq n}\sum\limits_{1\leq j\leq m} \bar{G}_{\nu_n}(X_i,Y_j)\right).
\end{align*}
Let
\begin{align*}
\widehat{\gamma_{\nu_n}^2}(\mathbb P,\mathbb Q)'=&N^{-2}\left({1\over r^2}\sum_{1\leq i\neq j\leq n} \bar{G}_{\nu_n}(X_i,X_j)+{1\over (1-r)^2}\sum_{1\leq i\neq j\leq m} \bar{G}_{\nu_n}(Y_i,Y_j)\right.\\
&\left.-{2\over r(1-r)}\sum_{1\leq i\leq n}\sum\limits_{1\leq j\leq m} \bar{G}_{\nu_n}(X_i,Y_j)\right).
\end{align*}
As we assume $r_n\rightarrow r$ as $n\rightarrow \infty$, Theorem \ref{th:gofnull} ensures that
$$
\frac{nm}{\sqrt{2}(n+m)}\left[{\mathbb E} \bar{G}_{\nu_n}^2(X_1,X_2)\right]^{-\frac{1}{2}}\left(\widehat{\gamma_{\nu_n}^2}(\mathbb P,\mathbb Q)-\widehat{\gamma_{\nu_n}^2}(\mathbb P,\mathbb Q)'\right)=o_p(1).
$$
A slight adaptation of the arguments in \cite{hall1984central} shows that
\begin{align}\label{lc6}
\frac{{\mathbb E} \bar{G}_{\nu_n}^4(X_1,X_2)}{N^2{\mathbb E}\bar{G}_{\nu_n}^2(X_1,X_2)}+\frac{{\mathbb E} \bar{G}_{\nu_n}^2(X_1,X_2)\bar{G}_{\nu_n}^2(X_1,X_3)}{N{\mathbb E}\bar{G}_{\nu_n}^2(X_1,X_2)}+\frac{{\mathbb E} H_{\nu_n}^2(X_1,X_2)}{{\mathbb E}\bar{G}_{\nu_n}^2(X_1,X_2)}\to 0
\end{align}
ensures that
$$
\frac{nm}{\sqrt{2}(n+m)}\left[{\mathbb E} \bar{G}_{\nu_n}^2(X_1,X_2)\right]^{-\frac{1}{2}}\widehat{\gamma_{\nu_n}^2}(\mathbb P,\mathbb Q)'\to_d N(0,1).
$$
Following arguments similar to those in the proof of Theorem \ref{th:gofnull}, given $\nu_n\rightarrow \infty$ and $\nu_n/n^{4/d}\rightarrow 0$, (\ref{lc6}) holds and therefore
$$
\frac{nm}{\sqrt{2}(n+m)}\left[{\mathbb E} \bar{G}_{\nu_n}^2(X_1,X_2)\right]^{-\frac{1}{2}}\widehat{\gamma_{\nu_n}^2}(\mathbb P,\mathbb Q)\to_d N(0,1).
$$
Additionally, based on the same arguments as in the proof of Theorem \ref{th:gofnull},
$$
\widehat{s}_{n,m,\nu_n}^2/{\mathbb E}[\bar{G}_{\nu_n}(X_1,X_2)]^2\to_p 1.
$$
The proof is therefore concluded.
\end{proof}
\vskip 25pt
\begin{proof}[Proof of Theorem \ref{th:hompower}] With slight abuse of notation, we shall write
$$
\bar{G}_{\nu_n}(x,y;\mathbb P,\mathbb Q)=G_{\nu_n}(x,y)-{\mathbb E}_{Y\sim\mathbb Q} G_{\nu_n}(x,Y)-{\mathbb E}_{X\sim \mathbb P} G_{\nu_n}(X,y)+{\mathbb E}_{(X,Y)\sim \mathbb P\otimes\mathbb Q} G_{\nu_n}(X,Y).
$$
We consider the two parts separately.
\paragraph{Part (\romannumeral1).} We first verify the consistency of $\Phi_{n,\nu_n,\alpha}^{\mathrm{HOM}}$ with $\nu_n\asymp n^{4/(d+4s)}$ given $\Delta_n\gg n^{-2s/(d+4s)}$.
Observe the following decomposition of $\widehat{\gamma_{\nu_n}^2}(\mathbb P,\mathbb Q)$,
$$
\widehat{\gamma_{\nu_n}^2}(\mathbb P,\mathbb Q)=\gamma_{\nu_n}^2(\mathbb P,\mathbb Q)+L_{n,\nu_n}^{(1)}+L_{n,\nu_n}^{(2)},
$$
where
\begin{align*}
L_{n,\nu_n}^{(1)}\notag=&\frac{1}{n(n-1)}\sum\limits_{1\leq i\neq j\leq n}\bar{G}_{\nu_n}(X_i,X_j;\mathbb P)-\frac{2}{mn}\sum\limits_{1\leq i\leq n}\sum\limits_{1\leq j\leq m}\bar{G}_{\nu_n}(X_i,Y_j;\mathbb P,\mathbb Q)\\
&+\frac{1}{m(m-1)}\sum\limits_{1\leq i\neq j\leq m}\bar{G}_{\nu_n}(Y_i,Y_j;\mathbb Q)
\end{align*}
and
\begin{align*}
L_{n,\nu_n}^{(2)}=&\frac{2}{n}\sum\limits_{i=1}^n\left({\mathbb E}[G_{\nu_n}(X_i,X)|X_i]-{\mathbb E} G_{\nu_n}(X,X')-{\mathbb E}[G_{\nu_n}(X_i,Y)|X_i]+{\mathbb E} G_{\nu_n}(X,Y)\right)\\
&+\frac{2}{m}\sum\limits_{j=1}^m\left({\mathbb E} [G_{\nu_n}(Y_j,Y)|Y_j]-{\mathbb E} G_{\nu_n}(Y,Y')-{\mathbb E} [G_{\nu_n}(X,Y_j)|Y_j]+{\mathbb E} G_{\nu_n}(X,Y)\right).\notag
\end{align*}
In order to prove the consistency of $\Phi_{n,\nu_n,\alpha}^{\mathrm{HOM}}$, it suffices to show
\begin{align}
&\sup\limits_{\substack{p,q\in \mathcal W^{s,2}(M)\\ \|p-q\|_{L_2}\geq \Delta_n}}\frac{{\mathbb E}\left(L_{n,\nu_n}^{(1)}\right)^2+{\mathbb E}\left(L_{n,\nu_n}^{(2)}\right)^2}{\gamma_{G_{\nu_n}}^4(\mathbb P,\mathbb Q)}\rightarrow 0,\label{lc7}\\
&\inf\limits_{\substack{p,q\in \mathcal W^{s,2}(M)\\ \|p-q\|_{L_2}\geq \Delta_n}}\frac{\gamma_{G_{\nu_n}}^2(\mathbb P,\mathbb Q)}{\left(1/n+1/m\right)\sqrt{{\mathbb E}\left(\widehat{s}_{n,m,\nu_n}^2\right)}}\rightarrow \infty,\label{lc8}
\end{align}
as $n\rightarrow \infty$. We now prove (\ref{lc7}) and (\ref{lc8}) with arguments similar to those obtained in the proof of Theorem \ref{th:gofpower}.
Note that
\begin{align*}
{\mathbb E}(L_{n,\nu_n}^{(1)})^2\lesssim&{\mathbb E}\left(\frac{1}{n(n-1)}\sum\limits_{1\leq i\neq j\leq n}\bar{G}_{\nu_n}(X_i,X_j;\mathbb P)\right)^2+{\mathbb E}\left(\frac{2}{mn}\sum_{1\leq i\leq n}\sum_{1\leq j\leq m}\bar{G}_{\nu_n}(X_i,Y_j;\mathbb P,\mathbb Q)\right)^2\\&+{\mathbb E}\left(\frac{1}{m(m-1)}\sum\limits_{1\leq i\neq j\leq m}\bar{G}_{\nu_n}(Y_i,Y_j;\mathbb Q)\right)^2\\
\lesssim &\frac{1}{n^2}{\mathbb E} G_{\nu_n}^2(X_1,X_2)+\frac{1}{m^2}{\mathbb E} G_{\nu_n}^2(Y_1,Y_2).
\end{align*}
Given $p,q\in \mathcal W^{s,2}(M)$,
\begin{align*}
{\mathbb E} G_{\nu_n}^2(X_1,X_2)
\lesssim_d M^2\nu_n^{-d/2},\quad {\mathbb E} G_{\nu_n}^2(Y_1,Y_2)\lesssim_d M^2\nu_n^{-d/2}.
\end{align*}
Hence
\begin{align}\label{var1}
{\mathbb E}(L_{n,\nu_n}^{(1)})^2\lesssim_d M^2\nu_n^{-d/2}\left(\frac{1}{n^2}+\frac{1}{m^2}\right).
\end{align}
Now consider bounding $L_{n,\nu_n}^{(2)}$. Let $f=p-q$. Then we have
\begin{align}\label{var2}
{\mathbb E}(L_{n,\nu_n}^{(2)})^2\lesssim_d \nu_n^{-\frac{3d}{4}}M\|f\|_{L_2}^2\left(\frac{1}{n}+\frac{1}{m}\right).
\end{align}
Since $\nu_n\asymp n^{4/(4s+d)}\gg \Delta_n^{-2/s}$, Lemma \ref{le:gaussmmd} ensures that for sufficiently large $n$,
$$
\gamma_{G_{\nu_n}}^2(\mathbb P,\mathbb Q)\gtrsim_d \nu_n^{-d/2}\|f\|_{L_2}^2,\quad\forall\ p,q\in \mathcal W^{s,2}(M).
$$
This together with (\ref{var1}) and (\ref{var2}) gives
$$
\sup\limits_{\substack{p,q\in \mathcal W^{s,2}(M)\\ \|p-q\|_{L_2}\geq \Delta_n}}\frac{{\mathbb E}\left(L_{n,\nu_n}^{(1)}\right)^2+{\mathbb E}\left(L_{n,\nu_n}^{(2)}\right)^2}{\gamma_{G_{\nu_n}}^4(\mathbb P,\mathbb Q)}\lesssim_d \frac{M^2\nu_n^{d/2}}{n^2\Delta_n^4}+\frac{M\nu_n^{d/4}}{n\Delta_n^2}\rightarrow 0
$$
as $n\rightarrow \infty$, which proves (\ref{lc7}).
Finally, consider (\ref{lc8}). It follows from
\begin{align*}
{\mathbb E}\left(\widehat{s}_{n,m,\nu_n}^2\right)\leq\ & {\mathbb E}\max\left\{\left|\tilde{s}_{n,m,\nu_n}^2\right|,1/n^2\right\}\\
\lesssim\ &\max\{{\mathbb E} G_{\nu_n}^2(X_1,X_2),{\mathbb E} G_{\nu_n}^2(Y_1,Y_2)\}+1/n^2\\
\lesssim_d &M^2\nu_n^{-d/2}+1/n^2
\end{align*}
that (\ref{lc8}) holds.
\paragraph{Part (\romannumeral2).} Next, we prove that if $\liminf_{n\to\infty}\Delta_nn^{2s/(d+4s)}<\infty$, then there exists some $\alpha\in(0,1)$ such that no asymptotic $\alpha$-level test can be consistent. To prove this, we shall verify that consistent homogeneity testing is at least as hard as consistent goodness-of-fit testing.
Consider an arbitrary $p_0\in \mathcal W^{s,2}(M/2)$. It immediately follows that
$$
H_1^{\rm HOM}(\Delta_n; s)\supset \{(p,p_0):\ p\in H_1^{\rm GOF}(\Delta_n;s)\}.
$$
Let $\{\Phi_{n}\}_{n\geq 1}$ be any sequence of asymptotic $\alpha$-level homogeneity tests, where
$$
\Phi_{n}=\Phi_{n}(X_1,\cdots,X_n,Y_1,\cdots,Y_m).
$$
Then if $Y_1,\cdots,Y_m\sim_{\rm iid} P_0$, $\{\Phi_{n}\}_{n\geq 1}$ can also be treated as a sequence of (random) goodness-of-fit tests
$$
\Phi_{n}(X_1,\cdots,X_n,Y_1,\cdots,Y_m)=\tilde{\Phi}_n(X_1,\cdots,X_n)
$$
whose probabilities of type \uppercase\expandafter{\romannumeral1}\ error with respect to $P_0$ are controlled at $\alpha$ asymptotically. Moreover,
$$
{\rm power}\{\Phi_n; H_1^{\rm HOM}(\Delta_n; s)\}\leq {\rm power}\{\tilde{\Phi}_n; H_1^{\rm GOF}(\Delta_n; s)\}.
$$
Since $0<c\leq m/n\leq C<\infty$,
Theorem \ref{th:goflower} ensures that there exists some $\alpha\in(0,1)$ such that for any sequence of asymptotic $\alpha$-level tests $\{\Phi_n\}_{n\geq 1}$,
$$
\liminf_{n\to\infty}{\rm power}\{\Phi_n; H_1^{\rm HOM}(\Delta_n; s)\}\leq \liminf_{n\to\infty}{\rm power}\{\tilde{\Phi}_n; H_1^{\rm GOF}(\Delta_n; s)\}<1
$$
given $\liminf_{n\to\infty}\Delta_nn^{2s/(d+4s)}<\infty$.
\end{proof}
\vskip 25pt
\begin{proof}[Proof of Theorem \ref{th:indnull}]
For brevity, we shall focus on the case $k=2$ in the rest of the proof; our argument, however, extends straightforwardly to the general case. The proof relies on the following decomposition of $\widehat{\gamma_{\nu_n}^2}(\mathbb P,\mathbb P^{X^1}\otimes\mathbb P^{X^2})$ under $H_0^{\rm IND}$:
\begin{align*}
\widehat{\gamma^2_{\nu_n}}(\mathbb P,\mathbb P^{X^1}\otimes\mathbb P^{X^2})=\frac{1}{n(n-1)}\sum_{1\leq i\neq j\leq n}G_{\nu_n}^*(X_i,X_j)+R_n,
\end{align*}
where
\begin{align*}
G_{\nu_n}^*(x,y)=\bar{G}_{\nu_n}(x,y)-\sum\limits_{\substack{1\leq j\leq 2}}g_j(x^j,y)-\sum\limits_{\substack{1\leq j\leq 2}}g_j(y^j,x)+\sum\limits_{\substack{1\leq j_1,j_2\leq 2}}g_{j_1,j_2}(x^{j_1},y^{j_2})
\end{align*}
and the remainder $R_n$ satisfies
$${\mathbb E}(R_n)^2\lesssim {\mathbb E} G_{2\nu_n}(X_1,X_2)/n^3\lesssim_d \|p\|_{L_2}^2\nu_n^{-d/2}/n^3.$$ See Appendix \ref{sec:HSIC_decomp} for more details.
Moreover, borrowing arguments from the proof of Lemma \ref{le:var}, we obtain
\begin{align*}
&{\mathbb E}(G_{\nu_n}^*(X_1,X_2)-\bar{G}_{\nu_n}(X_1,X_2))^2\\
\lesssim &\sum\limits_{1\leq j\leq 2}{\mathbb E}\Big(g_j(X_1^j,X_2)\Big)^2+\sum\limits_{\substack{1\leq j_1,j_2\leq 2}}{\mathbb E} \Big(g_{j_1,j_2}(X_1^{j_1},X_2^{j_2})\Big)^2\\
\leq &\sum\limits_{1\leq j_1\neq j_2\leq 2}{\mathbb E} G_{2\nu_n}(X_1^{j_1},X_2^{j_1})\cdot{\mathbb E}\left\{{\mathbb E}\left[ G_{\nu_n}(X_1^{j_2},X_2^{j_2})\Big|X_1^{j_2}\right]\right\}^2+\\
&\sum\limits_{1\leq j_1\neq j_2\leq 2}{\mathbb E} G_{2\nu_n}(X_1^{j_1},X_2^{j_1})[{\mathbb E} G_{\nu_n}(X_1^{j_2},X_2^{j_2})]^2+\\
&\ 2{\mathbb E}\left\{{\mathbb E}\left[ G_{\nu_n}(X_1^{1},X_2^{1})\Big|X_1^{1}\right]\right\}^2{\mathbb E}\left\{{\mathbb E}\left[ G_{\nu_n}(X_1^{2},X_2^2)\Big|X_1^{2}\right]\right\}^2\\
\lesssim_d &\ \nu_n^{-d_1/2-3d_2/4}\|p_1\|_{L_2}^2\|p_2\|_{L_2}^3+\nu_n^{-3d_1/4-d_2/2}\|p_1\|_{L_2}^3\|p_2\|_{L_2}^2.
\end{align*}
Together with the fact that
$$
(2\nu_n/\pi)^{d/2}{\mathbb E}\bar{G}_{\nu_n}^2(X_1,X_2)\to \|p\|_{L_2}^2
$$
as $\nu_n\to \infty$, we conclude that
$$
\widehat{\gamma_{\nu_n}^2}(\mathbb P,\mathbb P^{X^1}\otimes\mathbb P^{X^2})=D(\nu_n)+o_p\left(\sqrt{{\mathbb E} D^2(\nu_n)}\right),
$$
where
$$
D(\nu_n)=\frac{1}{n(n-1)}\sum\limits_{1\leq i\neq j\leq n}\bar{G}_{\nu_n}(X_i,X_j).
$$
Applying arguments similar to those in the proofs of Theorems \ref{th:gofnull} and \ref{th:homnull}, we have
$$
\frac{D(\nu_n)}{\sqrt{{\mathbb E} D^2(\nu_n)}}\to_d N(0,1).
$$
Since
$$
{\mathbb E} D^2(\nu_n)=\frac{2}{n(n-1)}{\mathbb E} [\bar{G}_{\nu_n}(X_1,X_2)]^2\quad \text{and}\quad
{\mathbb E}[\bar{G}_{\nu_n}(X_1,X_2)]^2/{\mathbb E}[G_{\nu_n}^*(X_1,X_2)]^2\to 1,
$$
it remains to prove
$$
\widehat{s}_{n,\nu_n}^2/{\mathbb E}[G_{\nu_n}^*(X_1,X_2)]^2\to_p 1,
$$
which immediately follows by observing
$$
\tilde{s}_{n,\nu_n}^2/{\mathbb E}[G_{\nu_n}^*(X_1,X_2)]^2=\prod\limits_{j=1}^2\tilde{s}_{n,j,\nu_n}^2/{\mathbb E}[\bar{G}_{\nu_n}(X_1^j,X_2^j)]^2\to_p 1
$$
and $1/n^2=o({\mathbb E}[G_{\nu_n}^*(X_1,X_2)]^2)$.
The proof is therefore concluded.
\end{proof}
\vskip 25pt
\begin{proof}[Proof of Theorem \ref{th:indpower}] We prove the two parts separately.
\paragraph{Part (\romannumeral1).} The proof of the consistency of $\Phi^{\rm IND}_{n,\nu_n,\alpha}$ is very similar to its counterpart in the proof of Theorem \ref{th:hompower}. It suffices to show
\begin{align}
&\sup\limits_{p\in H_1^{\rm IND}(\Delta_n,s)}\frac{{\rm var}(\widehat{\gamma_{\nu_n}^2}(\mathbb P,\mathbb P^{X^1}\otimes \mathbb P^{X^2}))}{\gamma_{\nu_n}^4(\mathbb P,\mathbb P^{X^1}\otimes \mathbb P^{X^2})}\rightarrow 0,\label{lc12}\\
&\inf\limits_{p\in H_1^{\rm IND}(\Delta_n,s)}\frac{n\gamma_{\nu_n}^2(\mathbb P,\mathbb P^{X^1}\otimes \mathbb P^{X^2})}{{\mathbb E}\left(\widehat{s}_{n,\nu_n}\right)}\rightarrow \infty,\label{lc13}
\end{align}
as $n\rightarrow \infty$.
We begin with (\ref{lc12}).
Let $f=p-p_1\otimes p_2$. Lemma \ref{le:gaussmmd} then implies that there exists $C=C(s,M)>0$ such that
$$
\gamma_\nu^2(\mathbb P,\mathbb P^{X^1}\otimes\mathbb P^{X^2})\asymp_d \nu^{-d/2}\|f\|_{L_2}^2
$$
for $\nu\geq C\|f\|_{L_2}^{-2/s}$, which is satisfied by all $p\in H_1^{\rm IND}(\Delta_n,s)$ given $\nu=\nu_n$ and $\lim\limits_{n\rightarrow \infty}\Delta_nn^{2s\over 4s+d}=\infty$. On the other hand, we can still decompose $\widehat{\gamma_{\nu_n}^2}(\mathbb P,\mathbb P^{X^1}\otimes \mathbb P^{X^2})$ as in Appendix \ref{sec:HSIC_decomp}, and we adopt the same notation here.
Under the alternative hypothesis, the ``first order'' term
\begin{align*}
&D_1(\nu_n)\\=&\frac{2}{n}\sum\limits_{1\leq i\leq n}\Big({\mathbb E}_{X_i,X\sim_{\rm iid}\mathbb P}[G_{\nu_n}(X_i,X)|X_i]-{\mathbb E}_{X,X'\sim_{\rm iid}\mathbb P} G_{\nu_n}(X,X')\Big) \\&-\frac{2}{n}\sum\limits_{1\leq i\leq n}\Big({\mathbb E}_{X_i\sim \mathbb P,Y\sim \mathbb P^{X^1}\otimes \mathbb P^{X^2}}[G_{\nu_n}(X_i,Y)|X_i]-{\mathbb E}_{X\sim\mathbb P,Y\sim \mathbb P^{X^1}\otimes \mathbb P^{X^2}} G_{\nu_n}(X,Y)\Big)\\
&-\sum\limits_{1\leq j\leq 2}\left(\frac{2}{n}\sum\limits_{1\leq i\leq n}\left({\mathbb E}_{X_i\sim \mathbb P^{X^1}\otimes\mathbb P^{X^2},X\sim \mathbb P} [G_{\nu_n}(X_i,X)|X_i^j]-{\mathbb E}_{X\sim \mathbb P,Y\sim \mathbb P^{X^1}\otimes\mathbb P^{X^2}} G_{\nu_n}(X,Y)\right)\right)\\&+\sum\limits_{1\leq j\leq 2}\left(\frac{2}{n}\sum\limits_{1\leq i\leq n}\left({\mathbb E}_{X_i,Y\sim_{\rm iid} \mathbb P^{X^1}\otimes\mathbb P^{X^2}} [G_{\nu_n}(X_i,Y)|X_i^j]-{\mathbb E}_{Y,Y'\sim_{\rm iid} \mathbb P^{X^1}\otimes\mathbb P^{X^2}} G_{\nu_n}(Y,Y')\right)\right)
\end{align*}
no longer vanishes, but based on arguments similar to those in the proof of Theorem \ref{th:gofpower},
$$
{\mathbb E} D_1^2(\nu_n)\lesssim_d Mn^{-1}\nu_n^{-3d/4}\|f\|_{L_2}^2.
$$
Moreover, the ``second order'' term $D_2(\nu_n)$ is not solely $\sum\limits_{1\leq i\neq j\leq n}G_{\nu_n}^*(X_i,X_j)/(n(n-1))$, but
we still have
$$
{\mathbb E} D_2^2(\nu_n)\lesssim n^{-2}\max\{{\mathbb E} G_{2\nu_n}(X_1,X_2),{\mathbb E} G_{2\nu_n}(X_1^1,X_2^1){\mathbb E} G_{2\nu_n}(X_1^2,X_2^2)\}\lesssim_d M^2n^{-2}\nu_n^{-d/2}.
$$
Similarly, define the third-order term $D_3(\nu_n)$ and the fourth-order term $D_4(\nu_n)$ as the aggregation of all $3$-variate and all $4$-variate centered components in $\widehat{\gamma_{\nu_n}^2}(\mathbb P,\mathbb P^{X^1}\otimes \mathbb P^{X^2})$, respectively; together they constitute $R_n$. Then we have
$$
{\mathbb E} D_3^2(\nu_n)\lesssim_d M^2n^{-3}\nu_n^{-d/2},\quad {\mathbb E} D_4^2(\nu_n)\lesssim_d M^2n^{-4}\nu_n^{-d/2}.
$$
Hence we finally obtain
$$
\widehat{\gamma_{\nu_n}^2}(\mathbb P,\mathbb P^{X^1}\otimes \mathbb P^{X^2})=\gamma_{\nu_n}^2(\mathbb P,\mathbb P^{X^1}\otimes \mathbb P^{X^2})+\sum\limits_{l=1}^4D_l(\nu_n)
$$
and
$$
{\rm var}\Big(\widehat{\gamma_{\nu_n}^2}(\mathbb P,\mathbb P^{X^1}\otimes \mathbb P^{X^2})\Big)=\sum\limits_{l=1}^4{\mathbb E} D_l^2(\nu_n)\lesssim _d Mn^{-1}\nu_n^{-3d/4}\|f\|_{L_2}^2+M^2n^{-2}\nu_n^{-d/2},
$$
which proves (\ref{lc12}).
Now consider (\ref{lc13}). Since
$$
\widehat{s}_{n,\nu_n}\leq \max\left\{\prod\limits_{j=1}^2\sqrt{\left|\tilde{s}_{n,j,\nu_n}^2\right|},1/n\right\},
$$
we have
$$
{\mathbb E}\left(\widehat{s}_{n,\nu_n}\right)\leq \prod\limits_{j=1}^2\sqrt{{\mathbb E}\left|\tilde{s}_{n,j,\nu_n}^2\right|}+1/n,
$$
where
$$
\prod\limits_{j=1}^2{\mathbb E}\left|\tilde{s}_{n,j,\nu_n}^2\right|\lesssim \prod\limits_{j=1}^2{\mathbb E} G_{2\nu_n}(X_1^j,X_2^j)={\mathbb E}_{Y_1,Y_2\sim_{\rm iid} \mathbb P^{X^1}\otimes\mathbb P^{X^2}} G_{2\nu_n}(Y_1,Y_2)\lesssim_d M^2\nu_n^{-d/2}.
$$
Therefore (\ref{lc13}) holds.
\paragraph{Part (\romannumeral2).} We now verify that $n^{2s/(d+4s)}\Delta_n\to \infty$ is also necessary for the existence of consistent asymptotic $\alpha$-level tests for any $\alpha\in(0,1)$. As in the proof of Theorem \ref{th:hompower}, the idea is to relate the existence of a consistent independence test to the existence of a consistent goodness-of-fit test.
Let $p_{j,0}\in \mathcal W^{s,2}\left(M_j/\sqrt{2}\right)$ be a density on ${\mathbb R}^{d_j}$ for $j=1,2$, and let $p_0$ be the product of $p_{1,0}$ and $p_{2,0}$, \textit{i.e.},
$$
p_0(x^1,x^2)=p_{1,0}(x^1)p_{2,0}(x^2),\quad\forall\ x^1\in{\mathbb R}^{d_1},x^2\in{\mathbb R}^{d_2}.
$$
Hence $p_0\in \mathcal W^{s,2}(M/2)$.
Let
$$
H_1^{\rm GOF}(\Delta_n;s)':=\{p:\ p\in \mathcal W^{s,2}(M), \ p_1=p_{1,0},\ p_2=p_{2,0}, \|p-p_0\|_{L_2}\geq \Delta_n\}.
$$
We immediately have
$$
H_1^{\rm IND}(\Delta_n; s)\supset H_1^{\rm GOF}(\Delta_n;s)'.
$$
Let $\{\Phi_{n}\}_{n\geq 1}$ be any sequence of asymptotic $\alpha$-level independence tests, where
$$
\Phi_{n}=\Phi_{n}(X_1,\cdots,X_n).
$$
Then $\{\Phi_{n}\}_{n\geq 1}$ can also be treated as a sequence of asymptotic $\alpha$-level goodness-of-fit tests with the null density being $p_0$. Moreover,
$$
{\rm power}\{\Phi_n; H_1^{\rm IND}(\Delta_n; s)\}\leq {\rm power}\{\Phi_n; H_1^{\rm GOF}(\Delta_n;s)'\}.
$$
It remains to show that given $\liminf_{n\to\infty}n^{2s/(d+4s)}\Delta_n< \infty$, there exists some $\alpha\in(0,1)$ such that
\begin{align*}
\liminf_{n\to\infty}{\rm power}\{\Phi_n; H_1^{\rm GOF}(\Delta_n;s)'\}<1,
\end{align*}
which cannot be directly obtained from Theorem \ref{th:goflower} because of the additional constraints
\begin{align}\label{constraint}
p_1=p_{1,0},\quad p_2=p_{2,0}
\end{align}
in $H_1^{\rm GOF}(\Delta_n;s)'$.
However, by modifying the proof of Theorem \ref{th:goflower}, we only need to further require that each $p_{n,\bm{a}_n}$ in that proof satisfy (\ref{constraint}), or equivalently,
$$
\int_{{\mathbb R}^{d_2}} (p-p_0)(x^1,x^2)d x^2=0,\quad \int_{{\mathbb R}^{d_1}} (p-p_0)(x^1,x^2)d x^1=0.
$$
Recall that each $p_{n,\bm{a}_n}=p_0+r_n\sum\limits_{k=1}^{B_n}a_{n,k}\phi_{n,k}$, where
$$
\phi_{n,k}(x)=\frac{b_n^{d/2}}{\|\phi\|_{L_2}}\phi(b_nx-x_{n,k}).
$$
Write $x_{n,k}=(x_{n,k}^1,x_{n,k}^2)\in {\mathbb R}^{d_1}\times {\mathbb R}^{d_2}$. Since $\phi$ factorizes as $\phi(x^1,x^2)=\phi_1(x^1)\phi_2(x^2)$, with $\phi_1$ and $\phi_2$ themselves products of copies of $\phi_0$,
we have
$$\phi_{n,k}(x)=\frac{b_n^{d/2}}{\|\phi\|_{L_2}}\phi_1(b_nx^1-x_{n,k}^1)\phi_2(b_nx^2-x_{n,k}^2).$$
Hence
\begin{align*}
\int_{{\mathbb R}^{d_2}} (p_{n,\bm{a}_n}-p_0)(x^1,x^2)dx^2=&r_n\sum\limits_{k=1}^{B_n}a_{n,k}\int_{{\mathbb R}^{d_2}} \phi_{n,k}(x^1,x^2)dx^2\\
=&r_n\sum\limits_{k=1}^{B_n}a_{n,k}\frac{b_n^{d/2}}{\|\phi\|_{L_2}}\cdot \phi_1(b_nx^1-x_{n,k}^1)\cdot\frac{1}{b_n^{d_2}}\int_{{\mathbb R}^{d_2}} \phi_2(x^2)dx^2\\
=&0
\end{align*}
since $\int_{{\mathbb R}^{d_2}}\phi_2(x^2)dx^2=0.$ Similarly, $\int_{{\mathbb R}^{d_1}} (p_{n,\bm{a}_n}-p_0)(x^1,x^2)dx^1=0$. The proof is therefore finished.
\end{proof}
\vskip 25pt
\begin{proof}[Proof of Theorem \ref{th:gofadapt}]
The proof of Theorem \ref{th:gofadapt} consists of two steps. First, we bound $q_{n,\alpha}^{\rm GOF}$. To be more specific, we show that there exists $C=C(d)>0$ such that $$q_{n,\alpha}^{\rm GOF}\leq C(d)\log\log n$$ for sufficiently large $n$, which holds if
\begin{align}\label{eq:gofadapt1}
\lim\limits_{n\rightarrow \infty}P(T_n^{\rm GOF (adapt)}\geq C(d)\log\log n)=0
\end{align}
under $H_0^{\rm GOF}$. Second, we show that there exists $c>0$ such that
$$\liminf_{n\to\infty} \Delta_{n,s}(n/\log\log n)^{2s/(d+4s)}>c$$
ensures
\begin{align}\label{eq:gofadapt2}
\inf_{p \in H_1^{\rm GOF(adapt)}(\Delta_{n,s};\ s\ge d/4)}P(T_n^{\rm GOF (adapt)}\geq C(d)\log\log n)\to 1
\end{align}
as $n\to \infty$.
\paragraph{Verifying (\ref{eq:gofadapt1}).}
In order to prove (\ref{eq:gofadapt1}), we first establish the following two lemmas.
The first lemma states that $\widehat{s}_{n,\nu_n}^2$
is a consistent estimator of ${\mathbb E} \bar{G}_{\nu_n}^2(X_1,X_2)$ uniformly over all $\nu_n\in[1,n^{2/d}]$. Recall that we have shown in the proof of Theorem \ref{th:gofnull} that for $\nu_n$ increasing at a proper rate,
$$
\widehat{s}_{n,\nu_n}^2/{\mathbb E}[\bar{G}_{\nu_n}(X_1,X_2)]^2\to_p1.
$$
Hence the first lemma is a uniform version of this result.
\begin{lemma}\label{consistent-est-unif}
We have that $\widehat{s}_{n,\nu_n}^2/{\mathbb E}[\bar{G}_{\nu_n}(X_1,X_2)]^2$ converges to $1$ uniformly over $\nu_n\in[1,n^{2/d}]$, \textit{i.e.},
$$
\sup\limits_{1\leq \nu_n\leq n^{2/d}}\left|\widehat{s}_{n,\nu_n}^2/{\mathbb E}[\bar{G}_{\nu_n}(X_1,X_2)]^2-1\right|=o_p(1).
$$
\end{lemma}
We defer the proof of Lemma \ref{consistent-est-unif} to the appendix. Note that
\begin{align*}
T_n^{\rm GOF (adapt)}=&\sup\limits_{1\leq \nu_n\leq n^{2/d}}\frac{n\widehat{\gamma_{\nu_n}^2}(\mathbb P,\mathbb P_0)}{\sqrt{2{\mathbb E} [\bar{G}_{\nu_n}(X_1,X_2)]^2}}\cdot\sqrt{{\mathbb E} [\bar{G}_{\nu_n}(X_1,X_2)]^2/\widehat{s}_{n,\nu_n}^2}\\\leq &\sup\limits_{1\leq \nu_n\leq n^{2/d}}\left|\frac{n\widehat{\gamma_{\nu_n}^2}(\mathbb P,\mathbb P_0)}{\sqrt{2{\mathbb E} [\bar{G}_{\nu_n}(X_1,X_2)]^2}}\right|\cdot\sup\limits_{1\leq \nu_n\leq n^{2/d}}\sqrt{{\mathbb E} [\bar{G}_{\nu_n}(X_1,X_2)]^2/\widehat{s}_{n,\nu_n}^2}.
\end{align*}
Lemma \ref{consistent-est-unif} first ensures that
$$
\sup\limits_{1\leq \nu_n\leq n^{2/d}}\sqrt{{\mathbb E} [\bar{G}_{\nu_n}(X_1,X_2)]^2/\widehat{s}_{n,\nu_n}^2}=1+o_p(1).
$$
It therefore suffices to show that under $H_0^{\rm GOF}$,
$$
\widetilde{T}_n^{\rm GOF (adapt)}:=\sup\limits_{1\leq \nu_n\leq n^{2/d}}\left|\frac{n\widehat{\gamma_{\nu_n}^2}(\mathbb P,\mathbb P_0)}{\sqrt{2{\mathbb E} [\bar{G}_{\nu_n}(X_1,X_2)]^2}}\right|
$$
is also of order $\log\log n$. This is the crux of our argument, but its proof is lengthy. For brevity, we shall state it as a lemma here and defer its proof to the appendix.
\begin{lemma}\label{at4}
There exists $C=C(d)>0$ such that
$$
\lim\limits_{n\rightarrow \infty}P\left(\widetilde{T}_n^{\rm GOF (adapt)}\geq C\log\log n\right)=0
$$
under $H_0^{\rm GOF}$.
\end{lemma}
\paragraph{Verifying (\ref{eq:gofadapt2}).} Let
$$
\nu_n(s)'=\left(\frac{\log\log n}{n}\right)^{-4/(4s+d)},
$$
which is smaller than $n^{2/d}$ for $s\geq {d/4}$. Hence it suffices to show
$$
\inf_{s\geq d/4}\inf_{p\in H_1^{\rm GOF}(\Delta_{n,s};s)}P(T_{n,\nu_n(s)'}^{\rm GOF}\geq C(d)\log\log n)\to 1
$$
as $n\rightarrow \infty$.
First of all, observe
$$
0\leq{\mathbb E}\left(\tilde{s}_{n,\nu_n(s)'}^2\right)\leq{\mathbb E} G_{2\nu_n(s)'}(X_1,X_2)\leq M^2 (2\nu_n(s)'/\pi)^{-d/2}
$$
and
$$
{\rm var}\left(\tilde{s}_{n,\nu_n(s)'}^2\right)\lesssim _d M^3n^{-1}(\nu_n(s)')^{-3d/4}+M^2n^{-2}(\nu_n(s)')^{-d/2}
$$
for any $s$ and $p\in H_1^{\rm GOF}(\Delta_{n,s};s)$. Further considering $1/n^2=o(M^2 (2\nu_n(s)'/\pi)^{-d/2})$ uniformly over all $s$, we obtain that
$$
\inf_{s\geq d/4}\inf_{p\in H_1^{\rm GOF}(\Delta_{n,s};s)}P\left(
\widehat{s}_{n,\nu_n(s)'}^2\leq 2M^2 (2\nu_n(s)'/\pi)^{-d/2}\right)\to 1.
$$
Let $$\Delta_{n,s}\geq c(\sqrt{M}+M)(\log\log n/n)^{2s/(d+4s)}$$ for some sufficiently large $c=c(d)$. Then
$$
{\mathbb E}\widehat{\gamma_{\nu_n(s)'}^2}(\mathbb P,\mathbb P_0)=\gamma_{\nu_n(s)'}^2(\mathbb P,\mathbb P_0)\geq \left(\frac{\pi}{\nu_n(s)'}\right)^{d/2}\cdot\frac{\|p-p_0\|_{L_2}^2}{4},
$$
as guaranteed by Lemma \ref{le:gaussmmd}. Further considering that
$$
{\rm var}\left(\widehat{\gamma_{\nu_n(s)'}^2}(\mathbb P,\mathbb P_0)\right)\lesssim_d M^2n^{-2}(\nu_{n}(s)')^{-d/2}+Mn^{-1}(\nu_{n}(s)')^{-3d/4}\|p-p_0\|_{L_2}^2,
$$
we immediately have
\begin{align*}
&\lim_{n\to\infty}\inf_{s\geq d/4}\inf_{p\in H_1^{\rm GOF}(\Delta_{n,s};s)}P(T_{n,\nu_n(s)'}^{\rm GOF}\geq C(d)\log\log n)\\
\geq& \lim_{n\to\infty}\inf_{s\geq d/4}\inf_{p\in H_1^{\rm GOF}(\Delta_{n,s};s)}P\left(\frac{n\gamma_{\nu_n(s)'}^2(\mathbb P,\mathbb P_0)/2}{\sqrt{2\widehat{s}_{n,\nu_n(s)'}^2}}\geq C(d)\log\log n\right)= 1.
\end{align*}
\end{proof}
\vskip 25pt
\begin{proof}[Proof of Theorem \ref{th:homadapt} and Theorem \ref{th:indadapt}]
The proofs of Theorems \ref{th:homadapt} and \ref{th:indadapt} are very similar to that of Theorem \ref{th:gofadapt}, so we only emphasize the main differences here.
\paragraph{For adaptive homogeneity test:} to verify that there exists $C=C(d)>0$ such that
$$
\lim\limits_{n\rightarrow \infty}P(T_n^{\rm HOM (adapt)}\geq C\log\log n)=0
$$
under $H_0^{\rm HOM}$, observe that
$$
T_n^{\rm HOM (adapt)}\leq \sup\limits_{1\leq \nu_n\leq n^{2/d}}\sqrt{\frac{{\mathbb E}[\bar{G}_{\nu_n}(X_1,X_2)]^2}{\widehat{s}_{n,m,\nu_n}^2}}\cdot \left(\frac{1}{n}+\frac{1}{m}\right)^{-1}\sup\limits_{1\leq \nu_n\leq n^{2/d}} \frac{|\widehat{\gamma_{\nu_n}^2}(\mathbb P,\mathbb Q)|}{\sqrt{2{\mathbb E}[\bar{G}_{\nu_n}(X_1,X_2)]^2}}.
$$
Denote $X_1,\cdots,X_n,Y_1,\cdots,Y_m$ by $Z_1,\cdots,Z_N$. Then
$$
2\sum_{i=1}^n\sum\limits_{j=1}^m G_{\nu_n}(X_i,Y_j)=\sum\limits_{1\leq i\neq j\leq N}G_{\nu_n}(Z_i,Z_{j})-\sum\limits_{1\leq i\neq j\leq n}G_{\nu_n}(X_i,X_{j})-\sum\limits_{1\leq i\neq j\leq m}G_{\nu_n}(Y_i,Y_j)
$$
and
\begin{align*}
&\sup\limits_{1\leq \nu_n\leq n^{2/d}} \frac{|\widehat{\gamma_{\nu_n}^2}(\mathbb P,\mathbb Q)|}{\sqrt{2{\mathbb E}[\bar{G}_{\nu_n}(X_1,X_2)]^2}}\\\leq &\left(\frac{1}{n(n-1)}+\frac{1}{mn}\right)\sup\limits_{1\leq \nu_n\leq n^{2/d}}\left|\sum\limits_{1\leq i\neq j\leq n}\frac{\bar{G}_{\nu_n}(X_i,X_{j})}{\sqrt{2{\mathbb E}[\bar{G}_{\nu_n}(X_1,X_2)]^2}}\right|\\&+\left(\frac{1}{m(m-1)}+\frac{1}{mn}\right)\sup\limits_{1\leq \nu_n\leq n^{2/d}}\left|\sum\limits_{1\leq i\neq j\leq m}\frac{\bar{G}_{\nu_n}(Y_i,Y_j)}{\sqrt{2{\mathbb E}[\bar{G}_{\nu_n}(X_1,X_2)]^2}}\right|\\
&+\frac{1}{mn}\sup\limits_{1\leq \nu_n\leq n^{2/d}}\left|\sum\limits_{1\leq i\neq j\leq N}\frac{\bar{G}_{\nu_n}(Z_i,Z_{j})}{\sqrt{2{\mathbb E}[\bar{G}_{\nu_n}(X_1,X_2)]^2}}\right|.
\end{align*}
Applying Lemma \ref{at4} to bound each term on the right-hand side of the above inequality, we conclude that for some $C=C(d)>0$,
$$
\lim\limits_{n\rightarrow \infty}P\left(\left(\frac{1}{n}+\frac{1}{m}\right)^{-1}\sup\limits_{1\leq \nu_n\leq n^{2/d}} \frac{|\widehat{\gamma_{\nu_n}^2}(\mathbb P,\mathbb Q)|}{\sqrt{2{\mathbb E}[\bar{G}_{\nu_n}(X_1,X_2)]^2}}\geq C\log\log n\right)=0.
$$
\paragraph{For adaptive independence test:} to verify that there exists $C=C(d)>0$ such that
\begin{align}\label{eq:indadapt}
\lim\limits_{n\rightarrow \infty}P(T_n^{\rm IND (adapt)}\geq C\log\log n)=0
\end{align}
under $H_0^{\rm IND}$,
recall the decomposition
$$
\widehat{\gamma_{\nu_n}^2}(\mathbb P,\mathbb P^{X^1}\otimes \mathbb P^{X^2})=D_2(\nu_n)+R_n=\frac{1}{n(n-1)}\sum\limits_{1\leq i\neq j\leq n}G_{\nu_n}^*(X_i,X_j)+R_n,
$$
where we express $R_n$ as $R_n=D_3(\nu_n)+D_4(\nu_n)$ in the proof of Theorem \ref{th:indpower}.
Following arguments similar to those in the proof of Lemma \ref{at4}, we obtain that there exists $C(d)>0$ such that for sufficiently large $n$,
$$
P\left(\sup\limits_{1\leq \nu_n\leq n^{2/d}}\left|\frac{nD_{2}(\nu_n)}{\sqrt{2{\mathbb E} [G_{\nu_n}^*(X_1,X_2)]^2}}\right|\geq C(d)(\log\log n+t\log\log\log n )\right)\lesssim \exp(-t^{2/3}).
$$
Similarly,
\begin{align*}
&P\left(\sup\limits_{1\leq \nu_n\leq n^{2/d}}\left|\frac{n^{3/2}D_{3}(\nu_n)}{\sqrt{2{\mathbb E} [G_{\nu_n}^*(X_1,X_2)]^2}}\right|\geq C(d)(\log\log n+t\log\log\log n )\right)\lesssim\exp(-t^{1/2})\\
&P\left(\sup\limits_{1\leq \nu_n\leq n^{2/d}}\left|\frac{n^2D_{4}(\nu_n)}{\sqrt{2{\mathbb E} [G_{\nu_n}^*(X_1,X_2)]^2}}\right|\geq C(d)(\log\log n+t\log\log\log n )\right)\lesssim\exp(-t^{2/5})
\end{align*}
for sufficiently large $n$.
On the other hand, note that
$$
{\mathbb E} [G_{\nu_n}^*(X_1,X_2)]^2=\prod\limits_{j=1}^2{\mathbb E}[\bar{G}_{\nu_n}(X_1^j,X_2^j)]^2,
$$
and based on results in the proof of Lemma \ref{consistent-est-unif},
$
\sup\limits_{1\leq \nu_n\leq n^{2/d}}\left|\tilde{s}^2_{n,j,\nu_n}/{\mathbb E}[\bar{G}_{\nu_n}(X_1^j,X_2^j)]^2-1\right|=o_p(1)
$
for $j=1,2$. Further considering that
$$
1/n^2=o({\mathbb E}[G_{\nu_n}^*(X_1,X_2)]^2)
$$
uniformly over all $\nu_n\in[1,n^{2/d}]$, we obtain
$$
\sup\limits_{1\leq \nu_n\leq n^{2/d}}\left|\widehat{s}_{n,\nu_n}^2/{\mathbb E}[G_{\nu_n}^*(X_1,X_2)]^2-1\right|=o_p(1).
$$
Combined, these ensure that (\ref{eq:indadapt}) holds.
To show that the detection boundary of $\Phi^{\rm IND(adapt)}$ is of order $O((n/\log\log n)^{-2s/(d+4s)})$, observe that
$$
0\leq{\mathbb E}\left(\tilde{s}_{n,j,\nu_n(s)'}^2\right)\leq{\mathbb E} G_{2\nu_n(s)'}(X_1^j,X_2^j)\leq M_j^2 (2\nu_n(s)'/\pi)^{-d_j/2}
$$
and
$$
{\rm var}\left(\tilde{s}_{n,j,\nu_n(s)'}^2\right)\lesssim _{d_j} M_j^3n^{-1}(\nu_n(s)')^{-3d_j/4}+M_j^2n^{-2}(\nu_n(s)')^{-d_j/2}
$$
for $j=1,2$, where $\nu_n(s)'=\left(\log\log n/n\right)^{-4/(4s+d)}$ as in the proof of Theorem \ref{th:gofadapt}. Therefore,
$$
\inf_{s\geq d/4}\inf_{p\in H_1^{\rm IND}(\Delta_{n,s};s)}P\left(
\left|\tilde{s}_{n,j,\nu_n(s)'}^2\right|\leq \sqrt{3/2}M_j^2 (2\nu_n(s)'/\pi)^{-d_j/2}\right)\to 1,\quad j=1,2.
$$
Further considering $1/n^2=o(M^2 (2\nu_n(s)'/\pi)^{-d/2})$ uniformly over all $s$, we obtain that
$$
\inf_{s\geq d/4}\inf_{p\in H_1^{\rm IND}(\Delta_{n,s};s)}P\left(
\widehat{s}_{n,\nu_n(s)'}^2\leq 2M^2 (2\nu_n(s)'/\pi)^{-d/2}\right)\to 1.
$$
\end{proof}
\bibliographystyle{plainnat}
\section{Introduction}
The \textit{thin obstacle problem} studies the following system
\begin{equation}\label{TOP}
\begin{cases}
\Delta u\le 0 &\text{ in $B_1$,}\\
u\ge 0 &\text{ on $B_1\cap\{x_d=0\}$,}\\
\Delta u=0 &\text{ in $B_1\cap(\{u>0\}\cup\{x_d\neq 0\})$.}
\end{cases}
\end{equation}
Here we denote by $B_1$ the unit ball in the Euclidean space $\mathbb{R}^d$. For a point $x\in\mathbb{R}^d$, we decompose its coordinate as $x=(x',x_d)$ with $x'\in\mathbb{R}^{d-1}$ and $x_d\in\mathbb{R}$. Since the odd part of the solution, $(u(x',x_d)-u(x',-x_d))/2$, is harmonic, it is customary to remove it and assume that \textit{the solution $u$ is even in the $x_d$-direction}.
After earlier results by Richardson \cite{R} and Uraltseva \cite{U}, Athanasopoulos and Caffarelli showed in \cite{AC} that the solution is locally Lipschitz in $B_1$, and is locally $C^{1,1/2}$ when restricted to either $B_1\cap\{x_d\ge 0\}$ or $B_1\cap\{x_d\le 0\}$. This optimal regularity of the solution opened the door to the study of the \textit{contact set}
$$\Lambda(u):=\{u=0\}\cap\{x_d=0\}$$ and the \textit{free boundary} $$\Gamma(u):=\partial\{u>0\}\cap\{x_d=0\}.$$
In Athanasopoulos-Caffarelli-Salsa \cite{ACS}, the authors made a breakthrough by applying Almgren's monotonicity formula to show that for each point $q\in\Lambda(u)$, there is a constant $\lambda_q$ such that
$$\|u\|_{\mathcal{L}^2(\partial B_r(q))}\sim r^{\frac{d-1}{2}+\lambda_q}$$
as $r\to0.$ This constant $\lambda_q$ is called the \textit{frequency of the solution at $q$}. They also showed that the normalized rescalings converge to a \textit{blow-up profile}, that is,
\begin{equation}\label{FirstConvergence}u_{q,r}(\cdot):=r^{\frac{d-1}{2}}\frac{u(r\cdot+q)}{\|u\|_{\mathcal{L}^2(\partial B_r(q))}}\to u_0\end{equation}
along a subsequence of $r\to 0.$ The limit $u_0$ is a $\lambda_q$-homogeneous solution to \eqref{TOP} in $\mathbb{R}^d.$
It is interesting to study admissible values of frequencies, to classify homogeneous solutions, and to establish regularity of the contact set and the free boundary. So far this program has been completed only when $d=2.$ See, for instance, Petrosyan-Shahgholian-Uraltseva \cite{PSU}.
Let $\mathcal{A}$ denote the set of admissible frequencies, that is, $$\mathcal{A}=\{\lambda\in\mathbb{R}: \text{ there is a non-trivial $\lambda$-homogeneous solution to $\eqref{TOP}$}\}.$$ For a solution $u$ to \eqref{TOP} and $\lambda\in\mathcal{A}$, let $\Lambda_\lambda(u)$ denote the set of contact points with frequency $\lambda$, that is,
$$\Lambda_\lambda(u)=\{q\in\Lambda(u):\lambda_q=\lambda\}.$$
In general dimensions, Athanasopoulos-Caffarelli-Salsa \cite{ACS} showed that $\mathcal{A}\subset \{1,\frac 32\}\cup [2,+\infty).$ Explicit examples show that $\mathbb{N}\cup \{2k-\frac 12:k\in\mathbb{N}\}\subset\mathcal{A}.$ See, for instance, \cite{PSU}. Around each $m\in\mathbb{N}$, there is a frequency gap \cite{CSV, SY2}, in the sense that we can find $\alpha_m>0$, depending on $m$ and $d$, such that
$$\mathcal{A}\cap(m-\alpha_m,m+\alpha_m)=\{m\} \text{ for each $m\in\mathbb{N}$.}$$
As for the classification of homogeneous solutions and the regularity of the contact set and the free boundary, most results center on points with frequencies in $\{\frac 32\}\cup\mathbb{N}.$
Already in \cite{ACS}, it was known that the only $\frac 32$-homogeneous solutions are rotations and multiples of
$$u_{\frac 32}(x',x_d)=\mathrm{Re}(x_{d-1}+i|x_d|)^{3/2}.$$
For a solution to \eqref{TOP}, the set $\Lambda_{\frac 32}(u)$ is relatively open in $\Gamma(u)$. The free boundary is an analytic manifold of dimension $(d-2)$ near $\Lambda_{\frac 32}(u)$ \cite{DS,KPS}.
For an even integer $2k$, Garofalo-Petrosyan \cite{GP} classified all $2k$-homogeneous solutions to \eqref{TOP} as
\begin{align}\label{EvenHomSolution}
\mathcal{P}_{2k}^+=\{p: \text{ }&\Delta p=0 \text{ and } x\cdot\nabla p=2kp \text{ in $\mathbb{R}^d$,} \\& p(\cdot,0)\ge 0 \text{ and } p(\cdot,x_d)=p(\cdot,-x_d)\}.\nonumber
\end{align}
Let $u$ be a solution to \eqref{TOP} and $q\in\Lambda(u).$ Garofalo-Petrosyan gave a geometric characterization of contact points with even frequencies
$$q\in\Lambda_{2k}(u) \text{ for some $k\in\mathbb{N}$} \iff \mathcal{H}^{d-1}(\Lambda(u)\cap B_r(q))=o(r^{d-1}) \text{ as $r\to 0.$}$$
Here $\mathcal{H}^{d-1}$ is the $(d-1)$-dimensional Hausdorff measure. In particular, we have $\Lambda_{2k}(u)\subset\Gamma(u).$ They also showed that $\Lambda_{2k}(u)$ is locally covered by $C^{1}$-manifolds. By quantifying the rate of convergence in \eqref{FirstConvergence} at points in $\Lambda_{2k}(u)$, the regularity of the covering manifolds was improved to $C^{1,\log^c}$ in Colombo-Spolaor-Velichkov \cite{CSV}.
More recently, there has been some interest in the study of contact points with odd frequencies, mainly motivated by the connection to the singular set in the obstacle problem, see \cite{FRS}. While points in $\Lambda_1(u)$ lie in the interior of $\Lambda(u),$ it remains an open question whether $\Lambda_{2k+1}(u)$ can contain points in the free boundary for $k\ge 1$. On the other hand, it is known that around $\Lambda_{2k+1}(u)$, the contact set has full density, that is,
$$q\in\Lambda_{2k+1}(u) \text{ for some $k\in\mathbb{N}$} \iff \mathcal{H}^{d-1}(\{u>0,x_d=0\}\cap B_r(q))=o(r^{d-1})\text{ as $r\to 0.$}$$
See, for instance, Proposition 7.1 in Fern\'andez-Real \cite{Fe}.
The family of $(2k+1)$-homogeneous solutions to \eqref{TOP} was recently classified by Figalli, Ros-Oton and Serra in \cite{FRS} as
\begin{align}\label{OddHomSolution}
\mathcal{P}_{2k+1}^+=\{p: \text{ }&\Delta p=0 \text{ in $\{x_d\neq0\}$ and } \Delta p\le 0 \text{ in $\mathbb{R}^d$,} \\ &x\cdot\nabla p=(2k+1)p \text{ in $\mathbb{R}^d$,}\nonumber \\&p(\cdot,0)= 0 \text{ and } p(\cdot,x_d)=p(\cdot,-x_d)\}.\nonumber
\end{align} They also proved uniqueness of the blow-up profile $u_0$ in \eqref{FirstConvergence} at $q\in\Lambda_{2k+1}(u)$.
Along a different direction, Focardi-Spadaro proved that the free boundary $\Gamma(u)$ is countably $(d-2)$-rectifiable in \cite{FoS1, FoS2}. They also showed that outside a set of dimension at most $(d-3)$, all free boundary points have frequencies in $\{2k,2k+1,2k-\frac 12: k\in\mathbb{N}\backslash\{0\}\}$. For generic boundary data, Fern\'andez-Real and Ros-Oton showed that the free boundary is smooth outside a set of dimension at most $(d-3)$ in \cite{FeR}.
In this paper, we focus on contact points with integer frequencies, that is, points in $\cup_{k\in\mathbb{N}}\Lambda_k(u).$ Around these points, we develop a unified approach that gives a uniform rate for the convergence in \eqref{FirstConvergence}.
Our main result is:
\begin{thm}\label{MainResult}
Suppose that $u$ is a solution to the thin obstacle problem \eqref{TOP}, and that $0\in\Lambda_m(u)$ for some $m\in\mathbb{N}$.
If $m=2k+1$ is odd, then there is a constant $\alpha\in(0,1)$, depending only on $k$ and the dimension $d$, such that
\begin{equation}\label{MainResultOdd}
u(x)=p(x)+O(|x|^{2k+1+\alpha}) \text{ as $x\to0$}
\end{equation} for some $p\in\mathcal{P}_{2k+1}^+.$
If $m=2k$ is even, then there is a constant $c>0$, depending only on $k$ and $d$, such that
\begin{equation}\label{MainResultEven}
u(x)=p(x)+O(|x|^{2k}(-\log|x|)^{-c}) \text{ as $x\to0$}
\end{equation} for some $p\in\mathcal{P}_{2k}^+.$
\end{thm}
\begin{rem} While it is known in \cite{FRS} that rescaled solutions converge to some $p\in\mathcal{P}_{2k+1}^+$ at points with odd frequencies, this is the first time a quantified rate of convergence has been obtained. Corresponding results at even frequency points were known in Colombo-Spolaor-Velichkov \cite{CSV}. Our method is different and applies to all points with integer frequencies. It also leads to an improved exponent $c$ in \eqref{MainResultEven}, and in the corresponding $\log$-epiperimetric inequality from \cite{CSV} at points with even frequencies, see Remark \ref{ImprovedEpi}.
\end{rem}
With a standard application of Whitney's extension theorem and the implicit function theorem, Theorem \ref{MainResult} leads to the following stratification result for $\Lambda_m(u)$:
\begin{thm}\label{MainStratification}
Suppose that $u$ is a solution to the thin obstacle problem \eqref{TOP}.
For each $m\in\mathbb{N}$, we have the following decomposition $$\Lambda_m(u)=\cup_{j=0,1,\dots, d-2}\Lambda_m^j(u).$$
The lowest stratum $\Lambda_m^0(u)$ is locally isolated.
If $m$ is odd, then $\Lambda_m^j(u)$ is locally covered by a $j$-dimensional $C^{1,\alpha}$ manifold for each $j=1,\dots,d-2.$
If $m$ is even, then $\Lambda_m^j(u)$ is locally covered by a $j$-dimensional $C^{1,\log}$ manifold for each $j=1,\dots,d-2.$
\end{thm}
\begin{rem}
Points in $\Lambda_{1}(u)$ and $\Lambda_3^0(u)$ lie in the interior of the contact set $\Lambda(u)$. It remains open whether other strata of $\Lambda_{2k+1}(u)$ can contain points on the free boundary, see Remark \ref{Interior}.
\end{rem}
To obtain the results at even-frequency points, the approach taken by Colombo-Spolaor-Velichkov \cite{CSV} is based on the decomposition of the energy in terms of Fourier modes. This leads to a $\log$-epiperimetric inequality for the $2k$-Weiss energy functional (see \eqref{Weiss}). On the other hand, our method is based on the classic technique of linearization as in De Silva \cite{D}. By working directly in the physical space instead of the Fourier space, it seems that we are able to get more detailed information.
The main challenge is that solutions to the linearized problem do not have to satisfy the constraints in \eqref{TOP} (they might fail $\Delta u\le 0$ or
$u(x',0)\ge 0$). In our approach, this issue is fixed by solving a `boundary layer problem' near the hyperplane $\{x_d=0\}.$ For each unconstrained $m$-homogeneous harmonic polynomial $p$, we associate its approximation $\bar p$ that satisfies the constraints on $\{x_d=0\}$ and is harmonic up to an error $\kappa_p$ away from this hyperplane. We use the class of functions $\bar p$ to approximate the solution $u$ inductively in dyadic balls $B_r$, while keeping track of the rescaled error $\varepsilon \ge \kappa_p$. We introduce the notation $u \in \mathcal{S}_{m}(p,\varepsilon,r)$ when a solution $u$ is $\varepsilon$-approximated by $\bar p$ at scale $r$, see Definition \ref{DefWellApprox}.
With this notation, the main lemma is the following:
\begin{lem}\label{Dichotomy}
Given $m\in\mathbb{N}$, there are constants $\tilde{\varepsilon}$, $r_0$, $c$ small and $C$ big, such that the following holds:
If $u\in\mathcal{S}_{m}(p,\varepsilon,1)$ with $\varepsilon<\tilde{\varepsilon}$ and $1\le \|p\|_{\mathcal{L}^2(\mathbb{S}^{d-1})}\le 2$, then we have the following dichotomy:
a) Either $$W_{m}(u;1)-W_{m}(u; r_0)\ge c\varepsilon^2,$$ and $$u\in\mathcal{S}_m(p,C\varepsilon,r_0);$$
b) or $$u\in\mathcal{S}_{m}(p',\frac12\varepsilon,r_0)$$ for some $p'$ with $$\|p'-p\|_{\mathcal{L}^2(\mathbb{S}^{d-1})}\le C\varepsilon.$$
\end{lem}
Here the integer $m$ denotes the frequency of the contact point, and $W_m$ is the Weiss energy functional, see \eqref{Weiss}.
Lemma \ref{Dichotomy} states that when moving to a smaller scale, we can improve, by a definite amount, either the Weiss energy or the error in approximation. On the other hand, the Weiss energy is controlled by the error $\varepsilon$ in the approximation, see Lemma \ref{WeissComparison2}. It follows that both quantities decay in a quantified fashion.
Similar ideas have been applied to the nonlinear obstacle problem in \cite{SY1} as well as the triple-membrane problem in \cite{SY3}.
This paper is organized as follows: In Section 2, we collect some preliminary results and introduce the boundary layer problem. In Sections 3 and 4, we prove Lemma \ref{Dichotomy} at points with the odd and even frequencies. These two sections contain the heart of this work. In Section 5, we conclude with the proof of our main results.
\section{Preliminaries}
In this section, we collect some useful results and introduce the boundary layer problem.
Throughout this paper, we denote by $u$ a solution to the thin obstacle problem \eqref{TOP} on some domain inside $\mathbb{R}^d$ with $d\ge 3.$ This space is decomposed as $$\mathbb{R}^d=\{(x',x_d):x'\in\mathbb{R}^{d-1}, x_d\in\mathbb{R}\}.$$ For a set $E\subset\mathbb{R}^d$, we define the following subsets relative to $\{x_d=0\}$:
$$E'=E\cap\{x_d=0\}, \text{ } E^+=E\cap\{x_d>0\} \text{ and } E^-=E\cap\{x_d< 0\}.$$In particular, the contact set is $$\Lambda(u)=\{u=0\}'.$$
By applying Almgren's monotonicity formula to the thin obstacle problem, Athanasopoulos, Caffarelli and Salsa showed in \cite{ACS} that the contact set $\Lambda(u)$ can be decomposed according to the frequencies of the contact points. In this work, we focus on points with integer frequencies. Thanks to \cite{GP} and \cite{FRS}, these points can be characterized as
\begin{equation}\label{EvenContactDef}
\Lambda_{2k}(u)=\{q\in\Lambda(u): u_{q,r}\to u_0\in\mathcal{P}_{2k}^+ \text{ as $r\to 0$}\}
\end{equation} and
\begin{equation}\label{OddContactDef}
\Lambda_{2k+1}(u)=\{q\in\Lambda(u): u_{q,r}\to u_0\in\mathcal{P}_{2k+1}^+ \text{ as $r\to0$}\}.
\end{equation}
Recall the definition of normalized rescalings $u_{q,r}$ from \eqref{FirstConvergence}. The spaces of homogeneous solutions, $\mathcal{P}_{2k}^+$ and $\mathcal{P}_{2k+1}^+$, were introduced in \eqref{EvenHomSolution} and \eqref{OddHomSolution}.
When focusing on a particular frequency, constants depending only on that frequency and the dimension $d$ are called \textit{universal constants}.
\subsection{Weiss monotonicity formula and consequences}
First used by Weiss for the obstacle problem in \cite{W}, the Weiss monotonicity formula has been indispensable in the study of free boundary problems. Garofalo-Petrosyan \cite{GP} introduced its analogue to the thin obstacle problem.
For each $\lambda\in\mathbb{R}$, the \textit{$\lambda$-Weiss energy functional} is
\begin{equation}\label{Weiss}
W_\lambda(u;r)=\frac{1}{r^{d-2+2\lambda}}\int_{B_r}|\nabla u|^2-\frac{\lambda}{r^{d-1+2\lambda}}\int_{\partial B_r}u^2.
\end{equation}
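For later use, we note a scaling property that follows from a direct change of variables: with $u_r(x)=u(rx)/r^\lambda$,
$$W_\lambda(u;r)=\int_{B_1}|\nabla u_r|^2-\lambda\int_{\partial B_1}u_r^2=W_\lambda(u_r;1).$$
In particular, $W_\lambda(w;1)=0$ whenever $w$ is $\lambda$-homogeneous and harmonic, since then $\int_{B_1}|\nabla w|^2=\int_{\partial B_1}w w_\nu=\lambda\int_{\partial B_1}w^2.$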
We collect some of its properties in the following lemma. For its proof, see Theorem 1.4.1 and Theorem 1.5.4 in \cite{GP}.
\begin{lem}\label{PropertyWeiss}
Suppose that $u$ solves the thin obstacle problem in $B_1$. Then for $r\in(0,1)$, we have
\begin{equation}\label{DerOfWeiss}
\frac{d}{dr}W_\lambda(u;r)=\frac{2}{r}\int_{\partial B_1}(\nabla u_r\cdot\nu-\lambda u_r)^2,
\end{equation} where $u_r(x)=u(rx)/r^\lambda.$ In particular, $r\mapsto W_\lambda(u;r)$ is non-decreasing.
If we further assume that $0\in\Lambda_\lambda(u)$, then $\lim_{r\to 0}W_\lambda(u;r)=0.$
\end{lem}
Under the same assumptions as in Lemma \ref{PropertyWeiss}, we can integrate \eqref{DerOfWeiss} and apply H\"older's inequality to get
\begin{equation}\label{ChangeInRadial}
\int_{\partial B_1}|u_r-u_s|\le (\log(r/s))^{\frac 12}[W_\lambda(u;r)-W_\lambda(u;s)]^{\frac 12}
\end{equation} for $0<s<r<1.$
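For the reader's convenience, here is a sketch of this derivation (we suppress a dimensional constant, which is harmless for our purposes): writing $u_r-u_s=\int_s^r\frac{d}{dt}u_t\,dt$ with $\frac{d}{dt}u_t=\frac{1}{t}(\nabla u_t\cdot\nu-\lambda u_t)$ on $\partial B_1$, the Cauchy-Schwarz inequality with respect to the measure $\frac{dt}{t}\,d\mathcal{H}^{d-1}$ gives
\begin{align*}
\int_{\partial B_1}|u_r-u_s|&\le\int_s^r\int_{\partial B_1}|\nabla u_t\cdot\nu-\lambda u_t|\,d\mathcal{H}^{d-1}\,\frac{dt}{t}\\
&\le C(d)(\log(r/s))^{\frac 12}\Big[\int_s^r\int_{\partial B_1}(\nabla u_t\cdot\nu-\lambda u_t)^2\,d\mathcal{H}^{d-1}\,\frac{dt}{t}\Big]^{\frac 12},
\end{align*}
and by \eqref{DerOfWeiss} the bracketed integral equals $\frac 12[W_\lambda(u;r)-W_\lambda(u;s)]$.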
\subsection{The boundary layer problem}\label{TBLP}
When dealing with the linearized problem, we need to work with polynomials that may fail the constraints in \eqref{TOP}. These polynomials form the following spaces:
\begin{equation}\label{EvenHomPolyn}
\mathcal{P}_{2k}=\{p: \text{ }\Delta p=0 \text{ and } x\cdot\nabla p=2kp \text{ in $\mathbb{R}^d$,} \text{ and } p(\cdot,x_d)=p(\cdot,-x_d)\}
\end{equation} and
\begin{align}\label{OddHomPolyn}
\mathcal{P}_{2k+1}=\{p: \text{ }&\Delta p=0 \text{ in $\{x_d\neq0\}$, } x\cdot\nabla p=(2k+1)p \text{ in $\mathbb{R}^d$,}
\\&p(\cdot,0)= 0 \text{ and } p(\cdot,x_d)=p(\cdot,-x_d)\}.\nonumber
\end{align}
Compared with \eqref{EvenHomSolution} and \eqref{OddHomSolution}, polynomials in $\mathcal{P}_{2k}$ may fail to be non-negative along $\{x_d=0\}$, and polynomials in $\mathcal{P}_{2k+1}$ may fail to be superharmonic. We `correct' such errors by solving a thin obstacle problem in a boundary layer on the sphere $\mathbb{S}^{d-1}$.
To be precise, for small $\eta>0$, the \textit{boundary layer of width $\eta$} is defined as
$$L_\eta=\{(x',x_d):|x_d|<\eta|x|\}.$$This is the region trapped between the following surfaces
$$S^+_\eta=\{(x',x_d):x_d=\eta|x|\} \text{ and } S^-_\eta=\{(x',x_d):x_d=-\eta|x|\}.$$
When there is no ambiguity, we denote their intersections with $\mathbb{S}^{d-1}$ by the same expressions.
\begin{rem}Given $m\in\mathbb{N}$, we fix $\eta$ small enough, depending only on $m$ and $d$, so that the first Dirichlet eigenvalue of the operator $\Delta_{\mathbb{S}^{d-1}}+\lambda(m)$ in $L_\eta$ is negative, where $\Delta_{\mathbb{S}^{d-1}}$ is the spherical Laplacian, and $$\lambda(m)=m(m+d-2).$$
\end{rem}
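The constant $\lambda(m)$ arises from the following standard identity: if $w(x)=|x|^m\psi(x/|x|)$ is the $m$-homogeneous extension of a function $\psi$ on $\mathbb{S}^{d-1}$, then
$$\Delta w=|x|^{m-2}\big(\Delta_{\mathbb{S}^{d-1}}\psi+\lambda(m)\psi\big)(x/|x|),$$
so $w$ is harmonic exactly when $(\Delta_{\mathbb{S}^{d-1}}+\lambda(m))\psi=0$ on $\mathbb{S}^{d-1}$.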
In particular, the following is well-defined:
\begin{defi}\label{ReplacementOfp}
Given $m\in\mathbb{N}$ and $p\in\mathcal{P}_m$, the \textit{replacement of $p$}, denoted by $\overline{p}$, is the minimizer of the following energy
$$w\mapsto \int_{\mathbb{S}^{d-1}}|\nabla_{\mathbb{S}^{d-1}} w|^2-\lambda(m)w^2$$over functions satisfying $w\ge 0$ on $\{x_d=0\}$ and $w=p$ outside $L_\eta$.
Here $\nabla_{\mathbb{S}^{d-1}}$ denotes the tangential gradient on $\mathbb{S}^{d-1}$.
Denote the difference of $p$ and its replacement $\overline{p}$ by $v_p$, that is,
$$v_p=\overline{p}-p.$$
\end{defi}
\begin{rem}\label{LongRemarkForReplacement}
The replacement solves the thin obstacle problem for the operator $\Delta_{\mathbb{S}^{d-1}}+\lambda(m)$ in the boundary layer, namely,
$$\begin{cases}
(\Delta_{\mathbb{S}^{d-1}}+\lambda(m))\overline{p}\le 0 &\text{in $L_\eta$,}\\
\overline{p}\ge0 &\text{on $(\mathbb{S}^{d-1})',$}\\
(\Delta_{\mathbb{S}^{d-1}}+\lambda(m))\overline{p}=0 &\text{in $L_\eta\cap(\{x_d\neq 0\}\cup\{\overline{p}>0\}).$}
\end{cases}$$
As a result, we can view $(\Delta_{\mathbb{S}^{d-1}}+\lambda(m))\overline{p}$ as a signed measure, supported along $S^{\pm}_\eta$ and $\mathbb{S}^{d-1}\cap\{\overline{p}=0\}'$, of the following form
$$(\Delta_{\mathbb{S}^{d-1}}+\lambda(m))\overline{p}=f_pd\mathcal{H}^{d-2}|_{S^{\pm}_\eta}+g_pd\mathcal{H}^{d-2}|_{(\mathbb{S}^{d-1})'}.$$
With an abuse of notation, we denote the $m$-homogeneous extension of $\overline{p}$ and the corresponding extensions of $f_p$ and $g_p$ by the same notations. This way, we have
$$\Delta\overline{p}=f_pd\mathcal{H}^{d-1}|_{S^{\pm}_\eta}+g_pd\mathcal{H}^{d-1}|_{\{x_d=0\}}.$$
For each $p\in\mathcal{P}_m$, the following constant, $\kappa_p$, measures the extent to which $p$ fails to be a solution to the thin obstacle problem:
\begin{equation*}\label{Kappa}
\kappa_p:=\int_{S^+_\eta\cap\mathbb{S}^{d-1}}f_pd\mathcal{H}^{d-2}.
\end{equation*}
For these functions and constants, we often omit the subscript $p$ when there is no ambiguity.
\end{rem}
\begin{lem}\label{Lem22}
Using the notations in Definition \ref{ReplacementOfp} and Remark \ref{LongRemarkForReplacement}, we have
1) $v_p\ge 0$ on $\mathbb{S}^{d-1}$, $\kappa_p\ge 0.$
2) There are universal constants, $c$ and $C$, such that
$$c\kappa_p\le f_p\le C\kappa_p\text{ on $S^{\pm}_\eta\cap\mathbb{S}^{d-1}$}.$$ \end{lem}
\begin{proof}
The first statement follows directly from the maximum principle.
To see the second statement, we first note that $v$ is a non-negative harmonic function in $L_\eta^+.$ By the strong maximum principle, it suffices to consider the case when $v>0$ in $L_\eta^+.$
In this case, we can apply the Harnack inequality to $v$ inside $(B_2\backslash B_{1/2})\cap L_\eta^+$ to get
$c\le\frac{\sup_K v}{\inf_K v}\le C,$ where $K=\mathbb{S}^{d-1}\cap\{x_d=\frac{1}{2}\eta\}.$
If we denote by $\nu$ the unit normal along $S_\eta^+$ that is exterior to $L_\eta$, by the boundary Harnack principle, we have
\begin{equation}\label{22equation}c\inf_K v\le \partial_\nu v(x)\le C\sup_K v \quad \forall x\in\mathbb{S}^{d-1}\cap S_\eta^+.\end{equation}
Since $f=\partial_\nu v$ along $S_\eta^+$, the conclusion follows.
\end{proof}
We also have the following bounds for $\kappa_p$:
\begin{lem}\label{BoundsForKappa}
Using the notations in Definition \ref{ReplacementOfp} and Remark \ref{LongRemarkForReplacement}, and further assuming $\|p\|_{\mathcal{L}^2(\mathbb{S}^{d-1})}\le 1$, we can find a universal constant $C$ such that
$$\kappa_p\le C\sup_{\mathbb{S}^{d-1}} v_p.$$
Moreover, when $m$ is even, we have $$(\sup_{\mathbb{S}^{d-1}}v_p)^{\frac{d-1}{2}}\le C\kappa_p.$$
\end{lem}
\begin{proof}
Directly from \eqref{22equation}, we have the bound $\kappa_p\le C\sup_{\mathbb{S}^{d-1}} v_p.$
For the second comparison, we note that when $m$ is even, $v=\overline{p}-p$ solves the thin obstacle problem in $L_\eta$ with $-p$ as the obstacle and $0$ as boundary data. Suppose $$\varepsilon=-p(e_1)=\sup_{(\mathbb{S}^{d-1})'}(-p),$$ where $e_1$ is the unit vector in the $x_1$-direction, then $\sup_{\mathbb{S}^{d-1}} v\le\varepsilon$. Regularity of $p$ gives $v\ge -p\ge\frac 78 \varepsilon$ in $B'_{c\varepsilon^{1/2}}(e_1)\cap\mathbb{S}^{d-1}$, which leads to $v\ge \frac 78 \varepsilon$ in $B_{c\varepsilon^{1/2}}(e_1)\cap\mathbb{S}^{d-1}$ by a scaling argument.
From here we have $v(e_1,\frac{1}{2}\eta)\ge c\varepsilon^{\frac{d-1}{2}}$ by comparing with a truncation and rescaling of the Green's function for $\Delta_{\Sph}$ that has a pole at $e_1$ and vanishes outside $B_\eta(e_1)\cap\mathbb{S}^{d-1}.$ Combining this with \eqref{22equation} gives the desired result.
\end{proof}
With this, we have the following control over the size of $\|v\|_{H^1}$ in terms of $\kappa$:
\begin{lem}\label{H1Forv}
Using the notations in Definition \ref{ReplacementOfp} and Remark \ref{LongRemarkForReplacement}, and further assuming $\|p\|_{\mathcal{L}^2(\mathbb{S}^{d-1})}\le 1$, we have a universal constant $C$ such that
$$\|v_p\|_{H^1(B_1)}\le C\kappa_p^{\frac{1}{2}+\frac{1}{d-1}} \text{ if $p\in \mathcal{P}_{2k}$}$$ and
$$\|v_p\|_{H^1(B_1)}\le C\kappa_p^{1/2} \text{ if $p\in \mathcal{P}_{2k+1}$}.$$
\end{lem}
\begin{proof}
We first deal with the case when $p\in\mathcal{P}_{2k}$.
With the homogeneity and harmonicity of $p$, we have
\begin{align*}
\int_{\mathbb{S}^{d-1}}|\nabla_{\Sph}\overline{p}|^2-\lambda(2k)\overline{p}^2&=-\int_{\mathbb{S}^{d-1}}v(\Delta_{\Sph}+\lambda(2k))\overline{p}\\
&\le-\sup_{\mathbb{S}^{d-1}}v\int_{(\mathbb{S}^{d-1})'}g.\nonumber
\end{align*}
Now let $P$ denote the $2k$-homogeneous harmonic polynomial with $P=1$ on $(\mathbb{S}^{d-1})'$. Then
$ \int_{\mathbb{S}^{d-1}}P(\Delta_{\Sph}+\lambda(2k))\overline{p}=\int_{\mathbb{S}^{d-1}}\overline{p}(\Delta_{\Sph}+\lambda(2k))P=0 $ gives
$$-\int_{(\mathbb{S}^{d-1})'}g\sim \int_{S^{\pm}_\eta}f\sim\kappa.$$ Thus
\begin{equation}\label{EvenPbarEnergy}
\int_{\mathbb{S}^{d-1}}|\nabla_{\Sph}\overline{p}|^2-\lambda\overline{p}^2\le C\kappa^{1+\frac{2}{d-1}}
\end{equation}
by Lemma \ref{BoundsForKappa}.
Using again the homogeneity and harmonicity of $p\in\mathcal{P}_{2k}$, this implies
$$\int_{\mathbb{S}^{d-1}}|\nabla_{\Sph} v|^2-\lambda v^2=\int_{\mathbb{S}^{d-1}}|\nabla_{\Sph}\overline{p}|^2-\lambda\overline{p}^2\le C\kappa^{1+\frac{2}{d-1}}.$$
To conclude the estimate for $p\in\mathcal{P}_{2k}$, we simply note that the left-hand side is comparable to $\|v\|_{H^1(B_1)}^2$ when $\eta$ is chosen small.
Now we deal with the case when $p\in\mathcal{P}_{2k+1}$.
In this case, noting that $p$ is admissible in the minimization problem in Definition \ref{ReplacementOfp}, we have
\begin{equation}\label{OddPbarEnergy}\int_{\mathbb{S}^{d-1}}|\nabla_{\mathbb{S}^{d-1}}\overline{p}|^2-\lambda(2k+1)\overline{p}^2\le\int_{\mathbb{S}^{d-1}}|\nabla_{\mathbb{S}^{d-1}}p|^2-\lambda(2k+1)p^2=0,
\end{equation}
which implies
$$\int_{\mathbb{S}^{d-1}}|\nabla_{\mathbb{S}^{d-1}}(\overline{p}-p)|^2-\lambda(\overline{p}-p)^2\le 2\int_{\mathbb{S}^{d-1}}p(\Delta_{\mathbb{S}^{d-1}}+\lambda)(\overline{p}-p)=2\int_{\mathbb{S}^{d-1}\cap S_{\eta}^{\pm}}pfd\mathcal{H}^{d-2}.
$$For the last equality, we used $p=0$ along $\{x_d=0\}$, and $\Delta p=0$ away from $\{x_d=0\}.$
This gives
$$\int_{\mathbb{S}^{d-1}}|\nabla_{\mathbb{S}^{d-1}}v|^2-\lambda v^2\le C\kappa.$$We conclude by noting the left-hand side is comparable to $\|v\|_{H^1(B_1)}^2$ when $\eta$ is small. \end{proof}
For $p\in\mathcal{P}_m$ with $\overline{p}\neq p$, we sometimes need to absorb the right-hand side $f_p$ from Remark \ref{LongRemarkForReplacement}. To do so, we need some auxiliary functions.
Firstly, let $\varphi_p:\mathbb{S}^{d-1}\to\mathbb{R}$ denote the projection of the normalized $f_p$ onto $\mathcal{P}_m$, that is,
\begin{equation}\label{Defphip}
\varphi_p=\sum \langle \frac{f_p}{\kappa_p},p_j\rangle p_j,
\end{equation} where $\{p_j\}$ is an orthonormal basis for $\mathcal{P}_m$ in $\mathcal{L}^2(\mathbb{S}^{d-1})$ and $$\langle\frac{f_p}{\kappa_p},p_j\rangle=\frac{1}{\kappa_p}\int_{\mathbb{S}^{d-1}\cap S^{\pm}_\eta}p_jf_pd\mathcal{H}^{d-2}.$$
In particular, the difference $\frac{f_p}{\kappa_p}-\varphi_p$ is perpendicular to $\mathcal{P}_m$. Consequently, Fredholm theory implies that we can find a unique function $H_p$ on $\mathbb{S}^{d-1}$ that is even with respect to $x_d$ and satisfies:
If $m=2k$, then
\begin{equation}\label{DefHpEven}
(\Delta_{\Sph}+\lambda(2k))H_p=\frac{f_p}{\kappa_p}-\varphi_p \text{ on $\mathbb{S}^{d-1}$};
\end{equation}
If $m=2k+1$, then
\begin{equation}\label{DefHpOdd}
(\Delta_{\Sph}+\lambda(2k+1))H_p=\frac{f_p}{\kappa_p}-\varphi_p \text{ on $(\mathbb{S}^{d-1})^{\pm}$, and } H_p(\cdot,0)=0.
\end{equation}
If we denote its $m$-homogeneous extension also by $H_p$, then
\begin{equation}\label{DefPhip}
\Phi_p:=H_p+\frac{1}{d+2m-2}\varphi_p(\frac{x}{|x|})|x|^{m}\log|x|
\end{equation}satisfies
$$\Delta\Phi_p=\frac{f_p}{\kappa_p}d\mathcal{H}^{d-1}|_{S^{\pm}_\eta} \text{ in $\mathbb{R}^d$ if $m=2k$;}$$
and
$$\Delta\Phi_p=\frac{f_p}{\kappa_p}d\mathcal{H}^{d-1}|_{S^{\pm}_\eta} \text{ in $(\mathbb{R}^d)^{\pm}$, and } \Phi_p(\cdot,0)=0 \text{ if $m=2k+1$.}$$
We often omit the subscript when there is no ambiguity.
For our argument, it is crucial that $f_p$ has a non-trivial projection into $\mathcal{P}_m$:
\begin{lem}\label{NontrivialProjection}
If $\kappa_p\neq 0$, then there are universal positive constants $c$ and $C$ such that
$$c\le\|\varphi_p\|_{\mathcal{L}^2(\mathbb{S}^{d-1})}\le C,
$$ and $$\|\Phi_p\|_{C^{0,1}(B_1)} \le C.$$
\end{lem}
\begin{proof}
Both upper bounds follow from the definitions of $\varphi$, $\Phi$ and Lemma \ref{Lem22}.
For the lower bound for $\varphi$, it suffices to note that we can find $q\in\mathcal{P}_m$ with $\|q\|_{\mathcal{L}^2(\mathbb{S}^{d-1})}=1$ and $q\ge c>0$ along $S^\pm_\eta$ if $\eta$ is small.
\end{proof}
\begin{lem}\label{PointwiseApprox}
If $u$ solves \eqref{TOP} in $B_1$, and $p\in\mathcal{P}_{2k}\cup\mathcal{P}_{2k+1}$, then $$\|u-\overline{p}\|_{\mathcal{L}^\infty(B_{1/2})}+\|u-\overline{p}\|_{H^1(B_{1/2})}\le C(\|u-\overline{p}\|_{\mathcal{L}^2(B_1)}+\kappa_p)$$ for a universal constant $C$.
\end{lem}
\begin{proof}
By Definition \ref{ReplacementOfp}, we have $\overline{p}\ge0$ along $\{x_d=0\}$. Thus $\Delta u=0$ inside $\{u-\overline{p}>0\}$. As a result,
$$\Delta(u-\overline{p})=-\Delta\overline{p}\ge-fd\mathcal{H}^{d-1}|_{S^{\pm}_\eta} \text{ in $\{u-\overline{p}>0\}$.}$$
Similarly, inside $\{\overline{p}-u>0\}$, we have $\Delta\overline{p}\ge 0.$ Thus
$$\Delta(u-\overline{p})\le\Delta u\le 0 \text{ in $\{\overline{p}-u>0\}$}.$$
Combining these, we have
$$\Delta |u-\overline{p}|\ge-fd\mathcal{H}^{d-1}|_{S^{\pm}_\eta} \text{ in $B_1$.}$$
Meanwhile, let $\zeta$ be a smooth non-negative function on $\mathbb{S}^{d-1}$ satisfying
$$\zeta=0\text{ on $(\mathbb{S}^{d-1})'$ and }\zeta=1 \text{ on $\{|x_d|\ge \frac{\eta}{2}|x'|\}$.}$$
Recalling the auxiliary function from \eqref{DefPhip}, we have
$$\Delta(\zeta\Phi)=\frac{1}{\kappa}fd\mathcal{H}^{d-1}|_{S^{\pm}_\eta}+R,$$where the remainder $R$ is universally bounded in $B_1$. Consequently, we have
$$\Delta(|u-\overline{p}|+\kappa\zeta\Phi)\ge-C\kappa \text{ in $B_1.$}$$
Together with Lemma \ref{NontrivialProjection}, this gives the estimates on $\|u-\overline{p}\|_{\mathcal{L}^\infty(B_{1/2})}$ and $\|u-\overline{p}\|_{H^1(B_{1/2})}$.
\end{proof}
\begin{rem}
Lemma \ref{PointwiseApprox} is the reason why it is preferable to work with the replacement $\overline{p}$ rather than the original $p$.
\end{rem}
\subsection{Well-approximated solutions}
The heart of this paper is Lemma \ref{Dichotomy}, where we improve the approximation of a solution $u$ by replacements of polynomials from $\mathcal{P}_{2k}$ or $\mathcal{P}_{2k+1}$, defined in \eqref{EvenHomPolyn} and \eqref{OddHomPolyn}.
Let $u$ be a solution to \eqref{TOP} in $B_1$, and let $p\in\mathcal{P}_m$ for $m=2k$ or $2k+1$. The distance between them is denoted by
\begin{equation}\label{epsup}
\delta(u,p):=\max\{\|u-\overline{p}\|_{H^1(B_1)},\kappa_p\}.
\end{equation} Here we use notations from Definition \ref{ReplacementOfp} and Remark \ref{LongRemarkForReplacement}.
\begin{defi}\label{DefWellApprox}
Given $\varepsilon>0$, we say that $u$ is $\varepsilon$-approximated by $p\in\mathcal{P}_m$ at scale $r>0$, and write
$$u\in\mathcal{S}_m(p,\varepsilon,r)
$$ if
$$\delta(u_r,p)<\varepsilon,$$where $u_r(x)=\frac{1}{r^m}u(rx).$
\end{defi}
We collect some immediate consequences.
\begin{lem}\label{WeissComparison}
If $u\in\mathcal{S}_{m}(p,\varepsilon,1),$ then $$W_{m}(u;3/4)\le W_m(\overline{p};3/4)+C\varepsilon^2
$$for a universal $C$.
\end{lem}
\begin{proof}
With
$\|u-\overline{p}\|_{H^1(B_1)}\le \varepsilon$,
we can find $\rho\in[\frac{3}{4},\frac{7}{8}]$ such that
$$\int_{\partial B_\rho}(u_\nu-\overline{p}_\nu)^2+(u-\bar p)^2d\mathcal{H}^{d-1}\le C\varepsilon^2.$$
A direct computation gives
\begin{align*}
W_m(\overline{p};\rho)-W_m(u;\rho)=&\frac{1}{\rho^{d+2m-2}}\int_{B_\rho}|\nabla(\overline{p}-u)|^2-2\Delta u(\overline{p}-u)\\
+\frac{1}{\rho^{d+2m-2}}\int_{\partial B_\rho}2u_\nu&(\overline{p}-u)-\frac{m}{\rho^{d+2m-1}}\int_{\partial B_\rho}(\overline{p}-u)^2+2u(\overline{p}-u).
\end{align*}
With $u\Delta u=0$ and $\overline{p}\Delta u\le 0$, this implies
\begin{align*}W_{m}(\overline{p};\rho)-W_{m}(u;\rho)&\ge\frac{1}{\rho^{d+2m-2}}\int_{\partial B_\rho}2(\rho u_\nu-mu)(\overline{p}-u)-m(\overline{p}-u)^2\\
&\ge-C\varepsilon^2,
\end{align*}
where we have used $\rho u_\nu-mu = \rho (u-\bar p)_\nu - m (u - \bar p)$.
With monotonicity of $W_m$ and homogeneity of $\overline{p}$, this implies
$$W_{m}(u;3/4)\le W_m(\overline{p};3/4)+C\varepsilon^2.
$$
\end{proof}
A consequence of Lemma \ref{WeissComparison} is the following relation between $W_m$ and $\varepsilon$.
\begin{lem}\label{WeissComparison2}
If $u\in\mathcal{S}_{m}(p,\varepsilon,1),$ then $$W_{m}(u;3/4)\le C\varepsilon^2, \quad \quad \mbox{if $m$ is odd},$$
and
$$W_{m}(u;3/4)\le C\varepsilon^{1+\frac{2}{d-1}}, \quad \quad \mbox{if $m$ is even,}$$
with $C$ universal.
\end{lem}
\begin{proof}
By Lemma \ref{WeissComparison} we only need to bound $W_m(\bar p;1)$.
When $m$ is odd, $W_{2k+1}(\overline{p};1)\le 0$ by \eqref{OddPbarEnergy}.
When $m$ is even, note that
\begin{equation}\label{EvenPbarEnergy2}
W(\overline{p};1)=C\int_{\mathbb{S}^{d-1}}|\nabla_{\Sph}\overline{p}|^2-\lambda(2k)\overline{p}^2\le C\varepsilon^{1+\frac{2}{d-1}}
\end{equation} by \eqref{EvenPbarEnergy}.
\end{proof}
\begin{rem}\label{AttractionRepulsion}
The difference between the exponents in Lemma \ref{WeissComparison2} leads to the different rates of convergence in Theorem \ref{MainResult}.
\end{rem}
The following is a version of Lemma B.2 from \cite{FRS}. It follows by quantifying the proof in \cite{FRS} and is left to the reader:
\begin{lem}\label{PinDown}
Suppose that $u$ is a solution to \eqref{TOP} in $B_1$.
If $u\le p+\varepsilon$ in $B_1$ for some $p\in\mathcal{P}_{2k+1}$ with $\|p\|_{\mathcal{L}^2(\mathbb{S}^{d-1})}\le 1$, then
$$u=0 \quad \text{in $B_{1-\varepsilon^{1/2}}'\cap\{\frac{\partial}{\partial x_d} p\le -M\varepsilon^{1/2}\}$,}$$where $M$ is a universal constant.
\end{lem}
\begin{rem}\label{OneSidedDer}
For a function $w\in C^{1}(B_1\cap\{x_d\ge 0\})$, $\frac{\partial}{\partial x_d}w(x',0)$ denotes the one-sided derivative in the $x_d$-direction taken in $B_1\cap\{x_d\ge0\},$ that is,
$$\frac{\partial}{\partial x_d}w(x',0)=\lim_{t\to 0+}\frac{w(x',t)-w(x',0)}{t}.
$$
\end{rem}
\section{The dichotomy at a point with odd frequency}
Suppose that $u$ is a solution to the thin obstacle problem \eqref{TOP}, and that $0$ is a point with integer frequency. By results in \cite{GP,FRS}, up to an initial scaling, the solution $u$ is well-approximated in $B_1$ by some homogeneous solution from either $\mathcal{P}_{2k+1}^+$ or $\mathcal{P}_{2k}^+$ as in \eqref{OddHomSolution} and \eqref{EvenHomSolution}. To get a rate of convergence as in Theorem \ref{MainResult}, we need to improve this approximation at smaller scales.
This is achieved through the dichotomy as in Lemma \ref{Dichotomy}, which states that at a smaller scale, either the approximation can be improved in a quantified fashion, or the Weiss energy drops in a quantified fashion. In some sense, this method combines the strengths of the epiperimetric inequality approach as in \cite{CSV} and the approach by linearization as in \cite{D}.
In this section and the next, we prove this dichotomy for points with odd and even frequencies, respectively. In the final section of this paper, we show how to deduce the main result from them.
We state the main lemma for this section:
\begin{lem}[Dichotomy at a point with odd frequency]\label{DichotomyOdd}
Given $k\in\mathbb{N}$, there are universal constants $\tilde{\varepsilon}$, $r_0$, $c$ small and $C$ big, such that the following holds:
If $u\in\mathcal{S}_{2k+1}(p,\varepsilon,1)$ with $\varepsilon<\tilde{\varepsilon}$ and $1\le \|p\|_{\mathcal{L}^2(\mathbb{S}^{d-1})}\le 2$, then we have the following dichotomy:
a) Either $$W_{2k+1}(u;1)-W_{2k+1}(u;r_0)\ge c\varepsilon^2,$$ and
$$u\in\mathcal{S}_{2k+1}(p,C\varepsilon,r_0);$$
b) or $$u\in\mathcal{S}_{2k+1}(p',\frac 12\varepsilon,r_0)$$ for some $p'$ with $$\|p'-p\|_{\mathcal{L}^2(\mathbb{S}^{d-1})}\le C\varepsilon.$$
\end{lem}
Recall the space of well-approximated solutions $\mathcal{S}_{2k+1}$ from Definition \ref{DefWellApprox}, and the Weiss energy from \eqref{Weiss}.
The remaining part of this section is devoted to the proof of Lemma \ref{DichotomyOdd}. We argue by contradiction.
Suppose, on the contrary, the lemma is not true. Then we find a sequence $(u_n,p_n)$ satisfying
$$ 1\le\|p_n\|_{\mathcal{L}^2(\mathbb{S}^{d-1})}\le 2, \text{ and } u_n\in\mathcal{S}_{2k+1}(p_n,\varepsilon_n,1) \text{ with $\varepsilon_n\to 0.$}$$
However, neither a) nor b) holds, that is
\begin{equation}\label{FailingWeiss1}
W_{2k+1}(u_n;1)-W_{2k+1}(u_n;r_0)<\frac{1}{n^2}\varepsilon_n^2,
\end{equation}
and
\begin{equation}\label{FailingImprovement}
u_n\notin\mathcal{S}_{2k+1}(p',\frac12\varepsilon_n,r_0) \quad \forall p'\in\mathcal{P}_{2k+1} \text{ with }\|p'-p_n\|_{\mathcal{L}^2(\mathbb{S}^{d-1})}\le C\varepsilon_n.
\end{equation} The constants $r_0$ and $C$ will be chosen depending on universal constants.
We will choose $r_0\le 1/2$. Hence by monotonicity of the Weiss energy and \eqref{FailingWeiss1}, we have
\begin{equation}\label{FailingWeiss}
W_{2k+1}(u_n;1)-W_{2k+1}(u_n;1/2)<\frac{1}{n^2}\varepsilon_n^2.
\end{equation}
Corresponding to this sequence $p_n$, we have auxiliary functions $f_n$, $g_n$, $v_n$, $\varphi_n$, $H_n$ and $\Phi_n$, and constants $\kappa_n$ as in Definition \ref{ReplacementOfp}, Remark \ref{LongRemarkForReplacement}, \eqref{Defphip} and \eqref{DefPhip}.
Now with $p_n$ uniformly bounded in the finite-dimensional space $\mathcal{P}_{2k+1}$, we have, up to a subsequence,
$$p_n\to p_\infty \text{ uniformly in $C^1(B_1\cap\{x_d\ge0\})$ and $C^1(B_1\cap\{x_d\le0\})$.}$$
As a result, we have $p_\infty\in\mathcal{P}_{2k+1}$. Actually, it is in the more restrictive space $\mathcal{P}_{2k+1}^+$ (see \eqref{OddHomSolution}):
\begin{lem}
$\frac{\partial}{\partial x_d} p_\infty\le 0$ in $B_1'$.
\end{lem}
\begin{proof}
By Lemma \ref{H1Forv}, we have
\begin{equation}\label{33equation}\|u_n-p_n\|_{H^1(B_1)}\le \|u_n-\overline{p_n}\|_{H^1(B_1)}+\|v_n\|_{H^1(B_1)}\le C\varepsilon_n^{1/2}.
\end{equation}
Suppose $\frac{\partial}{\partial x_d}p_\infty(x',0)=\beta>0$ at some $(x',0)\in B_1'.$
Since $\frac{\partial}{\partial x_d}u_n(\cdot,0)\le 0$, we have
$\frac{\partial}{\partial x_d}(p_n-u_n)\ge\frac{1}{2}\beta$ in a neighborhood of $(x',0)$ for large $n$. This contradicts \eqref{33equation} eventually.
\end{proof}
Since $p_\infty\in\mathcal{P}_{2k+1}$, the following set
\begin{equation*}
\mathcal{N}:=\{\frac{\partial}{\partial x_d} p_\infty=0\}'
\end{equation*} is of dimension at most $(d-2)$.
\begin{lem}\label{EventualZero}
Given a compact set $K\subset B_1'\backslash\mathcal{N}$, we can find $N\in\mathbb{N}$ such that $$u_n=0 \text{ on $K$}$$ for all $n\ge N.$
\end{lem}
\begin{proof}
By compactness of $K$, we can find $\beta>0$ such that $$\frac{\partial}{\partial x_d} p_\infty\le-\beta \text{ on $K$.}$$
With Lemma \ref{PointwiseApprox} and \eqref{33equation}, we have $$u_n\le p_n+C_K\varepsilon_n^{1/2} \text{ in a neighborhood of $K$.}$$ Together with Lemma \ref{PinDown}, this gives the desired result.
\end{proof}
Now define the \textit{normalized solutions}
\begin{equation}\label{NormalizedSolution}
\hat{u}_n=\frac{u_n-\overline{p}_n}{\varepsilon_n}.
\end{equation} With Lemma \ref{PointwiseApprox} and \eqref{DefPhip}, we have that $$\Delta (\hat{u}_n+\frac{\kappa_n}{\varepsilon_n}\Phi_n)=0 \text{ in $B_1^+$}$$ and
$$ \|\hat{u}_n+\frac{\kappa_n}{\varepsilon_n}\Phi_n\|_{H^1(B_\rho)}\le C(\rho) \text{ for any $\rho<1$.}$$
Thus, up to a subsequence, $\hat{u}_n+\frac{\kappa_n}{\varepsilon_n}\Phi_n$ converges in $L^2_{loc}(B_1)$ to a limit function $h \in H^1_{loc}(B_1),$ which is harmonic in $B_1^+$ and $B_1^-$, and
\begin{equation}\label{VanishingH1}
\|\hat{u}_n+\frac{\kappa_n}{\varepsilon_n}\Phi_n-h\|_{\mathcal{L}^2(B_{7/8})}=o(1) \text{ as $n\to\infty$}.
\end{equation}
Moreover, Lemma \ref{EventualZero} implies that $\hat{u}_n+\frac{\kappa_n}{\varepsilon_n}\Phi_n$ vanishes eventually on any compact subset of $B_1'\backslash\mathcal{N},$ where $\mathcal{N}$ is a subset of $\{x_d=0\}$ of dimension at most $(d-2)$.
Consequently, $\hat{u}_n+\frac{\kappa_n}{\varepsilon_n}\Phi_n$ converges uniformly to $h$ on compact sets in $B_1 \setminus \mathcal N$, which
implies that $h=0$ on $B_1'\backslash\mathcal{N}.$
With $\mathcal{N}$ having $0$ capacity, this gives
$$\Delta h=0 \text{ in $B_1^+$, and } h=0 \text{ on $B_1'$.}$$
Denote the $(2k+1)$-order Taylor expansion of $h$ at the origin (in $B_1^+$, then evenly reflected to $B_1^-$) by $\sum_{\ell=0}^{2k+1}h_\ell,$ with each $h_\ell$ being the $\ell$-homogeneous part. Then we have
\begin{lem}\label{InitialImprovement}
There is a universal constant $C$, such that for $r\in(0,1/4)$ we have
$$\|(\hat{u}_n)_r-h_{2k+1}\|_{\mathcal{L}^2(B_2)}\le Cr(1+|\log r|)+o(1)$$ and
$$\frac{\kappa_n}{\varepsilon_n}\le Cr+o(1)
\text{ as $n\to\infty,$}$$where $(\hat{u}_n)_r(x)=\frac{1}{r^{2k+1}}\hat{u}_n(rx).$
\end{lem}
\begin{proof}
Throughout this proof, for a function $w$, we use $w_r$ to denote its rescaling $$w_r(x)=\frac{1}{r^{2k+1}}w(rx).$$
Firstly, with \eqref{ChangeInRadial} and \eqref{FailingWeiss}, we have
\begin{equation*}
\int_{\partial B_1}|u-u_{\frac{1}{2}}|\le \varepsilon o(1),
\end{equation*} which implies, by the maximum principle and the homogeneity of $\overline{p}$, that
\begin{equation}\label{35equation}
|\hat{u}-\hat{u}_{\frac 12}|=o(1) \text{ in $B_{7/8}.$}
\end{equation}
With \eqref{VanishingH1} and regularity of the harmonic function $h$, we have
\begin{equation}\label{35equation1}
\|\hat{u}+\frac{\kappa}{\varepsilon}\Phi-\sum_{\ell=0}^{2k+1}h_\ell\|_{\mathcal{L}^2(B_{2r})}\le Cr^{2k+2+\frac{d}{2}}+o(1).
\end{equation}
A rescaling gives
$$\|\hat{u}_{\frac 12}+\frac{\kappa}{\varepsilon}\Phi_{\frac{1}{2}}-\sum_{\ell=0}^{2k+1}(h_\ell)_{\frac 12}\|_{\mathcal{L}^2(B_{4r})}\le Cr^{2k+2+\frac{d}{2}}+o(1).
$$
Combining these with \eqref{35equation}, we get
$$\|\frac{\kappa}{\varepsilon}(\Phi-\Phi_{\frac{1}{2}})+\sum[(h_\ell)_{\frac{1}{2}}-h_\ell]\|_{\mathcal{L}^2(B_{2r})}\le Cr^{2k+2+\frac{d}{2}}+o(1).
$$
That is,
$$
\frac{\kappa}{\varepsilon}\frac{\log(2)}{d+4k}\|\varphi|x|^{2k+1}\|_{\mathcal{L}^2(B_{2r})}+\sum_{\ell=0}^{2k}(2^{2k+1-\ell}-1)\|h_\ell\|_{\mathcal{L}^2(B_{2r})}\le Cr^{2k+2+\frac{d}{2}}+o(1),
$$where we used the definition of $\Phi$ from \eqref{DefPhip}, and the orthogonality of $\varphi$ and $h_\ell$ in $\mathcal{L}^2(\mathbb{S}^{d-1})$ for $\ell\le 2k.$
With Lemma \ref{NontrivialProjection}, we can use the bound on the first term to get
$$\frac{\kappa}{\varepsilon}\le Cr+o(1).
$$
Similarly, the bound on each of the remaining terms gives
$$\|h_\ell\|_{\mathcal{L}^2(B_{2r})}\le Cr^{2k+2+\frac{d}{2}}+o(1) \text{ for each $0\le\ell\le 2k.$}
$$
Putting these into \eqref{35equation1} gives
\begin{equation}\label{35equation3}
\|\hat{u}_r-h_{2k+1}\|_{\mathcal{L}^2(B_2)}\le Cr(1+|\log r|)+o(1).
\end{equation}This is the desired estimate.
\end{proof}
As an immediate consequence of Lemma \ref{InitialImprovement}, we have
\begin{equation}\label{AlmostDoneOdd}\|(u_n)_r-(\overline{p}_n+\varepsilon_n h_{2k+1})\|_{\mathcal{L}^2(B_2)}\le \varepsilon_n[Cr(1+|\log r|)+o(1)].
\end{equation}
\begin{lem}\label{ReplacementOdd}
As $n\to\infty,$ we have
$$\|\overline{p_n+\varepsilon_n h_{2k+1}}-\overline{p}_n-\varepsilon_n h_{2k+1}\|_{\mathcal{L}^2(B_2)}=\varepsilon_n o(1).$$
\end{lem}
\begin{proof}In this proof, define $w=\frac{1}{\varepsilon}(\overline{p+\varepsilon h_{2k+1}}-\overline{p}-\varepsilon h_{2k+1})$.
Firstly, note that by the maximum principle for $\Delta_{\Sph}+\lambda(2k+1)$ in $L^{+}_\eta$, we have
$|\overline{p+\varepsilon h_{2k+1}}-\overline{p}|\le C\varepsilon \text{ in $L^{+}_\eta$}.$ Thus $|w|\le C \text{ in $L^{+}_\eta$.}$
Meanwhile, in any compact subset of $B_1'\backslash\mathcal{N}$, the same argument as in Lemma \ref{EventualZero} implies $w=0$ for large $n$. We also have $w=0$ along $S^+_\eta$.
To summarize, $w$ is a bounded solution to $(\Delta_{\Sph}+\lambda)w=0$ in $L^+_\eta$ with $w=0$ on $S^+_\eta$ and eventually vanishing in any compact subset of $B_1'\backslash\mathcal{N}$, where $\mathcal{N}$ is of dimension at most $(d-2).$
Since $w$ is even in the $x_d$-direction and vanishes outside $L_\eta$, this implies that $\|w\|_{\mathcal{L}^2(\mathbb{S}^{d-1})}=o(1)$, which gives the desired estimate. \end{proof}
If we define $p_n'\in\mathcal{P}_{2k+1}$ as $$p'_n=p_n+\varepsilon_nh_{2k+1},$$ then
$|p'_n-p_n|\le C\varepsilon_n.$
With Lemma \ref{InitialImprovement} and Lemma \ref{ReplacementOdd}, we have
$$\kappa_{p_n'}\le\kappa_{p_n}+\varepsilon_no(1)\le \varepsilon_n(Cr+o(1)).
$$By choosing $r$ small, we have $\kappa_{p_n'}<\iota\varepsilon_n$ for all large $n$, where $\iota<\frac12$ is a small universal constant to be chosen.
Combining \eqref{AlmostDoneOdd} and Lemma \ref{ReplacementOdd}, we have
\begin{equation*}\|(u_n)_r-\overline{p'_n}\|_{\mathcal{L}^2(B_2)}\le \varepsilon_n[Cr(1+|\log r|)+o(1)].
\end{equation*}
By choosing $r$ small, depending on universal constants, such that $Cr(1+|\log r|)<\iota/2,$ we have
$$\|(u_n)_r-\overline{p'_n}\|_{\mathcal{L}^2(B_2)}< \iota\varepsilon_n
$$
for all large $n$. Together with Lemma \ref{PointwiseApprox}, this implies
$$\|(u_n)_r-\overline{p'_n}\|_{H^1(B_1)}< C\iota\varepsilon_n<\frac{1}{2}\varepsilon_n$$ if $\iota$ is small.
Consequently, we have $$u_n\in\mathcal{S}_{2k+1}(p_n',\frac12\varepsilon_n,r),$$ contradicting \eqref{FailingImprovement}.
This concludes the proof of Lemma \ref{DichotomyOdd}.
\section{The dichotomy at a point with even frequency}
In this section, we establish a dichotomy similar to Lemma \ref{DichotomyOdd} but at a contact point with even frequency. We also explain how to get a $\log$-epiperimetric inequality with a slightly better exponent than the one in Colombo-Spolaor-Velichkov \cite{CSV}.
The ideas are similar to those in the previous section. We only sketch the proof.
The main lemma for this section is:
\begin{lem}[Dichotomy at a point with even frequency]\label{DichotomyEven}
Given $k\in\mathbb{N}$, there are universal constants $\tilde{\varepsilon}$, $r_0$, $c$ small and $C$ big, such that the following holds:
If $u\in\mathcal{S}_{2k}(p,\varepsilon,1)$ with $\varepsilon<\tilde{\varepsilon}$ and $1\le \|p\|_{\mathcal{L}^2(\mathbb{S}^{d-1})}\le 2$, then we have the following dichotomy:
a) Either $$W_{2k}(u;1)-W_{2k}(u;r_0)\ge c\varepsilon^2$$ and
$$u\in\mathcal{S}_{2k}(p,C\varepsilon,r_0);$$
b) or $$u\in\mathcal{S}_{2k}(p',\frac12\varepsilon,r_0)$$ for some $p'$ with $$\|p'-p\|_{\mathcal{L}^2(\mathbb{S}^{d-1})}\le C\varepsilon.$$
\end{lem}
We prove this lemma by contradiction.
Suppose the lemma is not true. Then we can find a sequence $(u_n,p_n)$ satisfying
$$ 1\le\|p_n\|_{\mathcal{L}^2(\mathbb{S}^{d-1})}\le 2, \text{ and } u_n\in\mathcal{S}_{2k}(p_n,\varepsilon_n,1) \text{ with $\varepsilon_n\to 0.$}$$
However,
\begin{equation}\label{FailingWeissEven}
W_{2k}(u_n;1)-W_{2k}(u_n;\frac{1}{2})<\frac{1}{n^2}\varepsilon_n^2,
\end{equation}
and
\begin{equation}\label{FailingImprovementEven}
u_n\notin\mathcal{S}_{2k}(p',\frac12\varepsilon_n,r_0) \quad \forall p'\in\mathcal{P}_{2k} \text{ with }\|p'-p_n\|_{\mathcal{L}^2(\mathbb{S}^{d-1})}\le C\varepsilon_n.
\end{equation} The constants $r_0$ and $C$ will be chosen depending on universal constants.
Similar to the previous case, up to a subsequence, we have
$$p_n\to p_\infty\in\mathcal{P}_{2k} \text{ in $C^\infty(B_1)$.}$$
With Lemma \ref{BoundsForKappa}, we have $p_n\ge-v_n\ge-C\varepsilon_n^{\frac{2}{d-1}}$ on $B_1'$. Thus
$$p_\infty\ge0 \text{ on $B_1'$.}$$
The set where $p_\infty$ vanishes on $B_1'$, $\mathcal{N}=\{p_\infty=0\}'$, has dimension at most $(d-2)$.
Defining normalized solutions $\hat{u}_n$ as in \eqref{NormalizedSolution}, we have, up to a subsequence,
\begin{equation*}
\|\hat{u}_n+\frac{\kappa_n}{\varepsilon_n}\Phi_n-h\|_{\mathcal{L}^2(B_{7/8})}=o(1) \text{ as $n\to\infty$}
\end{equation*} for some $h$ satisfying $$\Delta h=0 \text{ in $B_1$.}$$
With similar ideas as in Lemma \ref{InitialImprovement}, we can rule out lower order terms in the Taylor polynomial of $h$ at $0$ and obtain for $r\in(0,1/4)$,
\begin{equation*}\|(u_n)_r-(\overline{p}_n+\varepsilon_n h_{2k})\|_{\mathcal{L}^2(B_2)}\le \varepsilon_n[Cr(1+|\log r|)+o(1)].
\end{equation*}
If we choose $r$ small, then $p'_n=p_n+\varepsilon_nh_{2k}\in\mathcal{P}_{2k}$ satisfies $|p'_n-p_n|\le C\varepsilon_n,$ and
$$\|(u_n)_r-\overline{p'_n}\|_{\mathcal{L}^2(B_2)}< \iota\varepsilon_n
\text { and }\kappa_{p_n'}\le\iota\varepsilon_n
$$for large $n$, where $\iota$ is a universally small constant.
An application of Lemma \ref{PointwiseApprox} again gives $$u_n\in\mathcal{S}_{2k}(p_n',\frac{1}{2}\varepsilon_n,r)$$ if $\iota$ is small, which contradicts \eqref{FailingImprovementEven}.
This completes the proof for Lemma \ref{DichotomyEven}.
\begin{rem}\label{ImprovedEpi}
We sketch how similar ideas lead to a $\log$-epiperimetric inequality for the $2k$-Weiss energy functional, with an exponent improving on the one currently known in the literature.
To be precise, let $w$ be a $2k$-homogeneous function satisfying $$w\ge 0 \text{ on $\{x_d=0\}$,}$$ and $$\|w\|_{\mathcal{L}^2(\mathbb{S}^{d-1})}\le 1, \quad |W_{2k}(w;1)|\le 1,$$ then we will show
\begin{equation}\label{EpiIneq}
W_{2k}(w;1)-W_{2k}(u;1)\ge cW_{2k}(w;1)^{1+\frac{d-3}{d+1}},
\end{equation} where $u$ is the solution to \eqref{TOP} with $u|_{\mathbb{S}^{d-1}}=w$, and $c$ is a universal constant.
A similar result is known in \cite{CSV} with the exponent on the right-hand side as $1+\frac{d-2}{d}.$
It suffices to prove \eqref{EpiIneq} under the assumption $\|w\|_{\mathcal{L}^2(\mathbb{S}^{d-1})}=1.$ For such $w$, choose \textit{$p\in\mathcal{P}_{2k}$ that minimizes $\delta(w,p)$ from \eqref{epsup}}.
With $W(p;1)=0$, $\Delta p=0$ and the homogeneity of $p$, we have
$$W(w;1)\le\int_{B_1}|\nabla w-\nabla p|^2\le C\delta(w,p)^2+C\|\overline{p}-p\|^2_{H^1(B_1)}.$$With homogeneity and harmonicity of $p$, we also have
$$W(\overline{p};1)=W(\overline{p}-p;1)=C[\int_{L_{\eta}}|\nabla_{\mathbb{S}^{d-1}}(\overline{p}-p)|^2-\lambda(2k)(\overline{p}-p)^2],$$
where the definitions of $L_\eta$ and $\lambda(2k)$ are given at the beginning of Subsection \ref{TBLP}. By making $\eta$ smaller, if necessary, we have $\int_{L_{\eta}}|\nabla_{\mathbb{S}^{d-1}}(\overline{p}-p)|^2-\lambda(2k)(\overline{p}-p)^2\sim\|\overline{p}-p\|^2_{H^1(B_1)}$. Therefore,
$$W(w;1)\le C\delta(w,p)^{1+\frac{2}{d-1}},$$ where we used \eqref{EvenPbarEnergy2}.
With $u=w$ on $\mathbb{S}^{d-1}$ and $w\ge 0 \text{ on $\{x_d=0\}$,}$ we have
$$ W(w;1)-W(u;1)\ge \int_{B_1}|\nabla(w-u)|^2.$$
Therefore, it suffices to show that for $\delta(w,p)$ small, we have
\begin{equation}\label{RemLower}\int_{B_1}|\nabla(w-u)|^2\ge c\delta(w,p)^2
\end{equation} for some universal constant $c$.
Suppose, on the contrary, this fails. Then we find a sequence $(w_n,u_n,p_n)$ as described above with $\delta_n=\delta(w_n,p_n)\to 0$ but
\begin{equation}\label{RemLower2}\int_{B_1}|\nabla(w_n-u_n)|^2\le \frac{1}{n^2}\delta_n^2.\end{equation}
Similar ideas as in the proof for Lemma \ref{DichotomyEven} give, for large $n$, $$\delta(u_n,p_n')<\frac{1}{2}\delta_n$$ for some $p_n'\in\mathcal{P}_{2k}$, where \eqref{RemLower2} can be used in place of \eqref{FailingWeissEven} to control terms with lower homogeneities. With \eqref{RemLower2}, this gives $\delta(w_n,p_n')<\delta(w_n,p_n),$ contradicting the minimizing property of $p_n.$
\end{rem}
\section{Convergence rate to the blow-up profile}
In this final section, we prove our main result Theorem \ref{MainResult}. Our result on stratification of contact points with integer frequencies, Theorem \ref{MainStratification}, follows with Whitney's extension theorem and the implicit function theorem. See, for instance, \cite{GP}.
We first give a technical lemma about sequences. We will apply this to the sequences of Weiss energy and errors in approximations at different scales.
\begin{lem}\label{Sequences}
Let $(w_n)$ and $(e_n)$ be two sequences of nonnegative real numbers with $e_0 \le 1$. Suppose that for some constants $A$ big, $a$ small and $\gamma\in(0,1]$, we have $$w_{n+1}\le A \, e_n^{1+\gamma},$$ and the following dichotomy:
\begin{itemize}
\item{ either $w_{n+1}\le w_n-ae_n^2$ and $e_{n+1}=Ae_n$; }
\item{ or $w_{n+1}\le w_n$ and $e_{n+1}=\frac{1}{2}e_{n}$}.
\end{itemize}
Then
$$\sum e_n<\sigma(e_0), \quad \mbox{where $\sigma(e_0) \to 0$ as $e_0 \to 0$},$$
and \begin{equation}\label{OddSum}\sum_{n\ge N}e_n\le C(1-c)^N \text{ if $\gamma=1$;}\end{equation} and
\begin{equation}\label{EvenSum}\sum_{n\ge N}e_n \le CN^{\frac{-\gamma}{1-\gamma}}\text{ if $\gamma\in(0,1)$.}\end{equation}
Here $c\in(0,1)$ and $C$ are constants depending only on $A$, $a$ and $\gamma.$
\end{lem}
\begin{proof}
Define a new sequence $$\alpha_n:=w_n+\mu e_n^2.$$ If $\mu>0$ is small enough, then we have
\begin{equation}\label{alphaDecay}\alpha_{n+1}\le \alpha_n-c \alpha_n^{\frac{2}{1+\gamma}},
\end{equation}
and $\alpha_1 \le C e_0^{1+\gamma}$.
For $\gamma=1$, we have $\alpha_n\le (1-c)^{n-1}\alpha_1,$ which gives the desired estimate for this case.
For $\gamma\in(0,1)$, from \eqref{alphaDecay} we have
that for all $n\ge 1$,
$$\alpha_n\le C(n+M)^{-\frac{1+\gamma}{1-\gamma}},$$
with $M \to \infty$ as $e_0 \to 0$, and the first estimate follows.
Meanwhile, by our definition of $\alpha_n$, we have $e_n^2\le C(\alpha_n-\alpha_{n+1}).$
Thus
\begin{align*}
\sum_{n=N}^{2N}e_n&\le C\sum_{N}^{2N}(\alpha_n-\alpha_{n+1})^{1/2}\\
&\le CN^{1/2} \left [ \sum_{N}^{2N}(\alpha_n-\alpha_{n+1})\right]^{1/2}\\
&\le CN^{1/2} \alpha_{N}^{1/2}\\
&\le CN^{-\frac{\gamma}{1-\gamma}}.
\end{align*}This gives the desired estimate for $\gamma\in(0,1)$.
\end{proof}
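The decay mechanism in \eqref{alphaDecay} can also be checked numerically. The following is a minimal sketch of such a check (not part of the proof; the function name and the sample parameters are ours), written so that the two regimes of the lemma are visible:
\begin{verbatim}
import numpy as np

def alpha_trajectory(gamma, c=0.1, alpha0=0.01, N=10000):
    # Iterate the worst case allowed by (alphaDecay):
    # alpha_{n+1} = alpha_n - c * alpha_n^(2/(1+gamma)).
    p = 2.0 / (1.0 + gamma)
    a = np.empty(N)
    a[0] = alpha0
    for n in range(N - 1):
        a[n + 1] = max(a[n] - c * a[n] ** p, 0.0)
    return a

# gamma = 1 (odd frequencies): p = 1, geometric decay (1 - c)^n.
# gamma in (0, 1) (even frequencies): p > 1, and the trajectory
# decays polynomially, comparable to n^(-(1+gamma)/(1-gamma)).
\end{verbatim}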
Now we give the proof of our main result.
\begin{proof}[Proof of Theorem \ref{MainResult}]
Suppose that $0\in\Lambda_m(u)$ for some $m\in\mathbb{N}$. Then, up to an initial rescaling, we have
$$u\in\mathcal{S}_m(p,\varepsilon,1), \quad \quad \|p\|_{\mathcal{L}^2(\mathbb{S}^{d-1})}=3/2,$$ for some $\varepsilon<\tilde{\varepsilon}$. Here $\tilde{\varepsilon}$ is the constant from Lemma \ref{Dichotomy}, and the solution class $\mathcal{S}_m$ is from Definition \ref{DefWellApprox}.
As the initial set-up, let $p_0=p$, $\rho_0=1$, $e_0=\varepsilon$, and $w_0=W_m(u;1)$.
Suppose that we have found $p_n$, $e_n$, $w_n$ small, and $\rho_n\in(0,1)$ such that $u\in\mathcal{S}_{m}(p_n,e_n,\rho_n)$ with $e_n<\tilde{\varepsilon}$. Then we apply Lemma \ref{Dichotomy}. If possibility a) happens in the dichotomy, we let $p_{n+1}=p_n$, $e_{n+1}=Ce_n$. If possibility b) happens, we let $p_{n+1}=p'$ and $e_{n+1}=\frac{1}{2}e_n.$ In both cases, we let $\rho_{n+1}=r_0\rho_n$ and $w_{n+1}=W_m(u;\rho_{n+1}).$
By Lemma \ref{Dichotomy} and Lemma \ref{WeissComparison2}, the sequences $(w_n)$ and $(e_n)$ satisfy the assumptions in Lemma \ref{Sequences}, with $\gamma=1$ if $m$ is odd, and $\gamma=\frac{2}{d-1}$ if $m$ is even. In particular, we have $\sum e_n<\tilde{\varepsilon}$ along the sequence if $e_0$ is chosen small enough. Consequently, Lemma \ref{Dichotomy} can be applied indefinitely.
Now note that $\|p_{n+1}-p_n\|_{\mathcal{L}^2(B_1)}\le Ce_n.$ The summability of $(e_n)$ implies the convergence of $p_n$ to some limit $p_\infty$.
Denote by $u_r$ the rescaled solution $u_r(x)=\frac{1}{r^m}u(rx).$ When $m$ is odd, we use \eqref{OddSum} to get $\|u_{r_0^n}-p_\infty\|_{H^1(B_1)}\le C(1-c)^n.$ This gives the estimate in \eqref{MainResultOdd}. When $m$ is even, we use \eqref{EvenSum} to get $\|u_{r_0^{n}}-p_\infty\|_{H^1(B_1)}\le Cn^{-\frac{2}{d-3}}.$ This gives the estimate in \eqref{MainResultEven}.
\end{proof}
The stratification in Theorem \ref{MainStratification} follows by the same strategy as in Garofalo-Petrosyan \cite{GP} or Colombo-Spolaor-Velichkov \cite{CSV}. In the following remark, we point out that $\Lambda_1(u)$ and $\Lambda_3^0(u)$ always lie in the interior of the contact set.
\begin{rem}\label{Interior}
Suppose $0\in\Lambda_{2k+1}(u)$, then there is $p\in\mathcal{P}_{2k+1}^+$ such that $$u(x)=p(x)+O(|x|^{2k+1+\alpha})$$ as $x\to0.$
If $0\in\Lambda_1(u)$, then $p$ is a positive multiple of $-|x_d|$.
If $0\in\Lambda_3(u)$, we have $$p(x',x_d)=-|x_d|(p_1(x')+x_d^2p_2(x',x_d)),$$ where $p_1$ is a $2$-homogeneous polynomial with $p_1\ge0$ on $\{x_d=0\}.$ The zero stratum of $\Lambda_3(u)$ is defined as those points where $p_1$ depends on all $(d-1)$ variables. This implies $p_1>0$ on $(\mathbb{S}^{d-1})'.$
Consequently, if $0\in\Lambda_1(u)$ or $\Lambda_3^0(u)$, then $\frac{\partial}{\partial x_d} p<0$ on $(\mathbb{S}^{d-1})'$. With Lemma \ref{PinDown}, we have $u=0$ in $B_r'$ for some $r>0.$
\end{rem} |
1106.5579 | \section{Introduction}
\label{sec:Intro}
The Ising model is the prototypical model of phase transitions and as such is greatly studied \cite{Plischke1994}. It is made up of sites interconnected along a lattice of ``bonds''. Each of these sites carries a spin of value $\pm 1$. The Union Jack lattice is obtained by adding alternate diagonals to a square lattice as shown in \Fref{fig:union}. This lattice is made up of two sublattices: the $\sigma$-sublattice, whose sites have eight intersite ``bonds'' (filled circles), and the $\tau$-sublattice, whose sites have four of these ``bonds'' (hollow circles).
\begin{figure}[htbp]
\centering
\includegraphics[width=0.75\columnwidth]{Figure1.eps}
\caption{The Union Jack Lattice. (From \cite{Wu1987})}
\label{fig:union}
\end{figure}
The Union Jack lattice Ising model is of particular interest as it is one of the few exactly solvable models which exhibits a re-entrant phase transition \cite{Vaks1966}. A solution was presented for the general anisotropic model on this lattice for the $\sigma$-sublattice in \cite{Wu1987} and then later for the complete lattice in \cite{Wu1989}. The solution showed that the Union Jack lattice Ising model is equivalent to the free fermion model. The spontaneous magnetisation for the free fermion model had been computed in \cite{Baxter1986}. The Union Jack lattice Ising model has been greatly studied since these solutions, with the work being extended in \cite{Strecka2006,Strecka2006a} to study a mixed spin lattice.
This paper focuses on the numerical simulations of the general anisotropic Union Jack Ising model carried out in the thesis \cite{Mellor2010}. To start, in Section \ref{sec:UJL} we will state the required background for the paper. In this section we will also present the sublattice prediction functions from the work of \cite{Wu1987,Wu1989}. In Section \ref{sec:Res} we will briefly discuss the method used in our simulation program, along with the qualitative modelling of the Union Jack Ising model. We will then go on to present a theoretical analysis of the work presented in \cite{Wu1987,Wu1989} and \cite{Vaks1966} and compare these results with those from our numerical simulations. We will show that the prediction functions of \cite{Wu1987,Wu1989} do not accurately model many of the anisotropic systems, with invalid results being produced. Specifically, we will identify the cases where the $\tau$-sublattice predictions are physically implausible and those which are not consistent under rotation of the lattice. In the conclusion in Section \ref{sec:con} we will state the additional conditions required for the predictions to produce valid results and the causes of the rotational variance.
\section{The Union Jack Lattice}
\label{sec:UJL}
In this section we will briefly state the equations for spontaneous magnetisation for the general anisotropic Ising model on the Union Jack lattice as presented by Wu and Lin \cite{Wu1987,Wu1989}. For convenience we adopt the notation of Wu and Lin \cite{Wu1987} and label the nearest-neighbour interaction strengths as $-J_r$, which take one of six values: $-J_1$, $-J_2$, $-J_3$, $-J_4$, $-J$, $-J'$. The resultant Boltzmann factors are given by
\begin{eqnarray}
\omega \left( a,b,c,d\right) &=&2\exp\left[ \frac{\beta J\left( ab+cd\right) }{2}+\frac{\beta J^\prime \left(ad+bc\right) }{2}\right] \nonumber\\ && \times\cosh \left( a\beta J_1+b\beta J_2+c\beta J_3+d\beta J_4\right),
\label{eq:ino22}
\end{eqnarray}
where $\beta=1/(k_BT)$, $k_B$ is the Boltzmann constant and $a$, $b$, $c$ and $d$ are the four sites surrounding a $\tau$ site. This equation produces sixteen possible factors, which can be reduced by symmetry to eight distinct expressions \cite{Wu1987}:
\begin{eqnarray*}
\omega _1=\omega \left( ++++\right)&=&2\mathrm{e}^{\beta J+\beta J^\prime } \cosh \left( \beta \left( J_1+J_2+J_3+J_4\right)\right) \nonumber \\
\omega _2=\omega \left( +-+-\right) &=&2\mathrm{e}^{-\beta J-\beta J^\prime }\cosh \left( \beta \left( J_1-J_2+J_3-J_4\right) \right) \nonumber \\
\omega _3=\omega \left( +--+\right) &=&2\mathrm{e}^{-\beta J+\beta J^\prime }\cosh \left( \beta \left( J_1-J_2-J_3+J_4\right) \right) \nonumber \\
\omega _4=\omega \left( ++--\right) &=&2\mathrm{e}^{\beta J-\beta J^\prime }\cosh \left( \beta \left( J_1+J_2-J_3-J_4\right) \right) \nonumber \\
\omega _5=\omega \left( +-++\right) &=&2\cosh \left( \beta \left( J_1-J_2+J_3+J_4\right) \right) \nonumber \\
\omega _6=\omega \left( +++-\right) &=&2\cosh \left( \beta \left( J_1+J_2+J_3-J_4\right) \right) \nonumber \\
\omega _7=\omega \left( ++-+\right) &=&2\cosh \left( \beta \left( J_1+J_2-J_3+J_4\right) \right) \nonumber \\
\omega _8=\omega \left( -+++\right) &=&2\cosh \left( \beta \left( -J_1+J_2+J_3+J_4\right) \right).
\label{eq:ino23}
\end{eqnarray*}
In \cite{Wu1987} it is shown that the Union Jack lattice Ising model is equivalent to an eight-vertex model with weights given by \eref{eq:ino22}. This eight-vertex model satisfies the free fermion condition \cite{Fan1970}. The spontaneous magnetisation of a free fermion model was given by Baxter in \cite{Baxter1986}. As such, the spontaneous magnetisation for the $\sigma$-sublattice is
\begin{eqnarray}
\left\langle \sigma \right\rangle &=&\left\{
\begin{array}{cc}
\left(1-\Omega^{-2}\right)^{1/8}, & \Omega^{2}\geq 1 \\
0, & \Omega^{2} \leq 1,
\end{array}
\right.
\label{eq:ino25} \\ \nonumber \\
\Omega^2 &=&1-\frac{\gamma_1 ~ \gamma_2 ~ \gamma_3 ~ \gamma_4}{16\omega_5 ~\omega_6 ~\omega_7 ~\omega_8},
\label{eq:uno3}
\end{eqnarray}
where
\begin{eqnarray*}
\gamma_1 &=& -\omega_1+\omega_2+\omega_3+\omega_4 \nonumber \\
\gamma_2 &=& \omega_1-\omega_2+\omega_3+\omega_4 \nonumber \\
\gamma_3 &=& \omega_1+\omega_2-\omega_3+\omega_4 \nonumber \\
\gamma_4 &=& \omega_1+\omega_2+\omega_3-\omega_4.
\label{eq:uno1}
\end{eqnarray*}
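For numerical comparisons it is convenient to evaluate the prediction \eref{eq:ino25}-\eref{eq:uno3} directly from the couplings. The following is a minimal sketch of such an evaluation (the function name and signature are ours; $\beta=1/(k_BT)$ is supplied by the caller):
\begin{verbatim}
import numpy as np

def sigma_magnetisation(beta, J, Jp, J1, J2, J3, J4):
    # Sigma-sublattice prediction from the free fermion solution.
    ch = lambda x: np.cosh(beta * x)
    w1 = 2 * np.exp(beta * (J + Jp)) * ch(J1 + J2 + J3 + J4)
    w2 = 2 * np.exp(-beta * (J + Jp)) * ch(J1 - J2 + J3 - J4)
    w3 = 2 * np.exp(beta * (-J + Jp)) * ch(J1 - J2 - J3 + J4)
    w4 = 2 * np.exp(beta * (J - Jp)) * ch(J1 + J2 - J3 - J4)
    w5 = 2 * ch(J1 - J2 + J3 + J4)
    w6 = 2 * ch(J1 + J2 + J3 - J4)
    w7 = 2 * ch(J1 + J2 - J3 + J4)
    w8 = 2 * ch(-J1 + J2 + J3 + J4)
    g1 = -w1 + w2 + w3 + w4
    g2 = w1 - w2 + w3 + w4
    g3 = w1 + w2 - w3 + w4
    g4 = w1 + w2 + w3 - w4
    omega2 = 1.0 - g1 * g2 * g3 * g4 / (16.0 * w5 * w6 * w7 * w8)
    return (1.0 - 1.0 / omega2) ** 0.125 if omega2 >= 1.0 else 0.0
\end{verbatim}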
The critical point(s) of this system are given by
\begin{displaymath}
\Omega^2=1
\end{displaymath}
or equivalently,
\begin{equation}
\omega_1+\omega_2+\omega_3+\omega_4=2\ \mathrm{max}\left\{\omega_1,\omega_2,\omega_3,\omega_4\right\}.
\label{eq:uno5}
\end{equation}
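As an illustration, direct substitution shows that for the fully isotropic ferromagnet $J=J^\prime=J_1=J_2=J_3=J_4$ (treated numerically in Section \ref{sec:IF}) we have $\omega_1=2\mathrm{e}^{2K}\cosh 4K$, $\omega_2=2\mathrm{e}^{-2K}$ and $\omega_3=\omega_4=2$, where $K=\beta J$, so that \eref{eq:uno5} reduces to the single condition
\begin{displaymath}
\mathrm{e}^{2K}\cosh 4K = \mathrm{e}^{-2K}+2,
\end{displaymath}
whose unique positive solution $K_c$ determines the critical temperature via $k_BT_c=J/K_c$.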
The $\tau$-sublattice magnetisation is given by \cite{Wu1989} as
\begin{equation}
\left\langle \tau \right\rangle = \left\langle \sigma \right\rangle \left[A_{1234}(K)(F_+ + F_-)+A_{2341}(K)(F_+ - F_-)\right].
\label{eq:ino68}
\end{equation}
We can see that this expression is a multiple of the $\sigma$-sublattice magnetisation. In \eref{eq:ino68}, following \cite{Wu1989},
\begin{eqnarray}
A_{1234}(K)&=& \frac{\sinh{2(\beta J_1+\beta J_3)}}{\sqrt{2G_{-}(\beta J)\sinh{2\beta J_1}\sinh{2\beta J_3}}} \nonumber \\
A_{2341}(K)&=& \frac{\sinh{2(\beta J_2+\beta J_4)}}{\sqrt{2G_{-}(\beta J)\sinh{2\beta J_2}\sinh{2\beta J_4}}} \label{eq:change}
\end{eqnarray}
and
\begin{displaymath}
G_{-}(\beta J)= \cosh{2(\beta J_1+\beta J_3)}+\cosh{2(\beta J_2-\beta J_4)}.
\end{displaymath}
The calculation for $F_+$ and $F_-$ is a little more involved. We start by calculating
\begin{displaymath}
F_{\pm}= \sqrt{\frac{A+2\sqrt{BC}}{D+2E\sqrt{B}}}
\end{displaymath}
where
\begin{eqnarray*}
A&=& 2\omega_5\omega_6\omega_7\omega_8\left(\omega^2_1+\omega^2_2+\omega^2_3+\omega^2_4\right) \nonumber \\
&&-\left(\omega_1\omega_2+\omega_3\omega_4\right)\left(\omega_1\omega_3 + \omega_2\omega_4\right)\left(\omega_1\omega_4+\omega_2\omega_3\right) \nonumber \\
B&=& \omega_5\omega_6\omega_7\omega_8\left(\omega_5\omega_6\omega_7\omega_8-\omega_1\omega_2\omega_3\omega_4\right) \nonumber \\
C&=& \left(\omega_1^2+\omega_2^2+\omega_3^2+\omega_4^2\right)^2-4\left(\omega_5\omega_6-\omega_7\omega_8\right)^2 \nonumber \\
D&=& \left(\omega_1^2+\omega_2^2\right)\left(2\omega_5\omega_6\omega_7\omega_8- \omega_1\omega_2\omega_3\omega_4\right) \\ && - \omega_5\omega_6\omega_7\omega_8\left(\omega_3^2+\omega_4^2\right) \nonumber \\
E&=& \omega_1^2-\omega_2^2. \nonumber
\end{eqnarray*}
We can relate $F_+$ and $F_-$ through the following formula from \cite{Wu1989}, allowing us to obtain values for each,
\begin{displaymath}
F_+F_- = \frac{\omega_5\omega_6-\omega_7\omega_8}{\omega_1\omega_2}.
\end{displaymath}
We can compute the overall lattice magnetisation by taking the mean of the two sublattice magnetisations
\begin{displaymath}
M_0= \frac{1}{2}(\left\langle \sigma \right\rangle + \left\langle \tau \right\rangle).
\end{displaymath}
\subsection{Classification of phases}
\label{sub:CoPh}
In \cite{Wu1987}, a classification of the phase of the $\sigma$-sublattice is presented. It is based on the following energy values:
\begin{eqnarray*}
-E_1 &=& J + J^\prime + \left|J_1+J_2+J_3+J_4\right| \nonumber\\
-E_2 &=& -J - J^\prime + \left|J_1-J_2+J_3-J_4\right| \nonumber \\
-E_3 &=& -J + J^\prime + \left|J_1-J_2-J_3+J_4\right| \nonumber\\
-E_4 &=& J - J^\prime + \left|J_1+J_2-J_3-J_4\right|.
\label{eq:uno6}
\end{eqnarray*}
The sublattice is in:
\begin{asparaenum}[i)]
\item a ferromagnetic phase when $E_1 < E_2,~E_3,~E_4;$
\item an antiferromagnetic phase when $E_2 < E_1,~E_3,~E_4;$
\item a metamagnetic phase when $E_3 < E_1,~E_2,~E_4$ or $E_4 < E_1,~E_2,~E_3.$
\end{asparaenum}
As the temperature rises, depending on the relative strengths of the interactions $J_r$, a phase change is signalled whenever \eref{eq:uno5} is satisfied (one case for each $\omega_r$ that can be the maximum). A re-entrant phase transition occurs if any one of these cases admits two solutions.
\section{Results}
\label{sec:Res}
In our work in \cite{Mellor2010} we presented a theoretical and numerical analysis of the results derived in \cite{Wu1987}. In our theoretical analysis we compared these results against those of \cite{Vaks1966} to identify systems for further numerical investigation. The results of both analyses are presented below. In \cite{Mellor2010} we also presented an analysis of a mean-field approximation of the Union Jack lattice using two approaches. The first used a set of partially uncoupled predictor equations, both functions of $\left\langle\sigma\right\rangle$. The second used coupled predictor equations that are functions of $\left\langle\sigma\right\rangle$ and $\left\langle\tau\right\rangle$. Qualitatively, our mean-field models correlated well with the isotropic ferromagnetic systems. For anisotropic antiferromagnetic systems, as well as anisotropic ferromagnetic systems with a re-entrant phase transition, the correlation between the mean-field results and the theoretical predictions was poor.
As the theoretical results are for an infinite lattice, we used a Markov chain Monte Carlo method with periodic boundary conditions to simulate the systems numerically. Our Monte Carlo algorithm was the Metropolis-Hastings algorithm \cite{Plischke1994, Metropolis1953}. For our simulations we applied this algorithm to a lattice of 100 sites by 100 sites; this size was chosen as it is small enough to give a reasonable run time while being large enough to suppress finite-size effects. The computer program was calibrated against exact results for the general anisotropic triangular lattice for both the average magnetisation \cite{Stephenson1964} and the three-site correlator \cite{Baxter1975}. With this calibration we have confidence in the accuracy of the simulation results. The code for this simulation program can be found in \cite{Mellor2010}.
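For reference, the core of such a simulation is compact. The following sketch (illustrative Python, not the actual program from \cite{Mellor2010}) shows one Metropolis sweep with periodic boundaries; for brevity the local field is written for nearest neighbours only, whereas the full program also sums over the diagonal bonds of the Union Jack lattice. Couplings are quoted in units of $k_B$, so $\beta J = J/T$ with $T$ in kelvin.
\begin{verbatim}
import numpy as np

def metropolis_sweep(spins, T, J, rng):
    """One Metropolis sweep over an L x L lattice of +/-1 spins
    with periodic boundaries; J is in kelvin (units of k_B)."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        # local field from the four nearest neighbours
        h = J * (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                 + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * spins[i, j] * h  # cost of flipping spin (i, j)
        if dE <= 0.0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1       # accept the flip
    return spins

rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(100, 100))
for _ in range(1000):               # equilibration sweeps
    metropolis_sweep(spins, T=300.0, J=100.0, rng=rng)
print(spins.mean())                 # magnetisation per site
\end{verbatim}
The magnetisation at each temperature is then the average of \texttt{spins.mean()} over sweeps taken after equilibration.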
To classify the phase of the $\sigma$-sublattice we used an alternative approach to that given in Section \ref{sub:CoPh}. Using the individual $\gamma$ terms from \eref{eq:uno3}, the sublattice is in a ferromagnetic phase when $\gamma_1<0$, an antiferromagnetic phase when $\gamma_2<0$, a metamagnetic phase when $\gamma_3<0$ or $\gamma_4<0$, and a disordered phase when $\gamma_1\gamma_2\gamma_3\gamma_4>0$. The critical temperature(s) can then be read off as the temperature(s) at which any of the $\gamma$ functions changes sign.
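In code form, this classification reads roughly as follows (a sketch in Python; the weights $\omega_1,\dots,\omega_4$ are assumed to have been computed elsewhere from the couplings and the temperature).
\begin{verbatim}
def gammas(w1, w2, w3, w4):
    """The gamma terms of eq. (uno3)."""
    return (-w1 + w2 + w3 + w4,
            w1 - w2 + w3 + w4,
            w1 + w2 - w3 + w4,
            w1 + w2 + w3 - w4)

def classify_sigma_phase(w1, w2, w3, w4):
    """Phase of the sigma-sublattice: a negative gamma picks out an
    ordered phase; a positive product of all four means disorder."""
    g1, g2, g3, g4 = gammas(w1, w2, w3, w4)
    if g1 < 0:
        return "ferromagnetic"
    if g2 < 0:
        return "antiferromagnetic"
    if g3 < 0 or g4 < 0:
        return "metamagnetic"
    if g1 * g2 * g3 * g4 > 0:
        return "disordered"
    return "critical"  # some gamma vanishes: eq. (uno5) holds
\end{verbatim}
Scanning the temperature and recording where the returned label changes gives the critical temperature(s) directly.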
\subsection{Isotropic ferromagnetic}
\label{sec:IF}
To start with we look at the isotropic ferromagnetic system, with interactions $J=J^\prime=J_n=100k_B$. In our initial plot of this system the $\tau$-sublattice prediction \eref{eq:ino68} was a factor of two higher than expected. Upon analysing the equation we found that, due to the symmetric interactions of the system, $A_{1234}=A_{2341}=1$ and $F_+F_-=0$, so that $\left\langle \tau \right\rangle = 2F_+\left\langle \sigma \right\rangle$. As such we adapted \eref{eq:ino68} to the following form
\begin{equation}
\left\langle \tau \right\rangle = \left\langle \sigma \right\rangle \frac{\left[A_{1234}(K)(F_+ + F_-)+A_{2341}(K)(F_+ - F_-)\right]}{2}.
\label{eq:papertau}
\end{equation}
As this is a minor adjustment we will continue to refer to the result as that of Wu and Lin \cite{Wu1989}. The plot of our numerical simulation results against the adapted predictions of Wu and Lin \cite{Wu1987,Wu1989} is shown in Figure \ref{fig:ferro}.
\begin{figure}[htbp]
\centering
\includegraphics[width= 0.9\columnwidth]{Figure2.eps}
\caption{Plot of simulation results for an isotropic ferromagnetic system on the Union Jack lattice. Here the theoretical predictions are shown with the lines and the numerical results are shown with the points.}
\label{fig:ferro}
\end{figure}
Intuitively, this is the graph we would expect for this type of system. There is strong agreement between the two theories, with the phase transition and critical temperature being the same, and a high correlation between our simulation data and both sublattice prediction functions. There is some noise around the critical temperature, though it is of small magnitude compared to the other results. Beyond the noisy region, around 400-600 kelvin, the simulation results again follow the prediction closely.
\subsection{Anisotropic metamagnetic}
\label{sec:NSmeta}
Next we move on to look at an anisotropic metamagnetic system where $J_n=10k_B$, $J=100k_B$ and $J^\prime=-100k_B$. The graph we obtain when we plot our simulation results against the predictions of Wu and Lin \cite{Wu1987, Wu1989} is shown in Figure \ref{fig:meta} below.
\begin{figure}[htbp]
\centering
\includegraphics[width= 0.9\columnwidth]{Figure3.eps}
\caption{Plot of simulation results for an anisotropic metamagnetic system on the Union Jack lattice. Note that the simulation results do not follow the curves of the prediction functions.}
\label{fig:meta}
\end{figure}
As can be seen from the graph, the prediction functions of \cite{Wu1987,Wu1989} show a non-zero magnetisation on the $\sigma$-sublattice. In this system $\gamma_4$ is negative at low temperatures \cite{Mellor2010}. The equation for $\gamma_4$ is
\begin{displaymath}
\gamma_4 = \omega_1+\omega_2+\omega_3-\omega_4,
\end{displaymath}
and we note from examination of this equation that $\omega_4=\omega \left( ++--\right)$ is the dominant term. From the spin configuration this represents, we would expect an antiferromagnetic phase with the average magnetisation of the $\sigma$-sublattice being zero. Our simulation results for this sublattice indeed have average spin zero, which follows the behaviour we would expect more closely.
\subsection{Anisotropic antiferromagnetic}
\label{sec:NSanti}
Next we study an anisotropic antiferromagnetic system. The chosen system of this type will have horizontal and vertical interactions of $J_n=100k_B$ and diagonal interactions of $J=J^\prime=-100k_B$. The simulation results are plotted against Wu and Lin's \cite{Wu1987,Wu1989} predictions in Figure \ref{fig:antiferro} below.
\begin{figure}[htbp]
\centering
\includegraphics[width= 0.9\columnwidth]{Figure4.eps}
\caption{Plot of simulation results for an anisotropic antiferromagnetic system on the Union Jack lattice. Note that the simulation results do not follow the curves of the theoretical predictions, which are physically impossible.}
\label{fig:antiferro}
\end{figure}
Using a similar analysis to Section \ref{sec:NSmeta}, we find that $\omega_2=\omega(+-+-)$ is the dominant term for the magnetisation of the $\sigma$-sublattice. This configuration agrees with the result of \cite{Vaks1966}, which predicts an ordered antiferromagnetic phase, and the critical temperatures agree. However, as we can see in \Fref{fig:antiferro}, the prediction function \eref{eq:uno3} produces a non-zero magnetisation for this phase where we would expect zero magnetisation.
When we examine the $\tau$-sublattice prediction function \eref{eq:papertau} we notice that it produces physically implausible results: the predicted magnetisation of the $\tau$-sublattice is greater than the possible spin values, $\pm 1$, for this Ising model. As in the isotropic ferromagnetic case, we have symmetric interactions leading to $A_{1234}=A_{2341}=1$ and $F_+F_-=0$. Since in this case $F_+>2$, this is the cause of the implausible results.
The correlation between the simulation data and the predictions of \cite{Wu1987,Wu1989} is again poor. We see that our simulation results are centred around the zero magnetisation level, as we would intuitively expect. However it can be seen that the simulation results for the $\sigma$-sublattice do not show a difference between an ordered phase and a disordered phase.
\subsection{Anisotropic ferromagnetic}
\label{secNSanferro}
We now move to examining anisotropic systems with the $\sigma$-sublattice classified as ferromagnetic at low temperatures. This type of system is particularly interesting as it contains systems with staggered interactions and those with re-entrant phase transitions. We will also discuss rotational variance. First let us present a system containing staggered interactions, with the interactions of the system shown in \Fref{fig:antia92500} defined as $J_n=-100k_B$ and $J=J^\prime=92k_B$.
\begin{figure}[htbp]
\centering
\includegraphics[width= 0.9\columnwidth]{Figure5.eps}
\caption{Plot of simulation results for an anisotropic ferromagnetic system on the Union Jack lattice with equal horizontal and vertical interactions.}
\label{fig:antia92500}
\end{figure}
As we can see from the prediction functions of \cite{Wu1987,Wu1989}, the two sublattice magnetisations have opposite signs. This agrees with the theoretical predictions for the critical temperature and state from \cite{Vaks1966}. It can also be noted that the overall magnetisation of the complete lattice has magnitude zero until around 200 kelvin. The numerical simulation results correlate highly with the theoretical predictions and follow the curves closely. There is noise after the critical temperature on both sublattices, but this quickly reduces to a small level at higher temperatures.
Having covered staggered interactions, we now turn to systems that contain a re-entrant phase transition. The system we studied has horizontal and vertical interactions of $J_n=100k_B$ and diagonal interactions of $J=J^\prime=-92k_B$. The graph of this system is shown in \Fref{fig:anti92150}.
\begin{figure}[htbp]
\centering
\includegraphics[width= 0.9\columnwidth]{Figure6.eps}
\caption{Plot of simulation results for an anisotropic ferromagnetic system on the Union Jack lattice with equal horizontal and vertical interactions and a re-entrant phase transition.}
\label{fig:anti92150}
\end{figure}
This system is predicted by \cite{Vaks1966} to start in a ferromagnetic phase, move to a disordered phase, then into an ordered antiferromagnetic phase, and finally into a disordered phase again. As we see from the prediction functions of \cite{Wu1987,Wu1989}, the theories initially agree, with a ferromagnetic phase being shown, and the critical temperatures predicted by both theories agree in all three values. With further analysis we see that initially $\gamma_1$ is the dominant term, leading to a ferromagnetic phase, while from the second critical temperature to the third $\gamma_2$ is dominant. However, as we saw in Section \ref{sec:NSanti}, while we would expect a zero magnetisation, there is instead a non-zero value for both sublattices, with the $\tau$-sublattice having a value greater than one.
In our numerical simulations we see that there is good correlation up to the second critical temperature, but the simulation does not show the re-entrant phase transition. As discussed in Section \ref{sec:NSanti}, both the ordered antiferromagnetic phase and the disordered phase have average magnetisation zero, and so are indistinguishable from this point of view.
Looking at more general anisotropic systems, we find systems which exhibit rotational variance. As an example, we first look at the system with horizontal interactions of $J_1=J_3=100k_B/0.9^2$, vertical interactions of $J_2=J_4=100k_B/0.9$ and diagonal interactions of $J=J^\prime=100k_B$, and compare it with the system with horizontal interactions of $J_1=J_3=100k_B/0.9$, vertical interactions of $J_2=J_4=100k_B/0.9^2$ and the same diagonal interactions (i.e. the lattice rotated through 90 degrees). The graphs of our simulation results are shown in Figure \ref{fig:NSfunkypos}.
\begin{figure*}[htbp]
\centering
\subfloat[]{
\includegraphics[width= 0.9\columnwidth]{Figure7a.eps}
\label{subfig:funkygraphpos}
}
\subfloat[]{
\includegraphics[width= 0.9\columnwidth]{Figure7b.eps}
\label{subfig:funkyposgraph}
}
\caption{Plot of simulation results for an anisotropic ferromagnetic system on the Union Jack lattice with unequal horizontal and vertical interactions and positive diagonals. (a) shows the system with interactions $J_1=J_3=100k_B/0.9^2$, $J_2=J_4=100k_B/0.9$, and $J=J^\prime=100k_B$. (b) shows the rotated system with interactions $J_1=J_3=100k_B/0.9$, $J_2=J_4=100k_B/0.9^2$, and $J=J^\prime=100k_B$.}
\label{fig:NSfunkypos}
\end{figure*}
Comparing the simulation results, the data form similar curves with a similar phase transition at equal critical temperatures. In comparison with the predictions of Wu and Lin \cite{Wu1987,Wu1989}, at higher temperatures the simulation results follow all three curves with good correlation. At lower temperatures, below about 200 kelvin, the $\sigma$ prediction still correlates well for both systems, which suggests that the $\tau$-sublattice prediction is incorrect. Our simulation results show that rotation of the lattice has no effect on the behaviour of the system. When we analyse \eref{eq:papertau} we see that
\begin{eqnarray}
F_+F_-&=& \frac{\cosh^2{2\beta J_{\mathrm{Horizontal}}}-\cosh^2{2\beta J_{\mathrm{Vertical}}}}{\cosh^2{2\beta J_{\mathrm{Horizontal}}}+\cosh^2{2\beta J_{\mathrm{Vertical}}}} \nonumber\\
A_{1234}&=& 1 \nonumber\\
A_{2341}&=& \frac{\cosh{2\beta J_{\mathrm{Vertical}}}}{\cosh{2\beta J_{\mathrm{Horizontal}}}}.
\label{eq:proof}
\end{eqnarray}
Both the rotational variance and the disagreement between the $\tau$-sublattice prediction and the simulation results stem from \eref{eq:proof}.
A further result can be seen if we now take a system similar to the previous example but with negative diagonal interactions. For an example of this type of system, we will look at the system with horizontal interactions of $J_1=J_3=100k_B/0.9^2$, vertical interactions of $J_2=J_4=100k_B/0.9$ and diagonal interactions of $J=J^\prime=-100k_B$, and the same system rotated through 90 degrees. The graphs of our simulation results are shown in Figure \ref{fig:NSFunky}.
\begin{figure*}[htbp]
\centering
\subfloat[]{
\includegraphics[width= 0.9\columnwidth]{Figure8a.eps}
\label{subfig:funkydata1}
}
\subfloat[]{
\includegraphics[width= 0.9\columnwidth]{Figure8b.eps}
\label{subfig:funkydata2}
}
\caption{Plot of simulation results for an anisotropic ferromagnetic system on the Union Jack lattice with unequal horizontal and vertical interactions and negative diagonal interactions. (a) shows the system with interactions $J_1=J_3=100k_B/0.9^2$, $J_2=J_4=100k_B/0.9$, and $J=J^\prime=-100k_B$. (b) shows the rotated system with interactions $J_1=J_3=100k_B/0.9$, $J_2=J_4=100k_B/0.9^2$, and $J=J^\prime=-100k_B$.}
\label{fig:NSFunky}
\end{figure*}
Again, as in Figure \ref{fig:NSfunkypos}, the simulation results are very similar to each other. However, in this case we see that below 10 kelvin all three simulation results are lower than the predictions. Above 10 kelvin the simulation results move back up to the prediction curves, following the $\sigma$ prediction in both cases. In both cases the simulation results and predicted results show the same critical temperature and phase transition. This oddity may be due to the simulation being performed on a finite system, while the theoretical predictions are for an infinite lattice.
\section{Conclusion}
\label{sec:con}
We have seen that the re-entrant phase transitions of the Union Jack Ising model cannot be detected when considering only the average magnetisation. This is because the transition is from a disordered phase to an ordered antiferromagnetic phase, both of which have an average magnetisation that is identically zero. In addition, the prediction for the $\sigma$-sublattice given in \cite{Wu1987} requires additional conditions to agree with our numerical simulations. It is possible to classify the phases of the system by examining the $\gamma_i$ terms of equation (\ref{eq:uno3}). The prediction without these conditions produces non-zero magnetisations for non-ferromagnetic systems. However, if we impose the condition that the prediction formula (\ref{eq:ino25}) is used only when $\gamma_1<0$ or $\gamma_1\gamma_2\gamma_3\gamma_4>0$, and set the magnetisation to zero otherwise, the results are correct for all systems.
The prediction for the $\tau$-sublattice, given in \cite{Wu1989}, has additional issues on the general anisotropic lattice. Initially the prediction given in \cite{Wu1989} is too large by a factor of two, but this is easily corrected as shown in \eref{eq:papertau}. The $\tau$-sublattice prediction is a multiple of the $\sigma$-sublattice prediction, and so a zero magnetisation would be observed for the antiferromagnetic and metamagnetic phases if the above conditions are applied. The rotational variance seen in \Fref{fig:NSfunkypos} and \Fref{fig:NSFunky} can be eliminated by rewriting \eref{eq:change} as follows:
\begin{eqnarray*}
A_{1234}(K)&=& \frac{\sinh{2(\beta J_1+\beta J_3)}}{\sqrt{2G_{1-}(\beta J)\sinh{2\beta J_1}\sinh{2\beta J_3}}} \\
A_{2341}(K)&=& \frac{\sinh{2(\beta J_2+\beta J_4)}}{\sqrt{2G_{2-}(\beta J)\sinh{2\beta J_2}\sinh{2\beta J_4}}},
\end{eqnarray*}
where
\begin{eqnarray*}
G_{1-}(\beta J)&=& \cosh{2(\beta J_1+\beta J_3)}+\cosh{2(\beta J_2-\beta J_4)}, \\
G_{2-}(\beta J)&=& \cosh{2(\beta J_2+\beta J_4)}+\cosh{2(\beta J_1-\beta J_3)}.
\end{eqnarray*}
This change also removes the disagreement at low temperatures in these systems. Applying both of these changes together with the conditions for the $\sigma$-sublattice gives agreement with the simulation results.
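A minimal sketch of the corrected amplitudes follows (Python; couplings $J_r$ in units of $k_B$, so $\beta J_r = J_r/T$ with $T$ in kelvin).
\begin{verbatim}
import numpy as np

def amplitudes(T, J1, J2, J3, J4):
    """Rotation-invariant forms of A_1234 and A_2341."""
    b = 1.0 / T  # beta*J_r = J_r/T for couplings quoted in kelvin
    G1 = np.cosh(2*b*(J1 + J3)) + np.cosh(2*b*(J2 - J4))
    G2 = np.cosh(2*b*(J2 + J4)) + np.cosh(2*b*(J1 - J3))
    A1234 = (np.sinh(2*b*(J1 + J3))
             / np.sqrt(2 * G1 * np.sinh(2*b*J1) * np.sinh(2*b*J3)))
    A2341 = (np.sinh(2*b*(J2 + J4))
             / np.sqrt(2 * G2 * np.sinh(2*b*J2) * np.sinh(2*b*J4)))
    return A1234, A2341
\end{verbatim}
With this form, swapping $(J_1,J_3)\leftrightarrow(J_2,J_4)$, i.e. rotating the lattice through 90 degrees, simply exchanges the two amplitudes.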
Further work is required to understand the implications of the conditions suggested above for the work of Wu and Lin \cite{Wu1989}. In that paper a similar approach to the one presented here is used for the checker-board lattice, so an investigation into that lattice would be useful to see if similar results are obtained. Also, as many papers, such as \cite{Strecka2006,Strecka2006a}, extend the results presented here, these should be studied for similar inconsistencies. Finally, we observed a disagreement between the theoretical predictions and simulation results at low temperatures in our analysis of the system presented in \Fref{fig:NSFunky}. This disagreement remains after the application of the above conditions, so further investigation into these systems is required. |
2202.06000 | \section{Acknowledgments}
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
People with upper body motor impairments have difficulty touching an on-screen target accurately with fingers due to tremors, muscular dystrophy, or loss of arms~\cite{10.1145/3173574.3174094,10.1145/3236112.3236116}.
They were found to have difficulty entering and correcting texts, grabbing and lifting the phone, making multi-touch input, pressing physical buttons, and so on, especially outside of home~\cite{10.1145/2661334.2661372}.
While voice-based interfaces, such as Siri~\cite{10.1145/3151470.3151476}, could be an alternative method, they suffer from low input speed and accuracy~\cite{10.1145/3151470.3151476,10.1145/2661334.2661372} and may raise privacy and social acceptance concerns when used in public~\cite{10.1145/2702123.2702188,10.1145/2661334.2661372}.
Recently researchers proposed eyelid gestures for people with motor impairments to subtly interact with mobile devices without finger touch or drawing others' attention~\cite{10.1145/3373625.3416987,fan2021eyelidCAM}. These eyelid gestures, though designed following design principles, were created without involving people with motor impairments in the design process. Consequently, it remains questionable whether these gestures are the ones people with motor impairments would prefer in the first place. Indeed, the participants with motor impairments in their study~\cite{10.1145/3373625.3416987} also suggested other eyelid gestures and expressed a desire to design \textit{their own} eyelid gestures. Thus, there is a need to involve people with motor impairments in designing the gestures that will be used by them. In the meantime, prior research demonstrated that allowing users to define their preferred gestures uncovers more representative and preferred gestures~\cite{10.1145/1518701.1518866,10.1145/2468356.2468527,10.1145/1978942.1978971}.
Motivated by this need and prior success of designing user-defined gestures in other contexts, we sought to engage people with motor impairments to design eyelid gestures they prefer. What's more, as recent research demonstrated the promise of gaze and head pose for hands-free interaction in addition to eyelid gestures~\cite{nukarinen2016evaluation,sidenmark2019eye,kyto2018pinpointing,yan2018headgesture}, we extended the design space of user-defined gestures by inviting people with motor impairments to design \textit{above-the-neck gestures} that include eyelids, gaze, mouth, and head.
We conducted an online user study, in which 17 participants with various upper body motor impairments designed above-the-neck gestures to complete 26 tasks that were commonly performed on mobile devices. These tasks included general commands (e.g., tap and swipe), app-related commands (e.g., open an app or a container within the app), and physical button-related commands (e.g., volume up and down). During the study, participants first watched a video clip explaining each task and its effect on a smartphone and then had time to design and perform an above-the-neck gesture. Afterward, participants rated the goodness, ease, and social acceptance of the gestures they just created. Finally, they were interviewed to provide feedback on the gestures.
We collected a total of 442 user-defined gestures.
Our results show that participants preferred gestures that were simple, easy to remember, and less attention-demanding. Based on all the gestures obtained and the rating and frequency of use of each gesture, we assigned each command the most appropriate gesture or gestures.
To validate the usability and acceptance of these user-defined gestures, we conducted an online survey that asked participants to select the most appropriate gesture from three candidates to complete each of the 26 tasks from the earlier user study on mobile devices. These candidate options were the most frequently proposed gestures from the user study. Results show that our gesture set was well accepted and recognized by people with and without motor impairments. In sum, we make the following contributions in this work:
\begin{itemize}
\item We present a set of user-defined above-the-neck gestures based on the gestures designed by people with upper body motor impairments to complete common interactions on mobile devices;
\item We show that these user-defined above-the-neck gestures are largely preferred by people with and without motor impairments.
\end{itemize}
\section{Conclusion}
We have adopted a user-centered approach by involving people with motor impairments to design user-defined above-the-neck gestures for them to interact with mobile phones. By analyzing the 442 gestures and resolving conflicts, we have arrived at a set of user-defined gestures. The participants were excited about the convenience the gestures could bring to them. They preferred gestures that were simple, easy to remember, and had high social acceptance.
Our follow-up survey study results found that the user-defined gestures were well received by both people with and without motor impairments. Finally, we also highlight the design considerations and future work.
\section{Background and Related Works}
Our work is informed by prior work on \textit{interaction techniques for people with motor impairments} and \textit{user-defined gesture designs}.
\subsection{Interaction Techniques for People with Motor Impairments}
Brain-computer interfaces (BCIs) sense brain signals for people with motor impairments to communicate with the environment or control computer systems without using hands (e.g.,~\cite{10.3389/fnhum.2018.00014,corralejo2014p300}).
However, BCIs often need long periods of training for users to control their brain rhythms well~\cite{pires2012evaluation}, and people with motor impairments were reported to be concerned about fatigue, concentration, and social acceptance~\cite{taherian2017we,blain2012barriers}.
Gesture-based interactions have been investigated as an alternative approach. Ascari et al.~\cite{10.1145/3408300} proposed two machine learning approaches to recognize \textit{hand gestures} for people with motor impairments to interact with computers. However, these approaches are not feasible for people with upper body impairments who cannot use their hands freely.
To overcome the limitations of body and hand gestures for people with upper body impairments, researchers investigated \textit{eye-based} interactions.
Among all eye-based gestures, \textit{blink} was probably the most widely studied for people with motor impairments. Earlier work used EOG sensors to detect blink to trigger computer commands~\cite{kaufman1993}.
The \textit{duration} of blink was also utilized as additional input information. For example, a long blink was detected and used to stop a moving onscreen target~\cite{Heikkila2012}.
What's more, blink was also used along with \textit{eye movements} to trigger mouse click~\cite{Kwon1999}.
Blink was also used along with head motion. For example, the frequency of blink combined with \textit{head motion} was used to infer five activities, including reading, talking, watching TV, math problem solving, and sawing~\cite{Ishimaru2014}.
In addition to blink, \textit{wink} was used for people with motor impairments. Shaw et al. constructed a prototype to detect the open and close states of each eye and used such information to infer three simple eyelid gestures: blink, wink the left eye, and wink the right eye~\cite{Shaw1990}.
Similarly, Zhang et al. proposed an approach to combine blinks and winks with gaze direction to type characters~\cite{Zhang:2017:SGG:3025453.3025790}. Recently, Fan et al. took a step further to investigate the design space of \textit{eyelid gestures} and proposed an algorithm to detect nine eyelid gestures on smartphones for people with motor impairments~\cite{10.1145/3422852.3423479,10.1145/3373625.3416987}.
These eyelid gestures were designed based on the states the two eyelids can be in and the parameters humans can control, such as the duration of closing or opening an eyelid and the sequence of such actions.
Although the design of these eyelid gestures followed a set of design principles, the gestures were designed by researchers who did not have motor impairments themselves, and the design process did not involve people with motor impairments in the loop. Consequently, it remains unknown \textit{whether these eyelid gestures were the ones that people with motor impairments preferred in the first place}. In fact, participants with motor impairments in their study could not perform some eyelid gestures well, proposed new eyelid gestures, and expressed the desire to design their own gestures. Motivated by this need, we seek to explore user-defined eyelid gestures that people with motor impairments would want to create and use.
Other body parts, such as the head, have also been used to extend the interaction options for people with motor impairments. Kyto et al.~\cite{kyto2018pinpointing} compared eye- and head-based interaction techniques for wearable AR and found that head-based interactions caused fewer errors than eye-based ones; they also found that combining eye and head input resulted in a faster selection time. Sidenmark and Gellersen~\cite{sidenmark2019eye} studied the coordination of eye gaze and head movement and found this approach was preferred by the majority of participants because they felt more in control and less distracted.
Similarly, gaze and head turn were also combined to facilitate the control of onscreen targets~\cite{nukarinen2016evaluation}.
Inspired by this line of work showing the advantage of combining head motion with eye motion, we extend our exploration to above-the-neck body parts, including the eyes, head, and mouth, to allow people with motor impairments to design a richer set of user-defined gestures.
\subsection{User-Defined Gesture Designs}
User-defined gestures have been investigated in various contexts~\cite{10.1145/1518701.1518866,10.1145/2468356.2468527,10.1145/2166966.2166984,10.1145/2702613.2732747,10.1145/3334480.3382883,10.1145/3385959.3422694,10.1145/1978942.1978971,10.1145/3365610.3365625,10.1145/2598153.2598184}. Wobbrock et al.~\cite{10.1145/1518701.1518866} studied user-defined gestures for multi-touch surface computing, such as tabletops. They investigated what kind of hand gestures non-technical users would like to create and use by asking participants to create gestures for 27 referents with one hand and with two hands. Wobbrock et al.~\cite{10.1145/1518701.1518866} also designed gestures for the 27 referents on their own and compared them with the ones created by the users; they found that they created far fewer gestures than participants, and many of the gestures they created were never tried by users. Kurdyukova et al.~\cite{10.1145/2166966.2166984} studied user-defined iPad gestures for transferring data between two displays, including multi-touch gestures, spatial gestures, and direct contact gestures. Piumsomboon et al.~\cite{10.1145/2468356.2468527} studied user-defined hand gestures in a tabletop AR setting, while Lee et al.~\cite{10.1145/2702613.2732747} utilized an augmented virtual mirror interface as a public information display. Dong et al.~\cite{10.1145/3334480.3382883,10.1145/3385959.3422694} worked on user-defined surface and motion gestures for mobile AR applications. Ruiz et al.~\cite{10.1145/1978942.1978971} used the user-defined method to develop a motion gesture set for mobile interaction. Weidner and Broll~\cite{10.1145/3365610.3365625} proposed user-defined hand gestures for interacting with in-car user interfaces, and Troiano et al.~\cite{10.1145/2598153.2598184} presented user-defined gestures for interacting with elastic, deformable displays.
These user-defined methods motivated our research. Specifically, we investigate what above-the-neck gestures people with motor impairments would like to create and how they would want to use such gestures to accomplish tasks on their touch-screen mobile devices.
\section{Research Questions}
In this paper, we sought to answer the following three research questions (RQs):
\begin{itemize}
\item RQ1: What gestures would people with motor impairments prefer to create?
\item RQ2: What factors do people with motor impairments consider when making gestures?
\item RQ3: What are the characteristics of the gesture set?
\end{itemize}
\section{Method}
The goal of this IRB-approved study was to gather user-defined above-the-neck gestures for common tasks on mobile devices from people with motor impairments and then identify the common user-defined gestures for each task.
\begin{table*}[htb!]
\caption{Participants' demographic information}
\label{participant}
\Description{Table shows the ID, age, gender, and motor impairments of each of the 17 participants.}
\begin{tabular}{c|c|c|c}
\hline
\rowcolor[gray]{0.9}ID & Age & Gender & Motor impairments \\
\hline
P1 & 32 & Male & Spinal cord injuries, wheelchair user\\ \hline
P2 & 25 & Male & Cerebral palsy, shaking hands and hard to control hand movements\\ \hline
P3 & 30 & Male & Loss or injury of limbs, loss of both arms\\ \hline
P4 & 19 & Female & Cerebral palsy, shaking hands and hard to control hand movements\\ \hline
P5 & 28 & Male & Loss or injury of limbs, loss of right leg, needs prosthetics\\ \hline
P6 & 31 & Female & Cerebral palsy, shaking hands and hard to control hand movements\\ \hline
P7 & 21 & Male & Cerebral palsy, shaking hands and hard to control hand movements\\ \hline
P8 & 32 & Female & Spinal cord injuries, wheelchair user\\ \hline
P9 & 42 & Male & Cerebral palsy, shaking hands and hard to control hand movements\\ \hline
P10 & 34 & Male & Loss or injury of limbs, loss of one leg, needs crutches\\ \hline
P11 & 35 & Male & Spinal cord injuries, hands have no feeling\\ \hline
P12 & 26 & Male & Loss or injury of limbs, right hand has no fingers\\ \hline
P13 & 28 & Male & Cerebral palsy, shaking hands and hard to control hand movements\\ \hline
P14 & 26 & Female & Loss or injury of limbs, loss of left leg, needs crutches\\ \hline
P15 & 35 & Male & Loss or injury of limbs, loss of both arms\\ \hline
P16 & 24 & Male & Loss or injury of limbs, loss of legs, needs crutches and prosthetics\\ \hline
P17 & 27 & Female & Loss or injury of limbs, missing fingers on left hand\\ \hline
\end{tabular}
\end{table*}
\subsection{Participants}
We recruited seventeen (N=17) participants through online contact with a disability organization. Table~\ref{participant} shows their demographic information. Twelve were male and five were female, and their average age was 29 years ($SD = 6$). All participants had some form of motor impairment that affected their use of mobile phones, and ten participants had arm or hand problems. Across the sample, eight had loss or injury of limbs; six had cerebral palsy, with shaky hands and difficulty controlling their hand movements; and the remaining three had spinal cord injuries, two of whom used a wheelchair and one of whom had no sensation in the hands. Seven had their legs amputated and needed prosthetics or crutches.
Some participants had difficulty speaking clearly or fluently due to cerebral palsy, but this did not affect our user study. None of them had used above-the-neck gestures to control devices prior to the study. The participants were compensated for the study.
\subsection{Tasks}
Firstly, we studied the instructions on the official websites of iOS and Android\footnote{\url{https://support.apple.com/en-us/guide/iphone/iph75e97af9b/ios}, \url{https://support.apple.com/en-us/guide/iphone/iphfdf164cac/ios}, \url{https://support.apple.com/en-us/guide/iphone/iphca3d8b4e3/ios}, \url{https://support.google.com/android/answer/9079644?hl=en}, \url{https://support.google.com/android/answer/9079646}} to learn about the commands and corresponding gestures designed for today's touchscreen smartphones. Moreover, we drew inspiration from recent work~\cite{10.1145/3373625.3416987,fan2021eyelidCAM} for the commands supported by eyelid gestures designed for people with motor impairments to interact with touchscreen devices. In the end, we identified 26 commands commonly used for smartphone interactions.
Based on their similarities, we further clustered these commands into three groups. \textit{Group 1} included twelve \textbf{General commands}: Single Tap, Double Tap, Flick, Long Press, Scroll Up, Scroll Down, Swipe Left, Swipe Right, Zoom In, Zoom Out, Drag, and Rotate. \textit{Group 2} included ten \textbf{App-related commands}: Open the App, Move to Next Screen, Next Button, Previous Button, Open the Container (a UI component within an app), Next Container, Previous Container, Move to Next Target App, Open Previous App in the Background, and Open Next App in the Background. \textit{Group 3} included four \textbf{Physical button-related commands}: Volume Up, Volume Down, Screenshot, and Phone Lock.
The general commands were obtained from the mobile phone systems (iOS $\And$ Android), and the app-related commands were inspired by a recent study~\cite{10.1145/3373625.3416987}, which proposed commands such as switching between apps, switching between tabs in an app, and switching between containers in a tab. The four physical button-related commands were also inspired by the commands supported by iOS and Android and relate to common button functions, such as turning the volume up \& down, taking screenshots, and locking the screen.
\subsection{Procedure}
\label{sec:procedure}
\begin{figure*}[h!]
\centering
\includegraphics[width=1\linewidth]{Figures/procedure.png}
\caption{The procedure of the study.}
\label{fig:procedure}
\Description{The procedure of the task experiment (i.e., "background questionnaire", "watch a video demonstration of a command", "think aloud while creating a gesture for it", "rate the goodness, ease and social acceptance of the gesture for the command", and "interviews").}
\end{figure*}
\begin{figure*}[htb!]
\centering
\subfigure[Zoom In]{
\begin{minipage}[t]{0.33\linewidth}
\centering
\includegraphics[width=2in]{Figures/zoom_in.png}
\end{minipage}%
}%
\subfigure[Next Container]{
\begin{minipage}[t]{0.33\linewidth}
\centering
\includegraphics[width=2.1in]{Figures/next_container.png}
\end{minipage}%
}%
\subfigure[Volume Up]{
\begin{minipage}[t]{0.33\linewidth}
\centering
\includegraphics[width=2in]{Figures/volume_up.png}
\end{minipage}%
}%
\centering
\caption{Video clips: (a) is the video clip for Zoom In, which is an example of enlarging a picture by hand interaction, (b) is the video clip for the Next Container command in the App-related group, showing the target containers that the participants needed to interact with, (c) is the video clip for the Volume Up command in the button-related group, which shows how one of the authors interacted with her mobile phone to increase the volume.}
\label{fig:video clips}
\Description{Examples of video screenshots used in the study (i.e., "Zoom In", "Next Container", and "Volume Up").}
\end{figure*}
Figure~\ref{fig:procedure} shows the study procedure. After answering the background questionnaire, participants were asked to watch a short video clip for a command. We created a short video clip showing each command and its effect on a smartphone. Figure~\ref{fig:video clips} shows example video clip frames for the commands Zoom In, Next Container, and Volume Up. We showed video clips instead of explaining the tasks verbally to ensure that tasks were presented consistently to all participants. After watching the video clip, participants created an above-the-neck gesture for the command and performed the gesture for the moderator.
To reduce any ordering effects, the task videos for the general commands and physical button-related commands were presented to participants in random order. Because there was a logical order among the commands in the app-related group, we kept the order of the tasks in this group to avoid confusion.
During this process, we asked participants to think aloud so that the moderator could better monitor the design process.
After performing the user-defined gesture, they were asked to rate the goodness, ease, and social acceptance of the gesture for the command using 7-point Likert-scale questions.
Then, participants repeated this process until they created gestures for all commands.
To reduce gesture conflicts during the study, we asked participants to design different gestures for each command within the same group (i.e., the general, app-related, and physical button-related groups). For commands that were not in the same group, we allowed participants to reuse the same gesture.
However, due to the large number of commands, some participants might forget their previous gestures. Thus, the moderator monitored the gestures already created, and if she found that a conflicting gesture was proposed, she reminded the participant to change either the current gesture or the earlier one it conflicted with. In addition, participants were allowed to change their minds if they later wanted to go back and revise a previous gesture.
All study sessions were conducted remotely through a video conference platform to comply with COVID-19 social distancing requirements, and the whole process was video recorded. In total, we collected 442 user-defined above-the-neck gestures (17 participants $\times$ 26 commands).
\subsection{Conceptual Complexity of the Commands}
\label{sec:cc}
\begin{figure}[htb!]
\centering
\includegraphics[width=1\linewidth]{Figures/commands.png}
\caption{The conceptual complexity of the 26 commands, rated by two HCI researchers. The higher the score, the more complex the command as perceived by the HCI researchers.}
\label{fig:commands}
\Description{A bar chart with different colors representing the mean value of conceptual complexity of each command.}
\end{figure}
Before determining the final user-defined gestures for the commands, we first sought to understand the perceived complexity of the commands. To do so, we calculated the \textbf{conceptual complexity} of each command.
The conceptual complexity of a command is a concept widely used in prior work (e.g., Wobbrock et al.~\cite{10.1145/1518701.1518866}, Arefin Shimon et al.~\cite{arefin2016exploring}, and Dingler et al.~\cite{dingler2018designing}), which measures the perceived difficulty of the command from HCI researchers' points of view.
For example, Single Tap (e.g., tapping a button on the touchscreen) is a command that we, as HCI researchers, believed could be achieved easily with a tap and thus has low conceptual complexity. In contrast, Screenshot (i.e., taking a screenshot) requires more fingers and steps than a single tap, so it has a relatively higher conceptual complexity.
To determine the conceptual complexity of each command, two HCI researchers independently rated the difficulty of completing each command on a 5-point Likert scale (1: relatively easiest, 5: relatively most difficult), so as not to influence each other's point of view; a score of 5 means the command was perceived as the most difficult among the 26 commands. The scores of the two researchers were similar for most commands, with only a few showing larger differences. Finally, we averaged the scores assigned by the two researchers to obtain the conceptual complexity of each command. Figure~\ref{fig:commands} shows the conceptual complexity of each command.
\section{Results}
We followed the analysis methods of prior user-defined gesture design papers (e.g.,~\cite{10.1145/1518701.1518866}) to obtain \textit{the gesture taxonomy}, the \textit{gesture agreement score}, the \textit{participants' ratings of the gestures performed}, and the final \textit{user-defined gesture set}. We also analyzed the participants' interviews to understand their design rationales.
\subsection{User-defined Above-the-Neck Gestures Taxonomy}
\begin{table}[htb!]
\caption{The seven categories of the user-defined above-the-neck gestures, including only eyes, only head, only mouth, eyes and head, eyes and mouth, head and mouth, and eyes, head and mouth, and the gestures within each category}
\label{Table 1}
\Description{The table demonstrates the taxonomy of all the gestures, including only eyes, only head, only mouth, eyes and head, eyes and mouth, head and mouth, and Eyes, head and mouth.}
\begin{tabular}{c|l}
\hline
\rowcolor[gray]{0.9}Taxonomy of All Gestures & Breakdown of Each Taxonomy \\
\hline
\multirow{12}{*}{Only Eyes} & blink \\\cline{2-2}
& gaze \\\cline{2-2}
& eye-movement \\\cline{2-2}
& eye size \\\cline{2-2}
& eyebrows \\\cline{2-2}
& blink + gaze \\\cline{2-2}
& gaze + eye size \\\cline{2-2}
& eye movement + eye size \\\cline{2-2}
& blink + eye movement \\\cline{2-2}
& gaze + eye movement \\\cline{2-2}
& blink + gaze + eye movement \\\cline{2-2}
& blink + eye size \\\hline
\multirow{4}{*}{Only Head} & head movement \\\cline{2-2}
& head distance \\\cline{2-2}
& head rotation \\\cline{2-2}
& head distance + head movement \\\hline
\multirow{5}{*}{Only Mouth} & pout \\\cline{2-2}
& wide open then close mouth \\\cline{2-2}
& wry mouth \\\cline{2-2}
& suck mouth \\\cline{2-2}
& smile \\\hline
\multirow{7}{*}{Eyes $\And$ Head} & blink + head movement \\\cline{2-2}
& eye size + head movement \\\cline{2-2}
& eye gaze + head movement \\\cline{2-2}
& eye movement + head movement \\\cline{2-2}
& gaze + head distance \\\cline{2-2}
& gaze + head rotation \\\cline{2-2}
& gaze + eye size + head movement \\\hline
\multirow{2}{*}{Eyes $\And$ Mouth} & blink + pout \\\cline{2-2}
& blink + wide open mouth\\\hline
\multirow{2}{*}{Head $\And$ Mouth} & head movement + pout \\
\cline{2-2}
& head movement \\
& + wide open then close mouth\\
\hline
\multirow{2}{*}{Eyes $\And$ Head $\And$ Mouth} & eye movement + head movement \\
& + wide open mouth\\
\hline
\end{tabular}
\end{table}
We collected 17×26=442 gestures for the 26 commands and classified them according to the body parts involved: the eyes, the head, and the mouth.
\textbf{Gesture Categories}. We grouped the gestures into \textbf{seven categories} based on the body parts involved. These seven categories included \textit{a single body part} and \textit{the combinations of different body parts}: \textbf{only eyes}, \textbf{only head}, \textbf{only mouth}, \textbf{eyes$\And$head}, \textbf{eyes$\And$mouth}, \textbf{head$\And$mouth}, and \textbf{eyes$\And$head$\And$mouth}. For each dimension, we subdivided it according to the gestures that the participants performed. Table~\ref{Table 1} shows the taxonomy of the user-defined above-the-neck gestures.
The \textit{only eyes} category includes five basic types of eye gestures, \textit{blink}, \textit{gaze}, \textit{eye movement}, \textit{eye size}, and \textit{eyebrows}, plus combinations of these basic gestures. Blink includes single and double blinks, as well as other numbers of blinks. Gaze means fixing the eyes on the screen, possibly for different lengths of time. In addition to moving the eyes up and down or left and right, eye rotation and crossing the eyes are also counted as eye movement. Widening, closing, and squinting the eyes fall under eye size. We also counted eyebrow movements, such as squeezing the eyebrows, within the eye category. Combining two or more of these single eye gestures yields dozens of possibilities in total; we discarded the ones that no participant performed and ended up with seven combinations of eye gestures.
The \textit{only head} category includes \textit{head movement}, \textit{head distance}, \textit{head rotation}, and combinations of different head gestures. Head movement covers turning and tilting the head in different directions. We separated head rotation from head movement because the amplitude of a rotation is larger and more apparent; moreover, the tilted head included in head movement is somewhat similar to a half-circle rotation, so distinguishing them lets us better understand the participants' preferences between these two gestures of different amplitude. Head distance refers to the distance between the head and the phone screen: people could move their head closer to or further from the screen. Among all the possible combinations of these single head gestures, only the combination of head movement with a change of head distance was chosen by our participants.
The \textit{only mouth} category includes \textit{pout}, \textit{wide open then close mouth}, \textit{wry mouth}, \textit{suck mouth}, and \textit{smile}.
\textit{Combined Gestures}. The above three categories cover gestures involving an individual body part. The participants also made some \textit{combined gestures}, which combine different body parts. One of the most frequently proposed combinations was \textit{eyes and head}, such as blinking followed by a nod, closing the eyes while swinging the head, or gazing with head rotation. The eyes$\And$mouth and head$\And$mouth categories each had two varieties, and eyes$\And$head$\And$mouth had only one combination.
\textbf{Distribution of Gesture Groups}. There were many overlaps between the gestures designed by different participants. After removing overlaps, we found 250 unique user-defined above-the-neck gestures. Among these unique gestures, 44.4\% were only-eyes gestures, 14.8\% were only-head gestures, 4.8\% were only-mouth gestures, 30.8\% were eyes$\And$head, 1.2\% were eyes$\And$mouth, 3.2\% were head$\And$mouth, and 0.8\% were eyes$\And$head$\And$mouth. This suggests that although participants could use the mouth, the eyes, and the head, they preferred eye-based gestures the most and head-based gestures second.
Among all categories, the \textit{Only Eyes} category was most diverse. Moreover, the combinations including eyes (e.g., Eyes \& Head, Eyes \& Mouth) were more common than those performed by other parts (e.g., Head \& Mouth).
\subsection{Determination of the User-defined Gesture Set for the Commands}
To derive the final user-defined gesture set from all gestures proposed by the participants, we collated the gestures for each command and counted the number of participants performing the same gesture, and we resolved conflicts between gestures to obtain the final set. We also calculated the \textbf{agreement score} for each command. The agreement score was initially proposed by Wobbrock et al.~\cite{wobbrock2005maximizing} and later widely used in studies uncovering user-defined gestures for various platforms (e.g., tabletop, phone, watch, and glasses)~\cite{10.1145/1518701.1518866,10.1145/1978942.1978971,dingler2018designing,vatavu2015formalizing,arefin2016exploring}.
It intuitively characterizes the degree of agreement among target users on assigning a gesture to a given command. In general, the higher the agreement score of a command, the better the participants agree on the gesture assigned to the command.
\subsubsection{Agreement Score}
We categorized the gestures performed by the participants for each command and then counted how many people made the same gesture. These groups and the number of people in each group were used to calculate the \textbf{agreement score} of the commands. We adopted this method from prior user-defined gesture research~\cite{wobbrock2005maximizing,10.1145/1518701.1518866,10.1145/1978942.1978971} and used the following equation:
\begin{equation}
A_c = \sum_{i}\left(\frac{P_i}{P_c}\right)^2
\end{equation}
In Equation (1), $c$ is one of the commands and $A_c$ represents its \textit{agreement score} based on participants' proposed gestures for this command; the value ranges from 0 to 1. $P_c$ is the total number of gestures proposed for $c$, which is the number of participants in our case (N=17). $i$ represents a unique gesture. Because different participants proposed the same gestures, the number of \textit{unique gestures} was smaller than the total number of proposed gestures. $P_i$ represents the number of participants who proposed the unique gesture $i$.
Take the \textit{Single Tap} command as an example: 17 participants proposed 17 gestures in total, so $P_c$ equals 17. Among these gestures there were seven unique ones, proposed by 7, 4, 2, 1, 1, 1, and 1 participants, respectively. As a result, the agreement score of the Single Tap command was calculated as follows:
\begin{equation}
\left(\frac{7}{17}\right)^2+\left(\frac{4}{17}\right)^2+\left(\frac{2}{17}\right)^2+4\left(\frac{1}{17}\right)^2\approx0.25.
\end{equation}
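The computation is straightforward to script. A minimal sketch (Python; the input is the list of group sizes for one command):
\begin{verbatim}
def agreement_score(group_sizes):
    """A_c = sum_i (P_i / P_c)^2, where P_i is the number of
    participants proposing unique gesture i and P_c is the total
    number of proposals for command c."""
    total = sum(group_sizes)
    return sum((n / total) ** 2 for n in group_sizes)

# Single Tap: seven unique gestures proposed by 7, 4, 2, 1, 1, 1,
# and 1 participants respectively
print(round(agreement_score([7, 4, 2, 1, 1, 1, 1]), 2))  # 0.25
\end{verbatim}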
\begin{figure}[htb!]
\centering
\includegraphics[width=1\linewidth]{Figures/agreement.png}
\caption{The \textit{agreement scores} of the 26 commands. The higher the score, the higher the participants' consensus on which gesture(s) should be assigned.}
\label{fig:agreement}
\Description{A bar chart with different colors representing the agreement score of each command.}
\end{figure}
Figure~\ref{fig:agreement} shows the agreement score of the gestures proposed for each command. The commands are arranged in the same order as the conceptual complexity in Figure~\ref{fig:commands}. In general, the higher the agreement score, the higher the participants' consensus on which gesture(s) should be assigned. The agreement score of \textit{Double Tap} was high, indicating that participants agreed more on which gesture(s) should be allocated to this command. In contrast, the agreement score of \textit{Rotate} was relatively low, indicating that participants proposed more diverse gestures for it and agreed less on which gesture should be allocated to it.
\subsubsection{Conceptual Complexity vs. Agreement Score}
As we explained in Sec.~\ref{sec:cc}, the \textit{conceptual complexity} is a measure of the perceived complexity of commands from the \textbf{researchers' perspective}. In contrast, the \textit{agreement score} is a measure of the perceived complexity of commands from the \textbf{end-users' perspective}. If the perception of researchers aligned with the perception of end-users (i.e., people with motor impairments), we would expect to see a correlation between the two measures.
We ran a Pearson correlation test and found no significant correlation between the agreement score and the conceptual complexity score of each command (r=-.38, p=.05). In other words, the commands given a low conceptual complexity score by the researchers did not necessarily receive a high agreement score. This suggests a discrepancy between the researchers' understanding of the complexity of the commands and gestures and that of the target users. This finding further highlights the necessity of involving end-users in the design process, so that the resulting user-defined gestures are ones that they, rather than the researchers, perceive as easy to use.
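For reference, the test is a single SciPy call; the arrays below are placeholders with one entry per command, not our actual data:
\begin{verbatim}
from scipy.stats import pearsonr
import numpy as np

rng = np.random.default_rng(1)
conceptual_complexity = rng.uniform(1, 5, size=26)  # placeholder
agreement_scores = rng.uniform(0, 1, size=26)       # placeholder

r, p = pearsonr(conceptual_complexity, agreement_scores)
print(f"r = {r:.2f}, p = {p:.3f}")
\end{verbatim}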
\subsubsection{Gesture Conflict}
We found that in some cases the same gesture was proposed for different commands. Thus, we needed to resolve these conflicts so that each command was assigned its own gesture.
Our conflict resolution strategy was as follows. When the same gesture was allocated to both a single command (e.g., Drag) and a pair of commands (e.g., Swipe Left$\And$Swipe Right), we prioritized the gesture for the paired commands, because the cost of finding alternative gestures for paired commands is higher than for a single command.
After allocating the contested gesture to the paired commands, we assigned the single command the gesture proposed by the second-highest number of participants.
Figure~\ref{fig:conflict 1} illustrates the process with an example.
The same gesture, ``Turn Head to the Left and Look Left'', was proposed by the same number of participants (N=6) for both the single command (i.e., Drag) and the paired commands (i.e., Swipe Left$\And$Swipe Right). Our resolution strategy assigned this gesture to the Swipe Left command. Next, we allocated to the Drag command the gesture proposed by the second-highest number of participants; in this case, it was ``Gaze, and Look At a Certain Direction.''
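The resolution procedure amounts to a greedy assignment; the following sketch (Python, with illustrative data) captures the two rules: paired commands pick first, and each command falls back to its next most popular gesture if its top choice is taken. In the actual analysis, conflicts were only resolved among commands within the same group.
\begin{verbatim}
def resolve(candidates, paired):
    """candidates: command -> gestures sorted by popularity
    (most proposed first); paired: commands in symmetric pairs,
    which win any contested gesture over single commands."""
    assigned, taken = {}, set()
    for command in sorted(candidates, key=lambda c: c not in paired):
        for gesture in candidates[command]:
            if gesture not in taken:
                assigned[command] = gesture
                taken.add(gesture)
                break
    return assigned

candidates = {
    "Drag": ["turn head left and look left",
             "gaze, then look in a direction"],
    "Swipe Left": ["turn head left and look left"],
}
print(resolve(candidates, paired={"Swipe Left"}))
# Swipe Left keeps the contested gesture; Drag gets its runner-up
\end{verbatim}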
\begin{figure*}[h]
\centering
\includegraphics[width=1\linewidth]{Figures/conflict1.png}
\caption{Gesture conflict 1: a single command (e.g., Drag) conflicted with paired commands (e.g., Swipe Left$\And$Swipe Right). In this case, we assigned gesture 2 to Drag and gesture 1 to Swipe Left$\And$Swipe Right. The ``No. of participants'' column shows how many participants performed the same gesture, together with their participant IDs. The gestures are arranged in descending order of the number of participants.}
\label{fig:conflict 1}
\Description{A table shows how we solved the gesture conflict when a single command conflicted with paired commands.}
\end{figure*}
\begin{table}[htb!]
\caption{The final user-defined gestures for all commands. Most commands have one allocated gesture (i.e., Gesture 1) and a few have two or three gestures (i.e., Gesture 2 and Gesture 3)}
\label{gesture set}
\Description{The table demonstrates our final gesture set. For each of the 26 commands, we assigned the most appropriate gesture or gestures to it.}
\begin{tabular}{p{0.2\textwidth} | p{0.3\textwidth}|p{0.2\textwidth} | p{0.2\textwidth}}
\hline
\rowcolor[gray]{0.9}Commands & Gesture 1 & Gesture 2 & Gesture 3 \\
\hline
Single Tap & eyes blink once & & \\ \hline
Open the App & eyes blink once & &\\ \hline
Move to Next Screen & turn head to the left and look left & & \\ \hline
Open the Container & eyes blink once & & \\ \hline
Double Tap & eyes blink twice & & \\ \hline
Flick & raise head & & \\ \hline
Long Press & gaze for 3-5s & & \\ \hline
Next Button & look right, then blink once & & \\ \hline
Previous Button & look left, then blink once & & \\ \hline
Next Container & look downward & & \\ \hline
Previous Container & look upward & & \\ \hline
Move to Next Target App & look right & & \\ \hline
Scroll Up & raise head and look upward & look upward & \\ \hline
Scroll Down & lower head and look downward & look downward & \\ \hline
Phone Lock & close eyes for 3s & & \\ \hline
Swipe Left & turn head to the left and look left & & \\ \hline
Swipe Right & turn head to the right and look right & & \\ \hline
Volume Up & blink right eye & & \\ \hline
Volume Down & blink left eye & & \\ \hline
Zoom In & wide open eyes & & \\ \hline
Zoom Out & squint eyes & & \\ \hline
Drag & gaze, and look at a certain direction & & \\ \hline
Open Previous App in the Background & raise head, then turn head to the right, then blink eyes & & \\ \hline
Open Next App in the Background & raise head, then turn head to the left, then blink eyes & & \\ \hline
Screenshot & eyes blink three times & & \\ \hline
Rotate & turn head to the left, and look at the screen & eyes look counter-clockwise & tilt head \\ \hline
\end{tabular}
\end{table}
\subsubsection{Final User-defined Above-the-Neck Gesture Set}
Table~\ref{gesture set} shows the final gestures for each command. Most commands have only one allocated user-defined gesture (i.e., Gesture 1 in the table). However, three commands had more than one gesture allocated, because more than one of their gestures was proposed by the same number of participants.
As shown in Table~\ref{gesture set}, twenty-three commands were assigned one gesture, two commands had two gestures, and one command had three gestures.
\begin{figure*}[htb!]
\centering
\includegraphics[width=1\linewidth]{Figures/sketch.png}
\caption{Visual illustrations of the final user-defined above-the-neck gestures for the 26 commands.}
\label{fig:sketch}
\Description{The sketch of our final gesture set.}
\end{figure*}
Figure~\ref{fig:sketch} further illustrates the final user-defined above-the-neck gestures for the 26 commands.
There were 30 gestures in the final gesture set: 20 eyes-only gestures, 3 head-only gestures, and 7 eyes$\And$head gestures.
We found continuity in our final gesture set. When two commands were related, such as Single Tap and Double Tap, the assigned gestures were closely related as well: blinking once and blinking twice. The set also exhibits strong symmetry. For example, for Swipe Left and Swipe Right, the gestures were turning the head to the left and right while looking left and right; for Zoom In and Zoom Out, the gestures were wide-open eyes and squinted eyes. The gestures were also logical; for example, the gesture for Phone Lock was closing the eyes.
\subsubsection{Subjective Ratings of the Gestures}
We asked participants to rate the goodness, ease of use, and social acceptance of each gesture they performed. For each command, we divided the proposed gestures into a large group and small groups: the large group was the gesture performed by the largest number of participants (usually the gesture selected for the final gesture set), and the remaining gestures formed the small groups. We compared participants' average ratings of goodness (large groups: 5.94, small groups: 5.94), ease of use (large groups: 5.80, small groups: 5.72), and social acceptance (large groups: 5.69, small groups: 5.70) between the two kinds of groups and found no significant difference. We also ran Pearson correlation tests and found that the conceptual complexity of the commands did not have a strong correlation with their goodness (r=-.42, p=.04), ease of use (r=-.20, p=.34), or social acceptance (r=-.40, p=.05). A possible explanation is that many participants commented during the study that they chose a gesture precisely because they considered it good, which led to high ratings; this explains why gestures performed by fewer participants, or for commands with high conceptual complexity scores, still did not receive low ratings.
\subsection{Perceptions of the User-defined Gestures}
We present the following insights learned from participants' feedback about user-defined gestures.
\subsubsection{Easy to Use and Understand}
After participants made a gesture, we asked them why they chose it. The most common feedback was that the gesture was easy to understand and easy to perform.
P13 explained, \textit{``Because I feel that these gestures are good, simple, easy to perform, so I choose them.''} P16 added, \textit{``It is easy to understand and belongs to the normal range of head movement.''}
Participants' choices were shaped not only by the simplicity of the gesture itself but also by their prior use of technical devices. Many participants created gestures by drawing on how they usually used their mobile phones with their hands. For example, when we asked them to swipe to the next page, most of them turned their heads to the left because they also swiped to the left when operating by hand. P11 used his computer more often than his phone, so he drew on his habits of using the computer when creating gestures. For example, for the Single Tap command, he blinked his left eye and said, \textit{``Operation is similar to clicking with the left mouse button.''} Using different mobile phone models also affected the gestures. Many Android phone users chose to blink three times when taking a screenshot because they usually pulled down with three fingers to take a screenshot on their Android phones, so this gesture was easier for them to understand, but it might not be for iPhone users.
\subsubsection{Memorability}
During the testing process, many participants reported that they forgot what gestures they had done before. When P14 was doing command 15, she said she could not remember what gestures she had already performed. Besides forgetting, some participants deliberately chose gestures they could easily remember.
\subsubsection{Duration of Gestures}
Some gestures involved gazing or closing the eyes, which raised the question of how long the eyes should be held. Although some participants mentioned 5 or 10 seconds, most suggested 2 or 3 seconds. The duration should be moderate: as long as different durations can be reliably distinguished, a gesture need not take long. \textit{``Would 2 seconds or 3 seconds be better? It is clearer to stop for 3 seconds.''}-P8; \textit{``It would be inconvenient if closing the eyes for too long.''}-P12
\subsubsection{Recognition}
Many participants considered whether an action would be too subtle for the phone to recognize. For example, when considering gaze time, they thought about whether the time would be too short for the phone to detect. Many participants preferred head gestures because they thought eye movements were too small to be identified. They also considered whether a gesture could be confused with natural or random small movements, such that a spontaneous movement would accidentally trigger a command. Thus, some participants deliberately distinguished the gestures they performed from movements people usually make spontaneously. \textit{``I feel that the head forward may not be very sensitive to identify.''}-P14; \textit{``An occasional small action may affect the phone recognition.''}-P11
\subsubsection{Self-condition}
The participants were people with motor impairments, and their impairments could cause difficulties with some eye or facial movements, especially closing the eyes. Some participants had difficulty closing a single eye, so they chose to make the gestures with both eyes together. Some had difficulty closing either the left or the right eye and chose to make the gesture with the eye they could close without difficulty. P3 was left-handed and was more willing to use his left eye to make gestures. \textit{``I can close both eyes, but cannot close only one eye, so I do it with both eyes.''}-P13
\subsubsection{Social Acceptance}
From the comments and observations, we found that participants were very concerned about how others would perceive them. They considered whether a gesture would be too exaggerated and did not want to attract attention. They preferred simple gestures so that they would not look strange. \textit{``Doing eye gestures will still take into account the feelings of others.''}-P3; \textit{``Simple, does not attract special attention or disturb others.''}-P11; \textit{``I don't want people to look at me differently.''}-P14
\section{Survey Study}
We identified user-defined above-the-neck gestures by resolving conflicts in the original gestures created by a group of 17 people with motor impairments, as illustrated in the previous section. One follow-up question is: \textbf{which of these user-defined gestures are the more appropriate ones for the users who would use them to interact with smartphones}? Our gesture elicitation study was designed to create a gesture set for people with motor impairments to interact with touchscreen smartphones without touch. However, able-bodied people might also encounter the same difficulties in many scenarios and find such gestures useful; for instance, it is hard to operate a phone by hand while carrying bags or when one's hands are wet or dirty. As a result, we also included people without motor impairments in this survey study to understand their preferences for the user-defined gestures designed by people with motor impairments.
To answer this question, we conducted an online survey to validate the agreement with the user-defined gesture set among people with and without motor impairments.
\subsection{Method}
\begin{figure}[htb!]
\centering
\includegraphics[width=0.5\linewidth]{Figures/question.jpeg}
\caption{Survey example: the screenshot of the Long Press question.}
\label{fig:question1}
\Description{The screenshot of one question in the survey.}
\end{figure}
In the survey, participants were first asked about their physical condition and some general demographic questions. Then, we asked about each of the 26 commands from the previous user study. In each question, we asked participants to choose the most appropriate gesture for that command from three candidate options. These candidates were the gestures most frequently performed for each command in the previous study. The options were presented in random order to reduce potential order bias. We also provided an additional free-text option for participants to write out what they thought was most appropriate if they felt none of the three options were suitable.
Figure~\ref{fig:question1} shows the screenshot of the question for the Long Press command in the survey. The survey was published in an online format and took an average of 5 minutes to complete.
\subsection{Participants}
Twenty-four participants volunteered for the survey study, including 10 people with motor impairments and 14 people without motor impairments. Ideally, it would be best to recruit people with motor impairments who had no prior knowledge of the study. However, due to the difficulty of recruiting people with motor impairments during the pandemic, we recruited the 10 participants with motor impairments who had taken part in the previous gesture elicitation study. To mitigate a potential memory effect, we intentionally conducted the survey study one month after the gesture elicitation study. With a one-month gap between the two studies, it was unlikely that these participants would still remember the commands or how they had assigned gestures to them. We recruited the 14 participants without motor impairments through both online and offline channels. The average age of all participants was 29 years ($SD=3$).
\subsection{Results}
\begin{figure*}[htb!]
\centering
\includegraphics[width=1\linewidth]{Figures/survey.png}
\caption{Survey results: participants with motor impairments in green, participants without motor impairments in orange. The darkest green/orange represents the gesture with the highest number of selections for that command; the lightest green/orange represents the gesture with the lowest number of selections. The black dots in the consistency column mark the commands for which the two groups made the same choice.}
\label{fig:survey}
\Description{A heat-map of the survey results in different colors, representing the results of all participants, participants with motor impairments and participants without motor impairments.}
\end{figure*}
In Figure~\ref{fig:survey}, we show the results of the participants with motor impairments in green and the results of the participants without motor impairments in orange. As mentioned above, each question offered the three most frequently performed gesture options, all of which had been performed by the participants in the previous user study.
These three options are collectively referred to as \textit{Gesture 1}, \textit{Gesture 2}, and \textit{Gesture 3}, in descending order of the number of times participants performed them in the user study. Across all survey participants, \textit{Gesture 1} was the first choice for 17 out of 26 commands and the second choice for another 6 commands. Among the ten participants with motor impairments, \textit{Gesture 1} was the first choice for 15 commands and the second choice for 9 commands. Among the fourteen participants without motor impairments, \textit{Gesture 1} was the first choice for 15 commands and the second choice for 8 commands.
Furthermore, Figure~\ref{fig:survey} includes a ``consistency'' column indicating the commands for which participants with and without motor impairments agreed on the most popular gesture option. Out of the 26 commands, the two user groups agreed on 16 (62\%), which indicates that our gesture set achieved reasonably high agreement between participants with and without motor impairments.
Agreement on \textit{Gesture 1} among the participants with motor impairments was highest for general commands, whereas among the participants without motor impairments it was highest for app-related and button commands. A possible reason is that as the complexity of a command increases, the gestures become more complex, involving more combinations of different above-the-neck parts and movements. Some app-related commands were more complicated than general commands, making people more likely to create complex gestures. For participants with motor impairments, creating complex gestures required considering not only the appropriateness of the gestures but also their physical conditions, which made agreement harder to reach.
We also noticed that none of the people with motor impairments chose the \textit{Others} option in the survey. A possible reason is that they had already taken part in the earlier user study, so the three options provided were the same as or similar to what they had demonstrated there, leading to their agreement on these options. Among the participants without motor impairments, we found that one participant chose the \textit{Others} option very frequently. Her responses indicated that she could not come up with more appropriate gestures but disagreed with the provided options. In a brief follow-up interview, she told us, \textit{``some gestures are so complicated that no one would find it easier than using their hands unless they had paraplegia.''} Another participant also chose the \textit{Others} option for two of the commands but did not suggest any gestures she would prefer to use.
Through the survey, we found that the gesture choices of people with and without motor impairments were mostly consistent with our gesture set. There were also some differences, attributable to physical impairments or to the limited understanding that people without motor impairments have of those with motor impairments. Overall, our gesture set appears applicable to people both with and without motor impairments.
\section{Discussion}
We discuss the implications of the user-defined above-the-neck gesture design for users with motor impairments.
\subsection{Key Takeaways}
By involving people with motor impairments in the design process and resolving conflicts in the proposed gestures, we uncovered a set of user-defined above-the-neck gestures and how people with motor impairments would want to use them to execute commands on a touchscreen mobile device.
All of the participants of the user study mentioned that they would like to use these gestures to interact with mobile phones in the future.
Moreover, we learned that people with motor impairments preferred gestures that were simple, easy to remember, and highly socially acceptable. Although they were free to incorporate the eyes, mouth, and head into their gesture designs, gestures involving the eyes were still the most diverse and preferred, followed by gestures combining eyes and head. This finding is consistent with prior work~\cite{kyto2018pinpointing, sidenmark2019eye} showing that people like to add head movements to eye-based gestures.
Our survey study found that these user-defined gestures were generally agreed upon by people both with and without motor impairments.
\subsection{Design Considerations for User-Defined Gestures}
Compared to the eyelid gestures designed by researchers~\cite{10.1145/3373625.3416987,fan2021eyelidCAM}, our user-defined gestures are unique in two aspects. First, they are grounded in the preferences and creativity of people with motor impairments. Second, they are more diverse, covering not only the eyelids but also eye motion and other body parts (e.g., head and mouth), and are thus more expressive and able to accomplish more commands. However, one must be wary of the downsides of a more diverse set of user-defined gestures.
First, as the number of user-defined gestures increases, the effort of remembering the mapping between gestures and commands also increases. Indeed, some user study participants asked us in an apprehensive tone whether they would have to remember all the gestures they created throughout the study. Thus, it is worth investigating how best to help people with motor impairments use these gestures with the minimum burden of memorization. One possible solution is to suggest relevant user-defined gestures based on the user's initial input. For example, if the user starts to close one eye, the system could recommend a much smaller set of gestures that start with ``close one eye.''
Second, some participants were concerned that long-term or frequent use of some gestures might lead to bad habits. For example, P15 designed a gesture that required him to tilt his mouth to the left side, and he was worried about developing a habit of tilting his mouth. When designing user-defined gestures for people with motor impairments, we may therefore need to consider not only the gestures' simplicity and social acceptance but also their long-term health implications.
Third, we derived a standard set of user-defined gestures by resolving conflicts between the gestures proposed by different participants. While this standard set might be the most applicable set for a group of people with motor impairments, it might not be optimal for a particular user, since users differ in physical conditions and habitual perceptions. For example, a user who cannot close her left eye well should be able to skip all gestures involving closing the left eye and define her own alternatives. If users prefer to use their eyes, they should have the flexibility to use eye-only gestures instead of head-involved gestures. In addition, if users are concerned about potential negative effects of performing the same gesture too often on their facial expressions, they could allocate multiple gestures to trigger one command. Thus, it is important to make this standard set customizable by individual users with motor impairments, for instance by allowing them to assign multiple gestures to one command or the same gesture to different commands.
Lastly, social acceptance was a key factor that people with motor impairments considered when designing the gestures. However, little is known about whether gestures that people with motor impairments perceive as socially unacceptable are really unacceptable from the public's perspective, and vice versa. It also remains unclear which gestures are more socially acceptable; perhaps gestures that are small in amplitude and consistent with normal daily activities are. One interesting question would be to understand the relative social acceptability of our user-defined gestures.
\section{Limitations and Future Work}
\textbf{Potential Effect of the Presentation Order of Commands.}
When defining gestures, the participants were given the commands in random order. Thus, they could not know in advance the complexity of all commands or whether there were similar or symmetrical commands.
Our approach to alleviating this potential presentation order effect, as stated in Section~\ref{sec:procedure}, was to allow the participants to change gestures assigned earlier at any time during the study. If they had difficulty keeping track of the assigned gestures, for example forgetting which gestures had already been assigned, they could ask the moderator to remind them. The final gesture set in Table~\ref{gesture set} suggests this approach worked: different commands were assigned different gestures, and symmetrical gestures were given to symmetrical commands. Nevertheless, it would be interesting to investigate whether participants would adopt different strategies for allocating gestures if they knew all the commands upfront.
\textbf{Command Selection.} We explored 26 commands commonly used on a touchscreen smartphone. However, these commands are not exhaustive. For example, participants mentioned other commands, such as returning to the main screen, unlocking the phone, and going back to the previous page. In addition, our classification of commands was based on prior work~\cite{10.1145/3422852.3423479,10.1145/3373625.3416987,fan2021eyelidCAM}, dividing the 26 commands into three categories by task goal. Different classification methods might also affect participants' choice of gestures. Future work could apply the same principles to derive user-defined gestures for additional commands. However, one challenge is to resolve conflicts among the gestures allocated to different commands, which becomes increasingly difficult as more commands must be catered for.
\textbf{Common vs. Personal User-defined Gestures.} Our study aimed to identify a set of common user-defined gestures for the commands often performed on a touchscreen smartphone, based on feedback from a group of people with motor impairments. We believe this set of common user-defined gestures is a good starting point for people with motor impairments to interact with smartphones without touch. However, we acknowledge that people with motor impairments have different residual motor abilities and may be able to, or prefer to, perform different gestures. It is therefore imperative to investigate how best to allow people with motor impairments to design a \textit{personal} set of user-defined gestures tailored to their specific motor abilities, as in recent work by Ahmetovic et al.~\cite{10.1145/3447526.3472044}.
\textbf{Differences Between People With and Without Motor Impairments.} Our survey study revealed the preferences of people with and without motor impairments. However, it remains unknown why people without motor impairments preferred the same or different user-defined gestures for the same command, and whether they care about the social acceptance of these gestures as much as people with motor impairments do.
\textbf{Potential Effects of Age and Culture.} Our participants were primarily young and middle-aged people. It remains an open question whether age plays a role in user-defined gestures. Future work could replicate the study with older adults with motor impairments and examine whether the user-defined gestures apply across age groups and whether specific user-defined gestures are preferred by particular age groups.
Our participants lived in Asia and were accustomed to Eastern culture. The creation and preference of user-defined gestures might be affected by culture, and the social acceptance of the gestures is also likely related to the social norms and cultures people live in. Future work could explore user-defined gestures for people with motor impairments in different cultures and compare the similarities and differences to understand cross-cultural and culture-specific user-defined gestures.
1907.06010 | \section{Introduction}
Imagine you are on a routine grocery shopping trip and plan to buy some bananas. You know that the store carries both good and bad bananas which you must search through. There are multiple ways you can go about your search. One way is to randomly pick any ten bananas available on the shelf, which can be regarded as a form of unbiased search. Alternatively, you could introduce some bias to your search by only picking those bananas that are neither underripe nor overripe. Based on your past experiences from eating bananas, there is a better chance that these bananas will taste better. The proportion of good bananas retrieved in your biased search is greater than the same proportion in an unbiased search; you used your prior knowledge about tasty bananas. This common routine shows how bias enables us to conduct more successful searches based on prior knowledge of the search target.
Viewing these decision-making processes through the lens of machine learning, we analyze how algorithms tackle learning problems under the influence of bias. Would we be better off without the existence of bias in machine learning algorithms? Our goal in this paper is to formally characterize the direct relationship between the performance of machine learning algorithms and their underlying biases. Without bias, machine learning algorithms perform no better than uniform random sampling, on average. Yet the extent to which an algorithm is biased toward some target is the extent to which it is biased against the remaining targets. As a consequence, no algorithm can be biased towards all targets. Bias therefore represents the trade-offs an algorithm makes in how to respond to data.
We approach this problem by analyzing the performance of search algorithms within the algorithmic search framework introduced by Monta\~nez~\cite{montanez2017fof}. This framework applies to common machine learning tasks such as classification, regression, clustering, optimization, reinforcement learning, and the general machine learning problems considered in Vapnik's learning framework \cite{montanez2017dissertation}. We derive results characterizing the role of bias in successful search, extending Famine of Forte results~\cite{montanez2017fof} for a fixed search target and varying information resources. Our results for bias-free search then directly apply to bias-free learning, showing the extent to which bias is necessary for successful learning and quantifying how difficult it is to find a distribution with favorable bias for a particular target.
\section{Related Work}
Schaffer's seminal work \cite{schaffer1994conservation} showed that generalization performance for classification problems is a conserved quantity, such that favorable performance on a particular subset of problems will always be offset and balanced by poor performance over the remaining problems. Similarly, we show that bias is also a conserved quantity for any set of information resources. While Schaffer studied the performance of a single algorithm over different learning classes, Wolpert and Macready's ``No Free Lunch Theorems for Optimization" \cite{wolpert1997nfl} established that all optimization algorithms have the same performance when uniformly averaged over all possible cost functions. They also provided a geometric intuition for this result by defining an inner product which measures the alignment between an algorithm and a given prior over problems. This shows that no algorithm can be simultaneously aligned with all possible priors. In the context of the search framework, we define the geometric divergence as a measure of alignment between a search algorithm and a target in order to bound the proportion of favorable search problems.
While No Free Lunch Theorems are widely recognized as landmark ideas in machine learning, McDermott claims that No Free Lunch results are often misinterpreted and are practically insignificant for many real-world problems~\cite{McDermott2019}. This is because algorithms are commonly tailored to a specific subset of problems in the real world, but No Free Lunch requires that we consider the set of all problems that are closed under permutation. These arguments towards the impracticality of No Free Lunch results are less relevant to our work here, since we evaluate the proportion of successful problems instead of considering the mean performance over the set of all problems. As such, our results are also applicable to sets of problems that are not closed under permutation, as a generalization of No Free Lunch results.
In ``The Famine of Forte: Few Search Problems Greatly Favor Your Algorithm", Monta\~nez~\cite{montanez2017fof} reduces machine learning problems to search problems and develops a rigorous search framework to generalize No Free Lunch ideas. He strictly bounds the proportion of problems that are favorable for a fixed algorithm and shows that no single algorithm can perform well over a large fraction of search problems. Extending these results to fixed search targets, we show that there are also strict bounds on the proportion of favorable information resources, and that the bound relaxes with the introduction of bias.
Our notion of bias developed here relates to ideas introduced by Mitchell~\cite{needforbiases}. According to Mitchell, a completely unbiased classification algorithm cannot generalize beyond training data. He argued that the ability of a learning algorithm to generalize depends on incorporating biases, which means making assumptions beyond strict consistency with training data. These biases may include prior knowledge of the domain, preferences for simplicity, and awareness of the algorithm's real-world application. We strengthen Mitchell's argument with a mathematical justification for the need for bias in improving learning performance.
G\"{u}l\c{c}ehre and Bengio empirically support Mitchell's ideas by investigating the nature of training barriers affecting the generalization performance of black-box machine learning algorithms~\cite{priorinfo}. Using the Structured Multi-Layer Perceptron (SMLP) neural network architecture, they showed that pre-training the SMLP with hints based on prior knowledge of the task generalizes more efficiently as compared to an SMLP pre-trained with random initializers. Furthermore, Ulyanov et al.\ explore the success of deep convolutional networks applied to image generation and restoration~\cite{deepimagepriors}. By applying untrained convolutional networks to image reconstruction with competitive success to trained ones, they show that the impressive performance of these networks is not due to learning alone. They highlight the importance of inductive bias, which is built into the structure of these generator networks, in achieving this high level of success. In a similar vein, Runarsson and Yao establish that bias is an essential component in constrained evolutionary optimization search problems~\cite{searchbias}. It is experimentally shown that carefully selecting an appropriate constraint handling method and applying a biasing penalty function enhances the probability of locating feasible solutions for evolutionary algorithms. Inspired by the results obtained from these experimental studies, we formulate a theoretical validation of the role of bias in generalization performance for learning problems.
\section{The Search Framework}
\subsection{The Search Problem}
We formulate machine learning problems as search problems using the algorithmic search framework \cite{montanez2017fof}. Within the framework, a search problem is represented as a 3-tuple $(\mathrm{\Omega}, T, F)$. The finite search space from which we can sample is $\mathrm{\Omega}$. The subset of elements in the search space that we are searching for is the target set $T$. A target function that represents $T$ is an $|\mathrm{\Omega}|$-length vector with entries having value 1 when the corresponding elements of $\mathrm{\Omega}$ are in the target set and 0 otherwise. The external information resource $F$ is a binary string that provides initialization information for the search and evaluates points in $\mathrm{\Omega}$, acting as an oracle that guides the search process.
\subsection{The Search Algorithm}
Given a search problem, a history of elements already examined, and information resource evaluations, an algorithmic search is a process that decides how to query elements of $\mathrm{\Omega}$. As the search algorithm samples, it adds the record of points queried and information resource evaluations, indexed by time, to the search history. If the algorithm queries an element $\omega \in T$ at least once during the course of its search, we say that the search is successful. Figure \ref{fig:jellyfish} visualizes the search algorithm.
\begin{figure}
\centering
\def2{2}
\def-1{-1}
\begin{tikzpicture}
\begin{axis}[hide axis]
\addplot3[surf, domain=-2:6,domain y=-5:3]
{exp(-( (x-2)^2 + (y--1)^2)/3 )};
\node[text centered] at (axis cs:2, -1, 1.20) {\large $P_i$};
\end{axis}
\node at (5.4,0) {\huge{$\mathrm{\Omega}$}};
\draw[->] (2 + 0.8,4) -- node[above, text centered] {\scriptsize next point at time step i} node[below, text centered] {\scriptsize ($\omega$, F($\omega$))} (-0.5,4);
\node[draw, fill=black!, text=white, text width=2cm, minimum height=1.5cm, text centered] at (1,-0.7) {Black-Box Algorithm};
\draw[->] (1, 0.2) -- (1.5, 0.7);
\node[minimum width=1.65cm, text centered] at (-1.8,4.01)
{\small Search History};
\foreach \y in {3.4,3.55, 3.7}
\node[ minimum width=2cm, minimum height=0.3cm, text centered] at (-1.8,\y) {$\cdot$};
\node[draw, minimum width=1.65cm, text centered] at (-1.8,2.95)
{\footnotesize ($\omega_2$, \textit{F}($\omega_2$))};
\node[text centered] at (-3.15,2.95) {\scriptsize i = 5};
\node[draw, minimum width=1.65cm, text centered] at (-1.8,2.435)
{\footnotesize ($\omega_0$, \textit{F}($\omega_0$))};
\node[text centered] at (-3.15,2.435) {\scriptsize i = 4};
\node[draw, minimum width=1.65cm, text centered] at (-1.8,1.92)
{\footnotesize ($\omega_5$, \textit{F}($\omega_5$))};
\node[text centered] at (-3.15,1.92) {\scriptsize i = 3};
\node[draw, minimum width=1.65cm, text centered] at (-1.8,1.405)
{\footnotesize ($\omega_4$, \textit{F}($\omega_4$))};
\node[text centered] at (-3.15,1.405) {\scriptsize i = 2};
\node[draw, minimum width=1.65cm, text centered] at (-1.8,0.89)
{\footnotesize ($\omega_1$, \textit{F}($\omega_1$))};
\node[text centered] at (-3.15,0.89) {\scriptsize i = 1};
\draw[->] (-1.8, 0.45) -- (-0.32, -0.7);
\end{tikzpicture}
\caption{As a black-box optimization algorithm samples from $\mathrm{\Omega}$, it produces an associated probability distribution $P_i$ based on the search history. When a sample $\omega_k$ corresponding to location $k$ in $\mathrm{\Omega}$ is evaluated using the external information resource $F$, the tuple ($\omega_k$, $F(\omega_k)$) is added to the search history.}
\label{fig:jellyfish}
\end{figure}
\subsection{Measuring Performance}
Within this search framework, we measure a learning algorithm's performance by examining the expected per-query probability of success. This measure is more effective than measuring an algorithm's total probability of success, since the number of sampling steps may vary depending on the algorithm used. Furthermore, the per-query probability of success naturally accounts for sampling procedures that may involve repeatedly sampling the same points in the search space, as is the case for genetic algorithms \cite{goldberg1999genetic,reeves2002genetic}. Thus, this measure effectively handles search algorithms that balance exploration and exploitation.
The expected per-query probability of success is defined as
\[ q(T,F) = \mathbb{E}_{\tilde{P}, H} \Bigg[ \frac{1}{|\tilde{P}|} \sum_{i=1}^{|\tilde{P}|} P_i(\omega \in T) \Bigg| F \Bigg] \]
where $\tilde{P}$ is a sequence of probability distributions over the search space (where each timestep \(i\) produces a distribution $P_i$), \(T\) is the target, \(F\) is the information resource, and \(H\) is the search history. The number of queries during a search is equal to the length of the probability distribution sequence, $|\tilde{P}|$.
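As a concrete illustration, the short sketch below evaluates this quantity for one realized search history over a toy five-element search space; the distributions $P_i$ are invented for illustration, not produced by any particular algorithm.
\begin{verbatim}
import numpy as np

# Toy per-query success computation for one realized search history.
# Search space of 5 elements; the P_i below are illustrative only.
t = np.array([0, 1, 0, 0, 1])                  # target function (k-hot)
P_seq = [np.array([0.2, 0.2, 0.2, 0.2, 0.2]),  # P_1: uniform first query
         np.array([0.1, 0.4, 0.1, 0.1, 0.3]),  # P_2: mass shifts toward T
         np.array([0.0, 0.5, 0.0, 0.0, 0.5])]  # P_3: all mass on T
q = np.mean([P @ t for P in P_seq])            # average of P_i(w in T)
print(q)                                       # 0.7
\end{verbatim}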
\section{Main Results}
We present and explain our main results in this section. Note that full proofs for the following results can be found in the Appendix. We proceed by defining our measures of bias and target divergence, then show conservation results of bias and give bounds on the probability of successful search and the proportion of favorable search problems given a fixed target.
\begin{definition}
\label{def:bias_D}
(Bias between a distribution over information resources and a fixed target) Let $\mathcal{D}$ be a distribution over a space of information resources $\mathcal{F}$ and let $F \sim \mathcal{D}$. For a given $\mathcal{D}$ and a fixed $k$-hot target function \(\bm{t}\),
\begin{align*}
\bias(\mathcal{D}, \bm{t})
&= \mathbb{E}_{\mathcal{D}} \left[\bm{t}^\top \overline{P}_{F}\right] - \frac{k}{|\Omega|} \\
&= \bm{t}^\top \mathbb{E}_{\mathcal{D}}\left[\,\overline{P}_{F}\right] - \frac{\|\bm{t}\|^2}{|\Omega|} \\
&= \bm{t}^\top \int_{\mathcal{F}} \overline{P}_{f} \mathcal{D}(f) \dif f - \frac{\|\bm{t}\|^2}{|\Omega|}
\end{align*}
where $\overline{P}_{f}$ is the vector representation of the averaged probability distribution (conditioned on $f$) induced on $\Omega$ during the course of the search, which can be shown to imply $q(t,f) = \bm{t}^\top \overline{P}_{f}$.
\end{definition}
\begin{definition}
\label{def:bias_B}
(Bias between a finite set of information resources and a fixed target) Let $\mathcal{U}[\mathcal{B}]$ denote a uniform distribution over a finite set of information resources \(\mathcal{B}\). For a random quantity $F \sim \mathcal{U}[\mathcal{B}]$, the averaged \(|\Omega|\)-length simplex vector $\overline{P}_{F}$, and a fixed $k$-hot target function \(\bm{t}\),
\begin{align*}
\bias(\mathcal{B}, \bm{t})
&= \mathbb{E}_{\mathcal{U}[\mathcal{B}]}[\bm{t}^\top \overline{P}_{F}] - \frac{k}{|\Omega|} \\
&= \bm{t}^\top \mathbb{E}_{\mathcal{U}[\mathcal{B}]}[\overline{P}_{F}] - \frac{k}{|\Omega|} \\
&= \bm{t}^\top \left( \frac{1}{|\mathcal{B}|}\sum_{f \in \mathcal{B}} \overline{P}_{f} \right) - \frac{\|\bm{t}\|^{2}}{|\Omega|}.
\end{align*}
\end{definition}
We define bias as the difference between average performance of a search algorithm on a fixed target over a set of information resources and the baseline search performance for the case of uniform random sampling. Definition \ref{def:bias_D} is a generalized form of Definition \ref{def:bias_B}, characterizing the alignment between a target function and a distribution over information resources instead of a fixed set.
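The following small numerical sketch instantiates Definition~\ref{def:bias_B} on a toy search space; the averaged distributions are illustrative and not produced by any particular algorithm.
\begin{verbatim}
import numpy as np

# Sketch of bias(B, t) from Definition 2; the averaged distributions
# P_f below are illustrative.
t = np.array([1, 0, 0, 1])                 # k-hot target, k=2, |Omega|=4
P_fs = [np.array([0.40, 0.10, 0.10, 0.40]),
        np.array([0.30, 0.20, 0.20, 0.30]),
        np.array([0.25, 0.25, 0.25, 0.25])]
k, n = int(t.sum()), len(t)
bias = t @ np.mean(P_fs, axis=0) - k / n
print(bias)  # ~0.133 > 0: mass concentrates on the target on average
\end{verbatim}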
\begin{definition}
(Target Divergence) The measure of similarity between a fixed target function \textbf{t} and the expected value of the averaged \(|\Omega|\)-length simplex vector $\overline{P}_{F}$, where $F\sim \mathcal{D}$, is defined as
\[\theta = \arccos \left ( \frac{\bm{t}^{\top} \mathbb{E}_{\mathcal{D}}[\overline{P}_{F}]}{\|\bm{t}\| \|\mathbb{E}_{\mathcal{D}}[\overline{P}_{F}]\|} \right)\]
\end{definition}
Similar to Wolpert and Macready's geometric interpretation of the No Free Lunch theorems in \cite{wolpert1997nfl}, we can evaluate how far a target function $\bm{t}$ deviates from the averaged probability simplex vector $\overline{P}_{f}$ for a given search problem. In this paper, we use cosine similarity to measure the level of similarity between $\bm{t}$ and $\overline{P}_{f}$. Geometrically, the target divergence is the angle between the target vector and the averaged $|\Omega|$-length simplex vector. Figure \ref{fig:tardiv} depicts the target divergence for various levels of alignment between $\bm{t}$ and $\overline{P}_{f}$.
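The following sketch computes the target divergence for the first example in Figure~\ref{fig:tardiv}, reproducing the reported angle of roughly $31\degree$; the vectors are taken directly from that example.
\begin{verbatim}
import numpy as np

# Target divergence for the first panel of the figure: the angle
# between the target vector and the averaged simplex vector.
t = np.array([0.0, 1.0, 1.0])
P_bar = np.array([0.0, 0.2, 0.8])
cos_theta = t @ P_bar / (np.linalg.norm(t) * np.linalg.norm(P_bar))
print(np.degrees(np.arccos(cos_theta)))  # ~31 degrees
\end{verbatim}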
\tdplotsetmaincoords{70}{130}
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.3\textwidth}
\begin{tikzpicture}[tdplot_main_coords, scale=1.2]
\def2{2}
\def1{1}
\def.2{.2}
\coordinate (O) at (0,0,0);
\draw [->] (0,0,0) -- (2,0,0) node [anchor=north east] {$x$};
\draw [->] (0,0,0) -- (0,2,0) node [anchor=north west] {$y$};
\draw [->] (0,0,0) -- (0,0,2) node [anchor=south] {$z$};
\pgfmathtruncatemacro{\nticks}{floor(2)-1}
\begin{scope}[
help lines,
every node/.style={inner sep=1pt,text=black}
]
\foreach \coord in {1,...,\nticks} {
\draw (\coord,.2,0) -- ++(0,-.2,0) -- ++(0,0,.2)
node [pos=1,left] {\coord};
\draw (.2,\coord,0) -- ++(-.2,0,0) -- ++(0,0,.2)
node [pos=1,right] {\coord};
\draw (.2,0,\coord) -- ++(-.2,0,0) -- ++(0,.2,0)
node [at start,above right] {\coord};
}
\draw[-stealth,color=black] (0,0,0) -- (0,1,1) node [anchor= west] {$\bm{t_1}$};
\draw[-stealth,color=black] (0,0,0) -- (0,.2,.8) node [anchor= south west] {$\overline{P}_{f_1}$};
\tdplotsetrotatedcoords{0}{90}{90}
\tdplotdrawarc[tdplot_rotated_coords,color=black]{(0,0,0)}{0.4}{45}{76}{anchor=south west,color=black}{$\theta_1$}
\end{scope}
\filldraw [opacity=.33,red] (1,0,0) -- (0,1,0)
-- (0,0,1) -- cycle;
\end{tikzpicture}
\caption{$\overline{P}_{f_1} = [0, 0.2,0.8]^{\top}$, $\bm{t_1} = [0,1,1]^{\top}$, and $\theta_1 \approx 31 \degree$. While all of the probability mass in $\overline{P}_{f_1}$ lies on the target set $\bm{t_1}$, the target divergence takes value greater than $0\degree$ because $\overline{P}_{f_1}$ is not uniform. \\}\label{fig:tardiva}
\end{subfigure} \hfill
\begin{subfigure}[b]{0.3\textwidth}
\begin{tikzpicture}[tdplot_main_coords, scale=1.2]
\def2{2}
\def1{1}
\def.2{.2}
\coordinate (O) at (0,0,0);
\draw [->] (0,0,0) -- (2,0,0) node [anchor=north east] {$x$};
\draw [->] (0,0,0) -- (0,2,0) node [anchor=north west] {$y$};
\draw [->] (0,0,0) -- (0,0,2) node [anchor=south] {$z$};
\pgfmathtruncatemacro{\nticks}{floor(2)-1}
\begin{scope}[
help lines,
every node/.style={inner sep=1pt,text=black}
]
\foreach \coord in {1,...,\nticks} {
\draw (\coord,.2,0) -- ++(0,-.2,0) -- ++(0,0,.2)
node [pos=1,left] {\coord};
\draw (.2,\coord,0) -- ++(-.2,0,0) -- ++(0,0,.2)
node [pos=1,right] {\coord};
\draw (.2,0,\coord) -- ++(-.2,0,0) -- ++(0,.2,0)
node [at start,above right] {\coord};
}
\draw[-stealth,color=black] (0,0,0) -- (1,0,1) node [anchor= east] {$\bm{t_2}$};
\draw[-stealth,color=black] (0,0,0) -- (0,1,0) node [anchor= north] {$\overline{P}_{f_2}$};
\tdplotsetrotatedcoords{0}{135}{90}
\tdplotdrawarc[tdplot_rotated_coords,color=black]{(0,0,0)}{0.35}{00}{90}{anchor=south west,color=black}{$\theta_2$}
\end{scope}
\filldraw [opacity=.33,red] (1,0,0) -- (0,1,0)
-- (0,0,1) -- cycle;
\end{tikzpicture}
\caption{$\overline{P}_{f_2} =[0,1,0]^{\top} $, $\bm{t_2}= [1,0,1]^{\top}$, and $\theta_2 = 90 \degree$. Since none of the non-zero probability mass in $\overline{P}_{f_2}$ aligns with their corresponding target elements in the target set $\bm{t_2}$, the target divergence is maximized at $90 \degree$.}\label{fig:tardivb}
\end{subfigure} \hfill
\begin{subfigure}[b]{0.3\textwidth}
\begin{tikzpicture}[tdplot_main_coords, scale=1.2]
\def2{2}
\def1{1}
\def.2{.2}
\coordinate (O) at (0,0,0);
\draw [->] (0,0,0) -- (2,0,0) node [anchor=north east] {$x$};
\draw [->] (0,0,0) -- (0,2,0) node [anchor=north west] {$y$};
\draw [->] (0,0,0) -- (0,0,2) node [anchor=south] {$z$};
\pgfmathtruncatemacro{\nticks}{floor(2)-1}
\begin{scope}[
help lines,
every node/.style={inner sep=1pt,text=black}
]
\foreach \coord in {1,...,\nticks} {
\draw (\coord,.2,0) -- ++(0,-.2,0) -- ++(0,0,.2)
node [pos=1,left] {\coord};
\draw (.2,\coord,0) -- ++(-.2,0,0) -- ++(0,0,.2)
node [pos=1,right] {\coord};
\draw (.2,0,\coord) -- ++(-.2,0,0) -- ++(0,.2,0)
node [at start,above right] {\coord};
}
\draw[-stealth,color=black] (0,0,0) -- (1,1,0) node [anchor= north] {$\bm{t_3}$};
\draw[-stealth,color=black] (0,0,0) -- (0.5,0.5,0) node [anchor = west] {$\overline{P}_{f_3}$};
\end{scope}
\filldraw [opacity=.33,red] (1,0,0) -- (0,1,0)
-- (0,0,1) -- cycle;
\end{tikzpicture}
\caption{$\overline{P}_{f_3} = [0.5,0.5,0]^{\top}$, $\bm{t_3} = [1,1,0]^{\top}$, and $\theta_3 = 0 \degree$. Since $\overline{P}_{f_3}$ places all of its probability mass uniformly on the target set, the target divergence is minimized at $0 \degree$. \\\\} \label{fig:tardivc}
\end{subfigure}
\caption{These examples visualize the target divergence for various possible combinations of target functions and simplex vectors. Figure \ref{fig:tardivb} demonstrates minimum alignment, while Figure \ref{fig:tardivc} demonstrates maximum alignment.}
\label{fig:tardiv}
\end{figure}
\begin{restatable}[Improbability of Favorable Information Resources]{theorem}{iofir}
Let $\mathcal{D}$ be a distribution over a set of information resources $\mathcal{F}$, let $F$ be a random variable such that $F \sim \mathcal{D}$, let $t \subseteq \Omega$ be an arbitrary fixed $k$-sized target set with corresponding target function $\bm{t}$, and let $q(t,F)$ be the expected per-query probability of success for algorithm $\mathcal{A}$ on search problem $(\Omega,t,F)$. Then, for any $q_{\mathrm{min}} \in [0,1]$,
\begin{align*}
\Pr(q(t, F) \geq q_\mathrm{min}) &\leq \frac{p + \bias(\mathcal{D}, \bm{t})}{q_{\mathrm{min}}}
\end{align*}
where $p = \frac{k}{|\Omega|}$.
\label{thm:iofir}
\end{restatable}
\noindent
Since the size of the target set $t$ is usually small relative to the size of the search space $\mathrm{\Omega}$, $p$ is also usually small. Following the above results, we see that the probability that a search problem with an information resource drawn from $\mathcal{D}$ is favorable is bounded by a low value. This bound tightens as we increase our minimum threshold of success, $q_\mathrm{min}$. Notably, our bound relaxes with the introduction of bias.
\begin{restatable}[Probability of Success Under Bias-Free Search]{corollary}{reducedprob}
When \(\bias(\mathcal{D}, \bm{t}) = 0\),
\begin{align*}
\Pr(q(t, F) \geq q_\mathrm{min}) &\leq \frac{p}{q_{\mathrm{min}}}
\end{align*}
\end{restatable}
\noindent Directly following Theorem~\ref{thm:iofir}, if the algorithm does not induce bias on $\bm{t}$ given a distribution over a set of information resources, the probability of successful search by a favorable information resource cannot be any higher than that of uniform random sampling divided by the minimum performance that we specify.
\begin{restatable}[Geometric Divergence]{corollary}{geometricdivergence}
\begin{align*}
\Pr(q(t, F) \geq q_\mathrm{min}) &\leq \frac{\sqrt{k} \cos (\theta)}{q_{\mathrm{min}}} \\
&= \frac{\| \bm{t} \| \cos (\theta)}{q_{\mathrm{min}}}
\end{align*}
\end{restatable}
\noindent
This result shows that greater geometric alignment between the target vector and expected distribution over the search space loosens the upper bound on the probability of successful search. Connecting this to our other results, the geometric alignment can be viewed as another interpretation of the bias the algorithm places on the target set.
\begin{restatable}[Conservation of Bias]{theorem}{conservation}
Let $\mathcal{D}$ be a distribution over a set of information resources and let $\tau_{k} = \{ \bm{t} | \bm{t} \in \{ 0, 1 \}^{ |\Omega| }, ||\bm{t}|| = \sqrt{k} \}$ be the set of all $|\Omega|$-length $k$-hot vectors. Then for any fixed algorithm $\mathcal{A}$,
\begin{align*}
\sum_{\bm{t} \in \tau_{k}} \bias(\mathcal{D},\bm{t}) = 0
\end{align*}
\label{thm:consbias}
\end{restatable}
\noindent
Since bias is a conserved quantity, an algorithm that is biased towards any particular target is equally biased against other targets, as is the case in Schaffer's conservation law for generalization performance~\cite{schaffer1994conservation}. This conservation property holds regardless of the algorithm or the distribution over information resources. Positive dependence between targets and information resources is the grounds for all successful machine learning~\cite{montanez2017dissertation}, and this conservation result is another manifestation of this general property of learning.
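A small numerical check of this conservation property, using an arbitrary illustrative expected distribution over a four-element search space, is sketched below.
\begin{verbatim}
import numpy as np
from itertools import combinations

# Numerical check of the conservation of bias on a toy setting:
# summing bias over all k-hot targets gives zero for any fixed
# expected distribution E_D[P_F] (the vector below is illustrative).
n, k = 4, 2
P_bar = np.array([0.5, 0.3, 0.1, 0.1])   # any simplex vector
total = 0.0
for idx in combinations(range(n), k):
    t = np.zeros(n)
    t[list(idx)] = 1
    total += t @ P_bar - k / n
print(round(total, 12))  # 0.0 (up to floating-point error)
\end{verbatim}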
\begin{restatable}[Famine of Favorable Information Resources]{theorem}{fofir}
Let $\mathcal{B}$ be a finite set of information resources and let $t \subseteq \Omega$ be an arbitrary fixed $k$-size target set with corresponding target function $\bm{t}$. Define
\begin{align*}
\mathcal{B}_{q_{\mathrm{min}}} &= \{f \mid f \in \mathcal{B}, q(t,f) \geq q_{\mathrm{min}} \},
\end{align*}
where $q(t,f)$ is the expected per-query probability of success for algorithm $\mathcal{A}$ on search problem $(\Omega, t,f)$ and $q_{\mathrm{min}} \in [0,1]$ represents the minimally acceptable per-query probability of success. Then,
\begin{align*}
\frac{|\mathcal{B}_{q_{\mathrm{min}}}|}{|\mathcal{B}|} &\leq \frac{p + \bias(\mathcal{B}, \bm{t})}{q_{\mathrm{min}}}
\end{align*}
where $p = \frac{k}{|\Omega|}$.
\label{thm:fofir}
\end{restatable}
\noindent
This theorem shows us that unless our set of information resources is biased towards our target, only a small proportion of information resources will yield a high probability of search success. In most practical cases, $p$ is small enough that uniform random sampling is not considered a plausible strategy, since we typically have small targets embedded in large search spaces. Thus the bound is typically very constraining. The set of information resources will be overwhelmingly unhelpful unless we restrict the given information resources to be positively biased towards the specified target.
\begin{restatable}[Proportion of Successful Problems Under Bias-Free Search]{corollary}{reducedproportion}
When \(\bias(\mathcal{B}, \bm{t}) = 0\),
\begin{align*}
\frac{|\mathcal{B}_{q_{\mathrm{min}}}|}{|\mathcal{B}|} &\leq \frac{p}{q_{\mathrm{min}}}
\end{align*}
\label{cor:reducedproportion}
\end{restatable}
\noindent Directly following Theorem \ref{thm:fofir}, if the algorithm does not induce bias on $\bm{t}$ given a set of information resources, the proportion of successful search problems cannot be any higher than the single-query success probability of uniform random sampling divided by the minimum specified performance.
\begin{restatable}[Futility of Bias-Free Search]{theorem}{futility}
For any fixed algorithm $\mathcal{A}$, fixed target $t \subseteq \Omega$ with corresponding target function $\bm{t}$, and distribution over information resources $\mathcal{D}$, if $\bias(\mathcal{D}, \bm{t}) = 0$, then
\begin{align*}
\Pr(\omega \in t; \mathcal{A}) &= p
\end{align*}
where $\Pr(\omega \in t; \mathcal{A})$ represents the per-query probability of successfully sampling an element of $t$ using $\mathcal{A}$, marginalized over information resources $F \sim \mathcal{D}$, and $p$ is the single-query probability of success under uniform random sampling.
\end{restatable}
\noindent
This result shows that without bias, an algorithm can perform no better than uniform random sampling. This is a generalization of Mitchell's idea of the futility of removing biases for binary classification \cite{needforbiases} and Monta\~nez's formal proof for the need for bias for multi-class classification \cite{montanez2017dissertation}. This result shows that bias is necessary for any machine learning or search problem to have better than random chance performance.
\begin{restatable}[Famine of Applicable Targets]{theorem}{foat}
Let $\mathcal{D}$ be a distribution over a finite set of information resources. Define
\begin{align*}
\tau_k &= \{t \mid t \subseteq \Omega, |t| = k\} \\
\tau_{q_{\mathrm{min}}} &= \{t \mid t \in \tau_k, \bias(\mathcal{D}, \bm{t}) \geq q_{\mathrm{min}}\}
\end{align*}
where $\bm{t}$ is the target function corresponding to the target set $t$. Then,
\[
\frac{|\tau_{q_\mathrm{min}}|}{|\tau_k|} \leq \frac{p}{p + q_\mathrm{min}} \leq \frac{p}{q_\mathrm{min}}
\]
where $p = \frac{k}{|\Omega|}$.
\end{restatable}
\noindent
This theorem shows that the proportion of target sets for which our algorithm is highly biased is small, given that $p$ is small relative to $q_\mathrm{min}$. A high value of $\bias(\mathcal{D}, \bm{t})$ implies that the algorithm, given $\mathcal{D}$, places a large amount of mass on $\bm{t}$ and a small amount of mass on other target functions. Consequently, our algorithm is acceptably biased toward fewer target sets as we increase our minimum threshold of bias.
\begin{restatable}[Famine of Favorable Biasing Distributions]{theorem}{fofbd}
Given a fixed target function $\bm{t}$, a finite set of information resources $\mathcal{B}$, and a set $\mathcal{P} = \{\mathcal{D}\mid \mathcal{D} \in \mathbb{R}^{|\mathcal{B}|}, \sum_{f \in \mathcal{B}} \mathcal{D}(f) = 1 \}$ of all discrete $|\mathcal{B}|$-dimensional simplex vectors,
\[
\frac{\mu(\mathcal{G}_{\bm{t}, q_\mathrm{min}})}{\mu(\mathcal{P})} \leq \frac{p + \bias(\mathcal{B}, \bm{t})}{q_\mathrm{min}}
\]
where $\mathcal{G}_{\bm{t}, q_\mathrm{min}} = \{\mathcal{D} \mid \mathcal{D} \in \mathcal{P}, \bias(\mathcal{D}, \bm{t}) \geq q_\mathrm{min}\}$ and $\mu$ is Lebesgue measure.
\end{restatable}
\noindent
We see that the proportion of distributions over $\mathcal{B}$ for which our algorithm is acceptably biased towards a fixed target function $\bm{t}$ decreases as we increase our minimum acceptable level of bias, $q_\mathrm{min}$. Additionally, the greater the amount of bias induced by our algorithm given a set of information resources on a fixed target, the higher the probability of identifying a suitable distribution that achieves successful search. However, unless the set is already filled with favorable elements, finding a minimally favorable distribution over that set is difficult.
\begin{restatable}[Bias Over Distributions]{theorem}{density}
Given a finite set of information resources $\mathcal{B}$, a fixed target function $\bm{t}$, and a set $\mathcal{P} = \{\mathcal{D}\mid \mathcal{D} \in \mathbb{R}^{|\mathcal{B}|}, \sum_{f \in \mathcal{B}} \mathcal{D}(f) = 1 \}$ of discrete $|\mathcal{B}|$-dimensional simplex vectors,
\[
\int_\mathcal{P} \bias(\mathcal {D}, \bm{t}) \dif\mathcal {D} = C \cdot \bias(\mathcal{B}, \bm{t})
\]
where $C = \int_\mathcal{P} \dif\mathcal {D}$ is the uniform measure of set $\mathcal{P}$. For an unbiased set $\mathcal{B}$,
\[
\int_\mathcal{P} \bias(\mathcal{D}, \bm{t}) \dif\mathcal {D} = 0
\]
\end{restatable}
\noindent
This theorem states that the total bias on a fixed target function over all possible distributions is proportional to the bias induced by the algorithm given $\mathcal{B}$. When there is no bias over a set of information resources, the total bias over all distributions sums to $0$. It follows that any distribution $\mathcal{D} \in \mathcal{P}$ for which the algorithm places positive bias on $\bm{t}$ is offset by one or more distributions for which the algorithm places negative bias on $\bm{t}$.
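The following Monte Carlo sketch illustrates this on a toy set of information resources: averaging $\bias(\mathcal{D}, \bm{t})$ over simplex vectors drawn uniformly (a flat Dirichlet) recovers $\bias(\mathcal{B}, \bm{t})$. The distributions are illustrative only.
\begin{verbatim}
import numpy as np

# Monte Carlo check of the Bias Over Distributions theorem on a toy
# set B: averaging bias(D, t) over simplex vectors D drawn uniformly
# (Dirichlet with all-ones parameters) recovers bias(B, t).
rng = np.random.default_rng(0)
t = np.array([1, 0, 0, 1])
P_fs = np.array([[0.40, 0.10, 0.10, 0.40],
                 [0.30, 0.20, 0.20, 0.30],
                 [0.25, 0.25, 0.25, 0.25]])
k, n = t.sum(), len(t)
bias_B = t @ P_fs.mean(axis=0) - k / n
D = rng.dirichlet(np.ones(len(P_fs)), size=200_000)
bias_D = (D @ P_fs) @ t - k / n
print(bias_D.mean(), bias_B)  # approximately equal
\end{verbatim}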
\begin{restatable}[Conservation of Bias Over Distributions]{corollary}{conservationdistributions}
Let $\tau_{k} = \{ \bm{t} | \bm{t} \in \{ 0, 1 \}^{ |\Omega| }, ||\bm{t}|| = \sqrt{k} \}$ be the set of all $|\Omega|$-length $k$-hot vectors. Then, \[\sum_{\bm{t} \in \tau_{k}} \int_{\mathcal{P}} \bias(\mathcal{D}, \bm{t}) \dif\mathcal{D} = 0\]
\end{restatable}
\noindent
This result extends our conservation results, showing that the total bias over all distributions and all $k$-size target sets sums to zero, even when beginning with a set of information resources that is favorably biased towards a particular target.
\section{Examples}
\subsection{Genetic Algorithms}
Genetic algorithms are optimization methods inspired by evolutionary biology~\cite{reeves2002genetic}. We can represent genetic algorithms in our search framework as follows (a toy instantiation is sketched after the list):
\begin{itemize}
\item $\mathcal{A}$ - a genetic algorithm, with standard variation (mutation, crossover, etc.) operators.
\item $\Omega$ - space of possible configurations (genotypes).
\item $T$ - set of all configurations which perform well on some task.
\item $F$ - a fitness function which can evaluate a configuration's fitness.
\item $(\Omega,T, F)$ - genetic algorithm task.
\end{itemize}
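To make the correspondence concrete, here is a minimal Python sketch (our illustration; the names \texttt{OMEGA}, \texttt{TARGET}, \texttt{fitness} and all numbers are hypothetical and not from the original paper):
\begin{verbatim}
import random

OMEGA = [format(i, "08b") for i in range(256)]    # genotype space Omega
TARGET = {g for g in OMEGA if g.count("1") >= 7}  # target set T, |T| = 9

def fitness(g):                                   # information resource F
    return g.count("1")

def uniform():
    return random.choice(OMEGA)

def biased():
    # Keep the fitter of two uniform draws: a crude use of F.
    return max(random.choice(OMEGA), random.choice(OMEGA), key=fitness)

def per_query_success(sampler, trials=20000):
    """Monte Carlo estimate of the per-query probability of success."""
    return sum(sampler() in TARGET for _ in range(trials)) / trials

print(per_query_success(uniform))  # about |T|/|Omega| = 9/256 ~ 0.035
print(per_query_success(biased))   # noticeably higher: F encodes bias
\end{verbatim}
The biased sampler merely keeps the fitter of two uniform draws; even this weak use of $F$ shifts probability mass toward $T$, which is exactly the kind of predisposition the results above quantify.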
Given any genetic algorithm that is unbiased towards a particular small target when averaged over a set of fitness functions (as in No Free Lunch scenarios), the proportion of highly favorable fitness functions in that set must also be small, which we state as a corollary following directly from Corollary~\ref{cor:reducedproportion}.
\begin{restatable}[Famine of Favorable Fitness Functions]{corollary}{fofff}
For any fixed target $t \subseteq \Omega$ and fixed genetic algorithm unbiased relative to a finite set of fitness functions $\mathcal{B}$, the proportion of fitness functions in $\mathcal{B}$ with expected per-query probability of success at least $q_{\text{min}}$ is no greater than $|t|/(q_{\text{min}}|\Omega|)$.
\end{restatable}
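For instance (hypothetical numbers), with a target of size $|t| = 100$ in a configuration space of size $|\Omega| = 10^6$ and threshold $q_{\text{min}} = 0.01$, at most
\[
\frac{|t|}{q_{\text{min}}|\Omega|} = \frac{100}{0.01 \times 10^6} = 0.01
\]
of the fitness functions in $\mathcal{B}$ can deliver the desired expected per-query probability of success.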
\subsection{Binary Classification}
We can cast binary classification as a search problem, as follows~\cite{montanez2017fof}:
\begin{itemize}
\item $\mathcal{A}$ - classification algorithm, such as an SVM or neural network.
\item $\Omega$ - space of possible binary labelings over an instance space.
\item $t \subseteq \Omega$ - set of all hypotheses with less than 10\% classification error.
\item $F$ - set of training examples, where $F(\emptyset)$ is the full set of training data and $F(c)$ is the loss on training data for hypothesis $c$.
\item $(\Omega,t, F)$ - binary classification learning task.
\end{itemize}
In our example, let $|\Omega| = 2^{100}$. Assume the size of our target set is $|t| = 2^{10}$, that the set of training examples $F$ is drawn from a distribution $\mathcal{D}$, and that the minimum performance $q_{\mathrm{min}}$ we want to achieve is $0.5$. Then, by Corollary 1, if our algorithm (relative to $\mathcal{D}$) does not place any bias on the target set,
\begin{align*}
\Pr\left(q(t, F) \geq \frac{1}{2}\right) &\leq \frac{p}{q_{\mathrm{min}}}
= \frac{\frac{2^{10}}{2^{100}}}{\frac{1}{2}}
= 2^{-89}.
\end{align*}
Thus, the probability that we will have selected a dataset that results in at least our desired level of performance is upper bounded by $2^{-89}$. Notice that if we raised the minimum threshold, then the probability would decrease---favorable datasets would become more unlikely.
To perform better than uniform random sampling, we would need to introduce bias into the algorithm. For example, predetermined information or assumptions about the target set could be used to determine which hypotheses are more plausible. The principle of Occam's razor \cite{Rasmussen:2000:OR:3008751.3008792} is often used, which is the assumption that the elements in the target set are likely the ``simpler" elements, by some definition of simplicity. Relating this to our formal definition of bias, if we introduce correct assumptions into the algorithm, then the expected alignment of the target set and the induced probability distribution over the search space increases accordingly.
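The computation above is also easy to check mechanically. The following minimal Python sketch (our illustration, not code from the cited works) evaluates the bound $p/q_{\mathrm{min}}$ with exact rational arithmetic and shows how raising $q_{\mathrm{min}}$ shrinks it:
\begin{verbatim}
from fractions import Fraction

def famine_bound(target_size, space_size, q_min):
    """Upper bound p/q_min on the probability of achieving
    per-query success at least q_min without bias."""
    p = Fraction(target_size, space_size)
    return p / q_min

# The worked example above: |t| = 2**10, |Omega| = 2**100, q_min = 1/2.
assert famine_bound(2**10, 2**100, Fraction(1, 2)) == Fraction(1, 2**89)

# Raising the threshold shrinks the bound: favorable datasets get rarer.
for q in (Fraction(1, 2), Fraction(3, 4), Fraction(9, 10)):
    print(q, famine_bound(2**10, 2**100, q))
\end{verbatim}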
\section{Conclusion}
We build on the algorithmic search framework and extend Famine of Forte results to search problems with fixed targets and varying information resources. Our notion of bias quantifies the extent to which an algorithm is predisposed to a particular fixed target. We show that bias towards any target necessarily implies bias against the other remaining targets, underscoring the fact that no universally applicable form of bias can exist. Furthermore, one cannot perform better than uniform random sampling without introducing a predisposition in the algorithm towards a desired target---unbiased algorithms are useless. Few information resources can be greatly favorable towards any fixed target, unless the algorithm is already predisposed to the target no matter the information resource given. Thus, in machine learning as elsewhere, biases are needed for better than chance performance. Biases must also be correct, since the effectiveness of any bias depends on how well it aligns with the given target actually being sought.
\bibliographystyle{splncs04} |
1207.5195 | \section{Introduction}
\newtheorem{Theorem}{Theorem}[section]
\newtheorem{Lemma}[Theorem]{Lemma}
\newtheorem{Corollary}[Theorem]{Corollary}
\newtheorem{Definition}[Theorem]{Definition}
In the theory of micromagnetics, to any domain $\Omega\subset \mathbb R^3$ and unit vector field (called the magnetization) $m\colon\Omega\to\mathbb S^2$ with $m=0$ in $\mathbb R^3\setminus \Omega$ one assigns the energy of micromagnetics:
$$E(m)=A_{ex}\int_\Omega|\nabla m|^2+K_d\int_{\mathbb R^3}|\nabla u|^2+Q\int_\Omega\varphi(m)-2\int_\Omega H_{ext}\cdot m,$$
where $A_{ex},$ $K_d$, $Q$ are material parameters, $H_{ext}$ is the externally applied magnetic field, $\varphi$ is the anisotropy energy density and $u$ is obtained from Maxwell's equations of magnetostatics,
$$
\begin{cases}
\mathrm{curl} H_{ind}=0 & \quad\text{in}\quad \mathbb R^3\\
\mathrm{div}(H_{ind}+m)=0 & \quad\text{in}\quad \mathbb R^3,
\end{cases}
$$
i.e., since $\mathrm{curl}\, H_{ind}=0$ one can write $H_{ind}=-\nabla u$, and $u$ is then a weak solution of
$$\triangle u= \mathrm{div}\, m\qquad \text{in}\qquad \mathbb R^3.$$
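For the reader's convenience we record the standard weak formulation (it is precisely the identity used later in the compactness argument of Lemma~\ref{lem:compactness}): $u$ with $\nabla u\in L^2(\mathbb R^3)$ is a weak solution of the above equation if
$$\int_{\mathbb{R}^3}\nabla u\cdot\nabla\varphi=\int_{\Omega}m\cdot\nabla\varphi\qquad\text{for all}\qquad\varphi\in C_0^{\infty}(\mathbb{R}^3).$$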
According to micromagnetics, stable magnetization patterns are described by the minimizers of the micromagnetic energy functional, see [\ref{bib:D.S.KMO1},\ref{bib:D.S.KMO2},\ref{bib:HSch}].
The study of magnetic wires and thin films has attracted significant attention in recent years, see [\ref{bib:AXFAPC},\ref{bib:BNKTE},\ref{bib:CL},\ref{bib:FSSSTDF},\ref{bib:HK},\ref{bib:Kuehn},\ref{bib:NT},\ref{bib:NWBKUFK},\ref{bib:PGDLFLOF},\ref{bib:Sanchez},
\ref{bib:SS},\ref{bib:WNU}] for wires and [\ref{bib:CMO},\ref{bib:D.S.KMO1},\ref{bib:D.S.KMO2},\ref{bib:GC1},\ref{bib:GC2},\ref{bib:KS1},\ref{bib:KS2},\ref{bib:Kurzke}] for thin films. It has been suggested in [\ref{bib:AXFAPC}] that magnetic nanowires can be effectively used as storage devices. When a homogeneous external field is applied in the axial direction of a magnetic wire, opposing the homogeneous magnetization direction (see Fig. 1), then at a critical strength of the field the reversal of the magnetization typically starts at one end of the wire, creating a domain wall which starts moving along the wire. The domain wall separates the reversed and the not yet reversed parts of the wire (see Fig. 1). It is known that the magnetization pattern reversal time is closely related to the writing and reading speed of such a device, thus it is crucial to understand the magnetization reversal and switching processes. Several authors have numerically, experimentally and analytically observed two different magnetization reversal modes in magnetic nanowires [\ref{bib:FSSSTDF},\ref{bib:HK},\ref{bib:Har},\ref{bib:Kuehn}]. In [\ref{bib:FSSSTDF}] the magnetization reversal process has been studied numerically in cobalt nanowires via the Landau-Lifshitz-Gilbert equation. Two different domain wall types were observed. For thin cobalt wires, 10 nm in diameter, the transverse mode has been observed: the magnetization is constant on each cross section and moves along the wire. For thick wires, with diameters bigger than 20 nm, the vortex wall has been observed: the magnetization is approximately tangential to the boundary and forms a vortex which propagates along the wire. In [\ref{bib:HK}] the magnetization reversal process has been studied both numerically and experimentally. By considering a conical wire whose cross-sectional diameter increases very slowly, the switching of the magnetization from the vortex wall to the transverse wall at a critical diameter has been observed, as the domain wall moves along the wire.
The results in [\ref{bib:FSSSTDF}] and [\ref{bib:HK}] were the same: in thin wires the transverse wall occurs, while in thick wires the vortex wall occurs.
\\
\setlength{\unitlength}{1.2mm}
\begin{picture}(90,28)
\put(23,27){\textbf{Homogeneous magnetization}}
\put(10,10){\line(1,0){70}}
\put(80,10){\line(0,1){15}}
\put(10,25){\line(1,0){70}}
\put(10,10){\line(0,1){15}}
\thicklines
\put(11,12){\vector(1,0){5}}
\put(11,15){\vector(1,0){5}}
\put(11,18){\vector(1,0){5}}
\put(11,21){\vector(1,0){5}}
\put(11,24){\vector(1,0){5}}
\put(18,12){\vector(1,0){5}}
\put(18,15){\vector(1,0){5}}
\put(18,18){\vector(1,0){5}}
\put(18,21){\vector(1,0){5}}
\put(18,24){\vector(1,0){5}}
\put(25,12){\vector(1,0){5}}
\put(25,15){\vector(1,0){5}}
\put(25,18){\vector(1,0){5}}
\put(25,21){\vector(1,0){5}}
\put(25,24){\vector(1,0){5}}
\put(32,12){\vector(1,0){5}}
\put(32,15){\vector(1,0){5}}
\put(32,18){\vector(1,0){5}}
\put(32,21){\vector(1,0){5}}
\put(32,24){\vector(1,0){5}}
\put(39,12){\vector(1,0){5}}
\put(39,15){\vector(1,0){5}}
\put(39,18){\vector(1,0){5}}
\put(39,21){\vector(1,0){5}}
\put(39,24){\vector(1,0){5}}
\put(46,12){\vector(1,0){5}}
\put(46,15){\vector(1,0){5}}
\put(46,18){\vector(1,0){5}}
\put(46,21){\vector(1,0){5}}
\put(46,24){\vector(1,0){5}}
\put(53,12){\vector(1,0){5}}
\put(53,15){\vector(1,0){5}}
\put(53,18){\vector(1,0){5}}
\put(53,21){\vector(1,0){5}}
\put(53,24){\vector(1,0){5}}
\put(60,12){\vector(1,0){5}}
\put(60,15){\vector(1,0){5}}
\put(60,18){\vector(1,0){5}}
\put(60,21){\vector(1,0){5}}
\put(60,24){\vector(1,0){5}}
\put(67,12){\vector(1,0){5}}
\put(67,15){\vector(1,0){5}}
\put(67,18){\vector(1,0){5}}
\put(67,21){\vector(1,0){5}}
\put(67,24){\vector(1,0){5}}
\put(74,12){\vector(1,0){5}}
\put(74,15){\vector(1,0){5}}
\put(74,18){\vector(1,0){5}}
\put(74,21){\vector(1,0){5}}
\put(74,24){\vector(1,0){5}}
\end{picture}
\begin{picture}(90,26)
\put(25,27){\textbf{180 degree domain wall}}
\put(10,10){\line(1,0){70}}
\put(80,10){\line(0,1){15}}
\put(10,25){\line(1,0){70}}
\put(10,10){\line(0,1){15}}
\put(15,2){\textbf{ Figure 1.}}
\thicklines
\put(50,6){\vector(-1,0){15}}
\put(40,7){\textbf{$H_{ext}$}}
\put(11,12){\vector(1,0){5}}
\put(11,15){\vector(1,0){5}}
\put(11,18){\vector(1,0){5}}
\put(11,21){\vector(1,0){5}}
\put(11,24){\vector(1,0){5}}
\put(18,12){\vector(1,0){5}}
\put(18,15){\vector(1,0){5}}
\put(18,18){\vector(1,0){5}}
\put(18,21){\vector(1,0){5}}
\put(18,24){\vector(1,0){5}}
\put(25,12){\vector(1,0){5}}
\put(25,15){\vector(1,0){5}}
\put(25,18){\vector(1,0){5}}
\put(25,21){\vector(1,0){5}}
\put(25,24){\vector(1,0){5}}
\put(32,12){\vector(1,0){5}}
\put(32,15){\vector(1,0){5}}
\put(32,18){\vector(1,0){5}}
\put(32,21){\vector(1,0){5}}
\put(32,24){\vector(1,0){5}}
\put(39,12){\vector(1,0){5}}
\put(39,15){\vector(1,0){5}}
\put(39,18){\vector(1,0){5}}
\put(39,21){\vector(1,0){5}}
\put(39,24){\vector(1,0){5}}
\put(58,12){\vector(-1,0){5}}
\put(58,15){\vector(-1,0){5}}
\put(58,18){\vector(-1,0){5}}
\put(58,21){\vector(-1,0){5}}
\put(58,24){\vector(-1,0){5}}
\put(65,12){\vector(-1,0){5}}
\put(65,15){\vector(-1,0){5}}
\put(65,18){\vector(-1,0){5}}
\put(65,21){\vector(-1,0){5}}
\put(65,24){\vector(-1,0){5}}
\put(72,12){\vector(-1,0){5}}
\put(72,15){\vector(-1,0){5}}
\put(72,18){\vector(-1,0){5}}
\put(72,21){\vector(-1,0){5}}
\put(72,24){\vector(-1,0){5}}
\put(79,12){\vector(-1,0){5}}
\put(79,15){\vector(-1,0){5}}
\put(79,18){\vector(-1,0){5}}
\put(79,21){\vector(-1,0){5}}
\put(79,24){\vector(-1,0){5}}
\end{picture}
Figure 2 shows the longitudinal and cross-sectional pictures of the transverse and the vortex wall for wires with a rectangular cross section. \\
\setlength{\unitlength}{1mm}
\begin{picture}(180,93)
\put(0,15){\line(0,1){71}}
\put(0,15){\line(1,0){20}}
\put(20,15){\line(0,1){71}}
\put(0,86){\line(1,0){20}}
\put(2,15){\vector(0,1){4}}
\put(5,15){\vector(0,1){4}}
\put(8,15){\vector(0,1){4}}
\put(11,15){\vector(0,1){4}}
\put(14,15){\vector(0,1){4}}
\put(17,15){\vector(0,1){4}}
\put(2,20){\vector(0,1){4}}
\put(5,20){\vector(0,1){4}}
\put(8,20){\vector(0,1){4}}
\put(11,20){\vector(0,1){4}}
\put(14,20){\vector(0,1){4}}
\put(17,20){\vector(0,1){4}}
\put(2,25){\vector(1,3){1.2}}
\put(5,25){\vector(1,3){1.2}}
\put(8,25){\vector(1,3){1.2}}
\put(11,25){\vector(1,3){1.2}}
\put(14,25){\vector(1,3){1.2}}
\put(17,25){\vector(1,3){1.2}}
\put(2,29){\vector(1,3){1.2}}
\put(5,29){\vector(1,3){1.2}}
\put(8,29){\vector(1,3){1.2}}
\put(11,29){\vector(1,3){1.2}}
\put(14,29){\vector(1,3){1.2}}
\put(17,29){\vector(1,3){1.2}}
\put(2,33){\vector(1,2){1.8}}
\put(5,33){\vector(1,2){1.8}}
\put(8,33){\vector(1,2){1.8}}
\put(11,33){\vector(1,2){1.8}}
\put(14,33){\vector(1,2){1.8}}
\put(17,33){\vector(1,2){1.8}}
\put(2,37){\vector(1,1){3.6}}
\put(5,37){\vector(1,1){3.6}}
\put(8,37){\vector(1,1){3.6}}
\put(11,37){\vector(1,1){3.6}}
\put(14,37){\vector(1,1){3.6}}
\put(17,37){\vector(1,1){3.6}}
\put(2,41){\vector(2,1){3.6}}
\put(5,41){\vector(2,1){3.6}}
\put(8,41){\vector(2,1){3.6}}
\put(11,41){\vector(2,1){3.6}}
\put(14,41){\vector(2,1){3.6}}
\put(17,41){\vector(2,1){3.6}}
\put(2,43.5){\vector(3,1){3.6}}
\put(5,43.5){\vector(3,1){3.6}}
\put(8,43.5){\vector(3,1){3.6}}
\put(11,43.5){\vector(3,1){3.6}}
\put(14,43.5){\vector(3,1){3.6}}
\put(17,43.5){\vector(3,1){3.6}}
\put(2,45.5){\vector(4,1){4}}
\put(5,45.5){\vector(4,1){4}}
\put(8,45.5){\vector(4,1){4}}
\put(11,45.5){\vector(4,1){4}}
\put(14,45.5){\vector(4,1){4}}
\put(17,45.5){\vector(4,1){4}}
\put(1,47.5){\vector(1,0){4}}
\put(6,47.5){\vector(1,0){4}}
\put(11,47.5){\vector(1,0){4}}
\put(16,47.5){\vector(1,0){4}}
\put(2,49.5){\vector(4,-1){4}}
\put(5,49.5){\vector(4,-1){4}}
\put(8,49.5){\vector(4,-1){4}}
\put(11,49.5){\vector(4,-1){4}}
\put(14,49.5){\vector(4,-1){4}}
\put(17,49.5){\vector(4,-1){4}}
\put(2,51.5){\vector(3,-1){3.6}}
\put(5,51.5){\vector(3,-1){3.6}}
\put(8,51.5){\vector(3,-1){3.6}}
\put(11,51.5){\vector(3,-1){3.6}}
\put(14,51.5){\vector(3,-1){3.6}}
\put(17,51.5){\vector(3,-1){3.6}}
\put(2,54){\vector(2,-1){3.6}}
\put(5,54){\vector(2,-1){3.6}}
\put(8,54){\vector(2,-1){3.6}}
\put(11,54){\vector(2,-1){3.6}}
\put(14,54){\vector(2,-1){3.6}}
\put(17,54){\vector(2,-1){3.6}}
\put(2,58){\vector(1,-1){3.6}}
\put(5,58){\vector(1,-1){3.6}}
\put(8,58){\vector(1,-1){3.6}}
\put(11,58){\vector(1,-1){3.6}}
\put(14,58){\vector(1,-1){3.6}}
\put(17,58){\vector(1,-1){3.6}}
\put(2,62){\vector(1,-2){1.8}}
\put(5,62){\vector(1,-2){1.8}}
\put(8,62){\vector(1,-2){1.8}}
\put(11,62){\vector(1,-2){1.8}}
\put(14,62){\vector(1,-2){1.8}}
\put(17,62){\vector(1,-2){1.8}}
\put(2,66){\vector(1,-3){1.2}}
\put(5,66){\vector(1,-3){1.2}}
\put(8,66){\vector(1,-3){1.2}}
\put(11,66){\vector(1,-3){1.2}}
\put(14,66){\vector(1,-3){1.2}}
\put(17,66){\vector(1,-3){1.2}}
\put(2,70){\vector(1,-3){1.2}}
\put(5,70){\vector(1,-3){1.2}}
\put(8,70){\vector(1,-3){1.2}}
\put(11,70){\vector(1,-3){1.2}}
\put(14,70){\vector(1,-3){1.2}}
\put(17,70){\vector(1,-3){1.2}}
\put(2,75){\vector(0,-1){4}}
\put(5,75){\vector(0,-1){4}}
\put(8,75){\vector(0,-1){4}}
\put(11,75){\vector(0,-1){4}}
\put(14,75){\vector(0,-1){4}}
\put(17,75){\vector(0,-1){4}}
\put(2,80){\vector(0,-1){4}}
\put(5,80){\vector(0,-1){4}}
\put(8,80){\vector(0,-1){4}}
\put(11,80){\vector(0,-1){4}}
\put(14,80){\vector(0,-1){4}}
\put(17,80){\vector(0,-1){4}}
\put(2,85){\vector(0,-1){4}}
\put(5,85){\vector(0,-1){4}}
\put(8,85){\vector(0,-1){4}}
\put(11,85){\vector(0,-1){4}}
\put(14,85){\vector(0,-1){4}}
\put(17,85){\vector(0,-1){4}}
\put(0,0){\textbf{The transverse wall}}
\put(60,0){\textbf{The vortex wall}}
\put(27,40){\line(0,1){10}}
\put(27,40){\line(1,0){21}}
\put(48,40){\line(0,1){10}}
\put(27,50){\line(1,0){21}}
\put(28,41){\vector(1,0){3}}
\put(28,43){\vector(1,0){3}}
\put(28,45){\vector(1,0){3}}
\put(28,47){\vector(1,0){3}}
\put(28,49){\vector(1,0){3}}
\put(32,41){\vector(1,0){3}}
\put(32,43){\vector(1,0){3}}
\put(32,45){\vector(1,0){3}}
\put(32,47){\vector(1,0){3}}
\put(32,49){\vector(1,0){3}}
\put(36,41){\vector(1,0){3}}
\put(36,43){\vector(1,0){3}}
\put(36,45){\vector(1,0){3}}
\put(36,47){\vector(1,0){3}}
\put(36,49){\vector(1,0){3}}
\put(40,41){\vector(1,0){3}}
\put(40,43){\vector(1,0){3}}
\put(40,45){\vector(1,0){3}}
\put(40,47){\vector(1,0){3}}
\put(40,49){\vector(1,0){3}}
\put(44,41){\vector(1,0){3}}
\put(44,43){\vector(1,0){3}}
\put(44,45){\vector(1,0){3}}
\put(44,47){\vector(1,0){3}}
\put(44,49){\vector(1,0){3}}
\put(55,15){\line(0,1){71}}
\put(55,15){\line(1,0){20}}
\put(55,86){\line(1,0){20}}
\put(75,15){\line(0,1){71}}
\put(55,28){\line(1,2){20}}
\put(75,28){\line(-1,2){20}}
\put(83,30){\line(0,1){30}}
\put(83,30){\line(1,0){30}}
\put(113,30){\line(0,1){30}}
\put(83,60){\line(1,0){30}}
\put(83,30){\line(1,1){30}}
\put(83,60){\line(1,-1){30}}
\put(90.5,30){\line(1,2){15}}
\put(113,37.5){\line(-2,1){30}}
\thicklines
\put(93,55){\vector(1,0){6}}
\put(105.5,55){\vector(1,-1){3.7}}
\put(103,35){\vector(-1,0){6}}
\put(90.5,35){\vector(-1,1){3.7}}
\put(88,40){\vector(0,1){6}}
\put(88,52.5){\vector(1,1){3.7}}
\put(108,50){\vector(0,-1){6}}
\put(108,37.5){\vector(-1,-1){3.7}}
\put(65,47){\vector(0,-1){5}}
\put(63.5,44){\vector(0,-1){5}}
\put(66.5,44){\vector(0,-1){5}}
\put(62,40){\vector(0,-1){5}}
\put(65,40){\vector(0,-1){5}}
\put(68,40){\vector(0,-1){5}}
\put(60.5,36){\vector(0,-1){5}}
\put(66.5,36){\vector(0,-1){5}}
\put(63.5,36){\vector(0,-1){5}}
\put(69.5,36){\vector(0,-1){5}}
\put(58,30){\vector(0,-1){5}}
\put(63,30){\vector(0,-1){5}}
\put(68,30){\vector(0,-1){5}}
\put(73,30){\vector(0,-1){5}}
\put(58,23){\vector(0,-1){5}}
\put(63,23){\vector(0,-1){5}}
\put(68,23){\vector(0,-1){5}}
\put(73,23){\vector(0,-1){5}}
\put(65,49){\vector(0,1){5}}
\put(63.5,52){\vector(0,1){5}}
\put(66.5,52){\vector(0,1){5}}
\put(62,56){\vector(0,1){5}}
\put(65,56){\vector(0,1){5}}
\put(68,56){\vector(0,1){5}}
\put(60.5,60){\vector(0,1){5}}
\put(66.5,60){\vector(0,1){5}}
\put(63.5,60){\vector(0,1){5}}
\put(69.5,60){\vector(0,1){5}}
\put(58,66){\vector(0,1){5}}
\put(63,66){\vector(0,1){5}}
\put(68,66){\vector(0,1){5}}
\put(73,66){\vector(0,1){5}}
\put(63,73){\vector(0,1){5}}
\put(68,73){\vector(0,1){5}}
\put(73,73){\vector(0,1){5}}
\put(58,73){\vector(0,1){5}}
\put(58,80){\vector(0,1){5}}
\put(63,80){\vector(0,1){5}}
\put(68,80){\vector(0,1){5}}
\put(73,80){\vector(0,1){5}}
\put(62,49.5){\vector(0,1){3}}
\put(62,46.5){\vector(0,-1){3}}
\put(68,49.5){\vector(0,1){3}}
\put(68,46.5){\vector(0,-1){3}}
\put(62,48){\circle*{0.7}}
\put(68,48){\circle*{0.7}}
\put(58,48){\circle*{0.7}}
\put(72,48){\circle*{0.7}}
\put(58,46){\vector(0,-1){3}}
\put(58,41.5){\vector(0,-1){4.5}}
\put(58,50){\vector(0,1){3}}
\put(58,55){\vector(0,1){4.5}}
\put(72,46){\vector(0,-1){3}}
\put(72,41.5){\vector(0,-1){4.5}}
\put(72,50){\vector(0,1){3}}
\put(72,55){\vector(0,1){4.5}}
\put(5,7){\textbf{ Figure 2.}}\\
\end{picture}
\\
\\
It has been observed that there is a distinctive crossover between the two modes, which occurs at a critical diameter of the wire, and it was suggested that the magnetization switching process can be understood by analyzing the micromagnetic energy minimization problem for different diameters of the cross section. In [\ref{bib:Kuehn}], K.~K\"uhn studied $180$ degree static domain walls in magnetic wires with circular cross sections by an asymptotic analysis, proving that indeed the transverse mode must occur in thin magnetic wires. It is also shown in [\ref{bib:Kuehn}] that for thick wires the vortex wall has the optimal energy scaling and that the minimal energy scales like $R^2\sqrt{\ln R}.$ In [\ref{bib:SS}] V.~V.~Slastikov and C.~Sonnenberg studied a similar problem for finite curved wires, proving $\Gamma$-convergence of the energies as the diameter of the wire goes to zero. In [\ref{bib:Har}], the author studied the same problem as K.~K\"uhn in [\ref{bib:Kuehn}] and, independently of [\ref{bib:SS}] (see the submission and publication dates of [\ref{bib:Har}] and [\ref{bib:SS}], respectively), extended some of the results proven in [\ref{bib:Kuehn}] to arbitrary wires with a rotational symmetry. In this paper we study $180$ degree static domain walls in magnetic wires with arbitrary bounded, $C^1$ and rotationally symmetric cross sections. We generalize the existence-of-minimizers result, proven by K.~K\"uhn for circular cross sections, to wires with arbitrary bounded $C^1$ cross sections. For a class of domains we prove the convergence of almost minimizers, which is a new and much deeper result that does not follow from the $\Gamma$-convergence of the energies; it requires a considerably finer analysis of the micromagnetic energy minimization problem and of its minimizers. We also construct a vortex wall that has an energy of order $d^2\sqrt{d\ln d}$ for thick rectangular wires.
\section{The main results}
Assume $\Omega=\mathbb R\times \omega$, where $\omega\subset \mathbb R^2$ is a bounded $C^1$ domain. Consider the isotropic energy of micromagnetics without an external field, as in [\ref{bib:Kuehn},\ref{bib:SS},\ref{bib:Har}],
$$E(m)=A_{ex}\int_\Omega|\nabla m|^2+K_d\int_{\mathbb R^3}|\nabla u|^2.$$
By rescaling all coordinates one can reduce to the situation where $A_{ex}=K_d,$ so we will henceforth assume that $A_{ex}=K_d=1.$
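To sketch why such a rescaling works (a standard computation, included here for completeness): for $\lambda>0$ set $m_\lambda(x)=m(x/\lambda)$ on $\lambda\Omega$; the corresponding potential is then $u_\lambda(x)=\lambda u(x/\lambda)$, and a change of variables gives
$$E(m_\lambda)=A_{ex}\lambda\int_\Omega|\nabla m|^2+K_d\lambda^3\int_{\mathbb R^3}|\nabla u|^2,$$
so the choice $\lambda=\sqrt{A_{ex}/K_d}$ makes the two coefficients equal.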
Next we rescale the magnetization $m$ in the $y$ and $z$ coordinates such that the domain of the rescaled magnetization is fixed, i.e., if $d=\mathrm{diam}(\omega),$ then set $\acute m(x,y,z)=m(x,dy,dz).$
Denote
$$A(\Omega)=\{m\colon\Omega\to\mathbb S^2 \ : \ m\in H_{loc}^1(\Omega), \ E(m)<\infty\}.$$
We are interested in $180$ degree domain walls, so set
$$\tilde A(\Omega)=\{m\colon\Omega\to\mathbb S^2 \ : \ m-\bar e\in H^1(\Omega)\},$$
where
\begin{equation*}
\bar e(x,y,z) = \left\{
\begin{array}{rl}
(-1,0,0) & \text{if } \ \ x<-1 \\
(x,0,0) & \text{if } \ \ -1\leq x \leq 1 \\
(1,0,0) & \text{if } \ \ 1<x \\
\end{array} \right.
\end{equation*}
The objective of this work will be studying the existence of minimizers in the minimization problem
\begin{equation}
\label{minimization problem}
\inf_{m\in \tilde A(\Omega)}E(m),
\end{equation}
and the behavior of its almost minimizers, where the notion of "almost minimizers" will be defined later in Definition~\ref{def:almost.min}.
The following existence theorem is a generalization of the corresponding theorem proven for circular cross sections in [\ref{bib:Kuehn}].
\begin{Theorem}[Existence of minimizers]
\label{th:existence}
For every bounded $C^1$ domain $\omega\subset\mathbb R^2$ there exists a minimizer of $E$ in $\tilde A(\Omega).$
\end{Theorem}
It has been shown for circular wires in [\ref{bib:Kuehn}], and later for arbitrary cross sections in [\ref{bib:SS}] and for cross sections with a rotational symmetry in [\ref{bib:Har}], that as $d$ goes to zero the rescaled energy functional $\frac{E(m)}{d^2}$ $\Gamma$-converges to a one dimensional energy $E_0(m^0)$ under the following notion of convergence of magnetization vectors:
\begin{Definition}
\label{notion of convergence}
The sequence $\{\acute m^n\}\subset A(\Omega)$ is said to converge to $m^0$ as $n$ goes to infinity if,
\begin{itemize}
\item[(i)] $\nabla \acute m^n\rightharpoonup\nabla m^0 $ \ \ weakly in \ \ $L^2(\Omega)$
\item[(ii)] $\acute m^n \rightarrow m^0$ \ \ strongly in \ \ $L_{loc}^2(\Omega).$
\end{itemize}
\end{Definition}
The limit or reduced energy is given by
\begin{equation}
\label{linit.energy}
E_0(m)=
\begin{cases}
|\omega|\int_{\mathbb{R}}|\partial_x m|^2\ud x+\int_{\mathbb{R}}m M_\omega m^T\ud x,&\quad \text{if}\quad m=m(x),\\
\infty,&\qquad \text{otherwise},
\end{cases}
\end{equation}
where $M_\omega$ is the symmetric matrix given by
$$M_\omega=-\frac{1}{2\pi}\int_{\partial \omega}\int_{\partial \omega} n(x)\otimes n(y)\ln |x-y|\ud x\ud y,$$
and $n=(0,n_2,n_3)$ is the outward unit normal to $\partial\omega,$ see [\ref{bib:SS}].
Since $M_\omega$ is symmetric it can be diagonalized by a rotation in the $OYZ$ plane. We choose the coordinate system such that
$M_\omega$ is diagonal. Assume now $\omega$ is fixed and $\mathrm{diam}(\omega)=1.$ In fact, the $\Gamma$-convergence theorem implies the following two properties of the minimal
energies and sequences of minimizers:
\begin{itemize}
\item[(i)]
\begin{equation}
\label{conv.energies}
\lim_{d\to 0}\min_{m\in \tilde A(d\cdot\Omega)}\frac{E(m)}{d^2}=\min_{m\in A_0}E_0(m),
\end{equation}
where $A_0=\{m\colon\mathbb R\to \mathbb R^3 \ : \ |m|=1,\ m(\pm\infty)=(\pm 1,0,0) \}.$
\item[(ii)] If $\{m^n\}$ is any sequence of minimizers with $m^n$ defined in $d_n\cdot\Omega$ for some sequence $d_n\to 0,$ then a subsequence of $\{\acute m^n\}$
converges to a minimizer of $E_0$ in the sense of Definition~\ref{notion of convergence}.
\end{itemize}
It turns out that under a certain asymmetry condition
on $\omega$ a stronger convergence holds, namely $H^1$ convergence of the whole sequence of almost minimizers.
\begin{Definition}
\label{def:almost.min}
Let $\{d_n\}$ be a sequence of positive numbers such that $d_n\to 0.$ A sequence of magnetizations $\{m^n\}$ defined in $d_n\cdot \Omega$
is called a sequence of almost minimizers if
\begin{equation}
\label{almost.min}
\lim_{n\to \infty}\frac{E(m^n)}{d_n^2}=\min_{m \in A_0}E_0(m).
\end{equation}
\end{Definition}
We are now ready to formulate the next main result of the paper.
\begin{Theorem}[Convergence of almost minimizers]
\label{th:almost.minimizers}
Let $\{d_n\}$ be a sequence of positive numbers such that $d_n\to 0.$ Assume that the domain $\omega$ is such that $M_\omega$ has three different eigenvalues. Then for any sequence of almost minimizers $\{m^n\}$ defined in $d_n\cdot\Omega,$
there exist a sequence $\{T_n\}$ of translations in the $x$ direction and a sequence $\{R_n\}$ of rotations in the $OYZ$ plane, each of which is either the identity or the rotation by $180$ degrees, such that, setting $\tilde m^n(x,y,z)=m^n(T_n(R_n(x,y,z)))$, there holds for a minimizer $m^\omega$ of $E_0$,
$$\lim_{n\to\infty}\frac{1}{d_n}\|\tilde m^n-m^\omega\|_{H^1(\Omega_n)}=0.$$
\end{Theorem}
We refer to the Appendix for the definition of $m^\omega.$
\begin{Theorem}[Bounds for thick wires]
\label{th:thick.bounds}
Let $\Omega=\mathbb R\times[-d,d]\times[-l,l]$ and $c\geq1.$ Then there exists $d_1>0$ such that if $l\geq d>d_1$ and $l\leq cd,$ then
$$C_1d^2\sqrt{\ln d}\leq \min_{m\in\tilde A(\Omega)}E(m)\leq C_2d^\frac{5}{2}\sqrt{\ln d},$$
where $C_1$ and $C_2$ depend only on $c.$
\end{Theorem}
\section{The oscillation preventing lemma}
In this section we prove a lemma that will be crucial in proving both the existence and the convergence of almost minimizers results. The lemma
bounds the oscillations of a magnetization $m$, and the total measure of the set where $m$ develops oscillations, by the
energy of $m.$ Using the idea of Kohn and Slastikov in [\ref{bib:KS2}] of dimension reduction in thin domains, define
$$\bar m(x)=\frac{1}{|\omega|}\int_{\omega}m(x,y,z)\ud y\ud z.$$
Let $M_\omega^1$ denote the lower right $2\times2$ block of $M_\omega.$ Using the definition of $M_\omega,$
it is straightforward to show that $M_\omega^1$ is positive definite.
Denote for convenience
$$M_\omega^1=
\begin{bmatrix}
\alpha_2 & 0\\
0 & \alpha_3
\end{bmatrix}.
$$
It has been explicitly shown in [\ref{bib:Har}, Corollary 3.7.5] and implicitly in [\ref{bib:SS}, Proof of Lemma 4.1], that the inequality below holds uniformly in $m\colon (d\cdot\Omega)\to\mathbb S^2$:
\begin{equation}
\label{lower.bound.E}
\frac{E(m)}{d^2}\geq \int_{\Omega}|\nabla m|^2+\alpha_2\int_{\mathbb R}|\bar m_2|^2+\alpha_3\int_{\mathbb R}|\bar m_3|^2+O(1),
\end{equation}
as $d$ goes to zero.
\begin{Lemma}
\label{lem:m2.m.3.bdd.E}
Assume $m^d\in A(d\cdot\Omega).$ Then there exists $d_0>0$ such that,
\begin{align}
\label{bar.m.d<d0}
&\int_{\mathbb R}(|\bar m_2^d|^2+|\bar m_3^d|^2)\leq \frac{2E(m^d)}{d^2\min(\alpha_2,\alpha_3)},\quad\text{if}\quad d\leq d_0\\
\label{bar.m.d>d0}
&\int_{\mathbb R}(|\bar m_2^d|^2+|\bar m_3^d|^2)\leq \frac{2\max\left(\frac{d}{d_0},(\frac{d}{d_0})^3\right)E(m^d)}{dd_0\min(\alpha_2,\alpha_3)},\quad\text{if}\quad d>d_0
\end{align}
\end{Lemma}
\begin{proof}
Due to inequality (\ref{lower.bound.E}) there exists $d_0>0$ such that for $d\leq d_0$ we have
$$\frac{2E(m^d)}{d^2}\geq \alpha_2\int_{\mathbb R}|\bar m_2^d|^2+\alpha_3\int_{\mathbb R}|\bar m_3^d|^2,$$
and inequality (\ref{bar.m.d<d0}) follows. Assume now $d>d_0.$ It is straightforward that if $m^d\in A(d\cdot\Omega)$ then
$m_t^d(x,y,z)=m^d(tx,ty,tz)\in A(\frac{d}{t}\cdot\Omega)$ with $E(m_t^d)=tE_{ex}(m^d)+t^3E_{mag}(m^d)$, where
$E_{ex}(m)=\int_{\Omega}|\nabla m|^2$ is the exchange energy and $E_{mag}(m)=\int_{\mathbb R^3}|\nabla u|^2$ is the magnetostatic energy,
thus we get on one hand,
\begin{equation}
\label{E(m).E(mt)}
E(m_t^d)\leq \max(t,t^3)E(m^d).
\end{equation}
But on the other hand we have
$$\int_{\mathbb R}(|\bar m_2^d|^2+|\bar m_3^d|^2)=\frac{1}{t}\int_{\mathbb R}(|\bar m_{t2}^d|^2+|\bar m_{t3}^d|^2),$$
thus, choosing $t=\frac{d}{d_0}$ and taking into account (\ref{bar.m.d<d0}) and (\ref{E(m).E(mt)}), we obtain
$$\int_{\mathbb R}(|\bar m_2^d|^2+|\bar m_3^d|^2)\leq\frac{2d_0E(m_t^d)}{dd_0^2\min(\alpha_2,\alpha_3)}\leq \frac{2\max\left(\frac{d}{d_0},(\frac{d}{d_0})^3\right)E(m^d)}{dd_0\min(\alpha_2,\alpha_3)}$$
which completes the proof.
\end{proof}
Next we prove a simple estimate between $m$ and $\bar m$ that will be useful in the proof of the oscillation preventing lemma.
\begin{Lemma}
\label{lem:ineq.m.bar.m}
For any $m\in A(\Omega)$ there holds
$$\int_{\omega}(|m|^2-|\bar m|^2)=\int_{\omega}|m-\bar m|^2\leq C_pd^2\int_{\omega}|\nabla_{yz}m|^2\qquad \text{for all}\qquad x\in \mathbb{R},
$$
where $C_p$ is the Poincar\'e constant of $\omega.$
\end{Lemma}
\begin{proof}
We have for any $x\in\mathbb{R}$
$$\int_{\omega}(m-\bar m)=\int_{\omega}m-|\omega|\cdot \bar m(x)=0,$$
thus by the Poincar\'e inequality we get
\begin{align*}
\int_{\omega}|m|^2&=\int_{\omega}|\bar m|^2+\int_{\omega}|m-\bar m|^2+2\bar m(x)\int_{\omega}(m-\bar m)\\
&=\int_{\omega}|\bar m|^2+\int_{\omega}|m-\bar m|^2\\
&\leq \int_{\omega}|\bar m|^2+C_pd^2\int_{\omega}|\nabla_{yz}m|^2,
\end{align*}
which completes the proof.
\end{proof}
\begin{Lemma}[Oscillation preventing lemma]
\label{lem:oscilation.preventing}
Let $m\in A(\Omega)$ and let $\alpha,\beta,\rho\in \mathbb R$ be such that $-1<\alpha<\beta<1$ and $0<\rho<1.$ Assume $\Re$ is a family of disjoint intervals $(a,b)$ satisfying the conditions
$$\{\bar m_1(a), \bar m_1(b)\}=\{\alpha,\beta\}\qquad \text{and}\qquad |\bar m_1(x)|\leq \rho,\qquad x\in (a,b).$$
Then,
\begin{itemize}
\item[(i)]
\begin{equation}
\label{card.sum}
\mathrm{card}(\Re)\leq M \qquad \text{and}\qquad \sum_{(a,b)\in\Re}(b-a)\leq M,
\end{equation}
where $M$ is a constant depending on $\alpha$, $\beta,$ $\rho,$ $\omega$ and $E(m)$.
\item[(ii)] The component $\bar m_1,$ satisfies
$\lim_{x\to\pm\infty}|\bar m_1(x)|=1.$
\end{itemize}
\end{Lemma}
\begin{proof}
Let us first prove the second inequality in (\ref{card.sum}). The function $\bar m$ is a weakly differentiable function of one variable, therefore it is locally absolutely continuous in $\mathbb{R}.$ For any $(a,b)\in \Re,$ we have by Lemma~\ref{lem:ineq.m.bar.m} and by the assumption of the lemma,
\begin{align*}
|\omega|(b-a)&=\int_{(a,b)\times \omega}|m|^2\\
&\leq \int_{(a,b)\times \omega}|\bar m|^2+C_pd^2\int_{(a,b)\times \omega}|\nabla_{yz}m|^2\\
&\leq \rho^2|\omega|(b-a)+\int_{(a,b)\times \omega}(\bar m_2^2+\bar m_3^2)+C_pd^2\int_{(a,b)\times \omega}|\nabla m|^2.
\end{align*}
Summing up the inequalities for all $(a,b)\in \Re$ we get,
\begin{align*}
|\omega|\cdot\sum_{(a,b)\in\Re}(b-a)&\leq \rho^2|\omega|\sum_{(a,b)\in\Re}(b-a)+\int_{\Sigma}(\bar m_2^2+\bar m_3^2)+
C_pd^2\int_{\Sigma}|\nabla m|^2\\
&\leq \rho^2|\omega|\sum_{(a,b)\in\Re}(b-a)+\int_{\Omega}(\bar m_2^2+\bar m_3^2)+C_pd^2\int_{\Omega}|\nabla m|^2,
\end{align*}
where $\Sigma=\bigcup_{(a,b)\in\Re}(a,b)\times \omega.$
By virtue of Lemma~\ref{lem:m2.m.3.bdd.E} we have
$$\int_{\Omega}(\bar m_2^2+\bar m_3^2)\leq C_1,$$
for some $C_1$ depending on $\omega$ and $E(m).$ Therefore we obtain
\begin{equation}
\label{sum (b-a)}
\sum_{(a,b)\in\Re}(b-a)\leq \frac{C_1+C_pd^2E(m)}{|\omega|(1-\rho^2)}.
\end{equation}
Next we have for any point $(y,z)\in \omega $ and any interval $(a,b)\in \Re,$
$$
\int_a^b|\partial_xm_1(x,y,z)|^2\ud x\geq \frac{1}{b-a}\bigg(\int_a^b|\partial_xm_1(x,y,z)|\ud x\bigg)^2,
$$
thus, integrating over $\omega$, we get
\begin{align*}
\int_{(a,b)\times \omega}|\partial_xm_1|^2\ud \xi&\geq \frac{1}{b-a}\int_{\omega}\bigg(\int_a^b|\partial_xm_1(x,y,z)|\ud x\bigg)^2\ud y\ud z\\
&\geq\frac{1}{b-a}\int_{\omega}|m_1(a,y,z)-m_1(b,y,z)|^2\ud y\ud z\\
&\geq\frac{1}{|\omega|(b-a)}\bigg(\int_{\omega}\big(m_1(a,y,z)-m_1(b,y,z)\big)\ud y\ud z\bigg)^2\\
&=\frac{|\omega|(\alpha-\beta)^2}{b-a},
\end{align*}
thus
$$\int_{(a,b)\times \omega}|\partial_xm_1|^2\ud \xi\geq\frac{|\omega|(\alpha-\beta)^2}{b-a}.$$
Summing up the last inequalities for all $(a,b)\in \Re$ we arrive at
\begin{align*}
\sum_{(a,b)\in \Re}\frac{1}{b-a}&\leq \frac{1}{|\omega|(\alpha-\beta)^2}\int_{\Sigma}|\partial_xm_1|^2\ud \xi\\
&\leq\frac{1}{|\omega|(\alpha-\beta)^2}\int_{\Omega}|\nabla m|^2\ud \xi\\
&\leq \frac{E(m)}{|\omega|(\alpha-\beta)^2},
\end{align*}
thus
\begin{equation}
\label{sum 1/(b-a)}
\sum_{(a,b)\in \Re}\frac{1}{b-a}\leq \frac{E(m)}{|\omega|(\alpha-\beta)^2}.
\end{equation}
Combining now (\ref{sum (b-a)}) and (\ref{sum 1/(b-a)}) we obtain,
\begin{equation}
\label{(a,b).finite.estimate}
\sum_{(a,b)\in \Re}\bigg(\frac{1}{b-a}+b-a\bigg)\leq\frac{1}{|\omega|}\bigg(\frac{E(m)}{(\alpha-\beta)^2}+
\frac{C_1+C_pd^2E(m)}{1-\rho^2}\bigg):=M(\alpha,\beta,\rho,\omega,E(m)).
\end{equation}
The last inequality and the inequality $\frac{1}{b-a}+b-a\geq 2$ yield $M(\alpha,\beta,\rho,\omega,E(m))\geq 2\mathrm{card}(\Re),$ which finishes the proof of the first part. It is clear that
$$|\bar m_1(x)|=\frac{1}{|\omega|}\bigg|\int_{\omega}m_1(x,y,z)\ud y\ud z\bigg| \leq\frac{1}{|\omega|}\int_{\omega}|m_1(x,y,z)|\ud y\ud z\leq 1$$
thus
$$0\leq 1-\bar m_1^2(x)\leq 1,\qquad x\in\mathbb{R}.$$
By virtue of Lemma~\ref{lem:m2.m.3.bdd.E} and Lemma~\ref{lem:ineq.m.bar.m} we have,
$$\int_{\Omega}(1-\bar m_1^2)\ud \xi\leq\int_{\Omega}(\bar m_2^2+\bar m_3^2)\ud \xi+C_pd^2E(m)<\infty,$$
thus
\begin{equation}
\label{1-bar m_x^2 has finite norm}
\int_{\mathbb{R}}(1-\bar m_1^2)\ud x<\infty.
\end{equation}
The integrand is continuous and positive, thus for any $0<\delta<1$ and $N>0$ there exists $x_\delta>N$ such that $|\bar m_1(x_\delta)|>1-\frac{\delta}{2}$. Therefore there exists an increasing sequence $\{x_n\}$ such that $x_n\to\infty$ and $|\bar m_1(x_n)|>1-\frac{\delta}{2}$. Thus for infinitely many indices $n$ one has one of the following: $\bar m_1(x_n)>1-\frac{\delta}{2}$\ or \ $\bar m_1(x_n)<-1+\frac{\delta}{2}$. Assume that for a subsequence (not relabeled) there holds $\bar m_1(x_n)>1-\frac{\delta}{2}$. Let us then show that
$\bar m_1(x)>1-\delta$ for all $x>N_{\delta}$ and some $N_{\delta}$. Assume on the contrary that for an increasing sequence $(\tilde x_n)_{n\in\mathbb{N}}$ \ with $\tilde x_n\to\infty$ one has $\bar m_1(\tilde x_n)\leq 1-\delta$. We construct an infinite family of disjoint intervals $(a_n,b_n)$ such that the value of $\bar m_1$ at one end of $(a_n,b_n)$ is less than or equal to $1-\delta$ and at the other end is bigger than $1-\frac{\delta}{2}$ for all $n\in \mathbb{N}$. We start by taking the smallest $n$ such that $\tilde x_n>x_1$ and denote it by $\tilde n_1$ and set $a_1=x_1$, $b_1=\tilde x_{\tilde n_1}$. In the second step we take the smallest $n$ such that $x_n>b_1$ and denote it by $n_2$ and then we take the smallest $n$ such that
$\tilde x_n>x_{n_2}$ and denote it by $\tilde n_2$ and set $a_2=x_{n_2}$ and $b_2=\tilde x_{\tilde n_2}$. This process never stops, thus the intervals $(a_n,b_n)$ are constructed such that $\bar m_1(a_n)>1-\frac{\delta}{2}$ \ and
\ $\bar m_1(b_n)\leq 1-\delta.$ Since $\bar m_1$ is continuous in $\mathbb{R},$ the new sequence of disjoint intervals $(\acute a_n, \acute b_n),$ where $\acute a_n=\sup\{ x\in (a_n,b_n) \ | \ \bar m_1(x)\geq 1-\frac{\delta}{2}\}$ and $\acute b_n=\inf\{ x\in (\acute a_n,b_n) \ | \ \bar m_1(x)\leq 1-\delta\},$ has the properties $\bar m_1(\acute a_n)=1-\frac{\delta}{2},$\ $\bar m_1(\acute b_n)=1-\delta$ and $|\bar m_1(x)|\leq 1-\frac{\delta}{2} $ for all $x\in[\acute a_n,\acute b_n],$ which contradicts (\ref{card.sum}) applied with $\alpha=1-\delta,$ $\beta=1-\frac{\delta}{2}$ and $\rho=1-\frac{\delta}{2}$. The same argument applies at $-\infty$.
\end{proof}
\newtheorem{Remark}[Theorem]{Remark}
\begin{Remark}
\label{rem:lim.bar.m1=1}
If $m\in\tilde A(\Omega)$ then $\lim_{x\to\pm\infty}\bar m_1(x)=\pm 1.$
\end{Remark}
\begin{proof}
By Lemma~\ref{lem:oscilation.preventing} we have $\lim_{x\to\pm\infty}|\bar m_1(x)|=1.$ Since $\bar m_1(x)$ is continuous and $\bar m-\bar e\in H^1(\Omega),$
the claim follows.
\end{proof}
\begin{Remark}
\label{rem:lim.m0}
If $|m|=1$ and $E_0(m)<\infty$ then $\lim_{x\to\pm\infty}|m_1(x)|=1.$
\end{Remark}
\begin{proof}
The proof is analogous to the proof of property $(ii)$ in Lemma~\ref{lem:oscilation.preventing}.
\end{proof}
\section{Existence of minimizers}
We start by proving a simple compactness lemma that will be crucial in the proof of the existence theorem.
\begin{Lemma}
\label{lem:compactness}
Assume that the sequence of magnetizations $\{m^n\}$ defined in the same domain $\Omega$ satisfies $E(m^n)\leq C$ for some constant $C.$ Then there exists a magnetization $m^0\colon \Omega \rightarrow \mathbb{S}^2 $ such that for a subsequence of $\{m^n\}$ (not relabeled) the following statements hold:
\begin{itemize}
\item[(i)] $\nabla m^n\rightharpoonup\nabla m^0$ weakly in $L^2(\Omega)$
\item[(ii)] $m^n\rightarrow m^0$ strongly in $L_{loc}^2(\Omega)$
\item[(iii)] $E(m^0)\leq \liminf E(m^n)$.
\end{itemize}
\end{Lemma}
\begin{proof}
Let $u_n$ be a weak solution of $\triangle u=\mathrm{div}m^n$. From $\int_{\Omega}|\nabla m^n|^2+\int_{\mathbb R^3}|\nabla u^n|^2\leq C$ we get by a standard compactness argument that,
$\nabla m^n\rightharpoonup \nabla m^0$ in $ L^2(\Omega),$ $\nabla u_n\rightharpoonup g$ in $L^2(\mathbb{R}^3)$ and $m^n\rightarrow m^0$ in $L_{loc}^2(\Omega),$ for the same subsequence (not relabeled) and some $g\in L^2(\mathbb{R}^3).$ We extend $m^0$ outside $\Omega$ as zero. The identities
$$\int_{\Omega} m^n\cdot \nabla\varphi=\int_{\mathbb{R}^3} \nabla u_n\cdot \nabla\varphi\quad \text{for all}\quad n\in\mathbb N\quad\text{and}\quad\varphi\in C_0^\infty(\mathbb R^3),$$
will then yield
$$\int_{\Omega} m^0\cdot \nabla\varphi =\int_{\mathbb{R}^3} g\cdot \nabla\varphi\quad \text{for all}\quad \varphi\in C_0^\infty(\mathbb R^3).$$
Since $g\in L^2(\mathbb{R}^3)$, the Helmholtz projection of $g$ onto the subspace of gradient fields in $L^2(\mathbb R^3)$ will have
the form $\nabla u_0,$ will satisfy $\|\nabla u_0\|_{L^2(\mathbb{R}^3)}\leq\|g\|_{L^2(\mathbb{R}^3)}$ and will be a weak solution of $\triangle u=\mathrm{div} g$ which is equivalent to
$$\int_{\mathbb{R}^3} g\cdot \nabla\varphi =\int_{\mathbb{R}^3} \nabla u_0\cdot \nabla\varphi \quad
\text{for all} \quad\varphi\in C_0^{\infty}(\mathbb{R}^3),$$
thus we get
$$\int_{\mathbb{R}^3} m^0\cdot \nabla\varphi=\int_{\mathbb{R}^3} \nabla u_0\cdot \nabla\varphi\quad
\text{for all} \quad \varphi\in C_0^{\infty}(\mathbb{R}^3)$$
which means that $u_0$ is a weak solution of
$$\triangle u=\mathrm{div} m^0.$$
Therefore from the weak convergence $\nabla m^n\rightharpoonup \nabla m^0$ and $\nabla u_n\rightharpoonup g$ we obtain,
\begin{align*}
\|\nabla u_0\|_{L^2(\mathbb{R}^3)}&\leq\|g\|_{L^2(\mathbb{R}^3)}\leq \liminf_{n\to\infty} \|\nabla u_n\|_{L^2(\mathbb{R}^3)}\\
\|\nabla m^0\|_{L^2(\Omega)}&\leq \liminf_{n\to\infty} \|\nabla m^n\|_{L^2(\Omega)}
\end{align*}
which yields $E(m^0)\leq \liminf_{n\to\infty} E(m^n).$
\end{proof}
Now we have enough tools to prove the existence theorem.\\
\textbf{Proof of Theorem~\ref{th:existence}}. We adopt the direct method for proving the existence of a minimizer. The idea is, starting from
any minimizing sequence, to construct another minimizing sequence that has a limit in $\tilde A(\Omega)$ in the sense of Lemma~\ref{lem:compactness}.
Let $\{m^n\}$ be a minimizing sequence, i.e.,
$$\lim_{n\rightarrow \infty}E(m^n)=\inf_{m\in\tilde A(\Omega)}E(m).$$
First of all, note that the minimization problem (\ref{minimization problem}) is invariant under translations in the $x$ direction, that is, if $m\in \tilde A(\Omega)$ then obviously $m_c(x,y,z)=m(x-c,y,z)\in \tilde A(\Omega)$ and $E(m_c)=E(m).$ We have $E(m^n)\leq M$ for some $M$ and for all $n \in \mathbb{N}$. For any $n\in\mathbb{N}$ consider the sets $A_n$, $B_n$ and $C_n$ defined as follows:
\begin{align*}
A_n&=\bigg\{x\in \mathbb{R} \ \ : \ -1\leq \bar m_1^n(x)< -\frac{1}{2}\bigg\}\\
B_n&=\bigg\{x\in \mathbb{R} \ \ : \ -\frac{1}{2}\leq \bar m_1^n(x)\leq \frac{1}{2}\bigg\}\\
C_n&=\bigg\{x\in \mathbb{R} \ \ : \ \frac{1}{2}<\bar m_1^n(x)\leq 1\bigg\}\\
\end{align*}
Since $\bar m_1^n$ is continuous in $\mathbb{R}$, for all $n \in\mathbb{N}$ the sets $A_n,$ $B_n$ and $C_n$ are finite or countable unions of disjoint intervals. We distinguish two types of intervals in $B_n.$ A component interval $(a,b)\subset B_n$ is said to be of the first type if $|\bar m_1^n(a)-\bar m_1^n(b)|=1,$ and of the second type otherwise.
By Lemma~\ref{lem:oscilation.preventing}, the sum of the lengths of all intervals, as well as the number of the first type intervals in $B_n,$ are bounded by a number $s$ depending only on $M$ and $\omega$, i.e., by a constant not depending on $n$. Consider two cases:\\
\textbf{CASE1.} \textit{There are no second type intervals in $B_n$ for all $n\in\mathbb{N}.$}\\
Let us paint the points of $A_n$, $B_n$ and $C_n$ black, yellow and red, respectively, for all $n\in \mathbb{N}$. We call the increasing sequence $\{n_k\}\subset \mathbb N$ "good" if for every $k\in\mathbb{N}$ there exist two intervals $(a_1^k,a_2^k)\subset A_{n_k}$ and
$(c_1^k,c_2^k)\subset C_{n_k}$ such that
$$a_2^k-a_1^k\rightarrow +\infty,\qquad c_2^k-c_1^k\rightarrow +\infty,\qquad 0<c_1^k-a_2^k\leq C$$
for a constant $C$ not depending on $k.$ The endpoints $a_1^k$ and $c_2^k$ can also take values
$-\infty$ and $+\infty$ respectively. If $\{n_k\}$ is "good", the subsequence $\{m^{n_k}\}$ will also be called "good". We show that any minimizing sequence $\{m^n\}\subset\tilde A(\Omega)$ can be translated in the $x$ coordinate such that the new sequence contains a "good" subsequence. For every fixed $n$, by Remark~\ref{rem:lim.bar.m1=1} the leftmost interval is an unbounded black interval $(-\infty, a_n)$ and the rightmost one is an unbounded red interval $(c_n,+\infty)$, with some black, yellow and red intervals in between. Note that there is obviously at least one yellow interval between any two black and any two red ones, thus the number of both the black and the red intervals is at most $s+1$, hence the number of all intervals in the $n$-th family is bounded by the same number $S=3s+2$ for all $n.$ Let us number both the red and the black intervals in any family of intervals. Let us prove the proposition below, which is a reformulation of our problem:\\
\textbf{Proposition.} Assume a sequence of natural numbers $l_n$ and a sequence of families of $l_n$ disjoint intervals on the real line, painted black and red, are given for all $n\in\mathbb{N}$. Assume $l_n\leq l$ and the sum of the lengths of the $l_n-1$ gaps between the intervals in the $n$-th family is bounded by the same number $M$ for all $n$. Assume furthermore that for any $n,$ the far left placed interval is black and the far right placed interval is red and their lengths tend to $\infty$ as $n$ goes to infinity. Then there exists a subsequence $\{n_k\}$ and two associated intervals $(a_1^k, a_2^k)$ and
$(c_1^k, c_2^k)$ in the $n_k$-th family such that $(a_1^k, a_2^k)$ is black, $(c_1^k, c_2^k)$ is red, and
\begin{equation}
a_2^k-a_1^k \rightarrow +\infty,\qquad c_2^k-c_1^k \rightarrow +\infty, \qquad 0<c_1^k-a_2^k\leq M_1
\end{equation}
for a constant $M_1$ and all $k\in \mathbb{N}$.\\
\textbf{Proof of proposition.} The case $l=2$ is evident. Assume that the proposition is true for $l\leq N$ and let us prove it for $l=N+1$. Since $l\geq 3$, in every family there are at least two intervals of the same color. Assume that for infinitely many indices $n$ there are at least two black intervals in the $n$-th family. Consider the far right placed black intervals for all such families. There are two possible cases:\\
\textbf{Case 1.} \textit{For a subsequence their lengths tend to $+\infty$}.\\
In this case we can omit all the intervals placed on their left side, which leads to a situation with fewer intervals in every family (in such a subsequence) fulfilling the requirements of the proposition, so by induction the existence of a "good" subsequence is proven.\\
\textbf{Case 2.} \textit{Their lengths are bounded by the same constant.}\\
In this case we can remove these intervals, which leads to a situation with fewer intervals in all families fulfilling the requirements of the proposition, so by induction the existence of a "good" subsequence is proven.\\
Let us now get back to our situation. If we remove all the yellow intervals from the real line for all $n\in \mathbb{N}$, then the families of the black and the red intervals fulfill the requirements of the proposition, thus the existence of a "good" subsequence is proven. Take the two intervals $[a_1^k,a_2^k]$ and $[c_1^k,c_2^k]$ for all $k\in\mathbb{N}$ and denote the "good" subsequence of magnetizations again by $\{m^k\}$, which will also be a minimizing sequence. Let us translate $m^k$ by $a_2^k$ and denote
$$m_{good}^k(x,y,z)=m^k(x+a_2^k,y,z).$$
Then $\{m_{good}^k\}$ is a minimizing sequence and furthermore denoting $a_3^k=a_2^k-a_1^k$, $c_3^k=c_1^k-a_2^k$ and $c_4^k=c_2^k-a_2^k,$ we obtain,
\begin{align}
\label{conditions.good1}
&\bar m_{good1}^k(x)\leq -\frac{1}{2} \quad\text{for} \quad x \in [-a_3^k,0] \quad \text{and} \quad\bar m_{good1}^k(x)\geq \frac{1}{2} \quad\text{for}\quad x \in [c_3^k,c_4^k],\\
&a_3^k\rightarrow \infty,\qquad c_4^k-c_3^k\rightarrow \infty,\qquad 0<c_3^k<M_1.
\label{conditions.good2}
\end{align}
Owing to Lemma~\ref{lem:compactness} one can extract a subsequence from $\{m_{good}^k\}$ (not relabeled) with a limit $m^0\in A(\Omega).$
Let us now prove that conditions (\ref{conditions.good1}) and (\ref{conditions.good2}) imply that $m^0\in \tilde A(\Omega).$ We have for any fixed $R>0,$
\begin{align*}
\int_{-R}^{R}|\bar m_1^0-\bar m_{good1}^k|\ud x&=\frac{1}{|\omega|}\int_{-R}^{R}\bigg|\int_{\omega}( m_1^0-m_{good1}^k)\ud y\ud z\bigg|\ud x\\
&\leq\frac{1}{|\omega|}\int_{-R}^{R}\int_{ \omega}| m_1^0-m_{good1}^k|\ud y\ud z\ud x\\
&\leq\frac{1}{|\omega|}\Bigg(2R|\omega|\cdot\int_{[-R,R]\times \omega}|m_1^0-m_{good1}^k|^2\ud\xi\Bigg)^{\frac{1}{2}}\\
&=\sqrt{\frac{2R}{|\omega|}}\cdot \|m_1^0-m_{good1}^k\|_{L^2([-R,R]\times \omega)}\rightarrow 0
\end{align*}
as $k\to\infty$ because of the strong convergence $m_{good}^k\rightarrow m^0$ in $L_{loc}^2(\Omega)$. Therefore a subsequence of $\{\bar m_{good1}^k(x)\}$ converges pointwise to $\bar m_1^0(x)$ almost everywhere in $[-R,R].$ Giving $R$ all natural values and applying a diagonal argument we establish that a subsequence of $\{\bar m_{good1}^k(x)\}$ converges pointwise to $\bar m_1^0(x)$ almost everywhere in $\mathbb{R},$ therefore
\begin{equation}
\label{cond.m0}
\bar m_1^0(x)\leq -\frac{1}{2}\quad\text{ a.e. in}\quad (-\infty,0)\quad\text{ and}\quad \bar m_1^0(x)\geq\frac{1}{2} \quad\text{ a.e. in}\quad [M_1,+\infty)
\end{equation}
Let us now show that conditions $E(m^0)<\infty$ and (\ref{cond.m0}) imply $m^0\in \tilde A(\Omega).$ We have by the triangle inequality
$$\|\nabla (m^0-\bar e)\|_{L^2(\Omega)}^2\leq 2\|\nabla m^0\|_{L^2(\Omega)}^2+2\|\nabla \bar e\|_{L^2(\Omega)}^2\leq 2E(m^0)+4|\omega|<\infty, $$
thus it remains to prove that $m^0-\bar e\in L^2(\Omega)$. We have again by the triangle inequality and by Lemma~\ref{lem:ineq.m.bar.m},
\begin{align*}
\|m^0-\bar e\|_{L^2(\Omega)}^2&\leq 2\|\bar m^0-\bar e\|_{L^2(\Omega)}^2+2\|m^0-\bar m^0\|_{L^2(\Omega)}^2\\
&\leq 2C_pd^2\|\nabla m^0\|_{L^2(\Omega)}^2+2\|\bar m^0-\bar e\|_{L^2(\Omega)}^2\\
&\leq 2C_pd^2E(m^0)+2\|\bar m^0-\bar e\|_{L^2(\Omega)}^2,
\end{align*}
thus it remains to prove that $\bar m^0-\bar e\in L^2(\Omega)$. One can assume without loss of generality that $M_1\geq1$ in (\ref{cond.m0}). We calculate,
\begin{align*}
\int_\Omega |\bar m^0-\bar e|^2=\int_{[-1,M_1]\times\omega}|\bar m^0-\bar e|^2+\int_{[-\infty,-1]\times\omega}|\bar m^0-\bar e|^2+\int_{[M_1,\infty]\times\omega}|\bar m^0-\bar e|^2=I_1+I_2+I_3.
\end{align*}
The estimation of $I_1,$ $I_2$ and $I_3$ is straightforward:
$$I_1\leq 4(1+M_1)|\omega|.$$
Due to condition (\ref{cond.m0}) and Lemma~\ref{lem:ineq.m.bar.m} we have,
\begin{align*}
I_2&=\int_{[-\infty,-1]\times\omega}(1+|\bar m^0|^2+2\bar m_1^0)\\
&\leq2\int_{[-\infty,-1]\times\omega}(1+\bar m_1^0)+\int_{[-\infty,-1]\times\omega}(|m^0|^2-|\bar m^0|^2)\\
&\leq 2\int_{[-\infty,-1]\times\omega}(1+\bar m_1^0)(1-\bar m_1^0)+C_pd^2\int_{[-\infty,-1]\times\omega}|\nabla m^0|^2\\
&=2\int_{[-\infty,-1]\times\omega}(|m^0|^2-|\bar m^0|^2)+2\int_{[-\infty,-1]\times\omega}(|\bar m_2^0|^2+|\bar m_3^0|^2)+C_pd^2\int_{[-\infty,-1]\times\omega}|\nabla m^0|^2\\
&\leq 3C_pd^2\int_{[-\infty,-1]\times\omega}|\nabla m^0|^2+2\int_{[-\infty,-1]\times\omega}(|\bar m_2^0|^2+|\bar m_3^0|^2).
\end{align*}
An analogous analysis for $I_3$ gives
$$I_3\leq3C_pd^2\int_{[M_1,\infty]\times\omega}|\nabla m^0|^2+2\int_{[M_1,\infty]\times\omega}(|\bar m_2^0|^2+|\bar m_3^0|^2).$$
Therefore, combining the estimates for $I_1,$ $I_2$ and $I_3$ and taking into account Lemma~\ref{lem:m2.m.3.bdd.E}, we conclude that $I_1+I_2+I_3<\infty$
as desired. CASE1 is now established.\\
\textbf{CASE2.} \textit{There are some second type intervals in $B_n$ for some $n.$}\\
Removing all the second type yellow intervals from the real line, we can regard the rest as a real line without gaps, simply by shifting all the intervals to the left so that after this operation no overlap occurs and no gap is left. Precisely, we shift each interval to the left by the sum of the lengths of the gaps between that interval and $-\infty.$ During that operation we unify the black and red intervals with the neighboring intervals of the same color, but we regard the possible neighboring first type yellow intervals as separate. We arrive at a situation as in CASE1, and therefore we can prove the existence of a "good" subsequence. It is easy to show that, since the sum of the lengths of the second type yellow intervals in each family is bounded by the same constant, the limit of the obtained "good" subsequence, in the sense described in Lemma~\ref{lem:compactness}, will belong to $\tilde A(\Omega)$ and
hence will be an energy minimizer in $\tilde A(\Omega)$. The proof is now complete.
\section{Convergence of almost minimizers}
Throughout this section we will consider a sequence of domain-magnetization-energy triples $(\Omega_n, m^n,E(m^n))_{n\in\mathbb{N}}$ such that $\Omega_n=\mathbb R\times(d_n\cdot\omega),$ $m^n\in\tilde A(\Omega_n),$ $d_n\to 0$ and $\lim_{n\to\infty}\frac{E(m^n)}{d_n^2}=\min_{m\in A_0}E_0(m),$
i.e., $\{m^n\}$ is a sequence of almost minimizers. Assume furthermore that $\omega$ has a $180$ degree rotational symmetry and that the matrix $M_\omega$ has three different eigenvalues; in particular $\alpha_2\neq\alpha_3,$ so one can assume without loss of generality that $\alpha_2<\alpha_3.$ Note that due to (\ref{almost.min}) we have
\begin{equation}
\label{E(m^n)is.bdd}
E(m^n)\leq Cd_n^2\quad\text{for all}\quad n.
\end{equation}
\begin{Lemma}
\label{morms of avrages converges to the norm of limit}
If $\{\acute m^n\}$ converges to some $m^0(x)\in\tilde A(\Omega)$ in the sense of Definition~\ref{notion of convergence}, then
\begin{itemize}
\item[(i)] $ \lim_{n\to\infty}\|\nabla \acute{\bar m}^n\|_{L^2(\Omega)}=\|\nabla m^0\|_{L^2(\Omega)},$
\item[(ii)] $ \lim_{n\to\infty}\|\acute{\bar m}_2^n\|_{L^2(\Omega)}=\|m_2^0\|_{L^2(\Omega)},\qquad \lim_{n\to\infty}\|\acute{\bar m}_3^n\|_{L^2(\Omega)}=\|m_3^0\|_{L^2(\Omega)}.$
\end{itemize}
\end{Lemma}
\begin{proof}
The inequality $ \liminf_{n\to\infty}\|\nabla \acute{\bar m}^n\|_{L^2(\Omega)}\geq\|\nabla m^0\|_{L^2(\Omega)}$ is trivial, while the inequality
$ \liminf_{n\to\infty}\|\acute{m}_2^n\|_{L^2(\Omega)}\geq\|m_2^0\|_{L^2(\Omega)}$ follows from the convergence $\acute m_2^n\to m_2^0$ in $L_{loc}^2(\Omega).$
We have furthermore by Lemma~\ref{lem:ineq.m.bar.m} and by (\ref{E(m^n)is.bdd}) that,
\begin{align*}
\|\acute m_2^n-\acute{ \bar m}_2^n\|_{L^2(\Omega)}^2&=\frac{1}{d_n^2}\|m_2^n-\bar m_2^n\|_{L^2(\Omega_n)}^2\\
&\leq C_p\|\nabla m^n\|_{L^2(\Omega_n)}^2\\
&\leq C_pCd_n^2,
\end{align*}
thus
\begin{equation}
\label{acute.acute.bar}
\|\acute m_2^n-\acute{ \bar m}_2^n\|_{L^2(\Omega)}\to 0.
\end{equation}
Therefore we get $ \liminf_{n\to\infty}\|\acute{\bar m}_2^n\|_{L^2(\Omega)}\geq\|m_2^0\|_{L^2(\Omega)}$ and a similar inequality for $m_3$
is also fulfilled. It only remains to show the opposite inequalities with $\limsup.$ It is clear that
$\|\nabla \acute{\bar m}^n\|_{L^2(\Omega)}\leq \|\nabla \acute{m}^n\|_{L^2(\Omega)},$ thus it suffices to prove that
$ \limsup_{n\to\infty}\|\nabla \acute{ m}^n\|_{L^2(\Omega)}\leq\|\nabla m^0\|_{L^2(\Omega)}.$
Assume now, for contradiction, that one of the three $\limsup$ inequalities we intend to prove fails. Then, owing to (\ref{lower.bound.E}), for some $\delta>0$ there holds,
\begin{align*}
\limsup_{n\to\infty}\frac{E(m^n)}{d_n^2}&\geq\max\bigg(\limsup_{n\to\infty}\|\nabla \acute m^n\|_{L^2(\Omega)}^2+\liminf_{n\to\infty}
\alpha_2\|\bar m_2^n\|_{L^2(\mathbb R)}^2+\liminf_{n\to\infty}\alpha_3\|\bar m_3^n\|_{L^2(\mathbb R)}^2,\\
&\liminf_{n\to\infty}\|\nabla \acute m^n\|_{L^2(\Omega)}^2+\limsup_{n\to\infty}
\alpha_2\|\bar m_2^n\|_{L^2(\mathbb R)}^2+\liminf_{n\to\infty}\alpha_3\|\bar m_3^n\|_{L^2(\mathbb R)}^2,\\
&\liminf_{n\to\infty}\|\nabla \acute m^n\|_{L^2(\Omega)}^2+\liminf_{n\to\infty}
\alpha_2\|\bar m_2^n\|_{L^2(\mathbb R)}^2+\limsup_{n\to\infty}\alpha_3\|\bar m_3^n\|_{L^2(\mathbb R)}^2\bigg)\\
&\geq E_0(m^0)+\delta\\
&\geq \min_{m\in A_0}E_0(m)+\delta,
\end{align*}
which contradicts (\ref{almost.min}). The lemma is now proved.
\end{proof}
\begin{Corollary}
\label{norms convergs to the norm of the limit}
Let $\{m^n\}$ and $m^0$ be as in Lemma \ref{morms of avrages converges to the norm of limit}. Then
\begin{itemize}
\item[(i)] $ \lim_{n\to\infty}\|\acute{m}_2^n\|_{L^2(\Omega)}=\|m_2^0\|_{L^2(\Omega)},\qquad\lim_{n\to\infty}\|\acute{m}_3^n\|_{L^2(\Omega)}=\|m_3^0\|_{L^2(\Omega)}.$
\end{itemize}
\end{Corollary}
\begin{proof}
This follows from Lemma~\ref{morms of avrages converges to the norm of limit} and the convergence (\ref{acute.acute.bar}).
\end{proof}
\begin{Lemma}
\label{strong convergence1}
Let $\{m^n\}$ and $m^0$ be as in Lemma~\ref{morms of avrages converges to the norm of limit}. Then
\begin{itemize}
\item[(i)]$\lim_{n\to\infty}\|\nabla \acute m^n-\nabla m^0\|_{L^2(\Omega)}=0$
\item[(ii)] $\lim_{n\to\infty}\|\acute m_2^n- m_2^0\|_{L^2(\Omega)}=0, \qquad
\lim_{n\to\infty}\|\acute m_3^n- m_3^0\|_{L^2(\Omega)}=0.$
\end{itemize}
\end{Lemma}
\begin{proof}
The inequality $\liminf_{n\to\infty}\|\nabla \acute m^n\|_{L^2(\Omega)}\geq \|\nabla m^0\|_{L^2(\Omega)}$ is a consequence of the weak convergence $
\nabla \acute m^n\rightharpoonup\nabla m^0.$ The opposite inequality $\limsup_{n\to\infty}\|\nabla \acute m^n\|_{L^2(\Omega)}\leq \|\nabla m^0\|_{L^2(\Omega)}$ has been proven in the proof of Lemma~\ref{morms of avrages converges to the norm of limit}. Therefore
$\lim_{n\to\infty}\|\nabla \acute m^n\|_{L^2(\Omega)}=\|\nabla m^0\|_{L^2(\Omega)},$ which combined with the weak convergence
$\nabla \acute m^n \rightharpoonup \nabla m^0$ gives $(i)$.
Fix now $l>0.$ We have by virtue of Corollary \ref{norms convergs to the norm of the limit},
\begin{align*}
\limsup_{n\to\infty}\int_{\Omega}|\acute m_2^n- m_2^0|^2&\leq
\limsup_{n\to\infty}\int_{[-l,l]\times\omega}|\acute m_2^n- m_2^0|^2+\limsup_{n\to\infty}\int_{\Omega\setminus([-l,l]\times\omega)}|\acute m_2^n- m_2^0|^2\\
&\leq 2\limsup_{n\to\infty}\int_{\Omega\setminus([-l,l]\times\omega)}\big(|\acute m_2^n|^2+| m_2^0|^2\big)\\
&\leq 2\limsup_{n\to\infty}\int_{\Omega}\big(|\acute m_2^n|^2+|m_2^0|^2\big)-2\liminf_{n\to\infty}\int_{[-l,l]\times\omega}\big(|\acute m_2^n|^2+| m_2^0|^2\big)\\
&=4|\omega|\int_{\mathbb{R}\setminus[-l,l]}| m_2^0(x)|^2\ud x.
\end{align*}
Here the first summand vanishes and the last equality holds since $\acute m_2^n\to m_2^0$ strongly in $L^2([-l,l]\times\omega)$ (weak $H^1$ convergence combined with the Rellich theorem on the bounded set $[-l,l]\times\omega$) and since $m_2^0$ depends only on $x.$ From the arbitrariness of $l$ we get the validity of the first equality in $(ii).$ The proof of the second equality in $(ii)$ is analogous.
\end{proof}
\begin{Lemma}
\label{strong convergence2}
Let $\{m^n\}$ and $m^0$ be as in Lemma~\ref{morms of avrages converges to the norm of limit}. Assume in addition that for some $N\in\mathbb{N}$ and $l>0$ we have for all $n\geq N$
$$\bar m_1^n(x)\leq 0, \ x\in(-\infty,-l] \ \ \text{and}\ \ \ \bar m_1^n(x)\geq 0, \ x\in[l,+\infty).$$
Then
$$\lim_{n\to\infty}\|\acute m^n-m^0\|_{H^1(\Omega)}=0.$$
\end{Lemma}
\begin{proof}
By Lemma~\ref{strong convergence1} it suffices to show that $\lim_{n\to\infty}\|\acute m_1^n-m_1^0\|_{L^2(\Omega)}=0.$
Since $m^0\in \tilde A(\Omega),$ due to Remark~\ref{rem:lim.bar.m1=1} there exists $l_1>0$ such that
$$m_1^0(x)\leq-\frac{1}{2},\qquad x\in(-\infty, -l_1]\qquad \text{and}\qquad m_1^0(x)\geq\frac{1}{2},\qquad x\in[l_1, +\infty).$$
For any fixed $l_2>\max(l,l_1)$ we have,
$$\int_{\Omega}|\acute m_1^n-m_1^0|^2=\int_{[-l_2,l_2]\times\omega}|\acute m_1^n-m_1^0|^2+\int_{\Omega\setminus([-l_2,l_2]\times\omega)}|\acute m_1^n-m_1^0|^2.$$
The first summand converges to zero and we have furthermore that
$\|\acute m_1^n-\acute{\bar m}_1^n\|_{L^2(\Omega)}\to 0,$ thus it suffices to show that
$$\lim_{n\to\infty}\int_{\Omega\setminus([-l_2,l_2]\times\omega)}|\acute{\bar m}_1^n-m_1^0|^2=0.$$
For $n\geq N$ we have
\begin{align*}
\int_{\Omega\setminus([-l_2,l_2]\times\omega)}|\acute{\bar m}_1^n-m_1^0|^2&\leq
\int_{\Omega\setminus([-l_2,l_2]\times\omega)}\big||\acute{\bar m}_1^n|^2-|m_1^0|^2\big|\\
&\leq\int_{\Omega\setminus([-l_2,l_2]\times\omega)}\big||\acute{\bar m}_1^n|^2-|\acute m_1^n|^2\big|+
\int_{\Omega\setminus([-l_2,l_2]\times\omega)}\big||\acute m_1^n|^2-|m_1^0|^2\big|.
\end{align*}
Here the first inequality uses the elementary fact that $|a-b|^2\leq|a^2-b^2|$ whenever $ab\geq0,$ which applies since $\acute{\bar m}_1^n$ and $m_1^0$ have the same sign on $\Omega\setminus([-l_2,l_2]\times\omega)$ for $n\geq N.$ The first summand converges to zero; for the second summand we have by Corollary~\ref{norms convergs to the norm of the limit}
\begin{align*}
\limsup_{n\to\infty}\int_{\Omega\setminus([-l_2,l_2]\times\omega)}\big||\acute m_1^n|^2-|m_1^0|^2\big|
&\leq\limsup_{n\to\infty}\int_{\Omega\setminus([-l_2,l_2]\times\omega)}(|\acute m_2^n|^2+|\acute m_3^n|^2+|m_2^0|^2+|m_3^0|^2)\\
&\leq 2\int_{\Omega\setminus([-l_2,l_2]\times\omega)}(|m_2^0|^2+|m_3^0|^2),
\end{align*}
which converges to zero as $l_2$ goes to infinity.
\end{proof}
\begin{Lemma}
\label{bar m and [b^1,b^2] lemma}
Let $0<\epsilon<1$ and let the sequence of intervals $\big([b_n^1, b_n^2]\big)_{n\in\mathbb N}$ be such that
$$\bar m_1^n(b_n^1)=-1+\epsilon,\quad\bar m_1^n(b_n^2)=1-\epsilon.$$
Then for sufficiently big $n$ there holds
\begin{align*}
\bar m_1^n(x)&<-1+2\epsilon,\quad x\in(-\infty, b_n^1],\quad\bar m_1^n(x)>1-2\epsilon,\quad x\in[b_n^2,+\infty),\\
-1+\frac{\epsilon}{2}&<\bar m_1^n(x)<1-\frac{\epsilon}{2},\quad x\in [b_n^1,b_n^2].
\end{align*}
\end{Lemma}
\begin{proof}
Assume in contradiction that for a subsequence $\{n_k\}$ (not relabeled) there is a point $b_n^3\in(-\infty, b_n^1)$ such that
$\bar m_1^n(b_n^3)\geq -1+2\epsilon.$ Since $\bar m_1^n(-\infty)=-1$ and $\bar m_1^n$ is continuous, we can without loss of generality assume that
$\bar m_1^n(b_n^3)=-1+2\epsilon.$ Utilizing Lemma~\ref{lemma with f} for the intervals $(-\infty, b_n^3],$ $[b_n^3,b_n^1],$ $[b_n^1, +\infty)$ and
(\ref{lower.bound.E}) we discover,
\begin{align*}
\frac{E(m^n)}{d_n^2}&\geq\int_{\Omega}|\nabla \acute m^n|^2+\alpha_2\int_{\mathbb R}|\bar m_2^n|^2+\alpha_3\int_{\mathbb R}|\bar m_3^n|^2+o(1)\\
&\geq 2\sqrt{\alpha_2|\omega|}\bigg(|2\epsilon|+|\epsilon|+|2-2\epsilon|\bigg)+o(1)\\
&=(4+2\epsilon)\min_{m\in A_0}E_0(m)+o(1),
\end{align*}
which contradicts the almost minimizing property of $\{m^n\}.$ Similarly we get the bounds near $\infty$ and in $[b_n^1,b_n^2].$
\end{proof}
\subsection{Proof of Theorem~\ref{th:almost.minimizers}}
\begin{proof}
The proof splits into three steps:\\
\textbf{Step 1.} Let us prove that if a sequence of magnetizations converges to some $m^0\in\tilde A(\Omega)$ in the sense of Definition \ref{notion of convergence}, satisfies condition (\ref{almost.min}), and fulfills $\bar m_2^n(x_0)\geq 0$ for some $x_0\in\mathbb R$ and all big $n,$ then $m_2^0(x_0)\geq 0.$
We have due to (\ref{almost.min}), that
$$\int_{\Omega_n}|\partial_x \bar m^n|^2\leq\int_{\Omega_n}|\partial_x m^n|^2\leq Cd_n^2,$$ thus
$$\int_{\mathbb R}|\partial_x \bar m^n(x)|^2\ud x\leq \frac{C}{|\omega|}$$
which yields that the sequence $\{\bar m^n\}$ is equicontinuous on $\mathbb R,$ and therefore by the Arzel\`a--Ascoli theorem $\{\bar m^n(x)\}$ has a subsequence with a uniform limit in the interval $[x_0-1,x_0+1].$ It is clear that the limit is $m^0,$ and thus $\bar m_2^n(x_0)\geq 0$ yields $m_2^0(x_0)\geq 0.$
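For completeness, the equicontinuity is a one-line Cauchy--Schwarz estimate:
$$|\bar m^n(x)-\bar m^n(y)|=\Big|\int_y^x\partial_x\bar m^n(t)\ud t\Big|\leq|x-y|^{\frac12}\Big(\int_{\mathbb R}|\partial_x \bar m^n|^2\Big)^{\frac12}\leq\sqrt{\frac{C}{|\omega|}}\,|x-y|^{\frac12}.$$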
Evidently, the same sign-preserving property holds for the first and the third components of $\bar m^n,$ and also with the opposite sign. This means in particular that if $\bar m_1^n(x_0)=0$ for big $n$ then $m_1^0(x_0)=0.$\\
\textbf{Step 2.} In the second step we construct the sequences $\{T_n\}$ and $\{R_n\}.$ Note first that the change of variables mentioned in the theorem maps the domain $\Omega$ to itself and preserves the energy; thus the minimization problem (\ref{minimization problem}) is invariant under such transformations. Let us now evaluate the constant in estimate (\ref{(a,b).finite.estimate}).
The constant $C_1$ in (\ref{(a,b).finite.estimate}) comes from Lemma~\ref{lem:m2.m.3.bdd.E} and is given by
$$C_1=\frac{2E(m^n)}{\alpha_2}\leq \frac{2Cd_n^2}{\alpha_2},$$
for big $n.$ Thus we get
\begin{align*}
M(\alpha,\beta,\rho,\omega_n,E(m^n))&=\frac{1}{|\omega_n|}\left(\frac{E(m^n)}{(\alpha-\beta)^2}+\frac{C_1+C_pd_n^2E(m^n)}{1-\rho^2}\right)\\
&\leq \frac{1}{|\omega|}\left(\frac{C}{(\alpha-\beta)^2}+\frac{\frac{2C}{\alpha_2}+C_pCd_n^2}{1-\rho^2}\right)\\
&\leq M_1
\end{align*}
uniformly in $n.$ Next we choose the intervals $[b_n^1,b_n^2]$ to be as in Lemma~\ref{bar m and [b^1,b^2] lemma} with $\epsilon=\frac{1}{3},$ which is possible due to the continuity of $\bar m_1^n$ and the fact that $\bar m_1^n(\pm\infty)=\pm1.$ Owing to Lemma~\ref{bar m and [b^1,b^2] lemma} we get
\begin{align}
\label{b1b2.est}
\bar m_1^n(x)&<-\frac{1}{3},\quad x\in(-\infty, b_n^1],\quad\bar m_1^n(x)>\frac{1}{3},\quad x\in[b_n^2,+\infty),\\
-\frac{5}{6}&<\bar m_1^n(x)<\frac{5}{6},\quad x\in [b_n^1,b_n^2].
\end{align}
Therefore, we obtain by the uniform estimate on $M(\alpha,\beta,\rho,\omega_n,E(m^n))$ and by the estimate (\ref{(a,b).finite.estimate}) of Lemma~\ref{lem:oscilation.preventing} that for sufficiently big $n$ there holds,
\begin{equation}
\label{b2-b1.bdd}
b_n^2-b_n^1\leq M_1.
\end{equation}
Let now $x_n\in [b_n^1,b_n^2]$ be such that $\bar m_1^n(x_n)=0$. For any $n\in\mathbb N$ we choose $T_n$ to be the translation by $x_n,$ and the rotation $R_n$ to be the identity if $\bar m_2^n(x_n)\geq 0$ and the rotation by $180$ degrees otherwise. In the last step we prove that the whole sequence
$\{\acute{\tilde m}^n\}$ converges to $m^\omega$ in $H^1(\Omega).$\\
\textbf{Step 3.} For convenience of notation we will omit the ``tilde'' in $\acute{\tilde m}^n.$ We are now ready to prove that $\|\acute m^n-m^\omega\|_{H^1(\Omega)}\to 0$ as $n\to\infty.$ Assume in contradiction that for a subsequence (not relabeled) $\|\acute m^n-m^\omega\|_{H^1(\Omega)}\geq \delta>0$ for some $\delta.$ As in the proof of Lemma~\ref{lem:compactness} we can show that a subsequence of $\{\acute m^n\}$ converges to some $m^0$ in the sense of Definition~\ref{notion of convergence}. By the $\Gamma$-convergence theorem we then have $E_0(m^0)\leq \liminf_{n\to\infty}\frac{E(m^n)}{d_n^2},$ thus
\begin{equation}
\label{m0.mins.E0}
E_0(m^0)=\min_{m\in A_0}E_0(m).
\end{equation}
Next we have, by the sign-preserving property of Step 1 and by the bounds (\ref{b1b2.est})--(\ref{b2-b1.bdd}), that
\begin{equation}
\label{m0.M1.est}
m_1^0(x)\leq-\frac{1}{3},\quad x\in(-\infty, -M_1],\quad m_1^0(x)\geq\frac{1}{3},\quad x\in[M_1,+\infty).
\end{equation}
Invoking now Remark~\ref{rem:lim.m0} and the properties (\ref{m0.mins.E0}) and (\ref{m0.M1.est}) we discover $m_1^0(\pm\infty)=\pm1,$ which yields
\begin{equation}
\label{m0.in.A0}
m^0\in A_0,
\end{equation}
i.e., $m^0$ is a minimizer of the minimization problem (\ref{min.prob.E0}). Again, by the sign-preserving property we have $m_1^0(0)=0$ and $m_2^0(0)\geq 0,$ thus by
the analysis of the minimization problem (\ref{min.prob.E0}) in the Appendix we establish that $m^0$ and $m^\omega$ actually coincide. Note, finally, that the requirements of
Lemma~\ref{strong convergence2} are satisfied, thus we get
$$\lim_{n\to\infty}\|\acute m^n-m^\omega\|_{H^1(\Omega)}=\lim_{n\to\infty}\|\acute m^n-m^0\|_{H^1(\Omega)}=0,$$ which is a contradiction. This completes the proof of the theorem.
\end{proof}
We mention that it is easy to see that any rectangle that is not a square and any ellipse that is not a circle satisfy the condition $0<\alpha_2<\alpha_3.$
This condition expresses, in a certain sense, that the cross section $\omega$ does not have many rotational symmetries. For instance, if $\omega$ has a $90$ degree rotational symmetry, then one can show that $\alpha_2=\alpha_3.$ It is also worth mentioning that one can prove a modified version of Theorem~\ref{th:almost.minimizers} in the case when $\omega$ is a disc or a regular polygon with an even number of vertices; due to the symmetry it is no longer true that each rotation $R_n$ is either the identity or the rotation by $180$ degrees, but one can still prove the existence of suitable rotations. In conclusion, Theorem~\ref{th:almost.minimizers} shows that in thin wires energy minimizers with a $180$ degree domain wall are transverse (N\'eel) walls having the shape of $m^\omega.$
\section{Upper and lower bounds for thick wires}
Throughout this section we assume that $l,d\geq 1$ and are comparable to each other; for convenience we will assume that $d=l.$ We prove Theorem~\ref{th:thick.bounds} by constructing a vortex wall with an energy of order $d^{\frac{5}{2}}\sqrt{\ln d},$ following the idea of
DeSimone, Kohn, M\"uller and Otto in [\ref{bib:D.S.KMO1}], namely that divergence-free fields tangential to the boundary are preferable for the magnetostatic energy. Assume $L>0$ and denote by $\Omega_L$ the domain $[-L,L]\times[-d,d]\times[-d,d]$. Take the rectangular parallelepiped $\Omega_L,$ cut off from it the two cones with vertex at $(0,0,0)$ and bases $\{-L\}\times[-d,d]\times[-d,d]$ and $\{L\}\times[-d,d]\times[-d,d]$ respectively, and denote the obtained domain by $R_L.$ The main diagonals of $\Omega_L$ divide $R_L$ into four parts. Taking into account the orientation in the plane $OYZ,$ denote these parts by $R_L^{up},$ $R_L^{right},$ $R_L^{bottom}$ and $R_L^{left}$ respectively. We first construct a magnetization $\tilde m$ that has infinite exchange energy but a magnetostatic energy that is easy to bound. Consider the following vector field:
\begin{equation*}
\tilde m = \left\{
\begin{array}{rl}
\big(\sin\frac{\pi dx}{2Lz}, \cos\frac{\pi dx}{2Lz}, 0\big)& \text{in } \ \ R_L^{up}\\
\big(\sin\frac{\pi dx}{2Ly}, 0, -\cos\frac{\pi dx}{2Ly}\big)& \text{in } \ \ R_L^{right}\\
\big(-\sin\frac{\pi dx}{2Lz}, -\cos\frac{\pi dx}{2Lz}, 0\big)& \text{in } \ \ R_L^{bottom}\\
\big(-\sin\frac{\pi dx}{2Ly}, 0, \cos\frac{\pi dx}{2Ly}\big)& \text{in } \ \ R_L^{left}\\
\end{array} \right.
\end{equation*}
Note that the vector field $(0,\tilde m_2, \tilde m_3)$ is divergence-free (see the cross section in Figure 2.1).
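As a quick sanity check, in $R_L^{up}$ both $\tilde m_2$ and $\tilde m_3$ depend only on $x$ and $z,$ and $\tilde m_3=0$ there, so
$$\partial_y\tilde m_2+\partial_z\tilde m_3=\partial_y\Big(\cos\frac{\pi dx}{2Lz}\Big)+0=0,$$
and the analogous computation works in the other three parts of $R_L.$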
\setlength{\unitlength}{0.8mm}
\begin{picture}(100,100)
\put(10,10){\line(1,0){70}}
\put(80,10){\line(0,1){70}}
\put(10,10){\line(0,1){70}}
\put(10,80){\line(1,0){70}}
\put(10,10){\line(1,1){70}}
\put(80,10){\line(-1,1){70}}
\put(30,30){\line(1,0){30}}
\put(30,30){\line(0,1){30}}
\put(30,60){\line(1,0){30}}
\put(60,30){\line(0,1){30}}
\thicklines
\put(20,37){\vector(0,1){16}}
\put(70,53){\vector(0,-1){16}}
\put(37,70){\vector(1,0){16}}
\put(53,20){\vector(-1,0){16}}
\put(10,0){\textbf{A cross section for $\tilde m$}}
\end{picture}
$$\textbf{Figure 2.1} $$
Therefore we have
$$ \mathrm{div}{\tilde m}=\frac{\partial \tilde m_1}{\partial x}\geq 0\quad\text{in}\quad \Omega.$$
We have furthermore that $u$ is the weak solution of $\triangle u=\mathrm{div}\tilde m,$ thus
$$\int_{\mathbb R^3}\nabla u\nabla \varphi=-\int_{\Omega}\varphi \cdot\mathrm{div}\tilde m+\int_{\partial\Omega}\varphi \tilde m\cdot\nu,$$
where $\nu$ is the outward unit normal to $\partial\Omega.$
Note that $\tilde m$ is tangential to the boundary of $\Omega$ thus substituting $\varphi=u$ in the above equality, we find the magnetostatic energy of $\tilde m,$
$$\int_{\mathbb R^3}|\nabla u|^2=-\int_{\Omega}u \cdot\mathrm{div}\tilde m=
\int_{\Omega}\int_{\Omega}\Gamma(\xi-\xi_1)\frac{\partial \tilde m_1(\xi)}{\partial x}\frac{\partial \tilde m_1(\xi_1)}{\partial x}\ud\xi\ud\xi_1,$$
where $\Gamma(\xi)=\frac{1}{4\pi|\xi|}$ is the Green function in $\mathbb R^3.$ Note that the integrand is zero in the complement of $R_L,$ so we first estimate the inner integral when the integration is carried out over $R_L^{up}.$
We have in $R_L^{up}$,
$$0\leq\frac{\partial \tilde m_1(\xi)}{\partial x}=\frac{\pi d}{2Lz}\cos\frac{\pi dx}{2Lz}\leq \frac{\pi d}{2Lz},$$ thus
$$\int_{R_L^{up}}\Gamma(\xi-\xi_1)\frac{\partial \tilde m_1(\xi)}{\partial x}\ud\xi\leq
\int_0^d\frac{\pi d}{2Lz}\ud z\int_{-\frac{Lz}{d}}^{\frac{Lz}{d}}\int_{-z}^z\Gamma(\xi-\xi_1)\ud y\ud x.$$
Due to Lemma~\ref{lem:bound.Green.function} we have,
$$\int_{-\frac{Lz}{d}}^{\frac{Lz}{d}}\int_{-z}^z\Gamma(\xi-\xi_1)\ud y\ud x\leq \frac{10z}{4\pi}\big(1+\ln\frac{L}{d}\big)$$ and
$$\int_{R_L^{up}}\Gamma(\xi-\xi_1)\frac{\partial \tilde m_1(\xi)}{\partial x}\ud\xi\leq\frac{5d^2}{4L}\big(1+\ln\frac{L}{d}\big).$$
The integrals over the other parts of $R_L$ have the same upper bound, thus we obtain
\begin{equation}
\label{E.mag.tilde}
E_{mag}(\tilde m)\leq \frac{20d^4}{L}\big(1+\ln\frac{L}{d}\big).
\end{equation}
The reason for $\tilde m$ having an infinite exchange energy is that it has singularities on the part of the boundary of $R_L$ that belongs to $\Omega_L.$ We ignore these boundary charges for a moment and compute $E_{ex}(\tilde m)$ taking into account only the volume charges.
We have formally by direct calculation that,
\begin{align*}
E_{ex}^{formal}(\tilde m)&=4\int_0^d\frac{\pi^2d^2}{4L^2z^2} \int_{-\frac{Lz}{d}}^{\frac{Lz}{d}}\int_{-z}^z\Big(1+\frac{x^2}{z^2}\Big)\ud y\ud x\ud z\\
&\leq 4\int_0^d\frac{\pi^2d^2}{4L^2z^2} \int_{-\frac{Lz}{d}}^{\frac{Lz}{d}}\int_{-z}^z\Big(1+\frac{L^2}{d^2}\Big)\ud y\ud x\ud z\\
&=4\pi^2\big(\frac{d^2}{L}+L\big),
\end{align*}
thus
\begin{equation}
\label{Eex.formal<}
E_{ex}^{formal}(\tilde m)\leq 4\pi^2\big(\frac{d^2}{L}+L\big).
\end{equation}
In the next step we build a magnetization $m$ with finite exchange energy by slightly modifying $\tilde m$ near the singularity points. This works as follows: first take the planes $\{z=\frac{d}{d-1}y\}$ and $\{z=-\frac{d-1}{d}y\}.$ To get a continuous $m$ from $\tilde m$
we change $\tilde m$ in the following two regions: the first one is the intersection of $\Omega_L$ with the region between the planes $\{z=\frac{d}{d-1}y\}$ and $\{z=y\},$ and the second one is the intersection of $\Omega_L$ with the region between the planes $\{z=-\frac{d-1}{d}y\}$ and $\{z=-y\}.$ For more transparency see Figures 2.2 and 2.3.
\setlength{\unitlength}{1mm}
\begin{picture}(100,55)
\put(0,10){\line(1,0){100}}
\put(0,40){\line(1,0){100}}
\put(20,10){\line(2,1){20}}
\put(20,40){\line(2,-1){20}}
\put(80,10){\line(-2,1){20}}
\put(80,40){\line(-2,-1){20}}
\put(40,20){\line(0,1){10}}
\put(60,20){\line(0,1){10}}
\put(40,20){\line(1,0){20}}
\put(40,30){\line(1,0){20}}
\put(60,22){\vector(1,0){6}}
\put(60,25){\vector(1,0){6}}
\put(60,28){\vector(1,0){6}}
\put(40,22){\vector(-1,0){6}}
\put(40,25){\vector(-1,0){6}}
\put(40,28){\vector(-1,0){6}}
\put(58.3,28){\vector(3,1){6}}
\put(56.6,28){\vector(2,1){5.2}}
\put(55,28){\vector(1,1){4}}
\put(53.3,28){\vector(1,2){2.5}}
\put(51.7,28){\vector(1,3){1.9}}
\put(50,28){\vector(0,1){6}}
\put(41.7,28){\vector(-3,1){6}}
\put(43.3,28){\vector(-2,1){5.2}}
\put(45,28){\vector(-1,1){4}}
\put(46.6,28){\vector(-1,2){2.5}}
\put(48.3,28){\vector(-1,3){1.9}}
\put(20,10){\circle*{0.6}}
\put(80,10){\circle*{0.6}}
\put(20,40){\circle*{0.6}}
\put(80,40){\circle*{0.6}}
\put(10,6){$(-L,-d,c)$ }
\put(72,6){$(L,-d,c)$ }
\put(10,42){ $(-L,d,c)$}
\put(74,42){$(L,d,c)$ }
\put(67,23.5){\vector(1,0){6}}
\put(67,26.5){\vector(1,0){6}}
\put(67,29.5){\vector(1,0){6}}
\put(67,32.5){\vector(1,0){6}}
\put(67,20){\vector(1,0){6}}
\put(67,17){\vector(1,0){6}}
\put(33,23.5){\vector(-1,0){6}}
\put(33,26.5){\vector(-1,0){6}}
\put(33,29.5){\vector(-1,0){6}}
\put(33,32.5){\vector(-1,0){6}}
\put(33,20){\vector(-1,0){6}}
\put(33,17){\vector(-1,0){6}}
\put(42,17){\vector(-1,0){4}}
\put(48,17){\vector(-1,0){2}}
\put(52,17){\vector(1,0){2}}
\put(58,17){\vector(1,0){4}}
\put(50,17){\circle*{0.4}}
\put(7,25){$m_x=-1$}
\put(76,25){$m_x=1$}
\put(33,23.5){\vector(-1,0){6}}
\end{picture}
\textbf{A longitudinal section $\{z=c>0\}$}
$$\textbf{ Figure 2.2}$$
\setlength{\unitlength}{1mm}
\begin{picture}(50,90)
\put(10,10){\line(1,0){70}}
\put(80,10){\line(0,1){70}}
\put(10,10){\line(0,1){70}}
\put(10,80){\line(1,0){70}}
\put(10,10){\line(1,1){70}}
\put(80,10){\line(-1,1){70}}
\put(30,30){\line(1,0){30}}
\put(30,30){\line(0,1){30}}
\put(30,60){\line(1,0){30}}
\put(60,30){\line(0,1){30}}
\put(27.5,10){\line(1,2){35}}
\put(80,27.5){\line(-2,1){70}}
\put(20,37){\vector(0,1){10}}
\put(70,53){\vector(0,-1){10}}
\put(37,70){\vector(1,0){10}}
\put(53,20){\vector(-1,0){10}}
\put(27,20){\vector(-1,1){8}}
\put(20,63){\vector(1,1){8}}
\put(63,70){\vector(1,-1){8}}
\put(70,27){\vector(-1,-1){8}}
\put(10,0){\textbf{A cross section for $m.$}}
\end{picture}
$$\textbf{Figure 2.3}$$
Denote the upper part of the first new narrow region (where $z\geq 0$) by $\Omega_{L,1}^{up}$ and the bottom part by $\Omega_{L,1}^{bottom}.$ We use the analogous notation for the second narrow region. Finally, define the magnetization $m$ in $\Omega_{L,1}^{up}$ by
$$m(x,y,z)=(\sin\frac{\pi dx}{2Lz}, \cos\frac{\pi dx}{2Lz}\sin\frac{\pi d(z-y)}{2z}, -\cos\frac{\pi dx}{2Lz}\cos\frac{\pi d(z-y)}{2z} ).$$
The definition of $m$ in the other three regions is analogous. Note that the vector field $m$ now has a single singularity at the origin.
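One can verify the continuity of $m$ directly (a sketch): on the plane $\{y=\frac{d-1}{d}z\}$ we have $\frac{\pi d(z-y)}{2z}=\frac{\pi}{2},$ so $m=(\sin\frac{\pi dx}{2Lz}, \cos\frac{\pi dx}{2Lz}, 0),$ matching $\tilde m$ in $R_L^{up},$ while on the plane $\{y=z\}$ we have $\frac{\pi d(z-y)}{2z}=0,$ so
$$m=\Big(\sin\frac{\pi dx}{2Lz},\, 0,\, -\cos\frac{\pi dx}{2Lz}\Big),$$
matching $\tilde m$ in $R_L^{right}$ (there $y=z,$ so the arguments agree).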
Owing to Lemma~\ref{lem:mag.m_1.m_2} we have by direct calculation
$$|E_{mag}(m)-E_{mag}(\tilde m)|\leq \|m-\tilde m\|_{L^2(\Omega_L)}^2+2\|m-\tilde m\|_{L^2(\Omega_L)}\sqrt{E_{mag}(\tilde m)} $$
\begin{equation}
\label{mag.til.m.m}
\leq 16dL+16\sqrt{5}d^2\sqrt{d\ln L}.
\end{equation}
Using the inequalities $|y|\leq z$ and $|x|\leq \frac{L}{d}z$ in ${\Omega_{L,1}^{up}}$ it is not difficult to estimate,
$$|\partial_y m_2|^2+|\partial_z m_2|^2+|\partial_y m_3|^2+|\partial_z m_3|^2\leq
\frac{\pi^2}{4z^2}(2d^2+1)\quad\text{in} \quad{\Omega_{L,1}^{up}}.$$
We can calculate now,
$$\int_{\Omega_{L,1}^{up}}\frac{1}{z^2}\ud\xi=2\int_0^L\int_{\frac{dx}{L}}^d\int_{\frac{d-1}{d}z}^z
\frac{1}{z^2}\ud y\ud z\ud x=\frac{2}{d}\int_0^L(\ln L-\ln x)\ud x=\frac{2L}{d}.$$
Therefore we obtain
\begin{equation}
\label{Eex.formal.Eex}
|E_{ex}^{formal}(\tilde m)-E_{ex}(m)|\leq 4\int_{\Omega_{L,1}^{up}}(|\partial_y m_2|^2+|\partial_z m_2|^2+|\partial_y m_3|^2+|\partial_z m_3|^2)\ud\xi\leq 4\pi^2dL+\frac{2\pi^2L}{d}.
\end{equation}
Finally, combining estimates (\ref{E.mag.tilde})--(\ref{Eex.formal.Eex}) and choosing $L=d^\frac{3}{2}\sqrt{\ln d},$ we arrive at
$$E(m)\leq 150d^\frac{5}{2}\sqrt{\ln d}.$$
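To see why this choice of $L$ balances the terms, note for instance that, for large $d,$
$$\frac{20d^4}{L}\Big(1+\ln\frac{L}{d}\Big)=\frac{20d^{\frac52}}{\sqrt{\ln d}}\Big(1+\frac{1}{2}\ln d+\frac{1}{2}\ln\ln d\Big)\leq 11\, d^{\frac52}\sqrt{\ln d},$$
and the remaining terms in (\ref{mag.til.m.m}) and (\ref{Eex.formal.Eex}) are of the same order or smaller.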
It has been shown in [\ref{bib:Kuehn}] that there exists a number $R_0>0$ such that if $R\geq R_0$ then the minimal energy is larger than a constant times $R^2\sqrt{\ln R},$ where the cross section of the domain $\Omega$ is a disc with radius $R.$ It is easily seen that the proof there also works for a rectangular cross section; this completes the proof of Theorem~\ref{th:thick.bounds}. |
1207.5155 | \section{Introduction}
A \emph{repetition} of length $h$ ($h\geqslant 1$) in a sequence is a subsequence of consecutive terms of the form: $x_1\ldots x_h x_1\ldots x_h$. A sequence is \emph{nonrepetitive} if it does not contain a repetition of any length.
In 1906 Thue proved that there exist arbitrarily long nonrepetitive sequences over only $3$ different symbols (see \cite{Ber95,Thu06}). The method discovered by Thue is constructive and uses substitutions over a given set of symbols. Recently a completely different approach to creating long nonrepetitive sequences emerged (see \cite{GKM}). Consider the following naive procedure: generate consecutive
terms of a sequence by choosing symbols at random and every time a repetition occurs, erase the repeated block
and continue. For instance, if the generated sequence is $abcbc$, we must
cancel the last two symbols, which brings us back to $abc$. By a simple counting argument one can prove that with positive probability the
length of a constructed sequence exceeds any finite bound, provided the
number of symbols is at least $4$. This is slightly weaker than Thue's
result, but the argument seems to be more flexible for adaptations to other settings. This approach leads e.g.\ to a very short proof (see \cite{GKM}) that for every $n\geq1$ and every sequence of sets $L_1,\ldots,L_n$, each of size at least $4$, there is a nonrepetitive sequence $s_1,\ldots,s_n$ where $s_i\in L_i$ (first proved with an enhanced Local Lemma in \cite{GPZ}). The analogous statement for lists of size $3$ remains an exciting open problem. In this paper we make use of the above-mentioned approach to nonrepetitive colorings of trees.
For a given graph $G$ we denote by $V(G)$ the set of vertices of $G$. A coloring function $f:V(G)\to\mathbb N$ is a \emph{nonrepetitive coloring} of $G$ if there is no repetition in the color sequence of any simple path in $G$. The minimum number of colors used in a nonrepetitive coloring of $G$ is called the \emph{Thue number} of $G$ and denoted by $\pi(G)$. The dependence between the Thue number and the maximum degree of graphs is already quite well understood.
\begin{theorem}[Alon et al.\ \cite{AGHR02}]\label{thm:Alon}
For any graph $G$ with maximum degree $\Delta$ there is a nonrepetitive coloring of $G$ using at most $16\Delta^2$ colors. Moreover, for every $\Delta>1$ there is a graph with maximum degree $\Delta$ which needs $\Omega\left(\frac{\Delta^2}{\log\Delta}\right)$ colors in any nonrepetitive coloring.
\end{theorem}
The Thue number of any tree is at most $4$ (see \cite{AGHR02}). K{\"u}ndgen and Pelsmajer \cite{KP08} proved that $\pi(G)\leq 12$ for all outerplanar $G$, and $\pi(G)\leq 4^k$ for all graphs $G$ with tree-width at most $k$. Probably the most intriguing question in the area concerns planar graphs.
\begin{conjecture}[Grytczuk 2007 \cite{Gry07b}]
There is a constant $c$ such that $\pi(G)\leq c$ for all planar graphs $G$.
\end{conjecture}
\noindent Very recently Dujmovi\'{c} et al.\ \cite{DFJW} showed $\pi(G)=O(\log n)$ for all planar $G$ on $n$ vertices.
Now, we turn to the list-version of nonrepetitive colorings of graphs. This is an analog of the classical graph choosability introduced by Vizing \cite{Viz76} and independently by Erd\H{o}s, Rubin and Taylor \cite{ERT80}. Given a graph $G$ suppose that each $v\in V(G)$ has a preassigned set of colors $L_v$. We call $\set{L_v}_{v\in V(G)}$ a \emph{list assignment} of $G$, or just \emph{lists} of $G$. A coloring $f$ is \emph{chosen from} $\set{L_v}$ if $f(v)\in L_v$ for all $v\in V(G)$. The \emph{Thue choice number} of $G$, denoted by $\pi_l(G)$, is the minimum $k$ such that for any list assignment $\set{L_v}$ of $G$ with each $\norm{L_v}\geq k$ there is a nonrepetitive coloring of $G$ chosen from $\set{L_v}$. The upper bound from Theorem \ref{thm:Alon} works also in the list-setting, i.e., $\pi_l(G)\leq 16\Delta^2$ for all $G$ with maximum degree $\Delta$. As we mentioned, $\pi_l(P_n)\leq 4$ for all paths $P_n,$ and the problem whether $3$ or $4$ is the right bound remains open. The first significant difference between the Thue number and the Thue choice number has been proved recently for trees.
\begin{theorem}[Fiorenzi et al.\ \cite{FOOZ11}]\label{thm:Thue-choice-for-trees}
For any constant $c$ there is a tree $T$ such that $\pi_l(T)\geq c$.
\end{theorem}
\noindent In fact one can extract from \cite{FOOZ11} that for any $\Delta>1$ there is a tree $T$ with $\pi_l(T)=\Omega(\frac{\log\Delta}{\log \log \Delta})$. We propose two results complementary to Theorem \ref{thm:Thue-choice-for-trees}. The first is an improved upper bound for the Thue choice number of trees.
\begin{theorem}\label{thm:1+epsi}
For every $\varepsilon>0$ there is a constant $c$ such that $\pi_l(T)\leq c\Delta^{1+\varepsilon}$ for all trees $T$ with maximum degree $\Delta$.
\end{theorem}
A sequence is \emph{of the form} $x^r$ for real $r\geq1$ if it can be divided into $\lceil r\rceil$ blocks where all the blocks but the last are the same, say $x_1\ldots x_n$ for some $n\geq1$, and the last block is the prefix of $x_1\ldots x_n$ of size $\lceil\myFrac(r)\cdot n\rceil$, where $\myFrac(r)$ is the fractional part of $r$. The sequence $x_1\ldots x_n$ repeated in those blocks is also called \textit{the base} of the given sequence. For example, any repetition is a sequence of the form $x^2$ and $abcdabcdab$ is of the form $x^{2.5}$ with the base $abcd$. A coloring of a graph $G$ is $x^r$-free for real $r>1$ if there is no sequence of the form $x^r$ among the color sequences of simple paths in $G$. Thus, an $x^2$-free coloring is simply a nonrepetitive coloring, while an $x^3$-free coloring satisfies a weaker condition; in particular it allows a coloring to have repetitions. A consequence of our second result is that for any tree $T$ and lists $\set{L_v}$ each of size $8$ there is an $x^3$-free coloring of $T$ chosen from $\set{L_v}$ (see the count below). This somewhat explains the tightness of Theorem \ref{thm:Thue-choice-for-trees}.
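The constant $8$ is a back-of-the-envelope count (a sketch, assuming the constant of Theorem \ref{thm:2+epsi} is traced through Lemma \ref{lem:ver-free} applied with $\varepsilon/2$): for $\varepsilon=1,$
$$c=4\cdot\Big\lceil\frac{1}{\varepsilon/2}\Big\rceil=4\cdot\lceil 2\rceil=8.$$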
\begin{theorem}\label{thm:2+epsi}
For every $\varepsilon>0$ there is a constant $c$ such that for every tree $T$ and lists $\set{L_v}_{v\in V(T)}$ each of size $c$ there is an $x^{2+\varepsilon}$-free coloring of $T$ chosen from $\set{L_v}$.
\end{theorem}
\section{Proofs}
In both proofs, given a tree $T$ we are going to fix an arbitrary vertex for a \emph{root} and denote it by $\myRoot(T)$. For $u,v\in V(T)$ we say that $u$ is a \emph{descendant} of $v$ if the unique simple path from $u$ to $\myRoot(T)$ contains $v$. The set of all descendants of $v$, including $v$, is denoted by $v\desc$. The depth $\depth(v)$ is the number of vertices on the simple path from $v$ to $\myRoot(T)$. A vertex $u$ is a \emph{child} of $v$ if $u$ is a descendant of $v$ and they are adjacent in $T$. We also pick an arbitrary planar embedding of $T$. This means we fix an ordering of the children of every vertex in $T$.
If $v$ has a child, the first child of $v$ in a determined order is $\firstChild(v)$.
If $u$ is a child of $v$, but not the last child, then $\nextChild(v,u)$ is the child of $v$ that is next to $u$.
A \emph{vertical} path in a rooted tree is a simple path whose first vertex is a descendant of the last or vice versa. A coloring of a rooted tree $T$ is \emph{vertically $x^r$-free} for real $r>1$ if there is no sequence of the form $x^r$ among the color sequences of vertical paths in $T$.
For any planar embedding of a given rooted tree $T$ and list assignment $\set{L_v}_{v\in V(T)}$, a pair $(f,u)$ is a \emph{partial coloring} if $u\in V(T)$ and $f$ is a partial function from $V(T)$ to $\mathbb N$ defined only for the vertices of $T$ up to $u$ in the preorder traversal of $T$, with $f(v)\in L(v)$ whenever $f(v)$ is defined. The set of all partial colorings of a tree $T$ with fixed $\set{L_v}_{v\in V(T)}$ is denoted by $\PVA$.
Following usual convention we define $[n]$ to be $\{1,\ldots,n\}$. For a set of integers $A$ we use $A^+$ to denote the set of finite sequences over $A$ of length at least 1. For $s\in A^+$ and $n \in \mathbb N$ we write $s \cdot n$ to denote the sequence $s$ with appended element $n$. For a sequence $s= (s_1, \ldots, s_n)$ we put $s_{1..i}= (s_1, \ldots, s_i)$.
Consider a coloring of a rooted tree with an $x^{2+\varepsilon}$-block on some simple path. Clearly, at least half of the vertices of this path form a vertical path whose color sequence is of the form $x^{1+\varepsilon/2}$. Thus, Theorem \ref{thm:2+epsi} is an immediate consequence of the following lemma.
\begin{lemma}\label{lem:ver-free}
For every $\varepsilon>0$ there is a constant $c=4\cdot\lceil\frac{1}{\varepsilon}\rceil$ such that every rooted tree is vertically $x^{1+\varepsilon}$-free colorable from any lists of size $c$.
\end{lemma}
\begin{proof}
For a given $\varepsilon>0$ put $c=4\cdot\lceil\frac{1}{\varepsilon}\rceil$. Let $T$ be a rooted tree and $\set{L_v}_{v\in V(T)}$ be a list assignment with each $\norm{L(v)}=c$. In order to get a contradiction, suppose that there is no vertically $x^{1+\varepsilon}$-free coloring of $T$ chosen from $\set{L_v}$. Fix an arbitrary planar embedding of $T$.
We propose a very naive procedure struggling to build a proper coloring of $T$ from $\set{L_v}$. The procedure maintains $(f,v)$, a partial coloring of $T$ from $\set{L_v}$, with no color sequence of the form $x^{1+\varepsilon}$ on any vertical paths other than paths going upwards from $v$. To start the procedure we just pick a color for $\myRoot(T)$ from $L(\myRoot(T))$ and all other vertices are uncolored. Every consecutive step of the procedure tries to correct and/or extend the current partial coloring. This is encapsulated by the call of the $\nextV((f,v),n)$ function (see Algorithm \ref{alg-next}), where $(f,v)$ is the current partial coloring and $n$ is the hint for the next decision to be made. The call of $\nextV$ checks first whether $(f,v)$ is vertically $x^{1+\varepsilon}$-free. If not, then the colors of the vertices in the repeated $\varepsilon$-part of the $x^{1+\varepsilon}$ occurrence starting from $v$ are erased (as well as the colors of all descendants of the erased vertices) and the color of the top-most vertex with erased color is set again to be the $n$-th color from its list. If $(f,v)$ is vertically $x^{1+\varepsilon}$-free, $\nextV((f,v),n)$ tries to extend the partial coloring $(f,v)$ onto the consecutive subtrees of $v$.
We will keep an invariant that any extension of an input partial coloring $(f,v)$ onto all descendants of $v$ contains a vertical $x^{1+\varepsilon}$-block.
We will extend $(f,v)$ onto $u\desc$ for $u$ ranging over the consecutive children of $v$, and if $u$ is the first child of $v$ whose subtree cannot be colored in this way then $\nextV$ sets the color of $u$ to be the $n$-th color from $L(u)$.
\begin{algorithm-hbox}[!ht]
\caption{$\nextV((f,v),n)$}\label{alg-next}
\uIf{\textup{$x^{1+\varepsilon}$ occurs in $(f,v)$ starting from $v$ on the way to $\myRoot(T)$}}{
$l=$ the length of the base of $x^{1+\varepsilon}$ sequence \label{alg:negative-step-start}\;
$m =\lceil l \cdot \varepsilon\rceil$\;
$(v_{l+m},\ldots,v_1) =$ the path starting from $v_{l+m}=v$ going upwards in $T$\;\quad with $f(v_i)=f(v_{l+i})$ for $1\leq i\leq m$\;
$u\gets v_{l+1}$\;
erase all values of $f$ in $u\desc$\label{alg:negative-step-end}
}
\uElse{
$u=\firstChild(v)$\label{alg:positive-step-start}\;
\While{\textup{$f$ has a vertically $x^{1+\varepsilon}$-free extension onto $u\desc$}}{extend $f$ onto $u\desc$ in a vertically $x^{1+\varepsilon}$-free manner\;
$u=\nextChild(v,u)$\label{alg:positive-step-end}\;
}
}
extend $f$ with $\set{u\rightarrow \alpha}$, where $\alpha$ is the $n$-th element of $L(u)$\label{alg:set-a-color}\;
\Return $(f,u)$\;
\end{algorithm-hbox}
The partial function $\nextV:\PVA\times [c]\to\PVA$ is defined by Algorithm \ref{alg-next}. Note that $\nextV((f,v),n)$ is well-defined for partial colorings $(f,v)$ with
\begin{enumeratei}
\item no color sequence of the form $x^{1+\varepsilon}$ on a vertical path other than paths going upwards from $v$,\label{item:no-vertical-path} and
\item no $x^{1+\varepsilon}$-free extension of $(f,v)$ onto $v\desc$.\label{item:no-extension}
\end{enumeratei}
Moreover, if $(f',u)=\nextV((f,v),n)$ then this new partial coloring also satisfies \ref{item:no-vertical-path} and \ref{item:no-extension}. This allows us to iterate the calls of $\nextV$. Note also that vertex $u$ is determined only by $(f,v)$, i.e.\ the first argument of $\nextV$, while $f'(u)$ is simply the $n$-th color in $L(u)$.
Now, we define recursively a function $h:[c]^+\to\PVA$ which captures the idea of our naive procedure trying to color $T$ from $\set{L_v}$. For $s\in [c]^+$, $1\leq n\leq c$ and $\alpha$ being the $n$-th color in $L(\myRoot(T))$ put
\begin{align*}
h(n)&=(\set{\myRoot(T) \rightarrow \alpha},\myRoot(T)),\\
h(s \cdot n)&=\nextV(h(s),n).
\end{align*}
First of all note that $h(s)$ is well-defined for all $s\in [c]^+$. Indeed, $h(s)$ is explicitly constructed for all $s$ of length $1$ and it trivially satisfies \ref{item:no-vertical-path}, while \ref{item:no-extension} holds as we supposed that there is no vertically $x^{1+\varepsilon}$-free coloring of $T$ from $\set{L_v}$. Now $h(s \cdot n)$ is well-defined as $\nextV$ is well-defined for partial colorings satisfying \ref{item:no-vertical-path}-\ref{item:no-extension} and a new partial coloring also satisfies \ref{item:no-vertical-path}-\ref{item:no-extension}.
It is convenient to see $s\in [c]^+$ as a seed driving a sequence of partial colorings of $T$: $h(s_{1}),h(s_{1..2}),h(s_{1..3}),\ldots,h(s)$. Now, we aim to get a concise description of this sequence. Let $(f_i,v_i)=h(s_{1..i})$ for $1\leq i \leq \norm{s}$. We define $\chosen(s)=(f_1(v_1),\ldots,f_{\norm{s}}(v_{\norm{s}}))$. In other words, $\chosen(s)$ is the sequence of colors set by instruction \ref{alg:set-a-color} of Algorithm \ref{alg-next} in consecutive calls of $\nextV$ on the way to build $h(s)$.
\begin{claim}
The function $\chosen$ is injective.
\end{claim}
\begin{proof}[Proof of the Claim]
Note that the length of $\chosen(s)$ is equal to the length of $s$. To get a contradiction let $s \neq s'$ be the shortest sequences for which $\chosen(s)=\chosen(s')$. Let $n=\norm{s}=\norm{s'}$. By minimality of $s,s'$ we have $s_{1..(n-1)}= s'_{1..(n-1)}$. The first $n-1$ values of $\chosen(s)$ depend only on $s_{1..(n-1)}$, therefore they are the same for both sequences. Moreover, the last values of $\chosen(s)$ and $\chosen(s')$ are picked from the same list. By the construction of the procedure, the list is determined by $h(s_{1..(n-1)})= h(s'_{1..(n-1)})$, or it is just $L(\myRoot(T))$ in the case when $n=1$. Since $s\neq s'$ they must differ on the last coordinate. It means that the indices of the last colors of $\chosen(s)$ and $\chosen(s')$ on the list are different, and hence the colors are different.
\end{proof}
Let $s\in [c]^+$, $(f_i,v_i)=h(s_{1..i})$ for all $1\leq i \leq \norm{s}$.
For $2\leq i \leq \norm{s}$ we denote by $l_i,m_i$ the evaluations of variables $l, m$ in the $(i-1)$-th call to the procedure $\nextV$ (for some calls, they may be undefined). Then $W(s)=(\depth(v_1),\ldots,\depth(v_{\norm{s}}))$ is a \textit{supporting walk} of $s$. The walk contains two kind of steps: positive, when $W(s)_i = W(s)_{i-1}+1$, and negative, when $W(s)_i \leq W(s)_{i-1}$. Positive steps occur when procedure $\nextV$ descends into a subtree, i.e.\ evaluates the case from line \ref{alg:positive-step-start} to \ref{alg:positive-step-end}. Negative steps correspond to the calls in which repeated part of an $x^{1+\varepsilon}$-block in the partial assignment is erased (lines \ref{alg:negative-step-start} to \ref{alg:negative-step-end}). Let us suppose that the $i$-th step was negative. Note that just from $W(s)$ we can decode the length of the erased block, i.e.\ the value of $m_i$. This is exactly $W(s)_{i-1} - W(s)_i+1$. However, to decode the corresponding value $l_i$ we need some additional information. All we know is that $m_i =\lceil l_i \cdot \varepsilon\rceil$, which leaves $\lceil 1/\varepsilon \rceil$ possible values for $l_i$. Therefore we annotate every step of $W(s)$ with a number from $\set{0, \ldots, \lceil 1/\varepsilon \rceil -1}$. The number is meaningful only for negative steps. Formally the annotation function $A:[c]^+ \to \{0, \ldots, \lceil 1/\varepsilon \rceil-1\}^+$ is defined as follows. For $1\leq i \leq \norm{s}$,
\[
A(s)_i = \begin{cases}
l_i - \lfloor m_i/\varepsilon \rfloor & \text{if $i$-th step is negative} \\
0 & \text{otherwise.}
\end{cases}
\]
Let $s\in [c]^+$ and $(f,v)=h(s)$ then $\myPath(s)$ is the sequence of colors on the path from $\myRoot(T)$ to $v$ in a partial coloring $f$. Thus, the last value in $\myPath(s)$ is $f(v)$.
Finally we define a total encoding function $\LOG:[c]^+ \to \mathbb N^+ \times \set{0,\ldots,\lceil 1/\varepsilon\rceil-1}^+ \times \mathbb N^+$ as $\LOG(s)= (W(s), A(s), \myPath(s))$.
\begin{claim}
The function $\LOG$ is injective.
\end{claim}
\begin{proof}
Let $s\in [c]^+$. First we show that $\LOG(s)$ uniquely determines sequences $\myPath(s_{1..i})$ for all $1\leq i\leq \norm{s}$. Recall that $\myPath(s)$ is written explicitly in $\LOG(s)$.
Suppose $\myPath(s_{1..i})$ is already known and now we reconstruct $\myPath(s_{1..(i-1)})$. If the $i$-th step of $W(s)$ is positive, i.e.\ $W(s)_i>W(s)_{i-1},$ then the length of the path from the root to the current vertex increased by 1 in step $i$. Thus, $\myPath(s_{1..(i-1)})$ is exactly the same as $\myPath(s_{1..i})$ but with the last color erased. If the $i$-th step of $W(s)$ is negative then $m_i=W(s)_{i-1}-W(s)_i+1$ is the size of the repeated $\varepsilon$-block and $l_i=\lfloor m_i/\varepsilon\rfloor + A(s)_i$ is the size of the base of an $x^{1+\varepsilon}$ sequence fixed in this step. Clearly, the last color in $\myPath(s_{1..i})$ is introduced in the $i$-th step and the $l_i$ colors before it form the base of the $x^{1+\varepsilon}$ sequence that was retracted. Let $(\alpha_1,\ldots,\alpha_{l_i},\beta)$ be the suffix of $\myPath(s_{1..i})$. Then $\myPath(s_{1..(i-1)})$ is just $\myPath(s_{1..i})$ with the last color, namely $\beta$, erased and the sequence $(\alpha_1,\ldots,\alpha_{m_i})$ appended.
Once we have sequences $\myPath(s_{1..i})$ for all $1\leq i \leq \norm{s}$, we may simply read their last values to reconstruct $\chosen(s)$. Now, the previous Claim assures that $\chosen(s)$ uniquely determines $s$.
\end{proof}
Let us fix $M\in \mathbb N$. We are going to give a bound for the number of distinct $\LOG(s)$ for $s$ of length $M$ based on the structure of $\LOG(s)$. For $s\in [c]^M$, the supporting walk $W(s)$ is a sequence of $M$ positive integers with $W(s)_i-W(s)_{i-1}\leq 1$. Now replace every negative step $W(s)_i, W(s)_{i+1}$ with the sequence $W(s)_i, W(s)_i+1, W(s)_i, W(s)_i -1, W(s)_i -2 , \ldots, W(s)_{i+1}$. It is easy to see that such an operation is reversible and it results in a sequence of positive integers of length at most $2M$ with all steps in $\set{-1,1}$. The number of such sequences is well-known to be $o(2^{2M})$. The number of possible annotation sequences $A(s)$ is bounded by $\lceil 1/\varepsilon \rceil^M$. Finally, $\myPath(s)$ is a sequence of colors which appear on some simple path starting from $\myRoot(T)$ in the final partial coloring $h(s)$. There are $\norm{V(T)}$ simple paths starting from $\myRoot(T)$ and each of them has at most $c^{\norm{V(T)}}$ possible color assignments.
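For concreteness, the $o(2^{2M})$ bound can be seen by a standard ballot-type estimate (a sketch): the number of $\set{-1,1}$-walks of length $n$ that stay positive is at most $\binom{n}{\lfloor n/2\rfloor},$ so the number of walks of length at most $2M$ is at most
$$\sum_{n\leq 2M}\binom{n}{\lfloor n/2\rfloor}=O\Big(\frac{4^M}{\sqrt{M}}\Big)=o\big(2^{2M}\big).$$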
By the last Claim the number of distinct $\LOG(s)$ for $s\in [c]^M$ is simply $c^M$. On the other hand, from the upper bounds obtained just now we get the following inequality
\[
\displaystyle{c^M\leq o\left(4^M\right)\cdot \left\lceil\frac{1}{\varepsilon}\right\rceil^M\cdot \left( \norm{V(T)} \cdot c^{\norm{V(T)}}\right)}.
\]
For $c = 4\lceil\frac{1}{\varepsilon}\rceil$ this gives a contradiction for sufficiently large $M$.
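Spelling out the contradiction: with $c = 4\lceil\frac{1}{\varepsilon}\rceil$ the left-hand side equals $4^M\lceil\frac{1}{\varepsilon}\rceil^M,$ so dividing both sides by it yields
$$1\leq o(1)\cdot \norm{V(T)} \cdot c^{\norm{V(T)}},$$
and the right-hand side tends to $0$ as $M\to\infty$ while $T$ is fixed.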
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:1+epsi}]
Clearly, it suffices to prove the theorem for small values of $\varepsilon$. We fix any $\varepsilon \in (0,1)$ and choose $\delta$ so that it satisfies $1+\varepsilon=\frac{1+\delta}{1-\delta},$ that is, $\delta=\frac{\varepsilon}{2+\varepsilon}$ (note that $\delta<\frac{1}{2}$). We are going to prove a slightly stronger statement.
There is a constant $c$ such that for every rooted tree $T$ with maximum degree at most $\Delta$ and lists $\set{L_v}_{v\in V(T)}$ each of size at least $c \Delta^{1+\varepsilon}$, there exists a coloring of $T$ from $\set{L_v}$ with
\begin{enumeratenum}
\item no color sequences of the form $x^2$ on simple paths in $T$,\label{item:no-x2} and
\item no color sequences of the form $x^{1+\delta}$ on vertical paths in $T$.\label{item:no-x1+epsi}
\end{enumeratenum}
Let $c$ be a sufficiently large integer ($c\geq 12 \cdot (\lceil \frac{1}{\delta} \rceil+1)$ will do). Let $T$ be a tree and $\set{L_v}_{v\in V(T)}$ be a list assignment with each $\norm{L(v)}= \hat{c} \geq c\Delta^{1+\varepsilon}$. In order to get a contradiction, suppose that there is no coloring of $T$ chosen from $\set{L_v}$ with \ref{item:no-x2} and \ref{item:no-x1+epsi} satisfied. Fix an arbitrary planar embedding of $T$.
As in the proof of Lemma \ref{lem:ver-free} we propose a procedure struggling to accomplish an impossible mission, namely to produce a coloring of $T$ from $\set{L_v}$ satisfying \ref{item:no-x2} and \ref{item:no-x1+epsi}. The procedure maintains $(f,v),$ a partial coloring of $T$ from $\set{L_v}$ with the only possible violations of \ref{item:no-x2} and \ref{item:no-x1+epsi} on paths starting at $v$. To start the procedure we just pick a color for $\myRoot(T)$ from $L(\myRoot(T))$ and all other vertices are uncolored. Every consecutive step of the procedure tries to correct and/or extend the current partial coloring. This is encapsulated by the call of the $\nextT((f,v),n)$ function (see Algorithm \ref{alg-nextT}), where $(f,v)$ is the current partial coloring and $n$ is the hint for the next decision to be made. The call of $\nextT$ checks first whether $(f,v)$ is vertically $x^{1+\delta}$-free. If not, then the colors of the vertices in the repeated $\delta$-part of the $x^{1+\delta}$ occurrence starting from $v$ are erased (as well as the colors of all descendants of the erased vertices) and the color of the top-most vertex with cleared color is set again to be the $n$-th color from its list. If $(f,v)$ is vertically $x^{1+\delta}$-free then $\nextT$ checks whether it is $x^2$-free (see lines \ref{alg2:x2-start}-\ref{alg2:x2-end} of Algorithm \ref{alg-nextT}). If there is a path $P$ with a color sequence of the form $x^2$ then it must start at $v,$ and $\nextT$ clears the colors along $P$ up to the last vertex which is a predecessor of $v,$ or up to the vertex which finishes the repeated block of the $x^2$ occurrence. Again, the color of the top-most vertex with cleared color is set to be the $n$-th color from its list. Finally, if there is no violation of \ref{item:no-x2} and \ref{item:no-x1+epsi} then $\nextT((f,v),n)$ tries to extend the partial coloring $(f,v)$ onto the subtrees rooted at the consecutive children of $v$.
We will keep the invariant that such an extension of an input partial coloring $(f,v)$ cannot be done; if $u$ is the first child of $v$ whose subtree cannot be colored in this way, then $\nextT$ sets the color of $u$ to be the $n$-th color from $L(u)$.
\begin{algorithm-hbox}[!ht]
\caption{$\nextT((f,v),n)$}\label{alg-nextT}
\uIf{\textup{$x^{1+\delta}$ occurs in $(f,v)$ starting at $v$ on the way to $\myRoot(T)$}}{
$l=$ the length of the base of $x^{1+\delta}$ sequence\label{alg2:x1+epsi-start}\;
$m =\lceil l \cdot \delta\rceil$\label{alg2:m}\;
$(v_{l+m},\ldots,v_1) =$ the path starting at $v_{l+m}=v$ going upwards in $T$\;\quad with $f(v_i)=f(v_{l+i})$ for $1\leq i\leq m$\;
$u\gets v_{l+1}$\;
erase all values of $f$ in $u\desc$\label{alg2:x1+epsi-end}
}
\uElseIf{\textup{$x^2$ occurs in $(f,v)$ starting at $v$ }}{
$(v_{2l}, \ldots, v_1) =$ the path starting at $v_{2l}=v$\label{alg2:x2-start}\label{alg2:l}\;
\quad with $f(v_i)=f(v_{l+i})$ for $1\leq i\leq l$ \;
$k=$ the least integer $i$ such that $v$ is a descendant of $v_i$\label{alg2:k}\;
\uIf{$k \leq l$} {$u \gets v_{l+1} $}
\uElse{$u \gets v_{k+1} $}
erase all values of $f$ in $u\desc$\label{alg2:x2-end}\label{alg2:x2-erase}
}
\uElse{
$u=\firstChild(v)$\label{alg2:positive-start}\;
\While{\textup{$f$ has an extension onto $u\desc$ satisfying \ref{item:no-x2} and \ref{item:no-x1+epsi}}}{extend $f$ onto $u\desc$ and keep \ref{item:no-x2} and \ref{item:no-x1+epsi} satisfied\label{alg2:extend}\;
$u=\nextChild(v,u)$\label{alg2:positive-end}\;
}
}
extend $f$ with $\set{u\rightarrow \alpha}$, where $\alpha$ is the $n$-th element of $L(u)$\label{alg2:set-a-color}\;
\Return $(f,u)$\;
\end{algorithm-hbox}
The partial function $\nextT:\PVA\times [\hat{c}]\to\PVA$ is defined by Algorithm \ref{alg-nextT}. Note that $\nextT((f,v),n)$ is well-defined for partial colorings $(f,v)$ with
\begin{enumeratei}
\item no color sequence of the form $x^{1+\delta}$ on a vertical path other than paths going upwards from $v$,\label{item2:no-vertical-path} and
\item no color sequence of the form $x^2$ on simple paths other than paths starting at $v$,\label{item2:no-repetition} and
\item no extension of $(f,v)$ onto $v\desc$ preserving \ref{item:no-x2} and \ref{item:no-x1+epsi}.\label{item2:no-extension}
\end{enumeratei}
Moreover, if $\nextT((f,v),n)$ exists then this new partial coloring also satisfies \ref{item2:no-vertical-path}-\ref{item2:no-extension}. This allows us to iterate the calls of $\nextT$.
Now, we define recursively a function $h:[\hat{c}]^+\to\PVA$ which captures the idea of our naive procedure trying to color $T$ from $\set{L_v}$. For $s\in [\hat{c}]^+$, $1\leq n\leq \hat{c}$ and $\alpha$ being the $n$-th color in $L(\myRoot(T))$ put
\begin{align*}
h(n)&=(\set{\myRoot(T) \rightarrow \alpha},\myRoot(T)),\\
h(s\cdot n)&=\nextT(h(s),n).
\end{align*}
First of all note that $h(s)$ is well-defined for all $s\in [\hat{c}]^+$. Indeed, $h(s)$ is explicitly constructed for all $s$ of length $1$ and it trivially satisfies \ref{item2:no-vertical-path} and \ref{item2:no-repetition}, while \ref{item2:no-extension} holds as we supposed that there is no coloring of $T$ from $\set{L_v}$ satisfying \ref{item:no-x2} and \ref{item:no-x1+epsi}. Now $h(s\cdot n)$ is well-defined as $\nextT$ is well-defined for partial colorings satisfying \ref{item2:no-vertical-path}-\ref{item2:no-extension} and a new partial coloring also satisfies \ref{item2:no-vertical-path}-\ref{item2:no-extension}.
Now, for a given $s\in[\hat{c}]^+$ we aim to get a concise description of $h(s_{1})$, $h(s_{1..2})$, $h(s_{1..3}),\ldots,h(s)$. Let $(f_i,v_i)=h(s_{1..i})$ for $1\leq i \leq \norm{s}$. We define $\chosen(s)=(f_1(v_1),\ldots,f_{\norm{s}}(v_{\norm{s}}))$. In other words (and exactly as in the proof of Lemma \ref{lem:ver-free}), $\chosen(s)$ is the sequence of colors set by instruction \ref{alg2:set-a-color} of Algorithm \ref{alg-nextT} in consecutive calls of $\nextT$ on the way to build $h(s)$.
\begin{claim}
The function $\chosen$ is injective.
\end{claim}
\noindent Note that if $(f',u)=\nextT((f,v),n)$ is defined then vertex $u$ is determined only by $(f,v)$, i.e.\ the first argument of $\nextT$, while $f'(u)$ is simply the $n$-th color in $L(u)$. That is why the proof of the claim above follows exactly the same lines as the proof of the corresponding claim in the proof of Lemma \ref{lem:ver-free}.
For a partial coloring $(f,v)$ let $\myPath((f,v))$ be the color sequence in $f$ on the vertices from $\myRoot(T)$ to $v$. In particular, the last color in $\myPath((f,v))$ is simply $f(v)$.
\begin{claim}
The function $\myPath$ is injective on partial colorings from the image of $h$.
\end{claim}
\begin{proof}
We are going to prove that for any two partial colorings $(f,v)$, $(f',v')$ from the image of $h$, if $\myPath((f,v))=\myPath((f',v'))$ then $(f,v)=(f',v')$. The proof goes by induction on the length of $\myPath((f,v))$.
When the length of $\myPath((f,v))$ and so $\myPath((f',v'))$ is $1$ then $v=v'=\myRoot(T)$. Thus, $\myRoot(T)$ is the only vertex colored by $f$ and $f'$, and the statement is trivial.
Suppose that $\norm{\myPath((f,v))}=\norm{\myPath((f',v'))}=n$ and the claim holds for all shorter sequences. Since $(f,v)$ and $(f',v')$ are in the image of $h$ there exist $s,s'\in [\hat{c}]^+$ such that $h(s)=(f,v)$ and $h(s')=(f',v')$. Let $(f_i,v_i)=h(s_{1\ldots i})$ for $1\leq i\leq \norm{s}$ and $(f'_i,v'_i)=h(s'_{1\ldots i})$ for $1\leq i\leq \norm{s'}$. Let $j$ be the least index such that $\depth(v_i)\geq n$ for $j< i\leq \norm{s}$. Analogously, let $j'$ be the least index such that $\depth(v'_i)\geq n$ for $j'< i\leq \norm{s'}$. Now, we need a basic property of Algorithm \ref{alg-nextT}, namely that if $(g',u')=\nextT((g,u),n')$ for some $n'$ then the coloring of the path from $\myRoot(T)$ to $u'$, excluding $u'$, is the same in $g$ and $g'$. This implies that the color sequence from $\myRoot(T)$ to $v_j$ is the same in the partial colorings $h(s_{1\ldots i})$ for all $j\leq i \leq \norm{s}$, which is just the prefix of $\myPath((f,v))$ of length $n-1$. Analogously, the color sequence from $\myRoot(T)$ to $v'_{j'}$ is the same in the partial colorings $h(s'_{1\ldots i})$ for all $j'\leq i \leq \norm{s'}$, which is just the prefix of $\myPath((f',v'))=\myPath((f,v))$ of length $n-1$. In particular this means that $\myPath((f_j,v_j))=\myPath((f'_{j'},v'_{j'}))$. By the induction hypothesis we get $(f_j,v_j)=(f'_{j'},v'_{j'})$. Now we know that the partial colorings $(f_{j+1},v_{j+1})$ and $(f'_{j'+1},v'_{j'+1})$ are generated by the calls of $\nextT$ with the same first arguments.
Note that Algorithm \ref{alg-nextT} is deterministic (in particular line \ref{alg2:extend}) in a sense that for the same input it always generates the same output.
Thus, we immediately get that $v_{j+1}=v'_{j'+1}$, say $w=v_{j+1}$, and the two partial colorings $(f_{j+1},v_{j+1})$, $(f'_{j'+1},v'_{j'+1})$ differ at most in the color of $w$. By the definition of $j$ and $j'$, in all the consecutively built partial colorings $(f_i,v_i)$ for $j< i\leq \norm{s}$ and $(f'_i,v'_i)$ for $j'<i\leq \norm{s'},$ the vertex $w$ is on the path from $\myRoot(T)$ to the current vertex, i.e.\ $v_i$ or $v'_i$, respectively. Moreover, all these partial colorings differ at most in the subtree of $w$. But the only vertex from $w\desc$ colored in the final colorings (i.e.\ $(f,v)$ and $(f',v')$) is $w$ itself. Finally, in both of these colorings the vertex $w$ receives the same color, which is at the end of $\myPath((f,v))=\myPath((f',v'))$. Thus, $(f,v)=(f',v')$.
\end{proof}
Again (as in the proof of Lemma \ref{lem:ver-free}) we aim to get a concise description of all these partial colorings and then apply a double counting argument. For $s\in [\hat{c}]^+$, let $(f_i,v_i)=h(s_{1\ldots i})$ for all $1\leq i \leq \norm{s}$. For $2\leq i \leq \norm{s}$ we denote by $l_i,k_i$ the evaluations of variables $l, k$ in the $(i-1)$-th call to the procedure $\nextT$ (for some calls, they may be undefined). Define $W(s)=(\depth(v_1),\ldots,\depth(v_{\norm{s}}))$ to be the supporting walk of $s$. We distinguish three kinds of steps (differences) in $W(s)$:
\begin{enumeratealph}
\item positive, when $W(s)_i=W(s)_{i-1}+1$, i.e.\ no obstruction occurs in the $i$-th step and Algorithm \ref{alg-nextT} evaluates lines \ref{alg2:positive-start}-\ref{alg2:positive-end},
\item $x^{1+\delta}$-negative, when $W(s)_i \leq W(s)_{i-1}$ and a color sequence of the form $x^{1+\delta}$ is fixed in the $i$-th step; this corresponds to the evaluation of lines \ref{alg2:x1+epsi-start}-\ref{alg2:x1+epsi-end},
\item $x^2$-negative, when $W(s)_i \leq W(s)_{i-1}$ and a color sequence of the form $x^2$ is fixed in the $i$-th step; this corresponds to the evaluation of lines \ref{alg2:x2-start}-\ref{alg2:x2-end}.
\end{enumeratealph}
Additionally we put $m_i=W(s)_{i-1}-W(s)_i+1$. For $x^{1+\delta}$-negative steps, $m_i$ corresponds to the value of the variable $m$ in the corresponding call to the procedure $\nextT$.
This time we need three kinds of annotations enriching the information given in $W(s)$. The first is analogous to the one in the proof of Lemma \ref{lem:ver-free} and helps to recover lengths of the base of the $x^{1+\delta}$ sequence in $x^{1+\delta}$-negative steps. Suppose that the $i$-th step was $x^{1+\delta}$-negative. Note that just from $W(s)$ we can decode the length of the repeated block, i.e.\ the value of variable $m_i$. However, to decode a corresponding value $l_i$ we need some additional information. All we know is that $m_i =\lceil l_i \cdot \delta\rceil$, which leaves $\lceil 1/\delta \rceil$ possible values for $l_i$. Therefore we annotate every negative step with a number from $\set{0, \ldots, \lceil 1/\delta \rceil -1}$ and use an extra value for all steps which are not $x^{1+\delta}$-negative. The annotation function $A:[\hat{c}]^+ \to \{-1,0, \ldots, \lceil 1/\delta \rceil-1\}^+$ is defined as follows. For $1\leq i \leq \norm{s}$,
\[
A(s)_i = \begin{cases}
l_i - \lfloor m_i/\delta \rfloor & \text{if $i$-th step is $x^{1+\delta}$-negative} \\
-1 & \text{otherwise.}
\end{cases}
\]
The second annotation function will serve to recover basic information concerning the paths whose parts were retracted in $x^2$-negative steps. Suppose that the $i$-th step is $x^2$-negative. We want to recover the values of $l_i$ and $k_i$ set in lines \ref{alg2:l} and \ref{alg2:k}, which represent half of the length of the path forming a repetition and the position of the tip of this path. Note that $m_i=W(s)_{i-1} - W(s)_i+1$ is equal to $\min(l_i,2l_i - k_i)$. Hence, we need the information about the difference between $l_i$ and $k_i$. For $1\leq i \leq \norm{s}$ let
\[
B(s)_i = \begin{cases}
l_i-k_i & \text{if $i$-th step is $x^2$-negative} \\
\text{whatever} & \text{otherwise.}
\end{cases}
\]
To get a more convenient description of the function $B$, we make a list of the important values of $B$ and encode it into a sequence over $\set{-1,0,1}$. If the $i$-th step is $x^2$-negative then we convert $B(s)_i$ into a sequence of $0$'s of length $m_i=W(s)_{i-1} - W(s)_i+1,$ and if $B(s)_i\neq 0$ we put $\signum(B(s)_i)$ in the $\norm{B(s)_i}$-th position. We need to argue here that $\norm{B(s)_i}\leq m_i$. Indeed, as the partial coloring in the $i$-th step has no $x^{1+\delta}$ occurrence, we get that $\norm{l_i-k_i}\leq \delta l_i$ and $l_i-m_i\leq \delta l_i$, which gives
\[
\norm{B(s)_i}=\norm{l_i-k_i}\leq \delta l_i\leq \frac{\delta}{1-\delta}m_i\leq m_i.
\]
The last inequality holds as $\delta<\frac{1}{2}$. We define $B^{*}(s)$ to be the concatenation of the sequences produced for all $x^2$-negative steps.
The third annotation contains the further description of the paths involved in $x^2$-negative steps. Suppose that the $i$-th step is $x^2$-negative and let $P=(v_{2l_i},\ldots,v_1)$ be the path whose color sequence forms a repetition. Already from $W(s)$ and $B^*(s)$ we can recover the size of the path and the value of $k_i$ such that $v_{k_i}$ is the tip of $P$. Now, we want to describe the way in which $P$ goes down in $T$ from $v_{k_i}$ to $v_1$. Let $n_j$ for $1<j\leq k_i$ be the position of $v_{j-1}$ on the list of children of the vertex $v_j$. Then put $C(i)=(n_2,\ldots,n_{k_i})$ and let $C^{*}(s)$ be the concatenation of the $C(i)$'s for $i$ ranging over the indices of $x^2$-negative steps.
A total encoding function is defined as $\LOG(s)= (W(s), A(s),B^{*}(s),C^{*}(s), h(s))$ for $s\in[\hat{c}]^+$. The length of $\LOG(s)$ is defined to be the length of $W(s)$; hence $\norm{\LOG(s)}= \norm{s}$.
Here comes the key property of the $\LOG$ function.
\begin{claim}
The function $\LOG$ is injective.
\end{claim}
\begin{proof}
Take any $L$ from the image of $\LOG$. Suppose that $\norm{L}= n$. Then there exists $s\in [\hat{c}]^n$ such that $\LOG(s)=L$. We are going to show that there is only one such $s$, by reconstructing the sequence $\chosen(s)$ from $L$. This will prove the claim, as we already know that $\chosen$ is injective.
Let $s'$ be the prefix of $s$ of size $n-1$. In one step of the reconstruction we decode from $L$ the last chosen color $\alpha$ and the value of $\LOG(s')$. Then, by simple iteration of this process, we reconstruct the whole $\chosen(s)$. The value of $\alpha$ may be simply read from $h(s)$, which is explicitly given in $\LOG(s)$. In order to get $\LOG(s')$ note that $W(s')$ and $A(s')$ are just the prefixes of $W(s)$ and $A(s)$ of length $\norm{s}-1$. It remains to reconstruct $h(s')$, $B^{*}(s')$ and $C^{*}(s')$. The way we proceed depends on the type of the last step in $W(s)$, which can be recognized from $W(s)$ itself and $A(s)$. Indeed, if $W(s)_{n}=W(s)_{n-1}+1$, then the last step is positive. Otherwise the value of $A(s)_n$ indicates which type of negative step we deal with.
\smallskip
\noindent\textbf{Cases 1 and 2.} The last step in $W(s)$ is positive or $x^{1+\delta}$-negative. Then $B^{*}(s')=B^{*}(s)$, $C^{*}(s')=C^{*}(s)$. The partial coloring $h(s')$ is reconstructed exactly as in the analogous cases in the proof of Lemma \ref{lem:ver-free}.
\smallskip
\noindent\textbf{Case 3.} The last step in $W(s)$ is $x^2$-negative. Let $P=(v_{2l_n},\ldots,v_1)$ be the path whose color sequence forms a repetition and let $v_{k_n}$ be the tip of $P$. The number of vertices in $P$ whose colors were erased can be read from $W(s)$: it is $m_n=W(s)_{n-1}-W(s)_{n}+1$. By the construction of Algorithm \ref{alg-nextT} (lines \ref{alg2:x2-start}-\ref{alg2:x2-end}) we have
\[
2l_n-m_n=\max(l_n,k_n).
\]
From the last $m_n$ values of the sequence $B^{*}(s)$ we can extract the value of $l_n-k_n$. If all these values are zeros, then $l_n-k_n=0$. Otherwise exactly one of these $m_n$ values is equal to $1$ or $-1$; the position of this non-zero value determines $\norm{l_n-k_n}$, while the sign of $l_n-k_n$ is the sign of this non-zero entry. Once we know $d=l_n-k_n$ we can deduce that
\begin{align*}
&l_n=m_n \text{ and } k_n=m_n-d,&&\text{if $d=l_n-k_n\geq0$},\\
&l_n=m_n-d \text{ and } k_n=m_n-2d,&&\text{if $d=l_n-k_n<0$}.
\end{align*}
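In computational terms this recovery step is elementary; the following minimal Python sketch (our own illustration, not part of the proof, with the relevant $B^{*}$ segment assumed to be given as a list over $\{-1,0,1\}$) makes it explicit:
\begin{verbatim}
def recover_l_k(b_segment):
    # b_segment: the last m_n entries of B*(s), each in {-1, 0, 1}
    m = len(b_segment)
    d = 0                                # d = l_n - k_n
    for pos, val in enumerate(b_segment, start=1):
        if val != 0:
            d = val * pos                # sign(d) times |l_n - k_n|
            break
    if d >= 0:
        return m, m - d                  # (l_n, k_n)
    return m - d, m - 2 * d
\end{verbatim}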
Let $h(s)=(f,u)$ and $h(s')=(f',u')$. As we assumed that the call of $\nextV$ generating $h(s)$ from $h(s')$ retracts a repetition on the path $P$, we get that $u'=v_{2l_n}$ and $u=v_{2l_n-m_n+1}$. The color of $u=v_{2l_n-m_n+1}$ in $f'$ was erased by line \ref{alg2:x2-erase} and replaced in line \ref{alg2:set-a-color} of Algorithm \ref{alg-nextT}. The colors of $v_{2l_n-m_n+1},\ldots,v_{2l_n}$ were erased from $f'$ and are not visible in $f$, but the colors of $v_1,\ldots,v_{2l_n-m_n}$ remain unchanged. The vertex $v_{2l_n-m_n}$ is clearly the parent of $u$. As we have already reconstructed the value of $k_n$, i.e.\ the position of the tip of $P$, we know the vertices of $P$ lying on the path from $v_{2l_n-m_n}$ to $\myRoot(T)$. In particular, we have reconstructed the vertex $v_{k_n}$ in $T$. Now, we make use of $C^{*}(s)$. The last $k_n-1$ values of $C^{*}(s)$ indicate how the path $P$ goes down in $T$ from $v_{k_n}$ to $v_1$. In this way we have reconstructed the positions of $(v_{2l_n-m_n},\ldots,v_1)$ in $T$, and we know that their colors are the same in $f$ and $f'$. Since we know the colors of at least the first half of the vertices of $P$ (as $m_n\leq l_n$), and since the color sequence of the vertices of $P$ forms a repetition in $f'$, we may deduce the colors of $v_{2l_n-m_n+1},\ldots,v_{2l_n}$.
Putting everything together, we finally reconstruct $\myPath(h(s'))$, which is the sequence of colors in $f'$ from $\myRoot(T)$ down to $u'=v_{2l_n}$. Indeed, the colors from $\myRoot(T)$ down to $v_{2l_n-m_n}$ are the same in $f'$ and $f$, while the colors from $v_{2l_n-m_n+1}$ to $v_{2l_n}$ have just been reconstructed. Now, recall that the function $\myPath$ is injective on the partial colorings from the image of $h$, which means that we can reconstruct from $\myPath(h(s'))$ the partial coloring $h(s')$ itself.
\end{proof}
We are going to bound the number of distinct $\LOG(s)$ for $s$ of length $M$.
For every $s\in [\hat{c}]^M$ we have $\LOG(s)=(W(s),A(s),B^{*}(s),C^{*}(s),h(s))$. Just like before, the number of integer walks $W(s)$ of length $M$ is $o(4^{M})$. The number of possible annotation sequences $A(s)$ is bounded by $(\lceil 1/\delta \rceil+1)^M$. The annotation $B^{*}(s)$ is a sequence over $\set{-1,0,1}$ of length $\sum_i m_i$, where $i$ ranges over the indices of $x^2$-negative steps. Clearly, $\sum_i m_i \leq M$, so the number of distinct $B^{*}(s)$ is bounded by $3^M$. The annotation $C^{*}(s)$ is the concatenation of sequences over $\set{1,\ldots,\Delta-1}$. The length of $C^{*}(s)$ is at most $\sum_i k_i$, where the sum goes over the set $I_{x^2}$ of all the indices of $x^2$-negative steps. Clearly,
\[
\sum_{i\in I_{x^2}} k_i \leq \sum_{i\in I_{x^2}} (1+\delta) l_i\leq \sum_{i=1}^M \frac{1+\delta}{1-\delta}m_i\leq \frac{1+\delta}{1-\delta}M=(1+\varepsilon)M,
\]
so the number of distinct $C^{*}(s)$ is bounded by $\Delta^{(1+\varepsilon)M}$, while the number of possible values of $h(s)$ contributes the factor $\norm{V(T)} c^{\norm{V(T)}}$. By the last Claim, the number of distinct $\LOG(s)$ for $s\in [\hat{c}]^M$ is exactly $\hat{c}^M \geq (c\Delta)^{(1+\varepsilon)M}$. On the other hand, we have just obtained an independent upper bound, and altogether we get the following inequality:
\[
\displaystyle{ (c\Delta)^{(1+\varepsilon)M} \leq o\left(4^M\right)\cdot \left(\left\lceil\frac{1}{\delta}\right\rceil+1\right)^M \cdot 3^M \cdot \Delta^{(1+\varepsilon)M} \cdot \left(\norm{V(T)} c^{\norm{V(T)}}\right)}.
\]
For $c \geq 12 \cdot (\lceil \frac{1}{\delta} \rceil+1)$ and sufficiently large $M$ we get a contradiction.
\end{proof}
\bibliographystyle{plain} |
1709.07837 | \section{Introduction}
Close packed structures (CPS) are OD (Order-Disorder) structures built by stacking hexagonal layers in the direction perpendicular to the layer \cite{durovic97}. The stacking ambiguity arising from the two possible positions of a layer with respect to the previous one leads to a theoretically infinite number of possible polytypes if no constraint is placed on the periodicity. However, by far the commonest ones are the cubic close packed, or $3C$ (Bravais lattice of type $cF$), and the hexagonal close packed, or $2H$ (Bravais lattice of type $hP$). These are MDO (Maximum Degree of Order) polytypes, meaning that their structure contains the minimal number of layer triples, quadruples, etc. (one, in both cases). CPS usually exhibit some kind of planar disorder, or stacking faults, which, viewed as disruptions in the otherwise periodic arrangement of layers, can be analyzed as non-interacting defects. This is the basis of the so-called random faulting model (RFM), which has been the most widely used model of faulting in layer structures, dating back to the early times of diffraction analysis \cite{wilson,wagner,warren}. The idea of the RFM is to consider certain types of faulting, such as intrinsic (removal of a layer from the sequence), extrinsic (addition of a layer to the sequence) and twinning (change of orientation in the sequence), assigning to each a fixed probability of occurrence, independent of the density of faulting, and neglecting any spatial interaction between the faults present in the material. This simplifying assumption means that the RFM, where suitable at all, applies at very low densities of faulting, in which case it is justified, when deriving the displacement correlation function between layers (also known as the pairwise correlation function) and subsequently the diffraction equation, to drop all terms beyond linear order in the faulting probabilities. Analytical expressions are then found: see the mathematical development in the classical work of \cite{warren} for deformation and twin faulting in $FCC$ and $HCP$ structures.
Besides the assumption of a low density of defects, the RFM also assumes that faults, when they occur, go through the whole coherently diffracting domain, avoiding the need to account for the appearance of partial dislocations. Further, faults are considered to occur along the stacking direction but not along any direction that is crystallographically equivalent in the non-faulted polytype. For example, in the case of the unfaulted $3C$ polytype, the four directions $\langle 111 \rangle$ are equivalent, as in any cubic crystal, but this is no longer the case if faulting occurs in one of the four. The reader can refer to \cite{estevez07} for a further historical account of the subject.
In recent years there have been attempts to extend the mathematical applicability of the model without modifying its fundamental assumptions. First, \cite{velterop} observed that, even for a low density of faulting, the assumption of only one faulting direction is an unrealistically simplified assessment of the diffraction behavior, which can lead to misleading conclusions. Another issue is the need to accommodate larger, more realistic faulting densities within the model. Even if the physical assumption of non-interacting faults is too strong for larger faulting probabilities, it is still interesting to pursue such an extension for several reasons. The RFM can be used as a reference model for other approaches. The fact that only one parameter is needed for each faulting type makes it very attractive in the practical analysis of materials. Additionally, the RFM can be used as a suitable starting model in computer simulations of faulting; in this case, a good starting proposal is essential for the convergence and convergence speed of numerical calculations.
In the last years, independently, Varn \textit{et al.} \cite{varn01,varn01a,varn02,varn04} and \cite{estevez08} have attempted to rewrite the RFM in a modern framework, using a Hidden Markov Model (HMM) description of the faulting dynamics. The more ambitious idea is to go beyond the faulting model and try to understand the disordering process in layer structures as a dynamical process of a system capable of performing (physical) computation and, in this sense, able to store and process information \cite{crutchfield92}. The attractiveness of the proposal is that such an approach can harvest from a powerful set of tools, developed within the study of complexity, grounded in information theory concepts such as Shannon entropy and mutual information. This framework is known as computational mechanics and has been used in a wide range of subjects \cite{crutchfield12}.
A first attempt to use the HMM description of random faulting was made by \cite{estevez08} for intrinsic and twinning faults. Their analysis allowed them to calculate the displacement correlation function and the diffraction equation for the whole range of faulting probabilities. They also derived useful expressions concerning the hexagonality of the stacking arrangement, the average size of cubic and hexagonal neighborhood blocks, and the correlation length, all as functions of the faulting probabilities. While correct, this approach is ad hoc in nature, applicable only to the problem considered by them, namely using the $3C$ layer ordering as the starting structure and working through the appropriate equations.
A more recent breakthrough came from the work of \cite{riechers15}, who proved that the calculation of the pairwise correlation function could be systematized in an elegant way, allowing its application to a wide number of situations such as those found in close packed arrangements. The idea is to find the description of the stacking arrangement as a HMM and, from there, build the transition matrix, find the stationary probabilities of the HMM states and compute the pairwise correlation function [see equation (13) in \cite{riechers15} or further on in this contribution]. In their contribution, they also discussed a number of examples that showed how the formalism can reproduce previous results, such as those reported by \cite{estevez08}, and also be applied to other cases.
The result of \cite{riechers15} opens the possibility of studying, in a systematic way, the RFM for different types of faulting and their combinations, something which proved to be at least cumbersome and, in certain instances, intractable with previous tools. This is what we intend to do in this contribution for extrinsic faults. Extrinsic faulting has been dealt with before \cite{johnson63,warren63,lele67,holloway69,holloway69a,howard77,takahashi78,howard79}.
The main goal of the manuscript is to report several analytical expressions for the disorder of extrinsic faulted CPS. These expressions relate disorder magnitudes, such as those derived within computational mechanics, to the extrinsic faulting probability, which in turn allows comparison with similar expressions already reported for twin and deformation faults \cite{estevez08}. Also, closed analytical expressions for the probability of finding different stacking sequences in the faulted structure are reported, and from there expressions are derived for the hexagonality and the average length of a perfectly coherent $FCC$ sequence within the CPS. The analytical expression of the pair correlation function as a function of the faulting probability is derived, and its decay and oscillation behavior are discussed. Finally, the expression for the interference function is reported, and the peak shift and asymmetry resulting from extrinsic faulting are discussed.
First the main concepts used and the notation are introduced.
\section{Order and disorder in close packed stacking arrangements and the pairwise correlation function}
In OD structures built from hexagonal layers, the layers can be found in only three positions perpendicular to the stacking direction, which are usually labeled $A$, $B$, and $C$ \cite{durovic97,pandey2}. Close packing is the constraint that two layers which bear the same letter, and are thus exactly overlapped in the projection along the stacking direction, cannot occur consecutively. According to this description, the ideal $FCC$ structure is described by $ABCABCAB\ldots$ sequences \cite{verma}, while the ideal $HCP$ structure has a stacking order described by $ABABABA\ldots$, and the double hexagonal close packed ($DHCP$) structure in turn is described by the stacking $ABCBABCB\ldots$.
An equivalent, less redundant coding is the H\"agg code \cite{hagg43}, where each pair of consecutive layers is given a plus (or 1) symbol if it forms a ``forward'' sequence $AB$, $BC$ or $CA$, and a minus (or 0) sign otherwise\footnote{An alternative notation by Nabarro-Frank \cite{frank51} uses $\bigtriangledown$ and $\bigtriangleup$ for $+$ (or 1) and $-$ (or 0) respectively.}. There is a one-to-one relation between both codings \cite{estevez05a}. It is also important to introduce the three-layer hexagonal environment as one where a layer $X$ has its two adjacent layers in the same position (e.g. $ABA$, $ACA$, $BAB$, $BCB$, $CAC$, $CBC$); if a layer environment is not hexagonal then it is cubic. A hexagonal environment is denoted by the letter $h$ and a cubic environment by the letter $k$; this is the basis of the Jagodzinski coding of the stacking arrangement and, as before, there is a one-to-one correspondence between the $ABC$ coding and the Jagodzinski coding \cite{estevez08a}. Hexagonality then refers to the fraction of hexagonal environments in the stacking sequence. It can also be easily checked that wherever the pair of characters $10$ or $01$ occurs in the H\"agg code, a hexagonal environment is found.
Faulting generically means a disruption of the ideal periodic ordering of a stacking arrangement and therefore constitutes a defect in the structure. In close packed structures, the simplest types of faults usually considered are (1) deformation faults, which are jogs in the otherwise perfect periodic sequence; (2) extrinsic or double-deformation faults, which are insertions of an extraneous layer into the perfect sequence; and (3) twin faults, which cause reversions in the stacking ordering. In what follows, the probability of occurrence of a deformation fault will be denoted by $\alpha$, of an extrinsic fault by $\gamma$, and of a twin fault by $\beta$.
The pair correlation function between layers, known as the pairwise correlation function $Q_{\xi}(\Delta)$, is the key to calculating the effect of the stacking arrangement on the diffraction intensity \cite{estevez01,estevez03a}. Consider a stacking direction and sense; $Q_{\xi}(\Delta)$, where $\xi\in\{c,a,s\}$, is the probability of finding two layers, $\Delta$ layers apart, with the first displaced with respect to the second as (1) $\xi=c$: $A-B$, $B-C$ or $C-A$; (2) $\xi=a$: $B-A$, $C-B$ or $A-C$; and (3) $\xi=s$: $A-A$, $B-B$ or $C-C$\footnote{The notation is that used by Varn et al. \cite{varn01,varn02}, where $c$ stands for ``cyclic'', $a$ stands for ``anti-cyclic'' and $s$ for ``same''.}. It should be noted that $Q_{s}(1)=0$ due to the close packed constraint and $Q_{s}(0)=1$ by construction.
It is possible, for any of the described codings (ABC, H\"agg and Jagodzinski), to construct a Hidden Markov Model (HMM) describing a broad range of both ordered and disordered stacking processes. A HMM description comprises a finite, or at least enumerable, set of states $\mathcal{S}$ with an associated initial probability distribution $\pi_0$ over the states; a set of transition matrices $\mathbf{T}$; and a set of symbols drawn from a finite alphabet $\mathcal{A}$. Each transition matrix $\mathcal{T}^{[\upsilon]}$ is a square matrix with number of rows equal to the number of states, where each entry $t^{[\upsilon]}_{ij}$ represents the probability of jumping from state $i$ to state $j$ while emitting the symbol $\upsilon\in\mathcal{A}$. The HMM transition matrix $\mathcal{T}$ is defined as the sum of the $\mathcal{T}^{[\upsilon]}$ over all symbols $\upsilon$ in the alphabet. Figure \ref{fig:perfseq} shows the HMMs for the $FCC$, $HCP$ and $DHCP$ stacking structures. For further details the reader is referred to previous papers on the subject \cite{varn01,varn01a,varn02,varn04,estevez08}.
When seen through a HMM description, stacking arrangements are cast as an information processing system that sequentially outputs symbols as it makes transitions between states. The system output is then an infinite string of characters $\Upsilon=\ldots \upsilon_{-2} \upsilon_{-1} \upsilon_{0} \upsilon_{1} \upsilon_{2}\ldots$, each character $\upsilon_i\in\mathcal {A}$. For the purposes of analysis it is common to divide the output string at a given point into two halves: the left half $\overleftarrow{\Upsilon}=\ldots \upsilon_{-2} \upsilon_{-1}$ is known as the past, while the right half $\overrightarrow{\Upsilon}=\upsilon_{0} \upsilon_{1} \upsilon_{2}\ldots$ is called the future\footnote{The terms past and future are taken from the analysis usually carried out in dynamical systems and are kept even when the considered variable is not time, as in the case of stacking order, where the pertinent variable is the layer position in the stacking. In any case, stacking and faulting are usually cast as a sequential process \cite{warren63}; the HMM analysis just makes this explicit. One can understand the meaning of past and future in this sense.} \cite{varn01,varn01a,varn02,varn04}. There can be many HMMs describing the same process; the minimal HMM describing the system dynamics is considered to be optimal in the sense of using the fewest resources while providing the best predictive power, and will be the one relevant in this contribution. Such a model is called an $\epsilon$-machine \cite{crutchfield92,crutchfield12}. The $\epsilon$-machine has, among others, the important property of unifilarity, which means that, from a given state, the emitted symbol determines unambiguously the transition to another state.
Let us denote, following the common use of bras and kets in physics, by $\langle \pi |$ the vector of state probabilities and by $|1\rangle$ a vector of $1$s. If the HMM description is known, then the probability of any finite sequence $\upsilon^N=\upsilon_i \upsilon_{i+1} \upsilon_{i+2}\ldots \upsilon_{i+N-1}$ is given by
\begin{equation}
P(\upsilon^N)=\langle \pi | \mathcal{T}^{[\upsilon_{i}]} \mathcal{T}^{[\upsilon_{i+1}]}\ldots\mathcal{T}^{[\upsilon_{i+N-1}]}|1\rangle.\label{eq:psn}
\end{equation}
where $\langle x | A | y\rangle$ denotes the real number resulting from the product of the vectors with the matrix.
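As an illustration only, equation (\ref{eq:psn}) can be evaluated with a few lines of Python (the function name and the dictionary holding the labeled matrices are our own choices):
\begin{verbatim}
import numpy as np

def word_probability(pi, T, word):
    # P(v^N) = <pi| T^[v_0] T^[v_1] ... T^[v_{N-1}] |1>;
    # T maps each symbol of the alphabet to its labeled matrix
    v = np.asarray(pi, dtype=float)
    for symbol in word:
        v = v @ np.asarray(T[symbol], dtype=float)
    return float(v.sum())   # contraction with the vector of ones
\end{verbatim}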
Several information theory magnitudes can be defined once the minimal HMM description of the stacking process is known. Shannon defined the information entropy $H(X)$ of an event set $X$ with discrete probability distribution $p(X)$ as \cite{arndt}
\begin{equation}
H(X)=-\sum_i p(X) \log p(X),
\end{equation}
where the sum is taken over the whole probability distribution; here and in what follows the logarithm is taken base two, which makes the units of the entropy bits.
For the $\epsilon$-machine, the statistical complexity $C_\mu$ is defined as the Shannon entropy over the HMM states,
\begin{equation}
C_\mu=H(\mathcal{S})=-\sum_i p_i \log p_i,\label{eq:sc}
\end{equation}
where $p_i$ is the stationary probability of the $i$-th state in the minimal HMM description and the sum is over all state probabilities. $C_\mu$ measures the amount of information the system stores.
The excess entropy $E$ also characterizes the information processing capabilities of the system and is used as a measure of predictability; it is defined as the mutual information between the left half and the right half of the system output,
\begin{equation}
E=H(\overleftarrow{\Upsilon})+H(\overrightarrow{\Upsilon})-H(\Upsilon).
\end{equation}
Entropy density $h_\mu$ \cite{arndt} is defined as
\begin{equation}
h_\mu=\lim_{N\rightarrow\infty}\frac{H(\Upsilon^N)}{N},
\end{equation}
when such a limit exists, where $\Upsilon^N$ denotes the substrings of $\Upsilon$ of length $N$. $h_\mu$ is used to quantify how random the process is \cite{feldman03}.
Finally, \cite{riechers15} described a procedure for computing the pairwise correlation function from the transition matrices, which can be summarized as follows:
\begin{enumerate}
\item The HMM of the stacking process in the ABC notation is given together with $\{\mathcal{A}, \mathcal{S}, \pi_0, \mathbf{T}\}$. If this description is given in the H\"agg coding then the expansion to the ABC coding must be performed \cite{riechers15}.
\item The stationary probability distribution $\pi$ over the HMM states is calculated as the normalized left eigenvector of the transition matrix $\mathcal{T}$ with eigenvalue unity:
\begin{equation}
\langle\pi|=\langle\pi|\mathcal{T},\label{eq:eigen}
\end{equation}
\item The pairwise correlation function follows from the definition and the use of equation (\ref{eq:psn}):
\begin{equation}
\displaystyle Q_{\xi}(\Delta)=\sum_{x_0\in\mathcal{A}}\langle \pi | \mathcal{T}^{[x_0]}\mathcal{T}^{\Delta-1}\mathcal{T}^{[\hat{\xi}(x_0)]}|\mathbf{1}\rangle.\label{eq:Q}
\end{equation}
where $\hat{\xi}\in\{\hat{c}, \hat{a}, \hat{s}\}$ ranges over the family of permutation functions given by
\begin{equation}
\begin{array}{lll}
\hat{c}(A)=B & \hat{c}(B)=C & \hat{c}(C)=A\\
\hat{a}(A)=C & \hat{a}(B)=A & \hat{a}(C)=B\\
\hat{s}(A)=A & \hat{s}(B)=B & \hat{s}(C)=C
\end{array}
\end{equation}
and $\mathbf{1}$ represents a vector of $1$'s (see also equations (20) and (24) in \cite{riechers15} for alternative expressions for equation (\ref{eq:Q})). A minimal computational sketch of this procedure is given right after this list.
\end{enumerate}
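As announced above, the procedure can be sketched in Python for $\Delta\geq 1$ (our own illustration; the dictionary representation of the labeled matrices and of the permutation $\hat{\xi}$ is an assumption of the sketch):
\begin{verbatim}
import numpy as np

def pairwise_Q(pi, T_abc, xi_hat, delta):
    # Q_xi(Delta) = sum over x0 in {A,B,C} of
    #   <pi| T^[x0] T^(Delta-1) T^[xi_hat(x0)] |1>
    T = sum(T_abc.values())
    T_pow = np.linalg.matrix_power(T, delta - 1)
    one = np.ones(T.shape[0])
    return float(sum(pi @ T_abc[x] @ T_pow @ T_abc[xi_hat[x]] @ one
                     for x in "ABC"))

c_hat = {"A": "B", "B": "C", "C": "A"}   # the cyclic permutation
\end{verbatim}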
\section{Extrinsic fault in the face centered cubic stacking order}
An extrinsic fault (in the case of the $FCC$ structure also known as a double deformation fault) in a $3C$ stacking is depicted in Fig. \ref{fig:ef} alongside a perfect sequence for comparison. It can be seen that in the H\"agg code, the extrinsic fault is equivalent to the flip (bitwise negation) of two consecutive characters. The probability of occurrence of such faulting will be denoted by $\gamma$. It will be assumed that $\gamma$ can take any value between 0 and 1. Building from the effect of the extrinsic fault on the H\"agg code, the HMM of the faulting process is shown in Fig. \ref{fig:fsaext}, where it is assumed that the ideal $3C$ structure goes in the $A\rightarrow B \rightarrow C$ sequence. The $p$ state represents the non-faulted condition: as long as the system stays in that state, the output symbol $\upsilon=1$ corresponds to the perfect $3C$ structure. If faulting occurs, a $0$ is emitted and the system goes to the $e$ state, where a second $0$ is printed with certainty while returning to the $p$ state\footnote{The described dynamics implicitly assumes that an inserted layer cannot follow another inserted layer. The latter case has been approached by \cite{howard77}.}. The HMM of figure \ref{fig:fsaext} represents a biased even process (see Appendix A of \cite{crutchfield13} and Example D in \cite{varn13}).
It should be observed that any sequence with an odd number of $0$'s cannot be the result of such a HMM. Such sequences will be called forbidden; moreover, forbidden sequences are called irreducible if they do not contain a proper subsequence which is itself forbidden. The number of irreducible forbidden sequences in the even process is infinite; in such a case, the process is called a sofic system \cite{feldman03}. The fact that any sequence from the HMM of the even process contains an even number of $0$'s has important consequences, as will be discussed further down.
The corresponding transition matrix will be given by
\begin{equation}
\begin{array}{ll}
\mathcal{T}^{[1]}=\left ( \begin{array}{ll}\overline{\gamma}&0\\0&0\end{array} \right ) & \mathcal{T}^{[0]}=\left ( \begin{array}{ll}0&\gamma\\1&0\end{array} \right ) \\\\
\mathcal{T}=\left ( \begin{array}{ll}\overline{\gamma}&\gamma\\1&0\end{array} \right ), &
\end{array}
\end{equation}
where $\overline{\gamma}$ stands for $1-\gamma$. The stationary probabilities over the recurrent states $p$ and $e$ can be calculated following equation (\ref{eq:eigen}), which results in
\begin{equation}
\displaystyle \langle\pi|=\left \{\frac{1}{1+\gamma},\frac{\gamma}{1+\gamma}\right \},\label{eq:pi}
\end{equation}
where the first value corresponds to the $p$ state.
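As a numerical check of equation (\ref{eq:pi}), the stationary distribution can be extracted from the transition matrix, e.g. in Python (a sketch, not part of the derivation):
\begin{verbatim}
import numpy as np

gamma = 0.3
T0 = np.array([[0.0, gamma], [1.0, 0.0]])        # emits 0
T1 = np.array([[1.0 - gamma, 0.0], [0.0, 0.0]])  # emits 1
T = T0 + T1

# normalized left eigenvector of T with eigenvalue one
vals, vecs = np.linalg.eig(T.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
pi /= pi.sum()
# expected: [1/(1+gamma), gamma/(1+gamma)] = [0.7692..., 0.2307...]
\end{verbatim}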
Hexagonality in terms of computational mechanics has been analyzed in a more general context previously \cite{varn07}. Hexagonality can be calculated from the probability of occurrence of $01$ or $10$ in the H\"agg code of the sequence. Both probabilities are equal and, from equation (\ref{eq:psn}), given by
\begin{equation}
P(01)=\langle \pi | \mathcal{T}^{[0]} \mathcal{T}^{[1]}|1\rangle=\gamma \frac{1-\gamma}{1+\gamma},
\end{equation}
from which the hexagonality is given by $2P(01)$. Hexagonality has a maximum value of $2(3-2\sqrt{2})\approx 0.343$ at $\gamma=\sqrt{2}-1\approx 0.414$ (Fig. \ref{fig:hexhe}a).
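Continuing the numerical sketch above, $P(01)$ and the location of the hexagonality maximum can be checked directly:
\begin{verbatim}
# P(01) = <pi| T0 T1 |1> = gamma*(1-gamma)/(1+gamma)
p01 = float(pi @ T0 @ T1 @ np.ones(2))

def hexagonality(g):
    return 2.0 * g * (1.0 - g) / (1.0 + g)

gs = np.linspace(0.0, 1.0, 100001)
g_star = gs[np.argmax(hexagonality(gs))]  # ~ sqrt(2)-1 ~ 0.4142
\end{verbatim}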
The statistical complexity can be derived from equation (\ref{eq:sc}) using equation (\ref{eq:pi}) and is given by
\begin{equation}
\displaystyle C_{\mu}=\frac{1}{1+\gamma}\left ( \log (1+\gamma)-\gamma \log \frac{\gamma}{1+\gamma}\right ).
\end{equation}
The logarithm is usually taken in base two, and then the units of $C_\mu$ are bits. For an $\epsilon$-machine the entropy density is given by \cite{crutchfield13}
\begin{equation}
\displaystyle h_\mu=-\sum_{k\in \mathcal{S}}P(k)\sum_{x \in \mathcal{A}} P(x|k)\log P(x|k),
\end{equation}
where $P(a|b)$ denotes the probability of $a$ conditioned on $b$. The units of the entropy density are bits/site. The derivation will not be carried out explicitly and the reader is referred to \cite{crutchfield13}; the resulting expression for the entropy density is
\begin{equation}
\displaystyle h_{\mu}=-\frac{1}{1+\gamma}\left [ \gamma\log \gamma+(1-\gamma)\log (1-\gamma)\right ].
\end{equation}
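The two closed forms above are straightforward to evaluate; a small Python sketch (the guards at the $\gamma\in\{0,1\}$ endpoints are our own addition):
\begin{verbatim}
from math import log2

def C_mu(g):
    # statistical complexity of the extrinsic-fault HMM
    if g == 0.0:
        return 0.0
    return (log2(1 + g) - g * log2(g / (1 + g))) / (1 + g)

def h_mu(g):
    # entropy density: only state p (weight 1/(1+g)) makes a
    # stochastic choice, with branch probabilities (g, 1-g)
    if g in (0.0, 1.0):
        return 0.0
    return -(g * log2(g) + (1 - g) * log2(1 - g)) / (1 + g)
\end{verbatim}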
The calculation of the excess entropy is more involved and is explained in detail in the Appendix. The result is
\begin{equation}
\displaystyle E=\frac{1}{1+\gamma}\left ( \log (1+\gamma)-\gamma \log \frac{\gamma}{1+\gamma}\right ),
\end{equation}
which is identical to the statistical complexity.
Figure \ref{fig:hexhe}b shows the behavior of the excess entropy as a function of $\gamma$. Observe that at $\gamma=1$ the excess entropy has a discontinuity, as $E$ drops to zero when the finite state automaton description undergoes a topological change to a process with only one state, always emitting the $0$ symbol. This discontinuity is not seen by the entropy density (Fig. \ref{fig:hexhe}a), which has a maximum of $h_\mu=0.6942$ bits/site at $\gamma=(3-\sqrt{5})/2\approx 0.382$ and then smoothly drops to zero as $\gamma$ approaches $1$.
The probability of a chain of 0's of length $n$ is given by
\begin{equation}
P(0^n)=\left \{ \begin{array}{ll}\gamma^l\left(1-\frac{2 \sqrt{\gamma}}{1+\gamma}\right)& n=2 l\\0 & n=2l+1\end{array}\right. .\label{eq:0}
\end{equation}
For chains of 1's
\begin{equation}
P(1^n)=\frac{(1-\gamma)^n}{1+\gamma}.\label{eq:1}
\end{equation}
From equations (\ref{eq:0}) and (\ref{eq:1}) the average lengths of blocks of 0's and 1's can be calculated:
\begin{equation}
\begin{array}{l}
\langle L_0 \rangle=\sum_{n=1}^{\infty}n P(0^n)=\frac{4 \gamma}{(1-\gamma)^2}\\
\\
\langle L_1\rangle=\sum_{n=1}^{\infty}n P(1^n)=\frac{1}{\gamma^2}\frac{1-\gamma}{1+\gamma}.
\end{array}
\end{equation}
The two average lengths coincide, $\langle L_0\rangle=\langle L_1\rangle$, at $\gamma\approx 0.3623$.
In Fig. \ref{fig:hexvse} the hexagonality is shown as a function of excess entropy and of entropy density. The higher the entropy density, the higher the hexagonality, which comes as no surprise, as hexagonal neighborhoods are the result of faulting events, which in turn imply larger disorder. It can be seen, though, that hexagonality is not a single-valued function of entropy density. On the contrary, hexagonality does appear to be a function of excess entropy. The maximum value of hexagonality is found for an excess entropy of $0.8724$ bits.
\subsection{The pairwise correlation function.}
The HMM in the $ABC$ coding describing the extrinsic fault can be constructed from the H\"agg description and is shown in Fig. \ref{fig:abcfsa}. For each state in the HMM over the H\"agg code (Fig. \ref{fig:fsaext}), three states are induced in the HMM over the $ABC$ coding, corresponding to subsequences starting with $A$, $B$ and $C$. Using the same procedure described for the H\"agg HMM, the transition matrices can be written and the stationary probabilities over the recurrent states calculated for the HMM over the $ABC$ coding:
\begin{equation}
\langle \pi_{abc}|=\frac{1}{3(1+\gamma)}\{\gamma, \gamma, \gamma,1,1,1\}.\label{eq:piabc}
\end{equation}
where the order of the states has been taken as $\{A_e, B_e, C_e, A, B, C\}$. Using equation (\ref{eq:Q}), the pairwise correlation function follows:
\begin{equation}
\begin{array}{l}
\displaystyle Q_{s}(\Delta)=\frac{1}{3}\left [ 1+\right.\\\\
\left (\frac{|p|}{4}\right )^{\Delta}\left(\left[1+\frac{\cos(3 \phi_r)|r|}{\sqrt{3}(1+\gamma)}\right]\cos(\Delta\phi_p)+\frac{\sin(3\phi_r)|r|}{\sqrt{3}(1+\gamma)}\sin(\Delta \phi_p)\right)+\\\\
\left.\left (\frac{|q|}{4}\right )^{\Delta}\left(\left[1-\frac{\cos(3 \phi_r)|r|}{\sqrt{3}(1+\gamma)}\right]\cos(\Delta\phi_q)-\frac{\sin(3\phi_r)|r|}{\sqrt{3}(1+\gamma)}\sin(\Delta \phi_q)\right)\right ]\\\\
=\frac{1}{3}\left( 1+Q^{[1]}_{s}(\Delta)+Q^{[2]}_{s}(\Delta)\right ), \label{eq:q0}
\end{array}
\end{equation}
where
\[\begin{array}{l}
r=|r|e^{i \phi_r}=\sqrt{i \sqrt{3}(6\gamma-\gamma^2-1)-(1+\gamma)^2},\\\\
x=(\gamma-1)(1-i\sqrt{3}),\\\\
p=|p|e^{i\phi_p}=x+\sqrt{2}r,\\\\
q=|q|e^{i\phi_q}=x-\sqrt{2}r.
\end{array}
\]
The obtained equation is equivalent to the result given by \cite{holloway69}, as can be seen by comparing numerical results from equation (\ref{eq:q0}) for $\Delta=0,1,2,3$ with those reported in equations (35), (36), (37) and (38) of \cite{holloway69a} (making $\alpha=0$)\footnote{When comparing with the results of \cite{holloway69a}, it must be noticed that in their notation $Q_{s}(\Delta)=P(m)$, $Q_{c}(\Delta)=Q(m)$ and $Q_{a}(\Delta)=R(m)$.}. In turn, these authors have shown that their result reduces to that of \cite{johnson63}. Holloway and Klamkin do not give a closed form of $Q_{s}(\Delta)$ for $\Delta > 3$.
There are two terms in the expression for $Q_{s}(\Delta)$, each with an oscillating and a decaying part. Figure \ref{fig:qp}a shows the behavior of both decaying terms with the faulting probability. $p$ and $q$ have a jump (discontinuity) at the same value $\gamma_0\approx 0.1716$, where the real part of $r$ has a minimum and the imaginary part jumps from a negative value to a positive one. Interestingly, the combined plot of both terms results in two smooth continuous curves. At $\gamma=0$, $p$ is zero while $q=1$, and the oscillating part of the second term in $Q_{s}$ dominates. At $\gamma=1$, both $p$ and $q$ have the same value of $1$ and the combined effect of both oscillating terms determines the pairwise correlation function. In both cases ($\gamma=0$ and $\gamma=1$), $Q_{s}(\Delta)$ reduces to
\[
\displaystyle Q_{s}(\Delta)=\frac{1}{3}\left(1+2 \cos \left[\frac{2 \pi}{3}\Delta\right] \right ),
\]
describing the correlation function for the perfect $3C$ stacking.
At $\gamma_0$ the oscillating parts of both terms in $Q_{s}(\Delta)$ become equal for all values of $\Delta$. At $\gamma=\sqrt{2}-1$, where the hexagonality reaches its maximum value, the oscillating part of $Q^{[1]}_{s}(\Delta)$ is the prevailing one at large $\Delta$ values. For small ($\gamma\approx 0$) and large ($\gamma\approx 1$) values it is the oscillating part of $Q^{[2]}_{s}(\Delta)$ that determines the underlying stacking sequence.
In any case, the lower curve in Fig. \ref{fig:qp}a determines the faster-decaying part of the pairwise correlation function, while the upper curve determines the dominant behavior at larger $\Delta$ values. Figure \ref{fig:qp}b shows the correlation lengths derived from both decaying terms. At large values of $\Delta$ the $p$ term is the dominant factor in the pairwise correlation function for values of $\gamma > \gamma_0$, while the opposite happens for values below $\gamma_0$.
A similar deduction made for $Q_{c}(\Delta)$ results in
\[\begin{array}{l}
Q_c(\Delta)=\frac{1}{3} \left(1+\left ( \frac{\left| p\right|}{4}\right )^\Delta \left [C_p \cos (\Delta \phi_p)+S_p \sin (\Delta \phi_p)\right]+\right.\\\\
\left.\left ( \frac{\left| q\right|}{4}\right )^\Delta \left[C_q \cos (\Delta\phi_q)+S_q \sin (\Delta \phi_q)\right]\right)
,\end{array}\]
with
\[
\begin{array}{l}
C_p=\frac{\sqrt{2}}{\left| r \right|}\frac{ \gamma^2-4 \gamma+1 }{1+\gamma }\cos\phi_r+2 \frac{\sqrt{6}}{\left| r \right|}\frac{\gamma}{1+\gamma}\sin \phi_r-\frac{1}{2},\\\\
S_p=\frac{\sqrt{2}}{\left| r \right|}\frac{ \gamma^2-4 \gamma+1 }{1+\gamma}\sin\phi_r-2 \frac{ \sqrt{6}}{\left| r \right|}\frac{\gamma}{1+\gamma} \cos \phi_r+\frac{\sqrt{3}}{2}\\\\
C_q=-\frac{\sqrt{2}}{\left| r \right|}\frac{ \gamma^2-4 \gamma+1 }{1+\gamma }\cos\phi_r-2 \frac{\sqrt{6}}{\left| r \right|}\frac{\gamma}{1+\gamma}\sin \phi_r-\frac{1}{2},\\\\
S_q=-\frac{\sqrt{2}}{\left| r \right|}\frac{ \gamma^2-4 \gamma+1 }{1+\gamma}\sin\phi_r+2 \frac{ \sqrt{6}}{\left| r \right|}\frac{\gamma}{1+\gamma} \cos \phi_r+\frac{\sqrt{3}}{2}.
\end{array}
\]
$Q_a(\Delta)$ follows from the normalization condition.
\section{The interference function.}
The diffraction pattern of an OD structure can be decomposed into two contributions: that of the layer and that of the stacking sequence. The reduced diffracted intensities (i.e. once the necessary corrections are applied: Lorentz, polarization, absorption, etc.) can be deconvoluted in terms of these two contributions, so that the stacking sequence leaves its fingerprint in the form of an interference function showing a periodic distribution of deconvoluted intensities.
In the case of complex sequences like those of micas, in which adjacent layers can be stacked in six different orientations, the interference function has been called the PID (Periodic Intensity Distribution; \cite{nespolo99}). For close packed structures, the situation is simpler because adjacent layers may take only two relative positions. The consequence of extrinsic faulting on the diffracted intensity is visible in the interference function, which follows from the use of the expressions for $Q_{s}$, $Q_{c}$ and $Q_{a}$ \cite{estevez01}:
\begin{equation}
\displaystyle {\cal I}({r}^{*})= 1+2 \sum_{\Delta=1}^{N_{c}-1} A_{\Delta} \cos(2 \pi \Delta l)+B_{\Delta} \sin(2 \pi \Delta l), \label{Qfinal}
\end{equation}
where
\begin{equation}
\label{fcoef}
\begin{array}{l}
\displaystyle A_{\Delta}=(1-\frac{\Delta}{N_{c}}) \left \{ Q_s(\Delta) +\left[Q_c(\Delta)+Q_a(\Delta)\right] \cos[\frac{2 \pi}{3} (h-k)]\right \}\label{fcoefa}\\\\
\displaystyle B_{\Delta}=(1-\frac{\Delta}{N_{c}}) \left[Q_c(\Delta)-Q_a(\Delta)\right] \sin[\frac{2 \pi}{3} (h-k)].
\end{array}
\end{equation}
$N_c$ is the number of layers in the stacking sequence.
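For illustration, the interference function of equation (\ref{Qfinal}) with the coefficients of equation (\ref{fcoef}) can be evaluated numerically as follows (a sketch; $Q_s$, $Q_c$ and $Q_a$ are assumed to be supplied as callables):
\begin{verbatim}
import numpy as np

def interference(l, h_minus_k, Qs, Qc, Qa, Nc):
    # I(l) = 1 + 2 sum_{Delta=1}^{Nc-1} [A cos(2 pi Delta l)
    #                                    + B sin(2 pi Delta l)]
    phase = 2.0 * np.pi * h_minus_k / 3.0
    I = 1.0
    for d in range(1, Nc):
        w = 1.0 - d / Nc
        A = w * (Qs(d) + (Qc(d) + Qa(d)) * np.cos(phase))
        B = w * (Qc(d) - Qa(d)) * np.sin(phase)
        I += 2.0 * (A * np.cos(2*np.pi*d*l) + B * np.sin(2*np.pi*d*l))
    return I
\end{verbatim}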
For $h-k$ a multiple of $3$, the coefficients reduce to $A_{\Delta}=(1-\frac{\Delta}{N_{c}})$ and $B_{\Delta}=0$, and this family of reflections is not affected by the extrinsic faulting. For $h-k=3n+1$ with $n$ an integer, the coefficients are then
\begin{equation}\label{fcoef1}
\begin{array}{l}
\displaystyle A_{\Delta}=(1-\frac{\Delta}{N_{c}}) \left \{ Q_s(\Delta) -\frac{Q_c(\Delta)+Q_a(\Delta)}{2}\right \}\\\\
\displaystyle B_{\Delta}= \frac{\sqrt{3}}{2}(1-\frac{\Delta}{N_{c}})\left[Q_c(\Delta)-Q_a(\Delta)\right] .
\end{array}
\end{equation}
The last case is $h-k=3n+2$ with $n$ an integer; the coefficients are then
\begin{equation}\label{fcoef2}
\begin{array}{l}
\displaystyle A_{\Delta}=(1-\frac{\Delta}{N_{c}}) \left \{ Q_s(\Delta) -\frac{Q_c(\Delta)+Q_a(\Delta)}{2}\right \}\\\\
\displaystyle B_{\Delta}= \frac{\sqrt{3}}{2}(1-\frac{\Delta}{N_{c}})\left[Q_a(\Delta)-Q_c(\Delta)\right] .
\end{array}
\end{equation}
An analytical expression for the interference function can be deduced from the above equations but is too long and cumbersome to be of any particular interest\footnote{In Riechers, Varn, and Crutchfield arXiv:1410.5028 (2014), a more elegant way to deduce the interference function directly from the HMM is derived and could lead to a more manageable expression as has been rightly pointed out by an anonymous referee.}.
The result has been discussed already by \cite{johnson63,warren63,holloway69a}. With increasing faulting probability $\gamma$, the peak asymmetrically broadens, lowers its intensity and shifts (Figure \ref{fig:peakshift}). For $h-k=1\;(mod \, 3)$ the peak originally at $l=3n+1$ ($n\in \mathcal{Z}$) shifts towards lower $l$ values, while the opposite occurs for $h-k=2\;(mod \, 3)$, where the peak originally at $l=3n-1$ shifts towards higher $l$ values. Additionally, at high faulting probability an additional peak appears near the so-called twin position. For $h-k=1\;(mod \, 3)$ ($h-k=2\;(mod \, 3)$) the twin position is at $l=3n-1$ ($l=3n+1$); the additional peak appears at a lower (larger) value of $l$ and gradually shifts towards the twin peak position as $\gamma$ increases, while strengthening its intensity and decreasing its broadening. The behaviors of the original peak and the twin one are not symmetrical, that is, they do not behave the same for $\gamma$ and $1-\gamma$, respectively. The non-symmetric behavior of the peaks can be explained by the non-symmetrical character of the HMM describing extrinsic faulting (Figure \ref{fig:fsaext}). A similar profile for a single crystal, in the particular case of $\gamma=1/2$, has been reported by \cite{varn13}.
Observing the interference function for $\gamma=0.333$ (Figure \ref{fig:peakshift}), it is all too common in the literature for peak deformations with a geometry such as this to be fitted with models involving more than one phase. The fact that such distortions can be the result of a single type of faulting that does not lead to any polytype should be taken as a warning against too easily introducing new structures in profile fitting.
In Figure \ref{fig:asym} the peak shift and asymmetry as functions of the faulting probability are shown. Asymmetry has been defined as the ratio between the half width at half maximum (HWHM) of the right side ($W_r$) and the HWHM of the left side ($W_l$); by construction, the asymmetry is equal to $1$ for a perfectly symmetric peak.
For powder diffraction it must be considered that the components of a family of planes like $\{111\}$ (where all members of the family are crystallographically equivalent for the unfaulted crystal and share the same interplanar distance) are no longer equivalent when faulting occurs. For example, when indexed with respect to hexagonal axes, the $\{111\}$ family includes the following planes: $(0,0,3)$, $(0,0,\bar{3})$, $(\bar{1},1,1)$, $(1,0,1)$, $(0,\bar{1},1)$, $(0,1,\bar{1})$, $(1,\bar{1},\bar{1})$, $(\bar{1},0,\bar{1})$; the first two are unaffected by extrinsic faulting, the next three are of the type $h-k=1\;(mod\, 3)$, and the last three of the type $h-k=2\;(mod\, 3)$. Thus, when simulating the faulted powder diffraction profiles, each component of a plane family must be considered individually. Figure \ref{fig:powder} shows the powder peak profile for $\{111\}$, where the components not affected by faulting have been left out. The reader can compare with the single crystal profiles of figure \ref{fig:peakshift}.
\section{Conclusions}
Stacking disorder can be viewed in a number of cases as a dynamical system capable of storing and processing information. From this point of view, it has been shown that the extrinsic fault in the H\"agg code is a sofic system, where predictability of the future is linked to long range memory of the past for faulting probabilities within $]0,1[$. A sofic system, such as the one considered here, has no description as a finite range Markov process. This inability to describe such a simple faulting process by a finite range model is interesting, as it is common in the literature to try to model faulting by this type of finite range Markov model\footnote{We thank one of the anonymous referees for her/his enlightening comment on this issue.}. In spite of this, the HMM model for extrinsic faulting is simple enough; it just belongs to a different type of processing machinery. This is precisely the underlying idea of computational mechanics, which attempts to find the least sophisticated model for a given process by climbing up a hierarchy of possible computational machines until such a description is found.
This character has several interesting consequences. First, the excess entropy equals the statistical complexity of the system. Excess entropy is linked with the structured output of the system, while statistical complexity measures the memory stored in the system. In consequence, structure is linked to memory, a result not surprising once it is acknowledged that the HMM of the process is equivalent to a biased even process. In an even process, the occurrence of consecutive 0's has to be tracked completely to determine in which state the system is. As increasing faulting probability means longer runs of 0's, the excess entropy grows monotonically with increasing $\gamma$. The excess entropy has a discontinuity at $\gamma=1$, where the topology of the HMM changes to a one-state system with a certain output and therefore zero $E$.
Entropy density, on the other hand, is a smooth function of the faulting probability over the whole probability range. It has a maximum at $\gamma \approx 0.382$, near the maximum of the hexagonality but at a slightly smaller value of $\gamma$. Extrinsic faulting, as treated here, implies that no faulting probability changes the underlying periodic sequence: no phase transformation happens. Hexagonality reaches a maximum of $2(3-2\sqrt{2})\approx0.34314$ at $\gamma =\sqrt{2}-1\approx 0.414214$, and therefore the system is always more ``cubic'' than hexagonal.
In the text, several useful analytical expressions have been derived for different entropic magnitudes, probabilities, lengths, and correlations, all as a function of the faulting probability $\gamma$. To the knowledge of the authors, such expressions have not been reported before.
The pairwise correlation function of the layers has been derived, and from it the interference function was obtained. The correlation function is composed of two terms, each with a decaying and an oscillating part. The numerical values of the obtained expression coincide with those that can be found using previous treatments. The shift and asymmetric broadening of the reflections as a result of extrinsic faulting were also discussed.
\section{Acknowledgment}
This work was partially financed by FAPEMIG under the project BPV-00047-13 and computational infrastructure support under project APQ-02256-12. EER wishes to thank the Universit\'e de Lorraine for a visiting professor grant. He also would like to acknowledge the financial support under the PVE/CAPES grant 1149-14-8 that allowed the visit to the UFU. RLS wants to thank the support of CNPq through the projects 309647/2012-6 and 304649/2013-9. We would like to thank the anonymous referees for the careful reading and the number of valuable suggestions that greatly improved the manuscript.
\section{Appendix}
\subsection{Calculation of the excess entropy}
In order to calculate the excess entropy, the mixed state representation of the system dynamics must be deduced. To understand what the mixed state representation is, the description above must be viewed as one particular presentation of a hidden Markov process \cite{upper89}. In short, any model derived by the observer of the system output that reproduces (statistically) the output is called a presentation of the process. The observer can then follow the evolution of the system by updating mixed states, defined as distributions over the states of the HMM description. The reader is referred to \cite{upper89} and \cite{crutchfield13} for a detailed explanation; the latter will be closely followed here.
The mixed state representation of the biased even process of Figure \ref{fig:fsaext} is shown in Figure \ref{fig:mixedfsm} (Compare with Fig. 2 in \cite{crutchfield13}). Each state in the set $\mathcal{S}$ now has a distribution of probabilities associated with it:
\[
\begin{array}{ll}
S: & \delta_S= \left \{\frac{1}{1+\gamma}, \frac{\gamma}{1+\gamma} \right \}\\\\
S_2: & \delta_{S_2}=\left \{1/2, 1/2 \right \}\\\\
S_3: & \delta_{S_3}=\left \{1,0 \right \}\\\\
S_4: & \delta_{S_4}=\left \{0, 1\right \}.
\end{array}
\]
as well as the transition probabilities
\[
\begin{array}{ll}
P(0|S)= & 2 \frac{\gamma}{1+\gamma}\\\\
P(1|S)= & \frac{1-\gamma}{1+\gamma}\\\\
P(0|S_2)= & \frac{1+\gamma}{2}\\\\
P(1|S_2)= & \frac{1-\gamma}{2}\\\\
P(0|S_3)= & \gamma\\\\
P(1|S_3)= & 1-\gamma\\\\
P(0|S_4)= & 1\\\\
P(1|S_4)= & 0\\\\
\end{array}
\]
Observe that the emission of a $1$ implies, from any state, a transition to the state $S_3$. States $S$ and $S_2$ are transient, while the recurrent states reproduce the original HMM. The stationary probability over the states is given by
\[
\displaystyle \langle \pi_{mix}|=\left \{0,0,\frac{1}{1+\gamma}, \frac{\gamma}{1+\gamma}\right \}.
\]
The state transition matrix will be
\[
\displaystyle W=\left (
\begin{array}{cccc}
0 & \frac{2\gamma}{1+\gamma}& \frac{1-\gamma}{1+\gamma} & 0\\
\frac{1+\gamma}{2} & 0 & \frac{1-\gamma}{2} & 0 \\
0 & 0 & 1-\gamma & \gamma \\
0 & 0 & 1 & 0
\end{array}
\right ).
\]
with eigenvalues
\[
\Lambda_{W}=\left \{ 1,-\gamma, -\sqrt{\gamma}, \sqrt{\gamma} \right \}.
\]
The projection operator $W_{\lambda}$, for each eigenvalue, is obtained using
\[
\displaystyle W_\lambda=\prod_{\xi\in\Lambda_W,\xi \neq \lambda}\frac{W-\xi I}{\lambda-\xi}
\]
where $I$ represents the identity matrix and the product excludes $\lambda$ itself, avoiding the singularity in the denominator. The results are
\[
\begin{array}{l}
\displaystyle W_1=\left (
\begin{array}{cccc}
0 & 0 & \frac{1}{1+\gamma}& \frac{\gamma}{1+\gamma}\\
0 & 0 & \frac{1}{1+\gamma}& \frac{\gamma}{1+\gamma}\\
0 & 0 & \frac{1}{1+\gamma}& \frac{\gamma}{1+\gamma}\\
0 & 0 & \frac{1}{1+\gamma}& \frac{\gamma}{1+\gamma}
\end{array}
\right )\\\\
\displaystyle W_{-\gamma}=\left (
\begin{array}{cccc}
0 & 0 & 0 & 0\\
0 & 0 & \frac{1}{2}\frac{\gamma-1}{1+\gamma}& \frac{1}{2}\frac{1-\gamma}{1+\gamma}\\
0 & 0 & \frac{\gamma}{1+\gamma}& -\frac{\gamma}{1+\gamma}\\
0 & 0 &- \frac{1}{1+\gamma}& \frac{1}{1+\gamma}
\end{array}
\right )\\\\
\displaystyle W_{-\sqrt{\gamma}}=\left (
\begin{array}{cccc}
\frac{1}{2} & -\frac{\sqrt{\gamma}}{1+\gamma} & \frac{1}{2}\frac{\sqrt{\gamma}-1}{1+\gamma}& \frac{1}{2}\frac{\sqrt{\gamma}-\gamma}{1+\gamma}\\
-\frac{1}{4}\frac{1+\gamma}{\sqrt{\gamma}} & \frac{1}{2} & \frac{1}{4}\frac{1-\sqrt{\gamma}}{\sqrt{\gamma}}& \frac{1}{4}(\sqrt{\gamma}-1)\\
0 & 0 & 0& 0\\
0 & 0 & 0&0
\end{array}
\right )\\\\
\displaystyle W_{\sqrt{\gamma}}=\left (
\begin{array}{cccc}
\frac{1}{2} & \frac{\sqrt{\gamma}}{1+\gamma} & -\frac{1}{2}\frac{\sqrt{\gamma}+1}{1+\gamma}& -\frac{1}{2}\frac{\sqrt{\gamma}+\gamma}{1+\gamma}\\
\frac{1}{4}\frac{1+\gamma}{\sqrt{\gamma}} & \frac{1}{2} & -\frac{1}{4}\frac{1+\sqrt{\gamma}}{\sqrt{\gamma}}& -\frac{1}{4}(\sqrt{\gamma}+1)\\
0 & 0 & 0& 0\\
0 & 0 & 0&0
\end{array}
\right ).
\end{array}
\]
Defining
\[
\langle \delta_\pi|=\{ \begin{array}{llll}1 & 0 & 0 & 0\end{array}\}
\]
then
\[
| H(W^{\mathcal{A}})\rangle=-\sum_{\eta \in \mathcal{S}}|\delta_{\eta}\rangle\sum_{x\in \{0,1\}}\langle \delta_\eta|W^{(x)} |\mathbf{1}\rangle \log \langle \delta_\eta|W^{(x)} |\mathbf{1}\rangle,
\]
and the excess entropy follows from
\[
E=\sum_{\lambda\in\Lambda_W, |\lambda|<1}\frac{1}{1-\lambda}\langle \delta_{\pi_{mix}}|W_\lambda|H(W^{\mathcal{A}})\rangle
\]
which is equation (8) from \cite{crutchfield13}. |
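For completeness, the whole computation can be carried out numerically. The following Python sketch (our own, assuming simple eigenvalues, i.e. generic $\gamma\in(0,1)$) builds the labeled mixed-state matrices, the vector $|H(W^{\mathcal{A}})\rangle$ and the spectral projectors, and can be checked against the closed form for $E$ given in the main text:
\begin{verbatim}
import numpy as np

def xlog2x(p):
    p = np.asarray(p, dtype=float)
    return np.where(p > 0, p * np.log2(np.where(p > 0, p, 1.0)), 0.0)

def excess_entropy(g):
    # labeled matrices of the mixed-state presentation (S, S2, S3, S4)
    W0 = np.array([[0.0,      2*g/(1+g), 0.0, 0.0],
                   [(1+g)/2,  0.0,       0.0, 0.0],
                   [0.0,      0.0,       0.0, g  ],
                   [0.0,      0.0,       1.0, 0.0]])
    W1 = np.array([[0.0, 0.0, (1-g)/(1+g), 0.0],
                   [0.0, 0.0, (1-g)/2,     0.0],
                   [0.0, 0.0, 1.0-g,       0.0],
                   [0.0, 0.0, 0.0,         0.0]])
    W = W0 + W1
    one = np.ones(4)
    H = -(xlog2x(W0 @ one) + xlog2x(W1 @ one))   # |H(W^A)>
    delta_pi = np.array([1.0, 0.0, 0.0, 0.0])    # observer starts in S
    vals, R = np.linalg.eig(W)
    L = np.linalg.inv(R)                         # rows: left eigenvectors
    E = sum((delta_pi @ np.outer(R[:, i], L[i]) @ H) / (1.0 - lam)
            for i, lam in enumerate(vals) if abs(lam) < 1.0 - 1e-9)
    return float(np.real(E))
\end{verbatim}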
1709.07856 | \section{Special Case: SDSS III CMASS Sample}
\label{sec:CMASS}
The SDSS III CMASS sample is one of the key target datasets in which a large number of massive galaxies have both photometric and spectroscopic observations. We use this sample as an example, computing the effects of relativistic beaming and Doppler shifting in detail; the analysis can be easily extended to other surveys. We first briefly describe the sample and introduce the relevant quantities necessary to understand the CMASS target selection.
\subsection{CMASS Sample}
\begin{figure}
\includegraphics[width=0.5\textwidth]{plots/CMASS_TS.png}
\caption{The density of galaxies in the CMASS sample in colour-magnitude space. The parameter $d_\perp$ is defined in Equation \ref{eq:dperp}. Red indicates high density and black low density. The solid blue line represents the CMASS target selection criteria.}
\label{fig:CMASSTS}
\end{figure}
We use data included in data release 12 \cite[DR12;][]{Reid2016,Alam2014} of the Sloan Digital Sky Survey \cite[SDSS;][]{York2000}. SDSS I, II \citep{Abazajian2009} and III \citep{Eisenstein2011} used a drift-scanning mosaic CCD camera \citep{Gunn1998} to image 14555 square degrees of the sky in five photometric bands \citep{Fukugita1996,Smith2002,Doi2010} to a limiting magnitude of $r <22.5$ using the 2.5-m Sloan Telescope \citep{Gunn2006} at the Apache Point Observatory in New Mexico. The imaging data were processed through a series of SDSS pipelines \citep{Lupton1999,Pier2003,Padmanabhan2008}. \citet{Aihara2011} reprocessed all of the SDSS imaging data in Data Release 8 (DR8). The Baryon Oscillation Spectroscopic Survey \cite[BOSS;][]{Dawson2013} was designed to obtain spectra and redshifts for 1.35 million galaxies covering 10,000 square degrees of sky. These galaxies were selected from the SDSS DR8 imaging. \citet{Blanton2003b} developed a tiling algorithm that is adaptive to the density of targets on the sky, and this was used for targeting in BOSS. BOSS used double-armed spectrographs \citep{Smee2013} to obtain the spectra, resulting in a homogeneous data set with a high redshift completeness of more than 97\% over the full survey footprint. The redshift extraction algorithm used in BOSS is described in \citet{Bolton2012}. \citet{Eisenstein2011} provides a summary and \citet{Dawson2013} a detailed description of the survey design.
We use the CMASS sample of galaxies \citep{Bolton2012} from data release 12 \citep{Alam2014}. The CMASS sample contains 765,433 Luminous Red Galaxies (LRGs) covering 9376 square degrees in the redshift range $0.44<z<0.70$, corresponding to an effective volume of 10.8 Gpc$^{3}$. We use the co-added spectrum of each galaxy in our analysis\footnote{The co-added version of the spectrum used in our analysis can be downloaded from \url{http://data.sdss3.org/sas/dr12/boss/spectro/redux/v5_7_0/spectra/lite/}. A basic description of the SDSS optical spectra can be found at \url{http://www.sdss.org/dr12/spectro/spectro_basics}}.
\subsubsection{CMASS Target Selection}
The photometrically identified objects in the SDSS imaging catalog (Data Release 8: DR8\footnote{\url{http://www.sdss3.org/dr8}}) are used as the parent sample for selecting the galaxies to be targeted for spectroscopic observations. The parent catalog covered 7606 ${\rm deg}^2$ in the Northern Galactic Cap (NGC) and 3172 ${\rm deg}^2$ in the Southern Galactic Cap (SGC). The photometric sample contains the flux observed in five photometric bands ($u,g,r,i,z$). The target selection for the CMASS sample uses two types of magnitude provided by the SDSS imaging pipeline. The imaging pipeline fits exponential and deVaucouleurs profiles in each of the five photometric bands to provide the fluxes $f_{\rm exp}^{\rm band}$ and $f_{\rm deV}^{\rm band}$ respectively. These are used to define two different kinds of flux, named ``model'' and ``cmodel'', given by the following equation:
\begin{equation}
f_{\rm mod,cmod}^{\rm band}=(1-P_{\rm mod,cmod})f_{\rm exp}^{\rm band}+P_{\rm mod,cmod}f_{\rm deV}^{\rm band}.
\end{equation}
Here $P_{\rm mod}$ is a real number between 0 and 1, and $P_{\rm cmod}$ is an integer which can be either 0 or 1. The imaging pipeline fits the observed flux to obtain the values of $P_{\rm mod,cmod}$. The main difference between the model and cmodel fluxes is that the model flux results from a linear combination of the exponential and deVaucouleurs profiles, whereas the cmodel flux uses the best-fitting profile. The model and cmodel fluxes are converted to magnitudes as follows:
\begin{equation}
{\rm mag_{band}} = 22.5 -2.5\log( f^{\rm band})-C_{\rm extinction},
\label{eq:mag}
\end{equation}
where fluxes are in nanomaggies and ${\rm mag_{band}}$ can refer to any of the five photometric bands $u,g,r,i,z$. $C_{\rm extinction}$ is the galactic extinction correction for the galaxy, based on the dust maps of \citet{Schlegel1998}.
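In code, equation (\ref{eq:mag}) is a one-liner (the function name is our own):
\begin{verbatim}
import numpy as np

def flux_to_mag(flux_nmgy, c_extinction):
    # flux in nanomaggies -> extinction-corrected magnitude
    return 22.5 - 2.5 * np.log10(flux_nmgy) - c_extinction
\end{verbatim}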
The main criteria used in CMASS target selection are as follows:
\begin{align}
17.5 &< i_{\rm cmod} <19.9 \label{eq:TS1} \\
d_\perp &> 0.55 \label{eq:TS2} \\
i_{\rm cmod} &<1.6(d_\perp-0.8)+19.86 \label{eq:TS3}
\end{align}
The CMASS targets are selected to create a constant
stellar mass sample. A galaxy evolution model incorporating the
redshift evolution of band magnitudes is used to determine the
magnitude cuts that lead to the required sample. Hence the selection
(using model magnitudes as cuts) is applied without any
K-correction. There are several other criteria used for the target
selection but they affect a very small number of objects and are not
relevant for our study. The full list of target selection rules is
provided in \citet{Reid2016}. The quantity $i_{\rm cmod}$ is the
cmodel magnitude for photometric band $i$. The quantity $d_\perp$ is
a linear combination of the colours $g-r$ and $r-i$ based on model
magnitudes, as follows:
\begin{equation}
\label{eq:dperp}
d_\perp=(r_{\rm mod}-i_{\rm mod})-\frac{1}{8}(g_{\rm mod}-r_{\rm mod}),
\end{equation}
where $g_{\rm mod},r_{\rm mod},i_{\rm mod}$ are the model magnitudes for the photometric bands $g,r$ and $i$ respectively. Figure \ref{fig:CMASSTS} shows the distribution of galaxies in the final CMASS sample (DR12) in the $i_{\rm cmod}-d_\perp$ plane. The solid line shows the target selection rules stated in equations \ref{eq:TS1}, \ref{eq:TS2} and \ref{eq:TS3}.
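A minimal sketch of these cuts (our own helper functions, mirroring equations \ref{eq:TS1}--\ref{eq:TS3} and \ref{eq:dperp}):
\begin{verbatim}
def d_perp(g_mod, r_mod, i_mod):
    return (r_mod - i_mod) - (g_mod - r_mod) / 8.0

def in_cmass(i_cmod, g_mod, r_mod, i_mod):
    dp = d_perp(g_mod, r_mod, i_mod)
    return ((17.5 < i_cmod < 19.9)
            and (dp > 0.55)
            and (i_cmod < 1.6 * (dp - 0.8) + 19.86))
\end{verbatim}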
\subsection{Spectro-Photometry}
\begin{figure}
\includegraphics[width=0.5\textwidth]{plots/hist_CMOD_SPEC.png}
\caption{The histogram of the ratio of the photometric magnitude to the magnitude measured from the spectrum for the $g,r$ and $i$ bands. The mean of the ratio is 0.93, which indicates that the magnitudes measured from spectra are larger (the flux from spectra is smaller). This is because the fibres cover only $2\,''$, which is smaller than the mean size of a galaxy in the sample. The plot also shows that the scatter in the ratio of the two magnitudes is quite small.}
\label{fig:MagR}
\end{figure}
We use the observed SDSS SEDs as templates to study the relativistic effects. We assume that the observed SEDs are a good representation of the galaxy population and treat them as if they were the emitted SEDs of the galaxies. We transform each observed spectrum according to equation \ref{eq:spectra} for a given $\beta$ and $\theta$. We then obtain the flux in the different photometric bands by integrating the spectrum against the response function of each band:
\begin{equation}
f_{\rm spec}^{\rm band}=\int d\lambda f(\lambda) R^{\rm band}(\lambda) C^{\rm band},
\end{equation}
where $f(\lambda)$ and $R^{\rm band}(\lambda)$ represent the flux and the photometric band response at wavelength $\lambda$. The parameter $C^{\rm band}$ is the calibration factor, which is obtained using the fibre fluxes of 10,000 galaxies. The calibration factors obtained for the $g,r$ and $i$ bands are $(2.3,3.3,6.1)\times 10^{-3}$ respectively. The fibre flux is another flux provided in the SDSS imaging catalog. It represents the flux obtained in the photometric survey within the aperture of the spectroscopic fibre for each band\footnote{\url{http://www.sdss.org/dr12/algorithms/magnitudes/}}. An aperture of $2\,''$ in diameter is assumed for calculating the fibre flux, which is appropriate for the BOSS spectrograph. The spectroscopic flux is converted to a magnitude using equation \ref{eq:mag}. The spectroscopic magnitude is typically larger than the corresponding photometric magnitude because the fibres cover only the central part of galaxies. We have found that the spectroscopic magnitudes can be converted to photometric magnitudes using a simple multiplication factor of 0.93. Figure \ref{fig:MagR} shows the histogram of the ratio of the model magnitude to the spectroscopic magnitude. For each of the $g,r$ and $i$ bands the ratio of magnitudes has a mean of 0.93, with a scatter of 0.03 for the $g$ band and 0.02 for both the $r$ and $i$ bands. We therefore obtain the cmodel magnitude from the spectroscopic magnitude using a multiplication factor of 0.93 ($i_{\rm cmod}^{\rm spec}=0.93 i^{\rm spec}$).
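The band flux integration and the adopted magnitude conversion can be sketched as follows (the trapezoidal quadrature is our own choice of approximation):
\begin{verbatim}
import numpy as np

def band_flux(wavelength, flux, response, calib):
    # f_spec^band = C^band * integral of f(lambda) R^band(lambda) dlambda
    return calib * np.trapz(flux * response, wavelength)

def cmodel_from_spec_mag(spec_mag):
    # empirical conversion adopted in the text: i_cmod ~ 0.93 * i_spec
    return 0.93 * spec_mag
\end{verbatim}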
\subsection{Magnitude and colour evolution}
The local gravitational interactions of galaxies cause them to have peculiar velocities. These peculiar velocities cause the observed SEDs of galaxies to differ from the true SEDs, which can change the observed magnitudes and colours of galaxies. We systematically investigate these changes for a grid of peculiar velocity magnitudes and directions from the line-of-sight. We transform the observed spectra of each galaxy using $\beta$ values between $-0.01$ and $0.01$ and $\theta$ between $0^\circ$ and $90^\circ$. We find that adding relativistic effects to spectra shifts the galaxies in the target selection plane. Not surprisingly, these shifts in colour are sensitive to the galaxy spectra themselves
and therefore depend on the stellar mass and redshift of galaxies. Figure \ref{fig:trace} shows the tracks of galaxies in the target selection colour-magnitude plane. Each line with an arrowhead shows the path followed by a galaxy in the sample as its peculiar velocity is varied. The tail of the line corresponds to the colour-magnitude of the galaxy when it is moving away from the observer with $\beta=-0.01$ (speed of 3000 ${\rm km\,s}^{-1}$) and the arrowhead corresponds to the case when it is moving towards the observer with the same speed (i.e. we are showing the difference in assigning $\beta$ from $-0.01$ (tail) to $+0.01$ (head)). The colour of the track indicates the redshift of the galaxy. Note that in the plot
we only show a very small illustrative sub-sample of the full CMASS dataset,
and we restrict ourselves to velocity directions directly aligned with the line-of-sight. The thick black solid line shows the CMASS target selection as described in equations \ref{eq:TS1}, \ref{eq:TS2} and \ref{eq:TS3}. We also show three more restrictive target selection criteria using other solid lines. The target selection criterion TS-$n$ is given by the following equations:
\begin{align}
17.5 &< i_{\rm cmod} <19.9-0.05n \\
d_\perp &> 0.55 +0.03n\\
i_{\rm cmod} &<1.6(d_\perp-0.8-0.05n)+19.86,
\label{eq:TSn}
\end{align}
where $n$ is either 0, 1, 2, or 3, representing the different target selections TS-0, TS-1, TS-2 and TS-3 respectively. TS-0 is the actual CMASS target selection. Notice that these additional target selections are defined such that the shape of the target selection region in this plane remains unchanged. The tracks of galaxies show that the magnitudes (plotted on the x-axis) decrease (galaxies become brighter) when galaxies move towards the observer and increase (galaxies become dimmer) when they move away, as expected. This leads to galaxies at higher redshifts which are close to the magnitude limit of the target selection being moved inside the sample when their velocity is towards the observer and being moved outside when their velocity is away. The colour cuts can however reverse this trend, as shown by the galaxies close to the lower limit of $d_\perp$, which are at lower redshifts. These galaxies move inside the sample when they have velocities away from the observer and move outside the sample with velocities towards the observer. It should also be noted that the effects shown in this plot are exaggerated by roughly an
order of magnitude compared to the typical case for galaxies, as we are showing results for galaxy velocities as high as 3000 ${\rm km\,s}^{-1}$.
\begin{figure}
\includegraphics[width=0.5\textwidth]{plots/icmod-dperp-sm-2-8-vv-0_June2017.png}
\caption{The effects of galaxy motion on observed galaxy colour and magnitude. The solid thick lines of different colours show the different versions of our target selection criteria. The black solid line shows the original CMASS target selection. The other solid lines show the variants of the CMASS target selection described in equation \ref{eq:TSn}. Each line with an arrowhead shows how an individual galaxy moves in this space as we assign it a different velocity. The arrowhead shows the observed colour-magnitude when the galaxy is moving towards the observer with a speed of 3000 ${\rm km\,s}^{-1}$ and the tail shows the colour-magnitude when it moves with the same speed away from the observer. The colour of the arrow itself
indicates the redshift of the galaxy. Note that at small redshift a galaxy moving towards the observer can cross the colour cut and move out of the sample, whereas at higher redshift a galaxy moving towards us will become brighter and cross the faint magnitude limit to move inside the sample. Note that we only show a very small illustrative sub-sample of the full CMASS dataset, and we restrict ourselves to velocity directions directly aligned with the line-of-sight. It should also be noted that the effects shown in this plot are exaggerated by roughly an
order of magnitude compared to the typical case for galaxies, as we are showing results for galaxy velocities as high as 3000 ${\rm km\,s}^{-1}$.}
\label{fig:trace}
\end{figure}
\subsection{Impact on Final Obtained Sample}
\label{sec:wrel}
\begin{figure*}
\includegraphics[width=0.98\textwidth]{plots/wrel_June2017.png}
\caption{The relativistic weights for a galaxy given its redshift, stellar mass and velocity vector. The different colours indicate different redshift bins and the different line styles indicate different stellar mass bins. The left panel shows $w_{\rm rel}$ as a function of the line-of-sight velocity of the galaxy in units of the speed of light. The central and right panels show the dependence of the weight on the direction of the velocity from the line-of-sight for $\beta=-0.01$ ($v=3000$ ${\rm km\,s}^{-1}$ away from the observer) and $\beta=0.01$ ($v=3000$ ${\rm km\,s}^{-1}$ towards the observer) respectively.}
\label{fig:wrel}
\end{figure*}
Because the peculiar velocities of galaxies vary spatially, the
relativistic effects will spatially modulate the observed SEDs of
galaxies, which will in turn affect the observed magnitudes and
colours. Therefore, a fraction of galaxies with colours and magnitudes
originally within our target selection will move out of the sample and
also some galaxies from outside the sample will move into it. This
affects the observed number density of galaxies in the final
sample. The modulations introduced in the observed number density will
also be correlated with several other properties of galaxies, for
example stellar mass, redshift and velocity. In order to quantify
these effects, we bin our sample in redshift and stellar mass. We
create 10 bins in redshift between 0.4 and 0.8 and 10 bins in the
logarithm of stellar mass between $10^{10.8} M_{\odot}$ and $10^{13}
M_{\odot}$. Naively one might think that the sample
does not contain information on the galaxies lost due to such effects
and that it should be impossible to correct for these lost galaxies. However,
this is not the case if we work
under the assumption that the lost (and extra) galaxies are drawn from
the same distribution, and that galaxy properties and dynamics
are smoothly varying. As long as we are not dominated by noise, where
we would overfit small fluctuations in small bins of properties, our
results should be independent of the binning used in the sample. For
each stellar mass and redshift bin, we compute the initial number of
galaxies ($N_{\rm TS}^i$) in the sample. We then transform the
galaxies as if they were moving with velocity $v=\beta c$ along a
direction at angle $\theta$ from the line-of-sight. We then reapply
the target selection boundaries to count the final number of galaxies
in the sample ($N_{\rm TS}^f$). The relativistic effects due to
peculiar motion of galaxies imply that the number of galaxies in the
observed sample will be multiplied by the fraction $N_{\rm
TS}^f/N_{\rm TS}^i$. Therefore, in a clustering analysis, if we would
like to compensate for the number density modulation due to
relativistic effects, we should weight each galaxy by $w_{\rm rel}$,
where
\begin{equation}
w_{\rm rel}=N_{\rm TS}^i/N_{\rm TS}^f.
\end{equation}
We have obtained $w_{\rm rel}$ for each bin as a function of the $\beta$ and $\theta$ of the galaxy. Figure \ref{fig:wrel} shows the weights obtained for some of the redshift and stellar mass bins as a function of $\beta$ and $\theta$. The different colours correspond to different redshift bins, while the different line styles correspond to different stellar mass bins. The left panel shows $w_{\rm rel}$ for $\beta$ between $-0.01$ and $0.01$ and $\theta=0$. The value $\beta=-0.01$ corresponds to galaxies moving with a speed of 3000 ${\rm km\,s}^{-1}$ away from the observer and $\beta=0.01$ to galaxies moving at 3000 ${\rm km\,s}^{-1}$ towards the observer. At higher redshifts the galaxies moving towards the observer (positive $\beta$) have weights smaller than 1. They appear brighter and hence are seen in larger numbers than if they were at rest with respect to the observer. The weight in this case is therefore smaller than unity, to compensate for the higher number of observed galaxies. The weights vary with stellar mass, with galaxies of higher stellar mass having larger weights.
These trends change at lower redshifts, however. Below approximately $z=0.5$, galaxies moving towards the observer have weights larger than 1. This is due to the fact that galaxies at lower redshift are less likely to be close to the magnitude limit of the sample than to the colour cut. When they move
towards the observer they cross through the colour cut and out of the sample.
This causes a reversed trend with $\beta$, different from that at higher
redshifts. This can be seen in Figure \ref{fig:trace} by following the tracks of these galaxies as $\beta$ is varied. The middle and right panels of Figure \ref{fig:wrel} show the dependence of $w_{\rm rel}$ on the direction of the galaxy velocity $\theta$ for velocities with positive and negative $\beta$.
These results show the importance of considering the full velocity vector
rather than just the line-of-sight component.
\subsection{Predicting the galaxy peculiar velocities}
\label{sec:reconvel}
\begin{figure}
\includegraphics[width=0.48\textwidth]{plots/hist_vel_Recon.png}
\caption{Estimated galaxy peculiar velocities in the SDSS-III CMASS galaxy redshift sample. The velocity vector for each galaxy was estimated using a perturbation-theory-based reconstruction algorithm. The top panel shows the distribution of the magnitudes of galaxy velocities in the sample. The bottom panel shows the distribution of velocity directions, where $\theta=0^\circ$ indicates that a galaxy is moving along the line of sight away from the observer and $\theta=180^\circ$ that the galaxy is moving directly towards the observer.}
\label{fig:vel}
\end{figure}
In order to associate relativistic weights with each individual galaxy, the galaxy velocity is required. We estimate the velocity of each galaxy in the sample using a reconstruction approach. We use a publicly available reconstruction code\footnote{github repo: \url{https://github.com/martinjameswhite/recon_code/}} which estimates the velocities of the galaxies in our sample using perturbation theory \citep{White2015code,White2015theory}. The reconstruction code first computes the number density ($\rho$) of galaxies on a grid using a cloud-in-cell assignment scheme. The number density is then converted to a density contrast ($\delta$), which is divided by a large-scale bias $b$ to yield the mass fluctuation
in each cell. We use the value $b=2.1$ measured in our analysis \citep[see companion paper:][]{Alam2016Measurement}. This mass fluctuation is then smoothed using a Gaussian kernel of width $R_f$ (the smoothing scale). Our chosen
value of $R_{f}=10\,h^{-1}$Mpc is motivated by the results of \citet{Vargas2015}. The reconstruction code then solves for the displacement field
\citep{Zeldovich1970} and provides the displaced position of each galaxy \citep{White2015code}. We use the displaced position to obtain the peculiar velocity of each galaxy using the following equation:
\begin{equation}
\vec{v}=afH (\vec{r}_{\rm obs}-\vec{r}_{\rm recon}),
\end{equation}
where $H=100\;{\rm km\,s}^{-1}/(h^{-1}{\rm Mpc})$ and $a=1/(1+z)$ is the scale factor. We approximate the linear growth rate of perturbations $f=d\ln D/d\ln a$ as $f=\Omega_m(z)^{0.55}$. Figure \ref{fig:vel} shows the distribution of galaxy velocities obtained using this procedure. The top panel shows that most of the galaxies have velocities between $200$ and $600 \, {\rm km\,s}^{-1}$. The bottom panel shows the distribution of the angles between the velocities and the line of sight. The detailed shape of this distribution depends on the geometry of the survey.
In an isotropic universe we would expect the velocity
distribution to be isotropic, which in spherical polar
coordinates would yield a $\sin\theta$ distribution of velocity
directions. Since the survey geometry is a cone, we are
sub-sampling a cone to estimate the velocity distribution. Simply
sub-sampling any part of the universe, whatever its geometric shape, should not
change the velocity distribution either. However, we
are estimating the velocities using the sub-sampled galaxy
distribution, and when solving the Poisson equation the
missing galaxies alter the estimated velocities. In the
simplest picture, since we sample a cone, the survey area increases
as we move farther from the observer, and so more galaxies appear to be moving
away from the observer than towards the observer. We believe this to
be the reason for the sloping distribution between 30 and 150 degrees in the
lower panel of Figure \ref{fig:vel}. In our convention, $\theta=0^\circ$
points away from the observer and $\theta=180^\circ$ points towards the
observer.
We note that these velocities are predicted using
perturbation theory which is not accurate on small scales where
non-linear clustering occurs. On scales below our smoothing scale, a
number of galaxies will be moving significantly faster than the
predicted velocity. This will be particularly true in virialised
objects such as galaxy clusters. Our estimate of the strength of
relativistic effects will therefore tend to be an underestimate. We
also note that the velocities are predicted using the already-modulated
density field, which will introduce a second-order relativistic
correction; we expect this to be much smaller and leave it for future
studies.
\begin{figure}
\includegraphics[width=0.48\textwidth]{plots/wrel-CMASS_June2017.png}
\caption{The distribution of the relativistic weights
$w_{\rm rel}$ for the CMASS galaxy
redshift sample. The x-axis is $w_{\rm rel}$ and the y-axis displays the binned number of galaxies on a logarithmic scale. The galaxies with $w_{\rm rel}<1$ have a higher probability of being in the sample. We estimate that 0.16\% more such galaxies have been added to the sample because of their peculiar velocities. Galaxies with weights $w_{\rm rel}>1$ have a lower probability of being in the sample. From these we calculate that 0.11\% of the sample which would have been within the colour-magnitude cuts is excluded because of the effect of peculiar velocities. }
\label{fig:wrel-CMASS}
\end{figure}
\subsection{Impact on Clustering}
\begin{figure}
\includegraphics[width=0.48\textwidth]{plots/xi02-wrel_June2017.png}
\caption{The two point galaxy auto-correlation function with and
without the effect of relativistic weights. The top panel shows the monopole and the bottom panel shows the quadrupole moment of the correlation function. The blue points represent the measurement without the relativistic weights and the magenta points are with the relativistic weight correction.
}
\label{fig:xi02}
\end{figure}
We now examine how relativistic sample selection effects alter the results of
standard clustering analyses of the CMASS galaxy redshift sample.
We use the observational data for the CMASS sample to compute the weights $w_{\rm rel}$ which compensate each galaxy for the effect of Doppler shifting and
beaming (see section \ref{sec:wrel}). These weights are a function of the redshift, stellar mass and velocity vector of the galaxy. The relativistic
correction therefore involves applying the weights before computing the two-point clustering of the galaxy sample. The galaxy catalog contains the redshift and stellar mass of each galaxy. We estimate the velocity vector of each galaxy using the perturbation theory approach described in section \ref{sec:reconvel}. Figure \ref{fig:wrel-CMASS} shows the distribution of $w_{\rm rel}$ in the CMASS sample. The distribution of weights is not symmetric because the luminosity function is non-uniform, so more galaxies scatter into the sample than scatter out of it. We estimate that around 0.10\% ($\sim 585$ galaxies) of the CMASS sample should not have been targeted and around 0.09\% ($\sim 523$ galaxies) should have been in the sample, but were not observed.
We have computed the two-point clustering of CMASS with and without the
relativistic weights. We use the Landy-Szalay estimator \citep{LandySzalay93}, and the results are shown in Figure \ref{fig:xi02}. The top panel shows the monopole of the correlation function and the bottom panel the quadrupole moment. The error bars on the clustering were computed by dividing the entire sample into 61 jackknife regions; see \citet{Alam2016Measurement} for more details. We find that the effects of these weights are much smaller than the statistical errors on the clustering measurement.
We therefore do not expect that any of the standard large scale structure analyses (such as BAO measurements or redshift space distortions) will show significant effects in current surveys. We should bear in mind, though, that as samples get larger and probe fainter magnitudes these effects might become more important for future surveys.
\section{Conclusion}
We have used the SDSS III BOSS CMASS galaxy sample to examine the impact of relativistic effects on observed galaxy SEDs. We have discussed how the effects
on SEDs will translate to observed fluxes and hence will impact the target
selection of galaxy redshift surveys. We have found that galaxies can move both in and out of the sample depending on their peculiar motion. We have investigated these effects for the CMASS
target selection as a function of redshift, stellar mass, magnitude and direction of galaxy velocity. In order to
estimate the effect on clustering statistics, we have also
used perturbation theory to predict the galaxy velocities from
the galaxy density field. These velocities provide the information we
need to gauge the impact of relativistic effects on individual galaxies.
We have computed weights that can be used to cancel out the relativistic
effects on target selection.
We studied the galaxy two-point correlation function with and without these weights, finding an impact on the clustering signal which is much smaller than the current statistical errors. This should not therefore affect current large scale structure analyses such as
baryon acoustic oscillation measurement or estimates of
the growth rate from redshift space distortions. We expect these effects to be more significant when one studies galaxy clustering weighted by a property affected by relativistic effects, such as luminosity or photometric magnitude. We also expect these effects to be more significant in deeper surveys, and hence future surveys should be analysed with such effects in mind.
One of the main motivations to study these effects is to understand how relativistic beaming and Doppler shift modulate the density field and change galaxy
clustering.
If clustering statistics are chosen carefully and galaxy samples are large
enough, then these effects can in principle be detected.
\citet{Kaiser2013} has shown that these effects can contribute to
the asymmetry in galaxy clustering around clusters which is used to
infer the gravitational redshift profile \citep[e.g., ][]{Cappi1995, Kim2004, Wojtak2011, zhao2013, Sadeh2015}. Relativistic
effects on large-scale clustering
have also been computed using
perturbation theory in full General Relativity
\citep[e.g., ][]{McDonald2009, Yoo2012, Bonvin2014b}.
The results in our paper have motivated the form of the beaming effect included in a companion paper \citep{Zhu2016Nbody}. We have applied them to
N-body simulations in order to estimate the line-of-sight asymmetry in the
non-linear scale cross-correlation function of two galaxy populations with
different halo masses.
The models are also used in our other companion paper
\citet{Alam2016Measurement}, which provides the first measurement of
line-of-sight asymmetry in the CMASS sample.
\section*{Acknowledgments}
This work was supported by NSF grant AST1412966. SA is also supported by the European Research Council through the COSFORM Research Grant (\#670193). SA and SH were supported by NASA grant 12-EUCLID11-0004 during part of this study. We would like to thank Ayesha Fatima for going through the early draft and helping us make the text much clearer.
SDSS-III is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration including the University of Arizona, the Brazilian Participation Group, Brookhaven National Laboratory, Carnegie Mellon University, University of Florida, the French Participation Group, the German Participation Group, Harvard University, the Instituto de Astrofisica de Canarias, the Michigan State/Notre Dame/JINA Participation Group, Johns Hopkins University, Lawrence Berkeley National Laboratory, Max Planck Institute for Astrophysics, Max Planck Institute for Extraterrestrial Physics, New Mexico State University, New York University, Ohio State University, Pennsylvania State University, University of Portsmouth, Princeton University, the Spanish Participation Group, University of Tokyo, University of Utah, Vanderbilt University, University of Virginia, University of Washington, and Yale University.
\section{Introduction}
\label{sec:introduction}
General Relativity \cite[GR;][]{Einstein1916} combined with the standard
cosmological model ($\Lambda$CDM) provides the most successful theory of our
universe with the minimum of external assumptions. The $\Lambda$CDM model paints a simple picture of structure formation arising from density fluctuations growing under
gravity \citep{Comer1994}. For most of the Universe's history, these
perturbations obey linear perturbation theory
\citep{Mukhanov1992,Liddle1993, Durrer1994, Ma1994, Bruni1994, Kopeikin2001,Bernardeau2002, Lagos2016}. The density field predicted by these theories has very specific statistical properties with multiple unique features \citep{1970ApJ...162..815P,Eis2005, Bassett2010,Coil2013}. We can measure most of the physical quantities of the universe by comparing the one-, two-, three- and higher-point statistics of the predicted matter density field with observations. Galaxies provide us with a window on the underlying matter density field of the universe. In the limit of linear perturbations, galaxies can be assumed to form at the high-density peaks of the underlying matter density field and should have the same clustering properties up to a multiplicative constant (the galaxy bias) \citep{Bardeen1986,Cole1989}. Therefore, creating three-dimensional maps of galaxies and studying their clustering properties provides one of the most precise ways to measure the physical properties of our universe. In this paper, we address one of the complications of making these
maps from galaxy redshift surveys which is usually ignored: the effect of
peculiar velocities on galaxy photometry and thus the target selection.
Carrying out large galaxy surveys has been a challenging task, which was
made easier by the development of CCD cameras \citep{1998ASSL..228.....B}. Many astronomy projects were involved in the development and adoption of CCD technology for telescopes \citep{Arnaud1994, Abe1997, Bauer1998, Boulade1998, Fukugita1996, Gunn1998}. These have led to various photometric surveys covering increasingly large parts of sky with improved depth and resolution \cite[][DES\footnote{\url{http://www.darkenergysurvey.org/survey/}}]{York2000, Gladders2005, Kaiser2010,Takada2010, Gilbank2011}. Such surveys provide an excellent map of the angular distribution of galaxies, but precise measurements of the cosmological line-of-sight distance, and hence creation of three dimensional maps, requires redshifts ($z$). The redshift quantifies the wavelength shift
of features in galaxy spectra and hence requires observing galaxy's spectral energy distributions (SED). The measurement of galaxy SED requires targeting each galaxy individually and is a very expensive process. An early large galaxy redshift surveys was the CfA redshift survey \citep{CfaSurvey1989} which observed 22000 galaxies one at a time. Galaxy surveys targeting much large numbers of galaxies for SED measurement became possible with the advent of optical fibres combined with the ability to observe hundreds of SEDs in a single exposure. The huge increase in the number of spectra that we could observe started the era of large galaxy redshift surveys e.g: \citet[ LCRS]{1996ApJ...470..172S}, \citet[2dF]{Colless2003}, \citet[6dF]{Jones2009}, \citet[SDSS-III]{Eisenstein2011}, \citet[WiggleZ]{WiggleZ}, \citet[DEEP2]{Deep2013}, \citet[VIPERS]{Garilli2014}, \citet[GAMA]{gama2015}.
To make this process efficient, it is important to have prior knowledge of the locations of possible targets. Therefore, galaxy redshift surveys generally require samples of objects observed photometrically to serve as a parent sample. Various algorithms and knowledge of galaxy evolution models are employed to create sub-samples of such parent samples to be targeted for spectra \citep[for example][]{Reid2016}. Generally, these selection algorithms use various magnitude and colour cuts to define the subsamples. We know that the observed magnitudes and colours of galaxies are affected by their peculiar motion \citep{Teerikorpi1997}. This can influence the final spectroscopic galaxy target sample obtained after following the target selection rules \citep{Kaiser2013}. Such effects will act to modulate the observed galaxy density in the observed sample, in a way which will be correlated with galaxy properties including redshift, mass and velocity. This could in principle introduce new features into the measured clustering of galaxies and also bias the physical properties inferred from such clustering observations.
In this paper, we examine the special relativistic effects that galaxy
peculiar velocities have on observed SEDs and the photometric
quantities derived from them. We then discuss the impact of these
effects on an observed sample of galaxies. We use the Sloan Digital
Sky Survey III (SDSS-III) Baryon Oscillation Spectroscopic Survey (BOSS)
CMASS sample from Data Release 12 (DR12) as an example to show how
relativistic effects will impact target selection which uses cuts in
the magnitude and colour plane. We then discuss how these introduce
density modulation in the observed sample. We define a weighting
scheme to compensate for such modulation and look at its effect on the
clustering signal. We conclude with a discussion about the impact of
such effects on the large scale structure analyses. We note that we
restrict ourselves here to the effect of peculiar velocities on
spectroscopic target selection. This is distinct from the effect of
velocities on the properties of galaxies inferred from the
spectroscopic sample \cite[e.g.,][]{2015MNRAS.450..883K,
2014MNRAS.443.1900B}. We would like to stress here
that the main focus of and motivation for the paper is the derivation of the
relativistic weights. The impact on the clustering signal is just one of
the areas which can be assessed using these weights. We are
most interested however in the weights themselves, which can be
used to model the impact of
relativistic effects (specifically relativistic beaming)
on galaxy clustering. We investigate this aspect in our companion paper
\citet{Zhu2016Nbody}.
\section{Effects of peculiar velocities on galaxy spectra}
\label{sec:theory}
We study the relativistic effects of galaxy motion on galaxy spectra and how they affect observed galaxy flux and colour. This will help us estimate the impact of such observational effects on our final observed samples. We consider two kinds of effects. The first is the redshift or blueshift applied to the spectrum due to relative motion between the observer and galaxy. The second is the change in flux coming from relativistic boost and beaming. Note that we do not consider the impact of magnification caused by gravitational lensing \citep{2012ApJ...744L..22S, 2010MNRAS.405.1025M}.
\subsection{Relativistic Doppler effect}
The relativistic Doppler effect shifts the observed wavelength of a photon
with respect to the emitted wavelength in a manner which depends on the line-of-sight velocity of the source. The observed and emitted wavelengths for a galaxy moving along the line-of-sight are related by the following equation, where $\beta_{los}=v_{los}/c$ is the ratio of the line-of-sight velocity ($v_{los}$) to the speed of light ($c$):
\begin{equation}
\lambda_o = \lambda_e \sqrt{\frac{1-\beta_{los}}{1+\beta_{los}}}.
\label{eq:lamshift}
\end{equation}
Here $\lambda_o$ and $\lambda_e$ are the observed and emitted wavelengths respectively. The galaxy's velocity along the line of sight consists of two components. The first is the Hubble velocity due to the expansion of the universe (denoted by $v_e$), while the second is due to local dynamics: the peculiar velocity, denoted by $v_p$. The total line-of-sight velocity of a galaxy, $v_{los}$, is given by the relativistic addition of the two components, under the assumption of negligible matter density, so that
\begin{equation}
v_{los}= \frac{v_e + v_p}{1+ \frac{v_e v_p}{c^2}}.
\label{eq:addvel}
\end{equation}
The expansion of the Universe acts to redshift the
galaxy spectrum, and peculiar velocities lead to
additional shifts.
This implies that photometric bands see
different parts of the spectrum for galaxies with different
redshifts. Accounting for this shift leads to the well known
K-correction \citep[see for example the case of massive
galaxies;][]{Hogg2002, Blanton2003a}.
In order to apply a K-correction to galaxy magnitudes in different bands,
it is necessary to use an estimate of the galaxy redshift. In the present
paper, we concern ourselves with target selection for galaxy spectroscopic
redshift
surveys, and we assume that this target selection is carried out
using galaxy magnitudes before a redshift is known, and hence without
K-corrections. Photometric redshifts could instead be used to
compute K-corrections first, but we consider surveys such
as BOSS/CMASS \citep{Reid2016} and the SDSS main galaxy sample \citep{Strauss2002} where this is not done.
First, we note that the effect of the
shift in wavelength due to the different components of the galaxy
velocity can be separated as follows:
\begin{equation}
\left(\frac{\lambda_o}{\lambda_e}\right)^2 =\left( \frac{1-\beta_{los}^e}{1+\beta_{los}^e} \right) \left(\frac{1-\beta_{los}^p}{1+\beta_{los}^p} \right)
\label{eq:vel5}
\end{equation}
Equation \ref{eq:vel5} shows that the
Doppler shifts in wavelength due to the different velocity components are
separable, and hence justifies our treatment of separating the peculiar
velocity from the Hubble velocity due to the expansion of the Universe.
We note that additional terms such as that due to
the gravitational redshift/Sachs-Wolfe effect are also relevant, and
are treated in our companion paper \citep{Zhu2016Nbody}. To linear order
in perturbation theory, the combined effects are described in detail
by e.g., \cite{Yoo2014,Bonvin2014a}.
We also note that if galaxy band magnitudes were
K-corrected using the
observed galaxy redshift this would take into account the
effect of peculiar velocities as well as the Hubble expansion and other
components. As stated above, such K-corrections are not
relevant for the target selection considered here.
It is important to define the sign convention for velocity to avoid any
confusion. From now on, positive velocity and $\beta$
indicate that the line-of-sight component of the galaxy's peculiar velocity
points towards the observer, and negative velocity that it
points away from the observer.
In the situation when a galaxy is moving with velocity $c\beta$ at an angle $\theta$ from the
line-of-sight then the Doppler shift will have an additional term due to
the transverse velocity.
The observed and emitted wavelengths for a galaxy moving in this way are related by the following equation, where $\gamma=1/\sqrt{1-\beta^2}$:
\begin{equation}
\lambda_o = \gamma (1-\beta \cos(\theta))\lambda_e
\label{eq:lamshift-all}
\end{equation}
\subsection{Relativistic Beaming effect}
Relativistic beaming modifies the apparent brightness of a galaxy due to its peculiar motion. The peculiar motion of a galaxy, through the Doppler shift, modifies the energy of the emitted photons and the rate at which photons are received. The direction in which photons are emitted is also different in the observed frame compared to the galaxy's rest frame, leading to an anisotropic pattern of emission in the observer's frame. Taken together, these effects are known as relativistic beaming. The effect on the spectral brightness can be derived using special relativity. The spectral brightness ($I_\nu$) of a galaxy is defined to be the energy observed per unit time, per unit area of the detector, per unit frequency and per unit solid angle\footnote{More discussion in \citet{Hogg1997}. Section 7.4 of \url{http://cosmo.nyu.edu/hogg/sr/sr.pdf} is most relevant}:
\begin{equation}
I_\nu = \frac{\Gamma E}{\sigma \Omega},
\label{eq:brightness}
\end{equation}
where $\Gamma$ is the number of photons received per unit time, $E$ is the energy of the photons, $\Omega$ is the solid angle subtended by the observed galaxy and $\sigma$ is the area of the detector. Each of the quantities appearing in equation \ref{eq:brightness} is modified by the peculiar motion of the galaxy. The spectral brightness in the observed (telescope) frame ($I_{\nu_o}^o$) and in the emitted (galaxy rest) frame ($I_{\nu_e}^e$) are related by the following equation:
\begin{equation}
\frac{I_{\nu_o}^o}{I_{\nu_e}^e} = \left(\frac{\nu_o}{\nu_e} \right)^3 =\left[\gamma (1-\beta \cos(\theta)) \right]^{-3}.
\label{eq:Ioenu}
\end{equation}
Here the Lorentz factor is $\gamma=\frac{1}{\sqrt{1 - \beta^2}}$ and $\theta$ is the angle the velocity vector makes with the line-of-sight direction. The above expression is derived using the fact that the phase space volume is invariant under Lorentz transformations and is proportional to the number of photons in a quantum state. This makes the quantity $\frac{I_{\nu}}{\nu^3}$ Lorentz invariant and leads to equation\footnote{A detailed derivation of these equations can be found in \citet{Goodman2013}. Chapter 1 of \url{http://www.astro.princeton.edu/~jeremy/heap.pdf} is most relevant} \ref{eq:Ioenu}.
This equation is in terms of flux per unit frequency whereas our measurements will be in flux per unit wavelength. The spectral brightness per unit frequency ($I_\nu$) can be converted to the spectral brightness per unit wavelength ($I_\lambda$) using:
\begin{equation}
I_{\lambda}=\frac{dF}{d \lambda} = \frac{dF}{d \nu} \left|\frac{d \nu}{d \lambda}\right|=\frac{c\, I_{\nu}}{\lambda^2},
\label{eq:lnu}
\end{equation}
where we have used $\nu \lambda=c$.
Finally, the observed and emitted spectral brightness per unit wavelength can be obtained by combining equations \ref{eq:lamshift-all}, \ref{eq:Ioenu} and \ref{eq:lnu}:
\begin{equation}
\frac{I_{\lambda_o}^o}{I_{\lambda_e}^e} = \left[\gamma (1-\beta \cos(\theta)) \right]^{-5}.
\label{eq:Ioel}
\end{equation}
It is important to note that relativistic beaming depends on both the magnitude and the direction of the source velocity, not just on its line-of-sight component.
\subsection{Effects of velocity on the observed spectra}
\begin{figure}
\includegraphics[width=0.5\textwidth]{plots/OneSpectraEffect_June2017.png}
\caption{The relativistic effects on the spectrum and observed colours of a single galaxy. The top panel shows the flux of the galaxy SED on the y-axis, with the x-axis showing wavelength in \r{A} and the colour scale showing velocity.
Two effects are illustrated: the first is the wavelength shift and the second is the rescaling of flux at fixed wavelength as the source galaxy moves towards or away from the observer. The middle and bottom panels show the percentage change in the $g-r$ and $r-i$ colours as a function of the magnitude and direction of the galaxy velocity respectively. The change in the colours is strongest when the galaxy velocity is aligned with the line-of-sight ({\it i.e.} $\theta=0$), and vanishes when the galaxy velocity becomes perpendicular to the line-of-sight ({\it i.e.} $\theta=90$). }
\label{fig:OneSpectra}
\end{figure}
The spectra observed for a galaxy redshift survey experience both the effects discussed in the previous two subsections: the shift in wavelength due to Doppler shift and the change in flux due to relativistic beaming.
To compute these effects on the broad band magnitudes used for
target selection we can make use of
some template galaxy spectra and redshift them.
The spectra observed
from the BOSS/CMASS survey can fulfill this purpose. We therefore
now describe how the BOSS/CMASS fibre spectra are affected by peculiar
velocities.
The following equation describes how the observed flux per unit wavelength ($f_{\lambda}^o$)
is related to the emitted flux per unit wavelength ($f_{\lambda}^e$) at wavelength ($\lambda_e$),
as a function of the observed wavelength ($\lambda_o$):
\begin{equation}
f_{\lambda}^o (\lambda_o,\beta,\theta) = f_{\lambda}^e (\lambda_e) \left[\gamma (1-\beta \cos(\theta)) \right]^{-5}
\label{eq:spectra}
\end{equation}
Here the galaxy is moving with peculiar velocity $v=\beta c$ along a
direction at angle $\theta$ from the line-of-sight. The observed ($\lambda_o$) and
emitted ($\lambda_e$) wavelengths are related by equation \ref{eq:lamshift-all}.
While deriving equation \ref{eq:spectra}, we have assumed isotropic emission of light
from galaxies. In realistic galaxies, the different components can have non-isotropic emission patterns.
In such cases the shape of the galaxy can be aligned with the tidal forces acting on it (causing so-called intrinsic alignments) and hence may show a correlation with the peculiar velocity. Modelling the anisotropic emission from galaxies and its correlation with peculiar velocity is beyond the scope of this paper but could be studied in future work.
Figure \ref{fig:OneSpectra} shows the effect of relativistic beaming and the relativistic Doppler shift on an observed galaxy spectrum and colour. The top panel focuses on the galaxy SED. The x-axis shows the wavelength in \r{A} and the y-axis shows the observed flux. The colour scale represents the velocity of the galaxy in units of the speed of light. The spectrum corresponding to $\beta=0$ represents the emitted galaxy spectrum. We can clearly see the two effects discussed in the previous two sections. The relativistic Doppler shift causes the atomic lines to shift in wavelength. Relativistic beaming increases the observed flux for positive $\beta$ (moving towards the observer) and decreases it for negative $\beta$ (moving away from the observer). The middle and bottom panels show the percentage change in the $g-r$ and $r-i$ colours as a function of the velocity magnitude (varying along the y-axis) and the velocity direction with respect to the line-of-sight (x-axis). The percentage change in the $g-r$ colour is at the level of 0.2\% when the galaxy has a peculiar velocity of 3000 ${\rm km\,s}^{-1}$.
For realistic velocities of around 400 ${\rm km\,s}^{-1}$ (see Section \ref{sec:reconvel}) the change is around 0.05\%. For the $r-i$ colour the percentage change is significantly higher,
at the level of 3\% for 3000 ${\rm km\,s}^{-1}$ and $\sim 0.5\%$ for 400 ${\rm km\,s}^{-1}$.
This difference between colour bands illustrates
that the strength of the relativistic selection effects
will depend on galaxy spectrum and hence galaxy type in a relatively complex
way.
\section{Effects of velocities on Selected Catalog}
\label{sec:TS}
Most large
galaxy redshift surveys feature a two-step process of photometric
target selection and spectroscopic follow-up. Grism spectroscopy and other
techniques for one-step generation of galaxy redshift samples
have been used in the past \cite[e.g. ][]{1996MNRAS.279.1057S, 2015arXiv151002106M,2008ASPC..399..115H} and will play a prominent role
in the future (EUCLID: \citet{2010SPIE.7731E..2YC}, WFIRST: \citet{2013arXiv1305.5425S}, SPHEREx: \citet{2016AAS...22714701B}).
Nevertheless, fibre spectrographs are also becoming larger
and photometric selection of galaxy targets will be used to generate
samples of tens of millions of galaxy redshifts in the next few years
\citep{2016arXiv161100036D}. We therefore focus in this paper on photometric
target selection.
In order to obtain a reasonable target sample one must determine the
properties of each object based on photometric magnitudes. This requires
detailed modelling of the SEDs of different kinds of objects.
The targets of interest are then selected from a photometric sample which
has a predefined depth and redshift
coverage.
Historically target selection was the result of
simple magnitude cuts. Recent redshift surveys employ more complex
sample selection with various cuts in the colour-magnitude plane \citep{Reid2016, 2016ApJS..224...34P}. The final observed samples will also be affected
by several biases due to the interplay between the sharp magnitude cut,
the luminosity function and errors
in the observed magnitudes. These biases are well understood and
discussed in detail by e.g., \citet{Teerikorpi1997}. We are not focusing on
biases of this kind; instead, we are concerned with the modulations
introduced in the inferred density field by galaxy peculiar motions,
which are distinct from redshift space distortions. As mentioned
in Section 2,
we deal exclusively with target selection where
K-corrections have not been applied to galaxy broad-band magnitudes before
targets are selected.
If redshifts are available and those corrections are made, the effects
of peculiar velocities on galaxy colours would be nullified by the
K-correction.
\subsection{Magnitude limited sample}
A magnitude limited sample is one which has been selected only by
applying a limiting magnitude cut. The effect of peculiar velocities
on such a
sample is relatively simple to understand. Galaxies moving towards the observer will have their fluxes boosted, and those that are intrinsically
just below the threshold will move into the sample. Galaxies moving away from the observer will have their fluxes suppressed, and hence those just above the magnitude limit will move out of the sample. We can therefore construct a simple picture in which the change in the probability of a galaxy passing the sample cut is proportional to its velocity. The constant of proportionality will depend on the true magnitude of the galaxy and its spectrum, and it will always be positive. This means that galaxies moving towards the observer will always have a higher probability of making the sample cut than galaxies moving away from the observer. This is true unless one considers an exotic galaxy SED, for example one in which the flux decreases with wavelength fast enough that the gain in flux from relativistic beaming is smaller than the reduction in flux caused by the relativistic Doppler effect.
\subsection{Colour-magnitude cuts}
Most of the current and future galaxy redshift survey have a more
complicated targeting algorithm than simple magnitude cuts.
In a more complicated scenario where the sample selection has several colour and magnitude cuts, the simple expectation that galaxies moving towards the observer have a higher probability of making it into the sample no longer holds. The exact nature of the cuts, the details of the spectra and the galaxy population can lead to the probability of including galaxies moving towards the observer being smaller than that for galaxies moving away from the observer. Such effects depend on the redshift, halo mass and peculiar velocity (both magnitude and direction) of the observed galaxy. This can lead to extra structure in the number density of the observed targets and affect the clustering measurements. This has been assumed to be unimportant for current and future surveys.
We will investigate the validity of this past assumption.
Some analyses of galaxy clustering
rely on partitioning a sample into subsamples based on their observed
properties \citep{2006MNRAS.369...68S, Croft2013, Alam2016Measurement}. The
effects we model in this paper are likely to be relatively more important
for these analyses, as they will have different strengths for
sub-samples with different galaxy properties.
\section{Introduction}
While BHs are ubiquitous in the cores of massive galaxies, the population of BHs in dwarf galaxies ($M_{\ast}<10^{9.5}M_{\odot}$) has been relatively elusive \citep{2016PASA...33...54R}. The first dwarf galaxies identified to have active galactic nuclei (AGNs) were NGC 4395 \citep{1989AJ.....97..726F, 2003ApJ...588L..13F} and Pox 52 \citep{1987AJ.....93...29K}. The AGNs in these systems were serendipitous discoveries, and they were the only dwarf galaxies known to contain AGNs for almost two decades \citep{2004ApJ...607...90B}. In recent years, thanks to large-scale surveys such as the Sloan Digital Sky Survey (SDSS), we have started to identify an increasing number of such systems. Using optical spectroscopic diagnostics, \cite{Reines:2013fj} identified 151 dwarf galaxies with signatures of AGN activity in the SDSS. This constituted an order of magnitude increase in the number of known dwarf galaxies with AGN. While optical spectroscopic diagnostics have identified the largest number of such systems (see also earlier works by \citealt{2004ApJ...610..722G, 2007ApJ...670...92G, 2008AJ....136.1179B}, more recent studies by \citealt{2014AJ....148..136M, 2015MNRAS.454.3722S}), searches using radio and/or X-rays have also been successful at identifying dwarf galaxies with AGNs \citep{:kj,Reines:2011fr, 2014ApJ...787L..30R, 2015ApJ...805...12L, 2016ApJ...831..203P, 2017ApJ...837...48C}. There have been efforts to use IR diagnostics \citep{2014ApJ...784..113S, 2015MNRAS.454.3722S}, though extreme star forming dwarf galaxies can have IR colors that mimic AGNs \citep{2016ApJ...832..119H}. In all, there now exists a collective sample of roughly two hundred dwarf galaxies with AGN signatures.
With the number of known dwarf galaxies hosting AGN growing, it is important to characterize the host galaxies in detail in order to understand what factors (if any) may influence the presence of an AGN. Additionally, studies of the host galaxies are necessary to determine whether scaling relations between BH mass and host galaxy properties hold at the low-mass end (see \citealt{Kormendy:2013ve} for a review of scaling relations). Where these low-mass systems fall with respect to scaling relations has important implications for BH formation and growth scenarios (\citealt{2008MNRAS.383.1079V, 2012NatCo...3E1304G, 2014GReGr..46.1702N}). For example, semi-analytic models suggest the slope and scatter of the low-mass end of the $M_{\rm BH}-\sigma_{\ast}$ relation between BH mass and bulge stellar velocity dispersion depends on the mechanism by which the first BH seeds formed (\citealt{2009MNRAS.400.1911V}, see also \citealt{:fl, 2016PASA...33...51L} for reviews of BH formation scenarios).
In the continuing effort towards detailed characterization of host galaxies, we present a \textit{Hubble Space Telescope} imaging analysis of RGG 118 (SDSS 1523+1145), a nearby ($z=0.0243$) dwarf disk galaxy with an active $\sim50,000~M_{\odot}$ BH \citep{2015ApJ...809L..14B}. It was first identified as having AGN signatures in \cite{Reines:2013fj} based on narrow emission line ratios which place it in the composite region of the BPT diagram \citep{1981PASP...93....5B, 2003MNRAS.346.1055K, 2006MNRAS.372..961K}. Subsequent analysis of high-resolution spectroscopy with the Magellan Echellette Spectrograph on the 6.5m Clay telescope at Las Campanas Observatory clearly revealed a broad H$\alpha$ $\lambda$6563 emission feature characteristic of dense gas orbiting a central massive black hole. Furthermore, the galaxy was found to have a hard X-ray point source coincident with the nucleus -- strong confirmation that RGG 118 hosts an AGN. The mass of the BH, based on single-epoch spectroscopic techniques using the broad H$\alpha$ emission line \citep{2005ApJ...630..122G}, was found to be just $\sim50,000$ solar masses, the smallest yet identified in a galaxy nucleus \citep{2015ApJ...809L..14B}.
Previous analyses of the morphology of RGG 118 have relied on relatively shallow SDSS imaging \citep{2015ApJ...809L..14B, 2016ApJ...818..172G}. Using 2-D light profile modeling techniques, \cite{2015ApJ...809L..14B} find that RGG 118 is composed of an extended disk, central bulge-like component, and central point source. Subsequent analysis of the SDSS imaging was done by \cite{2016ApJ...818..172G}, who claim the presence of a stellar bar.
In this paper, we analyze new \textit{Hubble Space Telescope} imaging of RGG 118, with the aim of characterizing the morphology of the host galaxy, and studying the galaxy's stellar populations.
\section{Data}
We obtained \textit{Hubble Space Telescope} (HST) Wide Field Camera 3 (WFC3) imaging of RGG 118. Images were taken over three orbits during July 2016 (Cycle 23, Proposal 14187, PI: Baldassare). We took observations in two UVIS filters (F475W and F775W) and one IR filter (F160W). These filters correspond to \textit{g}, \textit{i}, and \textit{H} band, respectively. We also employ a traditional four point dither pattern.
Data were reprocessed using the AstroDrizzle pipeline in the DrizzlePac software package. We used a square drizzling kernel and inverse-variance map weighting, recommended for background-limited targets. The native pixel scales for WFC3 are 0.04$''$/pix for the UVIS channel and 0.13$''$/pix for the IR channel. However, dithering of observations allows one to improve the pixel sampling of the final product. For the UVIS observations, our final drizzled product used a final pixel fraction (\textit{final\_pixfrac} parameter in AstroDrizzle) of 0.5 and a final pixel scale (\textit{final\_scale}) of 0.03$''$/pix. The IR observations have a \textit{final\_pixfrac} of 0.8 and \textit{final\_scale} of 0.09$''$/pix. Point spread functions (PSFs) for each filter were constructed using the PSF fitting software \textit{Starfit}\footnote{https://www.ssucet.org/$\sim$thamilton/research/starfit.html}. Figure~\ref{threecolor} shows a three-color \textit{HST} image of RGG 118, and Figure~\ref{final_ims} shows the \textit{HST} imaging in each band. We also construct a PSF using a bright star, in order to determine how much the PSF used impacts the final fit parameters. Figure~\ref{psf_comp} shows a comparison of the Starfit-generated PSF to the profile of a bright star in the F160W image.
\begin{figure}
\centering
\includegraphics[width=0.44\textwidth]{threecolor_samepixscale.pdf}
\caption{Three color image of RGG 118. The UVIS filters have been matched in angular resolution to the IR image (0.09$''$/pixel). The red, green, and blue correspond to the F160W, F775W, and F475W filters, respectively. The images are log-scaled, and the upper and lower scale limits for each filter have been chosen such that the background is black and bright stars are white.}
\label{threecolor}
\end{figure}
\begin{figure*}
\includegraphics[width=0.33\textwidth]{f475w_full.pdf}
\includegraphics[width=0.33\textwidth]{f775w_full.pdf}
\includegraphics[width=0.33\textwidth]{f160w_full.pdf} \\
\caption{\textit{Hubble Space Telescope} WFC3 images of RGG 118 in F475W (left), F775W (middle) and F160W (right) filters. Full galaxy images, smoothed with a Gaussian kernel of 3 pixels. The color distribution for all images is in log-scale, and the limits are chosen to encompass the distribution of pixel values.}
\label{final_ims}
\end{figure*}
\begin{figure}
\includegraphics[width=0.5\textwidth]{psf_comp.pdf}
\caption{Intensity versus semi-major axis of the PSF generated by Starfit and a bright star in the image. They have been normalized to the same central intensity.}
\label{psf_comp}
\end{figure}
\section{Results}
\subsection{Profile Fitting}
We fit the 2-D light profile of RGG 118 using GALFIT \citep{2002AJ....124..266P, 2010AJ....139.2097P}. Fitting is performed on the image taken in the F160W filter, which has the greatest sensitivity. The best fit model is then applied to the optical filters in order to measure the total luminosity of each component. We also compare the results of the 2-D fitting to the 1-D surface brightness profile. We extract 1-D light profiles for each filter using the IRAF program \textit{ellipse}, which fits elliptical isophotes to imaging data. Using the results of \textit{ellipse}, we plot 1-D surface brightness profiles for RGG 118 (i.e., surface brightness as a function of semi-major axis). We also obtain measurements of the ellipticity and position angle as a function of semi-major axis from \textit{ellipse}.
The main goal of this analysis is to decompose the 2-D light profile of RGG 118 into its individual components. Each tested model is comprised of some combination of the following components: S{\'e}rsic profile \citep{1963BAAA....6...41S}, disk (defined as a S{\'e}rsic profile with index $n=1$), Ferrers profile (typically used to model galaxy bars; \citealt{2010AJ....139.2097P}), and PSF. In some models, we also introduce spiral structure in the outermost component. The components used for each tested model are listed in Table~\ref{models}.
We start by testing a model with a single S{\'e}rsic component to describe the galaxy light output, and find that a single S{\'e}rsic profile produces a poor fit. The addition of a central PSF component improves the fit, but still results in large residuals. We next consider models with two main components: an ``inner" component and an ``outer" component. The outer component is always described by a S{\'e}rsic profile, the index of which is either free to vary or restricted to the canonical disk value of $n=1$. We also consider spiral structure in the outer profile. The inner component is modeled with either a S{\'e}rsic or a modified Ferrers profile.
Each combination of inner and outer component is also tested with and without a central PSF component; in all cases, the inclusion of a central PSF improves the $\chi^{2}$ value by more than 40\%. Table~\ref{models} lists each tested model and its corresponding $\chi^{2}$ value, computed by comparing the intensity as a function of semi-major axis for the model and data. Ultimately, we find the best-fit model to include an outer disk ($n=1$) with spiral structure, an inner S{\'e}rsic component with $n=0.8\pm0.01$, and a central PSF (see Figure~\ref{SersSpPsf_}). The best-fit parameters (S{\'e}rsic index, effective radius, effective surface brightness) for this model are given in Table~\ref{modelparams}. Figure~\ref{SersSpPsf_UVIS} shows the S{\'e}rsic+Spiral Disk+PSF model applied to the F475W and F775W filters. In applying the model to the F475W and F775W bands, all components were held fixed except the magnitude of each component. The position of the PSF was also allowed to vary, as the angular resolution differs between the IR and UVIS filters.
The magnitudes of each component in each filter for our best-fit model are reported in Table~\ref{bestmod}. As noted in the GALFIT documentation\footnote{https://users.obs.carnegiescience.edu/peng/work/galfit/galfit.html}, the error bars returned by GALFIT rely on the assumptions that the residuals are due only to Poisson noise and that the noise has a Gaussian distribution. Similar to the procedure described in \cite{2016ApJ...823...50S}, we estimate errors on the magnitudes using the standard deviation of the sky background. The standard deviation is computed by measuring the median sky value in a series of $50\times50$ pixel boxes placed in the sky regions surrounding the galaxy. We estimate errors on the S{\'e}rsic index and effective radii by fitting with our alternate bright-star PSF (see Section 2) and taking the error to be the difference between the values of each parameter.
\floattable
\begin{deluxetable}{c c c}
\tablecaption{GALFIT fitting results \label{models}}
\tablehead{
\colhead{Components} & \colhead{$\chi^{2}$} & \colhead{$M_{F160W}$ (PSF)}}
\startdata
S{\'e}rsic & 347.77 & -- \\
S{\'e}rsic + PSF & 33.09 & 21.92$\pm$0.02 \\
\hline
S{\'e}rsic + S{\'e}rsic & 353.31 & -- \\
S{\'e}rsic + S{\'e}rsic + PSF & 6.05 & 22.00$\pm$0.03 \\
S{\'e}rsic + Disk & 281.56 & -- \\
S{\'e}rsic + Disk + PSF & 11.51 & 22.06$\pm$0.03 \\
\hline
Ferrers + S{\'e}rsic & 83.37 & -- \\
Ferrers + S{\'e}rsic + PSF & 74.09 & 22.22$\pm$0.03 \\
Ferrers + Disk & 82.88 & -- \\
Ferrers + Disk + PSF & 64.83 & 22.25$\pm$0.03 \\
\hline
S{\'e}rsic + Spiral S{\'e}rsic + PSF & 11.46 & 22.05$\pm$0.03 \\
S{\'e}rsic + Spiral Disk + PSF & 3.97 & 22.02$\pm$0.03 \\
\enddata
\tablecomments{Model components and corresponding $\chi^{2}$ values for each GALFIT trial. The best-fit model (S{\'e}rsic + Spiral Disk + PSF) is shown in Figure~\ref{SersSpPsf_}. Errors on the PSF magnitude are those reported by GALFIT.}
\end{deluxetable}
\floattable
\begin{deluxetable}{c c c c | c c c c}
\tablecaption{Best fit model parameters}
\tablehead{
\multicolumn{4}{c}{Inner S{\'e}rsic component} & \multicolumn{4}{c}{Outer disk}
}
\startdata
{${\rm r_{eff}}$} & {$\mu_{\rm eff}$} & {$n$} & (b/a) &{${\rm r_{eff}}$} & {$\mu_{\rm eff}$} & {$n$} & (b/a) \\
{(kpc)} & {(mag/arcsec$^{2}$)} & {} & & {(kpc)} & {(mag/arcsec$^{2}$)} & {} & \\
\hline
$1.57\pm0.22$ & 22.5 & $0.80\pm0.1$ & 0.45 & $6.51\pm1.72$ & 24.1 & 1.00 (fixed) & 0.69 \\
\enddata
\tablecomments{Best fit model parameters (effective radius, surface brightness at the effective radius, S{\'e}rsic index, and axis ratio) for the S{\'e}rsic + Spiral Disk + PSF model in the F160W filter. }
\label{modelparams}
\end{deluxetable}
\begin{figure*}
\includegraphics[width=0.33\textwidth]{orig_sers_disk_psf_spiral.pdf}
\includegraphics[width=0.33\textwidth]{model_sers_disk_psf_spiral.pdf}
\includegraphics[width=0.33\textwidth]{resid_sers_disk_psf_spiral.pdf}\\
\includegraphics[scale=0.6]{sb_sers_disk_psf_spiral.pdf}
\includegraphics[scale=0.6]{intens_sers_disk_psf_spiral.pdf}
\caption{Top row: Image of RGG 118 in the F160W filter (left); best fitting GALFIT model including a PSF, inner S{\'e}rsic component, and outer spiral disk (middle); residuals (right). Bottom row: Left panel shows the observed surface brightness profile of RGG 118 as open circles. The overall best-fit GALFIT model is shown in red, and is comprised of a PSF (purple dashed line), inner S{\'e}rsic component (green dashed line) and outer disk (blue dashed line). The residuals are shown below the surface brightness profile. Right panel shows the average intensity along a given isophote from the data and the intensity as a function of radius for the best-fit GALFIT model. Scale and colormap are consistent between the images.}
\label{SersSpPsf_}
\end{figure*}
\begin{figure*}
\includegraphics[width=0.33\textwidth]{orig_475.pdf}
\includegraphics[width=0.33\textwidth]{model_475.pdf}
\includegraphics[width=0.33\textwidth]{resid_475.pdf}\\
\includegraphics[width=0.33\textwidth]{orig_775.pdf}
\includegraphics[width=0.33\textwidth]{model_775.pdf}
\includegraphics[width=0.33\textwidth]{resid_775.pdf}\\
\caption{Best-fit model as determined from the F160W data applied to the F475W (top row) and F775W (bottom row) images. For each filter, the scale and colormap are consistent between the images.}
\label{SersSpPsf_UVIS}
\end{figure*}
\floattable
\begin{deluxetable}{c c c c}
\tablecaption{AB magnitude of individual components \label{magnitude}}
\tablehead{
\colhead{Filter} & \colhead{PSF} & \colhead{Inner component} & \colhead{Disk}\\
\colhead{} & \colhead{(mag)} & \colhead{(mag)} & \colhead{(mag)}
}
\startdata
F475W & 22.56$^{+0.37}_{-0.28}$ & 19.38$^{+0.36}_{-0.27}$ & 17.49$^{+0.65}_{-0.40}$\\
F775W & 22.37$^{+0.38}_{-0.28}$ & 18.20$^{+0.13}_{-0.12}$ & 17.48$^{+0.82}_{-0.46}$ \\
F160W & 22.02$\pm$0.03 & 18.31$^{+0.09}_{-0.08}$ & 16.42$^{+0.14}_{-0.12}$ \\
\enddata
\tablecomments{AB magnitude of each component in the best-fit model (S{\'e}rsic + Spiral Disk + PSF) for each filter. Modeling was performed on the F160W image, and the best-fit model was then applied to the two optical filters.}
\label{bestmod}
\end{deluxetable}
\subsection{Colors and stellar masses}
Using the $g$, $i$, and $H$-band magnitudes from GALFIT and extinction corrections based on the extinction map from \cite{2011ApJ...737..103S}, we find the $g-i$ color and $H$-band luminosity for the outer disk and inner S{\'e}rsic components. For the inner component, we find $(g-i)_{\rm bulge} = 1.18^{+0.48}_{-0.40}$.
The disk is faint in the $g$ and $i$ bands, and the errors on the disk magnitudes returned by GALFIT are large (they give $(g-i)_{\rm disk} =-0.01^{+1.1}_{-1.2}$). An alternative way to constrain the disk color is to use the 1-D light profiles output by \textit{ellipse}. Using the total flux computed between radii of 10$''$ and 16$''$, i.e., where the disk is dominant, we find $(g-i)_{\rm disk}=0.5\pm0.3$.
We compute the $g-i$ (F475W$-$F775W) color evolution of a single stellar population with an initial mass of $10^{8}$ solar masses using GALEV \citep{Kotulla:2009ul}, and show the results in Figure~\ref{ssp} for a solar-metallicity model and a sub-solar-metallicity model. While treating the bulge and disk as single stellar populations is a significant simplification, we can nevertheless get a rough idea of the relative ages of the bulge and disk. A disk with $(g-i)\approx0.5$ would be dominated by young stellar populations with ages of hundreds of Myr to $\sim1$ Gyr. The observed bulge color suggests a population older than $\sim1$ Gyr. We also show the color evolution for an Sa galaxy with a total mass of $5\times10^{9}M_{\odot}$ in Figure~\ref{ssp}.
We compute the stellar mass of each component using the color-based mass-to-light ratios derived by \cite{2003ApJS..149..289B}. We measure the luminosity using our F160W observations (roughly equivalent to $H$ band), since the variation in M/L is reduced at NIR wavelengths. We use the $(g-i)$ color to compute the logarithmic mass-to-light ratio $\Upsilon_{H}\equiv\log({\rm M/L}_{H})$ via the relation $\Upsilon_{H} = -0.186 + 0.179\,(g-i)$. We compute a disk stellar mass of $M_{\ast, \rm disk} = 10^{9.23(+0.1,-0.09)}M_{\odot}$ and an inner component stellar mass of $M_{\ast, \rm bulge} = 10^{8.59(+0.11,-0.12)}M_{\odot}$. This gives a total stellar mass of $M_{\ast, \rm total} = 10^{9.32(+0.10,-0.11)}M_{\odot}$, in good agreement with the total stellar mass in the NASA-Sloan Atlas, which uses the k-correct code \citep{2007AJ....133..734B}: $M_{\ast, \rm total} = 10^{9.35} M_{\odot}$.
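As a worked example of this relation (an arithmetic consistency check only, using the bulge color measured above), the inner component has
\begin{equation*}
\Upsilon_{H} = -0.186 + 0.179\times1.18 \approx 0.025, \qquad ({\rm M/L})_{H} = 10^{\Upsilon_{H}} \approx 1.06,
\end{equation*}
and the bulge stellar mass then follows by multiplying this mass-to-light ratio by the $H$-band luminosity implied by the F160W magnitude in Table~\ref{bestmod}.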
\begin{figure}
\includegraphics[width=0.45\textwidth]{ssp.pdf}
\caption{F475W - F775W color versus stellar population age. The solid red and dashed blue lines represent models for the extinction-corrected color evolution of a single stellar population with an initial stellar mass of $10^{8}M_{\odot}$. The solid red line shows the evolution of a population with solar metallicity, while the dashed blue line shows a population with sub-solar metallicity ($[{\rm Fe/H}] =-0.3$, or roughly half the metallicity of the Sun). The gray horizontal lines show the colors of the bulge and disk, and the corresponding shaded regions encompass the errors. Note that the disk color is computed between radii of 10$''$ and 16$''$. The light blue shaded region encompasses evolutionary tracks for an Sa galaxy (the morphological classification most similar to RGG 118) with a \textit{total} mass of $5\times10^{9}M_{\odot}$ and chemically consistent metallicity. Models for this galaxy were computed for E(B-V) ranging from 0.0 to 0.5; increasing E(B-V) increases (reddens) the F475W-F775W color. All evolutionary tracks were computed using GALEV \citep{Kotulla:2009ul}.}
\label{ssp}
\end{figure}
\section{Discussion}
\subsection{Nature of the central point source}
In the following section, we discuss the nature of the observed point source. We first consider whether the optical point source is consistent with an AGN given the X-ray luminosity and \textit{assuming a typical quasar SED}. Using the quasar SED from \cite{2006ApJS..166..470R}, we use the observed X-ray luminosity (from \citealt{2015ApJ...809L..14B}) to determine the expected luminosity at the central wavelength of the F475W filter. The \cite{2006ApJS..166..470R} SED is computed out to a maximum energy of 0.4 keV, beyond which it is assumed to have constant $\nu L_{\nu}$. From our X-ray observations, $\nu L_{\nu}(2~{\rm keV}) = 1.96\times10^{39}$ erg~s$^{-1}$. Based on the \cite{2006ApJS..166..470R} SED, we expect $\nu L_{\nu}(4659~{\rm \AA}) = 2.7\times10^{40}$ erg~s$^{-1}$.
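For reference, this template-based conversion corresponds to an optical-to-X-ray luminosity ratio of
\begin{equation*}
\frac{\nu L_{\nu}(4659~{\rm \AA})}{\nu L_{\nu}(2~{\rm keV})} = \frac{2.7\times10^{40}}{1.96\times10^{39}} \approx 14.
\end{equation*}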
The measured luminosity of the point source is $\nu L_{\nu}(4659~{\rm \AA}) = 2.5\times10^{40}$ erg~s$^{-1}$, based on the magnitude of the point source as determined by GALFIT (22.56 mag). While we note that there is considerable scatter ($\sim0.5$ dex) in the \cite{2006ApJS..166..470R} mean quasar SED, the measured luminosity is in excellent agreement with the predicted luminosity, suggesting the point source is indeed dominated by the AGN.
We also consider the possibility of a nuclear star cluster (NSC) for the point source. NSCs become increasingly prevalent as one moves down the galaxy mass function, with as many as $80\%$ of galaxies with $M_{\ast}<10^{10}M_{\odot}$ hosting a massive, compact NSC (see, e.g., \citealt{Carollo:1997fj, Carollo:1998uq, 2002AJ....123.1389B, 2006ApJS..165...57C, 2007ApJ...671.1456C}). These NSCs are typically a few to a few tens of parsecs in radius, with masses from $\sim10^{5}-10^{7}M_{\odot}$ \citep{2002AJ....123.1389B, 2004AJ....127..105B, 2005ApJ...618..237W}. At the distance of RGG 118, 0.1$''$ corresponds to 50 pc, meaning that for our observations, a NSC would appear as an unresolved point source.
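In other words, the quoted angular scale implies a distance of roughly
\begin{equation*}
D \approx \frac{50~{\rm pc}}{0.1''} = \frac{50~{\rm pc}}{4.85\times10^{-7}~{\rm rad}} \approx 1.0\times10^{8}~{\rm pc} \approx 100~{\rm Mpc},
\end{equation*}
so a NSC of a few parsecs in radius would subtend only milliarcseconds.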
\textit{HST} surveys of nearby late-type galaxies have revealed relations between galaxy properties and those of their NSCs (see \citealt{2002AJ....123.1389B, 2004AJ....127..105B}). In particular, \cite{2004AJ....127..105B} finds a relation between the B-band magnitude of the galaxy and the I-band magnitude of the NSC. RGG 118 has an absolute B-band magnitude of $M_{B, \rm{galaxy}} = -18.39\pm0.521$ (from the HyperLeda database; \citealt{2014AA...570A..13M}). Using the relationship from Table 2 in \cite{2004AJ....127..105B}, we find that, if RGG 118 has a NSC, its predicted I-band magnitude is $M_{I, \rm{NSC}} = -11.74$. Using the synthetic photometry package SYNPHOT, we compute the predicted WFC3 F475W and F775W magnitudes. We assume a stellar population with an age of 1 Gyr and star formation occurring in an instantaneous burst, normalized to the predicted I-band magnitude. With these assumptions, SYNPHOT predicts NSC apparent magnitudes of $m_{F475W, \rm{NSC}} = 24.4$ and $m_{F775W,\rm{NSC}} = 23.7$, one to two magnitudes fainter than the observed point source in RGG 118.
Given the mass and morphology of RGG 118, it is possible that it does contain a NSC. Based on scaling relations between galaxy stellar mass and NSC properties \citep{2016MNRAS.457.2122G}, the NSC would have an expected radius of $\sim2-3$ pc, and the combined BH+NSC mass would be $\sim8\times10^{5}M_{\odot}$. However, while it may contain a NSC, the point source luminosity in RGG 118 is consistent with being dominated by the AGN. We note that our main results relating to the structure of RGG 118 are unaffected by the relative contributions to the central PSF from an AGN versus a NSC.
\subsection{Comparison to SDSS imaging analysis}
\cite{2015ApJ...809L..14B} first analyzed the SDSS imaging of RGG 118. They also used GALFIT to decompose the SDSS image into individual components, finding a best-fit model including an exponential disk, an inner S{\'e}rsic component with $n=1.13\pm0.26$, and a PSF. The masses of the disk and inner component were found to be $10^{9.3\pm0.1}M_{\odot}$ and $10^{8.8\pm0.2}M_{\odot}$, respectively, consistent with the masses determined in this work. We do find a slightly lower S{\'e}rsic index for the inner component based on modeling of the HST data than for the SDSS data ($n=0.8\pm0.1$ in this work compared to $n=1.13\pm0.26$).
The SDSS imaging was subsequently analyzed by \cite{2016ApJ...818..172G}, who used 1-D techniques to model the light profile of RGG 118. \cite{2016ApJ...818..172G} include spheroid (bulge), bar, and disk components, and find that a central PSF is not required for their model. Their bar component is fit with a modified Ferrers profile, while their bulge is fit with a S{\'e}rsic component with $n=0.41$. Our attempts to model the RGG 118 light profile with a bulge, bar, and disk did not converge on a solution in GALFIT.
One potential explanation for our differing preferred models is that their spheroid component has an effective radius of 0.63$''$, less than the typical FWHM of the SDSS r-band PSF (1.3$''$), making it difficult to distinguish their bulge from a point source. We are able to find best-fit solutions for models including a point source, bar, and disk, though these have higher $\chi^{2}$ values than models without a bar (Table~\ref{models}). \cite{2016ApJ...818..172G} find the stellar masses of the disk, bar, and bulge to be $10^{9.36}M_{\odot}$, $10^{7.76}M_{\odot}$, and $10^{7.92}M_{\odot}$, respectively. Their total stellar mass is consistent with our findings, though the masses of the individual components are not. Overall, we find models including Ferrers (bar) components to produce poorer fits to the HST data.
The presence of a bar can also be assessed using the ellipticity and position angle profiles of RGG 118 (Figure~\ref{pa_ellip}). \cite{2007ApJ...657..790M} describe several signatures produced by a bar. Within the bar, there is typically a continuous increase in ellipticity with a fixed position angle. At the end of the bar, there is an abrupt drop-off in ellipticity and a sharp change in position angle as the profile moves from bar-dominated to being dominated by the disk. We do not find evidence for a bar in either the ellipticity or position angle profile of RGG 118.
\begin{figure}
\includegraphics[width=0.5\textwidth]{pa_ellip.pdf}
\caption{Position angle and ellipticity profile of RGG 118. Signatures of a bar are an increase in ellipticity with fixed position angle within the bar, followed by a sharp drop off in ellipticity coincident with a change in position angle. We do not observe these features for RGG 118.}
\label{pa_ellip}
\end{figure}
\subsection{Comparison to other systems}
Studying the morphologies of the population of dwarf/low-mass galaxies with AGNs may help illuminate what factors are important for influencing the presence of an AGN in these systems. Here, we compare RGG 118 to other low-mass galaxies with AGN, as well as to the general population of spiral galaxies. \cite{2011ApJ...742...68J} study the structures of 147 host galaxies of low-mass AGNs ($M_{\rm BH}\lesssim10^{6}~M_{\odot}$), the vast majority of which have extended disks. They find that for galaxies with detected disks, the mean ratio of the bulge-to-total luminosity $\langle B/T \rangle$ is 0.23 (with a median of 0.16). We find that the $B/T$ ratio for RGG 118 is 0.15$\pm0.03$.
We can also compare RGG 118 to the disk-dominated galaxies studied by \cite{2003ApJ...582..689M}, who presented bulge-to-disk decompositions of 121 late-type spiral galaxies. They found that the bulge S{\'e}rsic indices ranged from 0.2 to 2.0, with a mean of $\sim1.0$. They also found a relation between the bulge and disk radii, such that the mean ratio $\langle r_{e}/r_{h} \rangle = 0.22\pm0.09$. RGG 118 is consistent with these disk-dominated galaxies, with a bulge S{\'e}rsic index of $n=0.8$ and a bulge-to-disk scale length ratio of 0.24.
There are also examples of low-mass galaxies with AGNs that have very different morphologies from RGG 118. For example, NGC 4395 has a disk and a nuclear star cluster, but is bulgeless \citep{2003ApJ...588L..13F}. POX 52, on the other hand, has no detected disk component and has a S{\'e}rsic index of $n=4.0$ \citep{2008ApJ...686..892T}. Accreting BHs have been found in the compact irregular dwarf galaxy Henize 2-10 \citep{Reines:2011fr, 2012ApJ...750L..24R, 2016ApJ...830L..35R}, and in a member of the interacting dwarf galaxy pair Mrk 709 \citep{2014ApJ...787L..30R}. Further demographic studies will be necessary to determine whether the morphology of dwarf galaxies with AGN is distinct from that of those without.
\subsection{Scaling relations}
There are well known scaling relations between BH mass and bulge properties such as stellar velocity dispersion \citep{2000ApJ...539L...9F, 2000ApJ...539L..13G, 2009ApJ...698..198G, 2013ApJ...764..184M}, stellar mass \citep{Marconi:2003fk, Haring:2004lr}, and near-infrared luminosity \citep{Marconi:2003fk}. These relations imply that the BH and galaxy co-evolve despite the small gravitational sphere of influence of the BH relative to the galaxy. In this section, we discuss the importance of constraining the low-mass end of scaling relations and revisit the position of RGG 118 relative to these relations.
Cosmological simulations suggest that the BH occupation fraction in low-mass galaxies, as well as the slope and scatter of the low-mass end of BH--galaxy scaling relations, are related to the primary mechanism by which BH seeds formed in the early universe \citep{2009MNRAS.400.1911V}. BH seed formation models tend to fall into two categories: light seeds ($M_{\rm BH, seed}\approx100~M_{\odot}$) and heavy seeds ($M_{\rm BH, seed}\approx10^{4-5}~M_{\odot}$). In light seed models, BH seeds form from the deaths of Population III stars \citep{2001ApJ...551L..27M,2009ApJ...701L.133A, 2014ApJ...784L..38M}. These models predict a plume of objects which scatter \textit{below} the present-day $M-\sigma$ relation at low galaxy/BH masses. On the other hand, heavy seed models \citep{2006MNRAS.370..289B, 2006MNRAS.371.1813L} produce BH seeds via the direct collapse of gas clouds and predict that objects at the low-mass end of $M-\sigma$ should scatter \textit{above} the relation.
There has also been considerable discussion regarding whether galaxies without classical bulges follow scaling relations. \cite{Kormendy:2013ve} find that the properties of galaxies with pseudobulges (i.e., flatter, rotationally supported components with S{\'e}rsic indices $n<2.0$; \citealt{2004ARAA..42..603K}) do not correlate with BH mass. Though there is considerable intrinsic scatter in these scaling relations, it seems that galaxies with pseudobulges tend to fall below the $M_{\rm BH}-M_{\rm bulge}$ relation, such that their BHs are under-massive with respect to the mass of the (pseudo)bulge \citep{Greene:2010fr}. With a S{\'e}rsic index of $n=0.8$, the central component of RGG 118 is more consistent with a pseudobulge than a classical bulge. \cite{2015ApJ...809L..14B} find that RGG 118 does sit below the $M_{\rm BH}-M_{\rm bulge}$ relation defined by early-type galaxies; our results based on \textit{HST} imaging are consistent with this. For the bulge mass of RGG~118 ($10^{8.59}M_{\odot}$), the relation given by \cite{Kormendy:2013ve} predicts a BH mass of $\sim8\times10^{5}M_{\odot}$, or 0.2\% of the bulge mass. The BH in RGG 118 is in actuality roughly an order of magnitude smaller. We also find that RGG 118 sits below the relation between BH mass and IR bulge luminosity.
In Figure~\ref{scalings} we show the position of RGG 118 relative to the $M_{\rm BH}-M_{\rm bulge}$ and $M_{\rm BH}-L_{\rm bulge}$ relations as defined by \cite{Kormendy:2013ve} and \cite{2016ApJ...825....3L}. While the \cite{Kormendy:2013ve} relation is defined by elliptical/S0 galaxies with classical bulges, the \cite{2016ApJ...825....3L} relation includes late-type galaxies as well. The spiral galaxies from \cite{2016ApJ...825....3L} have BH masses ranging from $10^{6}$ to $10^{8} M_{\odot}$. It is important to mention that the \cite{2016ApJ...825....3L} sample is composed of galaxies with BH masses measured dynamically (e.g., through megamasers). There are few low-mass galaxies for which comparisons between dynamical BH mass measurements and broad-line based measurements can be made. However, for NGC 4395, the broad-line mass from \cite{Reines:2013fj} is consistent with both the reverberation mapping mass \citep{2005ApJ...632..799P} and the recent gas dynamical measurement from \cite{2015ApJ...809..101D}. Additionally, the relationship between $R_{\rm BLR}$ and the 5100${\rm \AA}$ luminosity, on which the broad-line mass measurements depend, has been shown to extend down to BHs as small as $\sim7\times10^{5}$ $M_{\odot}$ \citep{2016ApJ...831....2B}.
Our result is consistent with those of \cite{Greene:2008qy} (followed by \citealt{Jiang:2011vn, 2011ApJ...742...68J}), suggesting a break-down in scaling relations for low-mass ($M_{\rm BH}<10^{6}M_{\odot}$) BHs. Recent work by \cite{2015ApJ...813...82R} showed that nearby AGNs (including those in dwarf galaxies) fall systematically, by roughly an order of magnitude, below quiescent galaxies on the relation between total galaxy stellar mass and BH mass. This is potentially driven by a difference in host galaxy properties; they find that a significant fraction of the AGN hosts are spiral/disk galaxies.
There are several possible explanations for why bulgeless/disk-dominated galaxies or those with pseudobulges do not correlate with BH mass in the same way as galaxies with classical bulges. \cite{Kormendy:2013ve} (see also \citealt{Greene:2008qy}; \citealt{Jiang:2011vn}) suggest that there are two different modes of BH growth: one in which a merger drives copious amounts of gas towards the center, growing the BH rapidly, and a second where BH growth is a local, stochastic process. The first mechanism would be relevant for BHs in bulge-dominated/elliptical galaxies, while BHs in disk-dominated galaxies would grow via the second mode. This is consistent with a picture in which the BHs in disk-dominated and/or pseudobulge galaxies are undermassive with respect to the scaling relations defined by relatively massive, classical bulge-dominated systems.
\smallskip
In summary, we find that the light profile of RGG 118 is well described by an outer spiral disk; an inner S{\'e}rsic component with a stellar population older than $\sim1$ Gyr and properties consistent with a pseudobulge; and a central point source. The properties of the central point source are consistent with an AGN origin. We confirm that RGG 118 sits well below scaling relations between BH mass and bulge mass/luminosity, similar to other low-mass, disk-dominated systems.
\begin{figure*}
\centering
\includegraphics[width=0.44\textwidth]{mbh_lbulge.pdf} \includegraphics[width=0.44\textwidth]{mbh_mbulge.pdf}
\caption{Scaling relations between bulge properties and BH mass. \textit{Both panels:} The green star represents RGG 118, and the gray circles show galaxies from \cite{2014ApJ...780...70L} and \cite{2016ApJ...825....3L}. We also show the positions of NGC 4395 \citep{2003ApJ...588L..13F, 2015ApJ...809..101D} and POX 52 \citep{2004ApJ...607...90B, 2008ApJ...686..892T}. Note that NGC 4395 is bulgeless, so the ``bulge'' stellar mass and luminosity refer to the entire galaxy. WFC3 $H$-band luminosities for NGC 4395 and POX 52 are computed by transforming their 2MASS $H$-band luminosities via the relations given in \cite{2011wfc..rept...15R}. The pink lines and shading show the scaling relations derived by \cite{2016ApJ...825....3L}, including the offsets found for their late-type galaxy sample relative to the full sample. \textit{Left:} $M_{BH}$ versus $L_{\rm bulge, H}$. The blue line and shaded region show the $L_{\rm bulge}-M_{BH}$ relation and intrinsic scatter from \cite{Marconi:2003fk}. \textit{Right}: $M_{BH}$ versus $M_{\rm bulge}$. The blue line and shaded region show the $M_{\rm bulge}-M_{BH}$ relation and scatter from \cite{Kormendy:2013ve}.}
\label{scalings}
\end{figure*}
\acknowledgements
A.E.R. is grateful for the support of NASA through Hubble Fellowship grant HST-HF2-51347.001-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS 5-26555. Support for Program No. HST-GO-14187 was provided by NASA through a grant from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. The authors thank Laura Ferrarese for helpful discussions.
\software{AstroDrizzle (drizzlepac.stsci.edu),
GALFIT \citep{2002AJ....124..266P, 2010AJ....139.2097P},
GALEV \citep{Kotulla:2009ul},
Starfit (https://www.ssucet.org/$\sim$thamilton/research/starfit.html)}
\bibliographystyle{apj}
\section{Introduction}
\IEEEPARstart{D}{ecision} support systems across multiple industries rely on heuristic approaches using models trained on historical data. For example, credit risk, propensity, attrition, fraud, and hospital readmission risk models classify data into two classes given a set of input features. The goal is to train analytical models that give the most accurate predictions, retain stable performance over time and use as few features as possible by efficiently extracting information from the available data. Generally, the better a model can describe complex non-linear interactions between features, the higher its performance and ability to generalize will be. In conventional, non-quantum, machine learning, higher performance is often achieved through ensembles of simpler learners. See, for example, \cite{Freund1996machine, Breiman2001random, Hastie2017elements}.
Havlicek et al. \cite{havlivcek2019supervised} implemented a quantum support vector machine classifier (QSVM) on a superconducting processor. Originally proposed in \cite{rebentrost2014quantum}, QSVM exploits a high-dimensional quantum Hilbert space to obtain an enhanced solution. This enhancement can be achieved through controlled entanglement and interference, which is inaccessible to classical support vector machines. However, superior performance of QSVM or other quantum approaches compared to traditional machine learning models is yet to be demonstrated on a practical dataset. Park et al. \cite{Park2020practical} demonstrated improvements to QSVM compared to classical SVM by using parameterized shallow unitary transformations for feature maps with rotation and regularization. Wu et al. \cite{wu2021application} provided benchmarks comparing the performance of QSVM built using a simulator and physical hardware with classical SVM and \textit{xgboost}. Those benchmarks, built on three different platforms (IBM Quantum, Google TensorFlow Quantum and Amazon Braket), indicated similar performance of QSVM and its classical counterparts on a practical dataset. Another recent paper, by Glick et al. \cite{glick2021covariant}, discusses a class of covariant kernels and quantum advantage for problems where the data satisfies a group structure.
The idea of boosting quantum machine learning models was previously discussed by Neven et al. \cite{Neven2009training} in the context of adiabatic quantum computing implemented on D-Wave annealers, where the authors used one-level decision trees as weak classifiers. Papers by Schuld et al. and Abbas et al. \cite{Schuld2017quantum, Abbas2020onquantum} discussed quantum ensembles of quantum classifiers, primarily from the perspective of speedup due to parallel computation.
In general, quantum machine learning (QML) models consist of data encoding into qubits, a variational quantum circuit with trainable parameters, a classical cost function and an optimization algorithm. Most QML models constructed this way are mathematically related to quantum kernel methods \cite{Schuld2021quantum}. Notably, initial state preparation and the subsequent unitary transformation with input features are carried out through a circuit called a feature map. Unlike in other types of machine learning algorithms, the choice of feature map in a quantum support vector machine (QSVM) can yield unique decision boundaries, making QSVMs with different feature maps independent of each other. This characteristic of QSVM is well suited for implementing boosting algorithms; however, given the large number of possible feature maps, automating the feature map/model selection and training process is quite desirable.
In addition to achieving better performance with quantum models, there is a need to make them more user-friendly by automating the model selection and training process. Currently, model architectures are often derived from well-known physical models; e.g., an Ising model and its respective Hamiltonian have been used for feature mapping. An automated procedure can thus abstract away some of the unnecessary complexity of existing models for users without a physics background, as well as assist in discovering new model architectures.
The approach presented in the current work differs from the results discussed in the referenced sources in the following respects. First, we are focused on universal quantum computing with gates that can run on superconducting qubits as implemented, e.g., in IBM Quantum System One. Secondly, even though we consider shallow circuits for kernel functions, these tend to be stronger than the typical weak classifiers discussed in the literature, which are mostly based on decision trees. Thirdly, we implement an automated model selection on every boosting step to choose from different topologies and thus explore wider feature and model spaces. This process can be used to search for alternatives to broadly used Ising-type models. Fourthly, our approach is not constrained to classification tasks; it can equally be applied to regression tasks. Lastly, the focus of many prior results was on the possibility of quantum speedup for classical procedures, whereas our focus is primarily on the development of models with higher performance. In this work we apply our approach to the problem of binary classification and use classification accuracy on the test sample as a measure of performance. \newline
\noindent Our main contributions are:
\begin{itemize}
\item A new ensemble method for QSVM that enhances model performance, when the data is difficult to model for a single learner.
\item Hyperparameter optimization for QSVM.
\item Simulation on multiple datasets to ensure stability of results.
\end{itemize}
\section{Boosting Method for QSVMs}
\subsection{Data and Data Encoding}
In this work we consider classic examples of synthetic data: moons, circles and XOR. This allows us to create many different datasets and accumulate statistics of model performance.
Following best practice, the data is split into training, validation and testing datasets. The validation dataset is used for hyperparameter tuning during the grid search for the best model on every step of the boosting procedure. The testing dataset is completely hidden from training and is used to compare different models.
Following \cite{havlivcek2019supervised} we define a feature map on $n$-qubits as
\begin{equation}
\label{eq:feature_map}
{\mathcal{U}}_{\Phi}(\vec{x})=U_{\Phi(\vec{x})}H^{{\otimes}n}
\end{equation}
where
\begin{equation}
\label{eq:feature_map_diag_gate}
U_{\Phi(\vec{x})}=\exp\left(i\sum_{S\subseteq [n]}
\phi_S(\vec{x})\prod_{i\in S} P_i\right)
\end{equation}
Here $H$ is the Hadamard gate and $P_i \in \{ I, X, Y, Z \}$. A set of feature maps that we can utilize for a grid search when the data has two features is shown in Fig. \ref{fig:feature_maps}.
\begin{figure*}[ht]
\centering
\includegraphics[width=\textwidth]{images/Fig1_feature_maps.png}
\caption{Set of feature maps for grid search.}
\label{fig:feature_maps}
\end{figure*}
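For concreteness, the following is a minimal sketch of this construction (illustrative only; it assumes Qiskit, whose \texttt{PauliFeatureMap} realizes Eqs.~(\ref{eq:feature_map})--(\ref{eq:feature_map_diag_gate}) and whose \texttt{alpha} argument plays the role of the Pauli rotation factor used in the grid search below):
\begin{verbatim}
# Illustrative sketch (assumes Qiskit): build a Pauli feature map and
# evaluate kernel entries k(x_i, x_j) = |<Phi(x_i)|Phi(x_j)>|^2 via
# exact statevector simulation.
import numpy as np
from qiskit.circuit.library import PauliFeatureMap
from qiskit.quantum_info import Statevector

def quantum_kernel(X1, X2, paulis=("Z", "ZZ"), alpha=1.0, reps=1):
    fm = PauliFeatureMap(feature_dimension=X1.shape[1], reps=reps,
                         paulis=list(paulis), alpha=alpha)
    s1 = [Statevector.from_instruction(fm.assign_parameters(list(x)))
          for x in X1]
    s2 = [Statevector.from_instruction(fm.assign_parameters(list(x)))
          for x in X2]
    return np.array([[abs(a.inner(b)) ** 2 for b in s2] for a in s1])
\end{verbatim}
The resulting Gram matrix can be passed directly to a support vector classifier with a precomputed kernel, which is how the QSVM base learners discussed below are assembled.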
\subsection{Ensemble Structure}
The traditional \textit{AdaBoost} variant of boosting relies on weak learners, such as decision stumps, that are trained on every iteration \cite{Hastie2017elements}. On each subsequent iteration it emphasizes previously misclassified examples by calculating and assigning or updating their weights. The final prediction is calculated by a weighted majority vote of the classifiers. In this work we consider support vector machines on quantum kernels $k(\vec{x}_{i},\vec{x}_{j})=|\langle {\mathcal{U}}_{\Phi}(\vec{x}_{i})|{\mathcal{U}}_{\Phi}(\vec{x}_{j})\rangle|^{2}$, which we call Quantum Support Vector Machines (QSVM). QSVM is not a weak learner, so we modify the boosting method as shown in Algorithm \ref{alg:bqsvm}.
\begin{algorithm*}
\caption{Boosted QSVM classifier.}\label{alg:bqsvm}
\hspace*{\algorithmicindent} \textbf{Input} {${X}_{train}, {y}_{train}, {X}_{val}, {y}_{val}, {y}_{train, i} \in \{0,1\}, {y}_{val, i} \in \{0,1\},$ grid parameters for QSVM} \\
\hspace*{\algorithmicindent} \textbf{Output} $G(x)$
\begin{algorithmic}[1]
\State Initialize ${w_i}=1,$ $\forall{i}$.
\For {$m=1$ to $M$}
\State {Perform grid search and select the best classifier $G_m(x)$ on $ ({X}_{train},{y}_{train},{X}_{val}, {y}_{val})$ taking into account exclusions from the grid and training weights ${w_i}$}
\State {Check early stopping conditions for perfect and worse than random guessing classification.}
\State {Exclude selected feature map from grid parameters for next iterations.}
\State {Compute ${err}_{m} = \frac {\sum_{i=1}^{N} {w_i} \cdot I({y}_{train, i} \neq G_m({X}_{train,i}))}{\sum_{i=1}^{N} {w_i}}$ (\textit{estimator error})}
\State {Compute ${\alpha}_{m}=\log ((1-{err}_{m})/{err}_{m})$ (\textit{estimator weight})}
\State {Set ${w}_{i} \gets {w}_{i} \cdot \exp[{\alpha}_{m} \cdot I({y}_{train, i} \neq G_m({X}_{train,i}))]$}
\EndFor
\State {Output $G(x) = \sum_{m=1}^{M} ({\alpha}_{m} G_m(x)) / \sum_{m=1}^{M} ({\alpha}_{m})$}
\end{algorithmic}
\end{algorithm*}
At the start, the algorithm receives training and validation datasets as well as grid search parameters. In this work we consider the following parameters: the Pauli feature map set shown in Fig. \ref{fig:feature_maps}, the Pauli rotation factor (\textit{alpha}), which is a multiplier applied to the Pauli rotations, and a regularization parameter (\textit{C}) for sklearn's support vector classifier (SVC). We vary \textit{alpha} in the interval $(0; 2]$ and \textit{C} in $[1; 100]$. All examples are initially assigned a weight of $1$. The grid search uses a validation dataset to select the best model. After the best model is selected, we check the early stopping conditions:
\begin{enumerate}
\item Estimator is perfect, i.e. estimator error on the training dataset is $\le 0$.
\item Estimator is as bad as random guessing or worse, i.e. estimator error is $\ge 0.5$ for binary classification or $\ge 1 - \frac{1}{{N}_{classes}}$ for multiclass.
\item The maximum number of classifiers is reached.
\end{enumerate}
The feature map selected on the current iteration is excluded from the grid search on subsequent iterations. This is important to force the model to explore a broader Hilbert space and, consequently, different decision boundaries by choosing other feature maps for the quantum kernel. Finally, the weights are updated as shown in Algorithm \ref{alg:bqsvm}. Once any stopping condition is satisfied, the final model object is returned. This object can be used to build predictions for new samples as a weighted majority vote of the classifiers included in the model.
It is worthwhile to highlight the differences between the approach presented here and more traditional \textit{boosting}: 1) we perform a grid search for the best model on each iteration of the algorithm, and 2) we enforce exploration of different model architectures through parameter grid constraints.
Identifying the optimal number of estimators and ensemble pruning is generally outside of the boosting method description and is up to the user. In this work we will choose the optimal number of estimators based on the minimum error on the validation sample.
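A minimal sketch of Algorithm~\ref{alg:bqsvm} follows; it is illustrative only, and the helper \texttt{grid\_search\_qsvm} (which performs the validation-set grid search over feature maps, \textit{alpha} and \textit{C}, honoring the exclusion list and the sample weights) is assumed rather than shown:
\begin{verbatim}
# Illustrative boosted-QSVM loop; grid_search_qsvm is a hypothetical
# helper returning the best weighted classifier and its feature map.
import numpy as np

def boosted_qsvm(X_tr, y_tr, X_val, y_val, grid, max_estimators=10):
    w = np.ones(len(y_tr))                 # initialize w_i = 1
    models, alphas = [], []
    for _ in range(max_estimators):
        G, used_map = grid_search_qsvm(X_tr, y_tr, X_val, y_val,
                                       grid, sample_weight=w)
        grid.remove(used_map)              # exclude used feature map
        miss = G.predict(X_tr) != y_tr
        err = np.dot(w, miss) / w.sum()    # weighted estimator error
        if err <= 0:                       # perfect: keep it alone
            return [G], [1.0]
        if err >= 0.5:                     # no better than random: stop
            break
        a = np.log((1.0 - err) / err)      # estimator weight alpha_m
        w = w * np.exp(a * miss)           # emphasize misclassified points
        models.append(G)
        alphas.append(a)
    return models, alphas
\end{verbatim}
Predictions for new samples are then the ${\alpha}_m$-weighted majority vote of the stored classifiers, normalized by $\sum_m {\alpha}_m$ as in the last line of Algorithm~\ref{alg:bqsvm}.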
\subsection{Numerical Simulation Results}
First, we run experiments on simulated data created with functions available in scikit-learn (see Fig. \ref{fig:datasets}). This allows us to create a number of statistically independent datasets and obtain averaged performance metrics. In this study we chose to generate 50 datasets of each kind: XOR, moons and circles.
\begin{figure*}[ht]
\centering
\includegraphics[width=\textwidth]{images/Fig2_datasets.png}
\caption{Different datasets used in experiments.}
\label{fig:datasets}
\end{figure*}
Each dataset has 150 observations, split equally between training, validation and testing subsets. We train a boosted QSVM as described above for each dataset. For comparison, we also train an \textit{SVM} and \textit{xgboost}. The parameter grid for the \textit{SVM} includes RBF and linear kernels, regularization \textit{C} ranging from 0.1 to 100, and the gamma parameter for the RBF kernel ranging from 0.0001 to 10. The parameter grid for \textit{xgboost} was constructed following \cite{wade2020handson}.
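For the classical baseline, the stated grid can be sketched as follows (illustrative values spanning the quoted ranges; note that our actual selection uses the held-out validation set, whereas \texttt{GridSearchCV} is shown here with its default cross-validation for brevity):
\begin{verbatim}
# Illustrative SVM baseline grid (scikit-learn); values span the
# ranges quoted in the text.
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

param_grid = [
    {"kernel": ["linear"], "C": [0.1, 1, 10, 100]},
    {"kernel": ["rbf"], "C": [0.1, 1, 10, 100],
     "gamma": [1e-4, 1e-3, 1e-2, 1e-1, 1, 10]},
]
search = GridSearchCV(SVC(), param_grid, scoring="accuracy")
# search.fit(X_train, y_train) then selects the baseline model
\end{verbatim}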
\begin{figure*}[ht]
\centering
\includegraphics[width=\textwidth]{images/Fig3_model_accuracies.png}
\caption{Model accuracy comparison box-plots. Lines show median accuracy on the test sample, boxes show the range between the lower and upper quartiles, and whiskers indicate ``the range of the data'' following Tukey's definition, with $Q1 - 1.5 \cdot (Q3-Q1)$ and $Q3 + 1.5 \cdot (Q3-Q1)$ for the lower and upper whiskers, respectively.}
\label{fig:accuracies}
\end{figure*}
The results are shown in Fig. \ref{fig:accuracies}. The performance on the XOR dataset seems comparable across the three models. Boosted QSVM struggles to achieve comparable performance on the moons dataset, but works best on the circles dataset with median at $100\%$ accuracy.
An interesting question is whether a Boosted QSVM actually benefited from the ensemble and if so then how much improvement did it provide. It turns out that only about $31\%$ of Boosted QSVM models contain more than 1 estimator in the ensemble. Table \ref{tbl:lengths} shows mean and maximum ensemble size by dataset. The more difficult the dataset for QSVM is, the larger the ensemble seems to be: more than 1 estimator is barely used for circles data, while 3.8 estimators on average are used for moons data.
\begin{table}[ht]
\caption{Average and Maximum Model Ensemble Size Per Dataset}
\label{tbl:lengths}
\begin{center}
\begin{tabular}{ |c|c|c| }
\hline
Dataset & Mean & Max \\
\hline
XOR & 2.02 & 10 \\
circles & 1.06 & 3 \\
moons & 3.84 & 10 \\
\hline
\end{tabular}
\end{center}
\end{table}
We have also investigated whether there is a performance gain from having multiple classifiers. Table \ref{tbl:improvements} shows the classification accuracy improvement from an ensemble of QSVM classifiers compared to a single QSVM. The sample size is small for the circles data, where even a single QSVM does well. The average classification accuracy improvement is $4.2\%$ for XOR and $7.5\%$ for moons.
\begin{table}[ht]
\caption{QSVM Accuracy Increase with Boosting}
\label{tbl:improvements}
\begin{center}
\begin{tabular}{ |c|c|c|l| }
\hline
\hfil Dataset & \hfil Mean & \hfil Max & \multicolumn{1}{|p{3.2cm}|}{\centering Number of ensembles with more than one learner (out of 50)}\\\hline
XOR & $4.2\%$ & $16.0\%$ & {\hfil 36}\\
circles & $2.0\%$ & $2.0\%$ & {\hfil 2}\\
moons & $7.5\%$ & $14.0\%$ & {\hfil 24}\\
\hline
\end{tabular}
\end{center}
\end{table}
\section{Conclusions}
Data scientists across multiple industries continue to push limits in their search for the best-in-class machine learning model that would provide a competitive edge. Quantum machine learning holds the promise of even higher performance than classical approaches due to enhanced feature spaces. The approach discussed here is derived and adapted from the best ensemble-building practices that have worked well in traditional machine learning, and thus should push the limits of model performance even further. The examples discussed in this work show that boosted QSVM ensembles outperform single QSVMs, which in some cases allows them to match the accuracy of non-quantum models, and in other cases even exceed it.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Introduction}
In recent years there has been substantial progress towards the minimal model program for complex projective varieties of arbitrary dimension \cite{BCHM10}.
Unfortunately, much less is known about the minimal model program for K\"ahler varieties.
In dimension 3, the situation is now well understood, including the cone theorem, the base point free theorem, the existence of flips and divisorial contractions and the termination of flips (see \cite{HP16}, \cite{CHP16}, \cite{DO22}, \cite{DH20} and references therein).
In higher dimensions, the situation is less clear.
Recently, however, Fujino proved the minimal model program for projective morphisms between complex analytic spaces (of arbitrary dimension) \cite{Fuj22}.
In this paper we take the first steps towards proving that the minimal model program holds for K\"ahler 4-folds. In particular we show that it holds for effective dlt pairs, and for (strongly) semistable families of $3$-folds over curves.
\begin{theorem}\label{thm:effective-dlt-mm}
Let $(X, B)$ be a $\mathbb{Q}$-factorial compact K\"ahler $4$-fold dlt pair such that $K_X+B\sim_{\mathbb{Q}} M\>0$. Then $(X, B)$ has a log minimal model.
\end{theorem}
\begin{theorem}\label{thm:ss-mmp}
Let $f:(X,B)\to T$ be a $\mathbb{Q}$-factorial
semi-stable klt pair of dimension $4$ and $W\subset T$ a compact subset (see Definition \ref{def:klt-semi-stable-pair}). If $K_X+B$ is effective (resp. not effective) over $W$ (see Lemma \ref{l-psef}), then we can run the $(K_X+B)$-MMP over a neighborhood of $W$ in $T$ which ends with a minimal model over $W$ (resp. with a Mori fiber space over $W$).
\end{theorem}
The main idea for the proof of Theorem \ref{thm:effective-dlt-mm} is as follows. If $K_X+B\sim _{\mathbb Q}M\geq 0$, then running the minimal model program for $K_X+B$ is equivalent to running the minimal model program for $K_X+B+\lambda M$ for any $\lambda >0$. Suppose for simplicity that
$(X,{\rm Supp}(B+M))$ has simple normal crossings and $(X,B+\lambda M)$ is dlt for some $\lambda >0$ such that the support of $\lfloor B+\lambda M\rfloor$ is equal to the support of $M$. It then follows that $K_X+B$ is nef if and only if $K_X+B+\lambda M$ is nef. If this is not the case, then we show that there is a $(K_X+B)$-negative extremal ray $R$ spanned by a rational curve $C$ such that $C\cdot M<0$ and hence $C\cdot S<0$ for a component $S$ of $M$ and hence of $\lfloor B+\lambda M\rfloor$. By adjunction, writing $K_S+B_S:=(K_X+B+\lambda M)|_S$, the pair $(S,B_S)$ is a divisorially log terminal 3-fold. We can now apply the 3-dimensional minimal model program to the pair $(S,B_S)$; in particular we have a contraction $S\to T$ corresponding to the $(K_S+B_S)$-negative extremal face $F$ spanned by the curves of the ray $R$ contained in $S$.
Since $C\cdot S<0$, we are able to extend this to a contraction $X\to Y$ of the ray $R$. If this is a divisorial contraction, we replace $X$ by $Y$ and repeat the procedure. Otherwise we have a flipping contraction, which is in particular a projective morphism and hence its flip $X\dasharrow X^+$ exists by \cite{Fuj22} (see also Theorem \ref{t-mmpscale} below). We then replace $X$ by $X^+$ and repeat the procedure. In order to conclude it is necessary to show the termination of the corresponding sequences of flips. This follows along the usual approach by using special termination, the acc for log canonical thresholds, and termination of flips in dimension 3.
Some of the ideas in this approach are inspired by the approach for projective varieties \cite{BCHM10}, \cite{Bir07}, and \cite{Bir10}, but not surprisingly many new technical issues arise in the context of K\"ahler varieties. Regarding Theorem \ref{thm:ss-mmp}, we simply remark that according to our definition of a
semi-stable klt pair $f:(X,B)\to T$, for any $t\in W$, $(X,X_t+B)$ is a plt pair, thus $K_{X_t}+B_t=(K_X+X_t+B)|_{X_t}$ is a klt 3-fold and so we can reduce questions on the existence of the relative $(X,B)$ minimal model program to known results about the 3-fold minimal model program for $(X_t,B_t)$. Termination of flips when $K_X+B$ is not effective over $W$ is the most challenging part of this proof as the usual approach does not immediately apply here.\\
We will also use the results of \cite{Nak87} and recent advances in the minimal model program to prove the following results conjectured in \cite{Nak87}.
\begin{theorem}[Finite generation conjecture]\label{c-fg} Let $f: X \to Y$ be a proper surjective morphism of analytic varieties where $X$ is in Fujiki's class $\mathcal C$. Suppose that $(X,B)$ is a klt pair. Then the relative canonical $\mathcal O _Y$-algebra \[R(X/Y,K_X+B):=\oplus_{m\>0} f_*\mathcal O _X(m(K_X+B))\] is locally finitely generated.
\end{theorem}
\begin{theorem}\label{t-mmpscale}
Let $\pi :X\to U$ be a projective morphism of normal varieties and $B\geq 0$ a ${\mathbb{Q}}$-divisor such that $(X,B)$ is klt. Let $W\subset U$ be a compact subset such that $\pi :X\to U$ satisfies property $\mathbf P$ or $\mathbf Q$ (see Definition \ref{def:property-pq}) and $X$ is $\mathbb{Q}$-factorial near $W$ (cf. Definition \ref{def:Q-factorial}). Then, after shrinking $U$ in a neighborhood of $W$,
\begin{enumerate}
\item we can run the $K_X+B$ MMP over $U$,
\item if $K_X+B$ is pseudo-effective, and either $B$ or $K_X+B$ is big over $U$, then any MMP with scaling of a relatively ample divisor terminates with a minimal model, and
\item if $K_X+B$ is not pseudo-effective over $U$, then any MMP with scaling of a relatively ample divisor terminates with a Mori fiber space.
\end{enumerate}
\end{theorem}
\begin{remark}
After completing the proofs of Theorems \ref{c-fg} and \ref{t-mmpscale}, we were informed that Fujino has also proved these results, see \cite{Fuj22}. We note that Fujino's approach is based on \cite{BCHM10} whereas our approach is inspired by \cite{CL10}. Another possible approach can be found in \cite{Pau12}, which is particularly suited to the analytic context.
\end{remark}
This article is organized in the following manner: In Part 1, we collect and prove various preliminary results. In Subsection 2.4 we prove two important results, namely Theorems \ref{thm:nef-big-to-kahler} and \ref{thm:nef-restricts-to-pseff}. These two results serve as our main tools for testing whether a $(1, 1)$ class $\alpha$ is nef or not; see Remark \ref{rmk:nef-criteria} for more details. Part 2 of the article is devoted to proving finite generation as in \cite{CL10}. We prove Theorems \ref{c-fg} and \ref{t-mmpscale} in Section 4 of this part. In Part 3, we prove Theorem \ref{thm:effective-dlt-mm} (in Section 7) and Theorem \ref{thm:ss-mmp} (in Section 8).\\
{\bf Acknowledgment.} O. Das would like to thank Cristian Martinez for many useful discussions.
\part{Preliminaries}
\section{Preliminaries}
A \textit{complex analytic variety} or simply an \textit{analytic variety} is a reduced and irreducible complex space. All complex spaces in this article are assumed to be \textit{second countable} spaces. A holomorphic map $f:X\to Y$ between complex spaces is called a \textit{morphism}. An open subset $U\subset X$ is called a Zariski open set if the complement $Z=X\setminus U$ is a closed analytic subset of $X$, i.e. there is a sheaf of ideals $\mathscr{I}_Z\subset\mathcal{O}_X$ such that $Z={\rm Supp}(\mathcal{O}_X/\mathscr{I}_Z)$. Let $\mathcal{P}$ be a property. We say that \textit{general} points of $X$ satisfy $\mathcal{P}$ if there is a dense Zariski open subset $U\subset X$ such that $\mathcal{P}$ is satisfied for all $x\in U$. We say that \textit{very general} points of $X$ satisfy $\mathcal{P}$ if there is a countable collection of dense Zariski open subsets $\{U_i\}_{i\in I}$ of $X$ such that $x\in X$ satisfies $\mathcal{P}$ for all $x\in \cap_{i\in I} U_i$. Similarly, if $f:X\to Y$ is a morphism between complex spaces, we say that \textit{general fibers} of $f$ satisfy $\mathcal{P}$ if there is a dense Zariski open subset $U\subset Y$ such that $X_y:=f^{-1}(y)$ satisfies $\mathcal{P}$ for all $y\in U$; very general fibers are defined analogously.\\
Let $S\subset X$, then we say that $S$ is \textit{uncountably Zariski dense} in $X$ if $S$ is not contained in any countable union of closed analytic subsets of $X$. Note that, if $S\subset X$ is uncountably Zariski dense, then for any non-empty Zariski open subset $U\subset X$, $S\cap U\neq\emptyset$.\\
\begin{definition}\label{def:kahler-variety}
Let $X$ be an analytic variety. Then $X$ is called a K\"ahler variety if there is a K\"ahler form on $X$, i.e. a positive closed real $(1, 1)$ form $\omega\in \mathcal{A}_{\mathbb{R}}^{1,1}(X)$ such that the following holds: for every $x\in X$, there is an open neighborhood $x\in U\subset X$ and a closed embedding $\iota: U\to V$ into an open subset of $\mathbb{C}^N$, and a strictly plurisubharmonic $C^\infty$ function $f:V\to \mathbb{R}$ such that $\omega|_{U\cap X_\textsubscript{sm}}=(i\partial\bar\partial f)|_{U\cap X_\textsubscript{sm}}$.
\end{definition}
\noindent
\begin{enumerate}
\item For a compact analytic variety $X$, $N^1(X)$ is defined to be the Bott-Chern cohomology group $H^{1,1}_{\operatorname{BC}}(X)$ (which is also an $\mathbb{R}$-vector space), see \cite[Definition 3.1]{HP16}. $N_1(X)$ is defined in \cite[Definition 3.8]{HP16}. When $X$ is a normal compact analytic variety with rational singularities and belongs to Fujiki's class $\mathcal{C}$, the duality of $N^1(X)$ and $N_1(X)$ is established in \cite[Proposition 3.9]{HP16}.
\item Let $X$ be a compact analytic variety. Let $u\in H^{1,1}_{\operatorname{BC}}(X)$ be a class represented by a form $\alpha$ with local potentials. Then $u$ is called nef if for some smooth positive $(1,1)$ form $\omega$ and for every $\epsilon>0$, there exists a smooth function $f_\epsilon\in\mathcal{A}^0(X)$ such that
\[
\alpha+i\partial\bar{\partial}f_\epsilon\>-\epsilon\omega.
\]
If $X$ is in Fujiki's class $\mathcal{C}$, then we denote by $\operatorname{Nef}(X)\subset N^1(X)$ the cone of nef cohomology classes.
\item For the definitions of big and pseudo-effective classes and the corresponding cones, see \cite{HP16}, \cite[Definition 2.2]{DH20} and also Subsection \ref{subs:analytic-classes}.
\item Let $D=\sum a_i D_i$ and $D'=\sum a'_iD_i$ be two $\mathbb{R}$-divisors on a normal analytic variety $X$. Then we define $D\wedge D'$ as
\[
D\wedge D':=\sum_i\min\{a_i, a'_i\} D_i.
\]
\end{enumerate}
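For instance, if $D=2D_1+3D_2$ and $D'=D_1+5D_2$ for prime divisors $D_1, D_2$, then
\[
D\wedge D'=\min\{2,1\}D_1+\min\{3,5\}D_2=D_1+3D_2.
\]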
\begin{remark}\label{rmk:nef-criteria}
Let $X$ be a normal compact K\"ahler variety, and $B\>0$ be an effective divisor such that $K_X+B$ is $\mathbb{Q}$-Cartier. Under mild singularity assumptions on the pair $(X, B)$, the Minimal Model Program asks whether $K_X+B$ is nef or not. When $X$ is a projective variety, nefness of a $\mathbb{Q}$-Cartier divisor $D$ can simply be tested by checking whether $D\cdot C$ is non-negative (or not) for all curves $C\subset X$. However, on general compact K\"ahler varieties this criterion is not equivalent to Definition \ref{def:kahler-variety}(2); for a counterexample see \cite[Page 5]{HP17}. When $\operatorname{dim} X=3$, using Boucksom's divisorial Zariski decomposition \cite{Bou04} it is shown in \cite{HP16, CHP16} (see also \cite[Lemma 2.6]{DH20}) that $K_X+B$ is nef if and only if $(K_X+B)\cdot C\>0$ for all curves $C\subset X$. This result is expected to be true in $\operatorname{dim} X\>4$, but a proof is not yet known; the proof in dimension $3$ does not automatically extend to higher dimensions. For a partial result in higher dimensions see \cite{CH20}.
In the absence of such a nefness criterion we use our Theorem \ref{thm:nef-restricts-to-pseff} to test whether a class $\alpha\in H^{1,1}_{\operatorname{BC}}(X)$ is nef or not; it says that $\alpha$ is nef if and only if $\alpha|_V$ is a pseudo-effective class for all analytic varieties $V\subset X$.
\end{remark}
The following results about nefness will be used throughout the article.
\begin{lemma}\cite[Remark 3.12]{HP16}\label{lem:nef-cone}
Let $X$ be a normal compact K\"ahler variety, $\operatorname{Nef}(X)$ is the cone of nef classes in $H^{1,1}_{\operatorname{BC}}(X)$ and $\mathcal{K}(X)$ is the (open) cone of K\"ahler classes. Then $\overline{\mathcal{K}(X)}=\operatorname{Nef}(X)$.
\end{lemma}
\begin{proposition}\label{pro:nef-mori-cone-duality}
Let $X$ be a normal compact K\"ahler variety with rational singularities. Then $\operatorname{Nef}(X)$ and $\operatorname{\overline{NA}}(X)$ are dual to each other via the natural isomorphism $N^1(X)\to N_1(X)^*$ induced by their usual perfect pairing.
\end{proposition}
\begin{proof}
The proof is similar to that of \cite[Proposition 3.15]{HP16}. Note that the main ingredient of the proof of \cite[Proposition 3.15]{HP16} is Lemma 3.13 of \cite{HP16}, in place of which we use Lemma \ref{lem:nef-pullback}.
\end{proof}
\begin{definition}\label{def:finite-generation}
Let $X$ be a complex space and $\mathcal R$ a graded sheaf of $\mathcal{O}_X$-algebras. We say that $\mathcal R$ is \textit{locally finitely generated} if for every $x\in X$ there is an open neighborhood $x\in U$ such that $\mathcal R(U)$ is a finitely generated $\mathcal{O}_X(U)$-algebra. We say $\mathcal R$ is \textit{finitely generated} if there exists an integer $m\>0$ such that for every $x\in X$, there is an open neighborhood $x\in U$ such that $\mathcal R(U)$ is generated by elements of degree $\<m$.
\end{definition}
\begin{remark}\label{rmk:finite-generation}
Note that finite generation is a necessary condition for the existence of $\mbox{Projan\ }\mathcal R\to X$. If $W\subset X$ is a compact subset, then for any locally finitely generated graded algebra $\mathcal R$, there is an open neighborhood $U\supset W$ of $W$ such that $\mathcal R|_U$ is a finitely generated $\mathcal{O}_U$-algebra. Indeed, since $W$ is compact, it can be covered by finitely many open sets $\{U_i\}_{1\<i\<k}$ such that $\mathcal R(U_i)$ is a finitely generated $\mathcal{O}_X(U_i)$-algebra. Now let $m_i\>0$ be an integer such that each $\mathcal R(U_i)$ is generated by monomials of degree $\<m_i$. Then $m:=\max\{m_i\;:\; i=1,2,\ldots, k\}$ does the job.\\
\end{remark}
\begin{definition}\label{def:Q-factorial}
Let $X$ be a normal analytic variety. The canonical sheaf $\omega_X$ is defined as $\omega_X:=(\wedge^{\operatorname{dim} X} \Omega^1_X)^{**}$. Note that unlike the case of algebraic varieties, $\omega_X$ here does not necessarily correspond to a Weil divisor $K_X$ such that $\omega_X\cong \mathcal{O}_X(K_X)$. However, by abuse of notation we will say that $K_X$ is a canonical divisor when we actually mean the canonical sheaf $\omega_X$. This doesn't create any problem in general as running the minimal model program involves intersecting subvarieties with $\omega_X$.
\begin{enumerate}
\item A $\mathbb{Q}$-divisor $D$ on $X$ is called $\mathbb{Q}$-Cartier if $mD$ is Cartier for some $m\in\mathbb{N}$. We say $X$ is $\mathbb{Q}$-factorial, if every prime Weil divisor $D$ on $X$ is $\mathbb{Q}$-Cartier and there is a positive integer $m>0$ such that $(\omega_X^{\otimes m})^{**}$ is a line bundle. Note that if $X$ is $\mathbb{Q}$-factorial and $U\subset X$ is an open subset, then $U$ is not necessarily $\mathbb{Q}$-factorial.
\item A $\mathbb{Q}$-divisor $D$ is called $\mathbb{Q}$-Cartier at a point $x\in X$, if there is an open neighborhood $x\in U\subset X$ such that $D|_U$ is $\mathbb{Q}$-Cartier.
\item Let $\pi :X\to T$ be a projective morphism of complex varieties, $X$ is normal and $W\subset T$ a compact subset. We say that $X$ is $\mathbb Q$-factorial over $W$, if every divisor $D$ defined on a neighborhood of $\pi ^{-1}(W)$ is $\mathbb Q$-Cartier at every point $x\in \pi ^{-1}(W)$ and $\omega _X$ is also $\mathbb Q$-Cartier at every point $x\in \pi ^{-1}(W)$.
\item A pair $(X, \Delta)$ consists of a normal variety $X$ and a $\mathbb{Q}$-divisor $\Delta$ such that $K_X+\Delta$ is $\mathbb{Q}$-Cartier. The singularities of $(X, \Delta)$ are defined exactly the same way as in \cite[Chapter 2]{KM98}. Note that in this article when we say that a pair $(X, \Delta)$ is klt, we assume that $\Delta$ is an \textit{effective} divisor. If $\Delta$ is not necessarily effective, then we will call $(X, \Delta)$ a \textit{sub-klt pair}. Similar conventions are made for other classes of singularities. If $E$ is a divisor over $X$, the \textit{discrepancy} of $E$ with respect to $(X, \Delta)$ will be denoted by $a(E, X, \Delta)$.
\item We will often abuse notation and simply say that $K_X+\Delta$ is klt (instead of the pair $(X, \Delta)$ is klt).
\end{enumerate}
\end{definition}
\begin{definition}\label{def:log-terminal-and-log-minimal-model}
Let $(X, B)$ be a log canonical pair and $\phi:X\dashrightarrow Y$ a bimeromorphic map. Let $B_Y$ be the push-forward of $B$ under $\phi$ and $E_Y=\sum E_j$ the sum of all prime Weil divisors on $Y$ which are contracted by $\phi^{-1}:Y\dashrightarrow X$.
\begin{enumerate}
\item We say that $(Y, B_Y+E_Y)$ is a \textit{nef model} if $(Y, B_Y+E_Y)$ is a $\mathbb{Q}$-factorial dlt pair and $K_Y+B_Y+E_Y$ is nef.\\
\item We say that $(Y, B_Y+E_Y)$ is a \textit{log minimal model} if it is a nef model and for any prime Weil divisor $E\subset X$ which is contracted by $\phi$, $a(E, X, B)<a(E, Y, B_Y+E_Y)$ holds.\\
\item We say $(Y, B_Y)$ is a \textit{log terminal model} of $(X, B)$ if the following hold:
\begin{enumerate}
\item[(i)] $(Y, B_Y)$ is a $\mathbb{Q}$-factorial dlt pair,
\item[(ii)] $K_Y+B_Y$ is nef,
\item[(iii)] $\phi$ does not extract any divisor, i.e. $\phi^{-1}:Y\dashrightarrow X$ does not contract any divisor, and
\item[(iv)] for any prime Weil divisor $E\subset X$ which is contracted by $\phi$, $a(E, X, B)<a(E, Y, B_Y)$ holds.\\
\end{enumerate}
\end{enumerate}
Clearly, every log terminal model is a log minimal model, and every log minimal model is a nef model.
\end{definition}
The following result shows that if $(X, B)$ is a plt pair, then every log minimal model of $(X, B)$ is a log terminal model.
\begin{lemma}\label{lem:lmm-to-ltm}
Let $(X, B)$ be a log canonical pair and $(Y, B_Y+E_Y)$ a log minimal model of $(X, B)$ as in Definition \ref{def:log-terminal-and-log-minimal-model} above. Let $\phi:X\dashrightarrow Y$ be the induced bimeromorphic map. Then the following hold:
\begin{enumerate}
\item For any prime Weil divisor $E$ over $X$, $a(E, X, B)\<a(E, Y, B_Y+E_Y)$.
\item If $(X, B)$ is a plt pair, then $E_Y=0$, i.e. $\phi^{-1}$ does not contract any divisor; in particular, $(Y, B_Y)$ is a log terminal model of $(X, B)$.
\end{enumerate}
\end{lemma}
\begin{proof}
(1) Let $W$ be the normalization of the graph of $\phi$, and $p:W\to X$ and $q:W\to Y$ be the induced bimeromorphic morphisms. Then we can write $K_W=p^*(K_X+B)+G$ and $K_W=q^*(K_Y+B_Y+E_Y)+H$. Note that $p_*G=-B$ and $q_*H=-(B_Y+E_Y)$. Thus we have
\[
p^*(K_X+B)=q^*(K_Y+B_Y+E_Y)+H-G.
\]
Therefore $-(H-G)\equiv_p q^*(K_Y+B_Y+E_Y)$, which is nef; in particular, $-(H-G)$ is nef over $X$. Moreover, $p_*(H-G)=p_*H+B$. Let $D$ be a component of $H$. If $q_*D$ is a component of $B_Y$, then $p_*D\neq 0$, and the coefficient of $p_*D$ in $p_*H+B$ is $0$. If $q_*D$ is a component of $E_Y$, then $p_*D=0$, since the components of $E_Y$ are contracted by $\phi^{-1}$. If $q_*D=0$ and $p_*D\neq 0$, then from the definition of log minimal model it follows that $a(D, Y, B_Y+E_Y)>a(D, X, B)$. In particular, the coefficient of $p_*D$ in $p_*H+B$ is positive. Thus $p_*(H-G)$ is an effective divisor, and hence from the negativity lemma it follows that $H-G$ is an effective divisor. Thus for any prime Weil divisor $E$ over $X$ we have $a(E, X, B)\<a(E, Y, B_Y+E_Y)$.\\
\noindent
(2) Let $E_i$ be a component of $E_Y$. Then $E_i$ is an exceptional divisor over $X$; in particular, $a(E_i, X, B)>-1$, since $(X, B)$ is plt. But from part (1) it follows that $-1<a(E_i, X, B)\<a(E_i, Y, B_Y+E_Y)=-1$. This is a contradiction, and hence $E_Y=0$, i.e. $\phi^{-1}$ does not contract any divisor.\\
\end{proof}
\begin{convention}\label{con:relatively-compact}
We will say that $f:X\to U$ is a morphism from a complex space $X$ to a \textit{relatively compact} space $U$, if there exists a morphism $f':X'\to U'$ of complex spaces such that $U\subset U'$ is a relatively compact open subset of $U'$, $X=X'\times_{U'} U$ and $f$ is the induced morphism to $U$.
\end{convention}
\subsection{Projective morphisms} \cite[Chapter II, Page 24]{Nak04}\label{s-pm}
Let $f:X\to Y$ be a proper morphism of complex spaces. A line bundle $\mathscr{L}$ on $X$ is called \textit{$f$-free} or \textit{$f$-generated} if the natural morphism $f^*f_*\mathscr{L}\to \mathscr{L}$ is surjective. We say $\mathscr{L}$ is \textit{$f$-very ample} or \textit{very ample over $Y$} if $\mathscr{L}$ is $f$-free and $X\to \mathbb{P}_Y(f_*\mathscr{L})$ is a closed embedding. We say that $\mathscr{L}$ is \textit{$f$-ample} or \textit{ample over $Y$} if for every $y\in Y$, there is an open neighborhood $y\in V$ and a positive integer $m>0$ such that $\mathscr{L}^m|_{f^{-1}V}$ is very ample over $V$. A proper morphism $f:X\to Y$ of complex spaces is called \textit{projective} if there exists an $f$-ample line bundle $\mathscr{L}$ on $X$. The morphism $f:X\to Y$ is called \textit{locally projective} if $Y$ has an open cover $\{U_i\}$ such that $f|_{X_{U_i}}:X_{U_i}\to U_i$ is projective for all $i$, where $X_{U_i}:=f^{-1}U_i$.
\begin{remark}\label{rmk:projective-morphism}
Note that the composition of two projective morphisms is not necessarily projective, see \cite[Page 557]{Nak87} for a counterexample. However, the composition of two locally projective morphisms is locally projective. On the other hand, if $f:X\to Y$ and $g:Y\to Z$ are two projective morphisms of complex spaces and $K\subset Z$ is a compact subset, then over a neighborhood of $K$, $g\circ f$ is projective.
\end{remark}
We have the following properties of $f$-ample line bundles.
\begin{theorem}\label{t-rel-ample}
Let $f:X\to Y$ be a projective morphism of complex spaces, $L$ an $f$-ample line bundle, $F$ a coherent sheaf and for any integer $m$ let $F(m)=F\otimes L^m$. Then, for any compact subset $K\subset Y$ there exists an integer $m_0=m_0(K,F)$ such that
\begin{enumerate}
\item $f^*f_*(F(m))\to F(m)$ is surjective at every point $x\in X_K:=f^{-1}K$ for any $m\geq m_0$,
\item $R^if_*(F(m))=0$ on a neighborhood of $K$ for any $i\geq 1$ and $m\geq m_0$,
\item if $U\subset Y$ is a relatively compact Stein open subset, then $F(m)|_{f^{-1}(U)}$ is globally generated and $H^i(f^{-1}(U),F(m))=0$ for any $i\geq 1$ and $m\geq m_0$,
\item if $F$ is invertible, then $F(m)$ is ample (resp. very ample) over a neighborhood of $K$ for all $m\geq m_0$,
\item if $U\subset Y$ is a relatively compact Stein open subset, $S$ is a normal subvariety of $X$ and $D$ is Cartier on $X$, then $|(D+mL)_{|{S_U}}|=|D+mL|_{S_U}$ for all $m\geq m_0$, where ${S_U}=S\cap f^{-1}(U)$.
\end{enumerate}
\end{theorem}
\begin{proof}
(1-2) are standard results due to Grauert and Remmert, for example, see \cite[IV Theorem 2.1]{BS76}.
For (3) recall that if $G$ is a coherent sheaf on a Stein space $U$, then by Cartan's theorem, $G$ is globally generated and $H^p(U,G)=0$ for every $p>0$.
Since $U$ is relatively compact, by (2) we have $R^if_*(F(m))|_U=0$ for any $i\geq 1$ and $m\geq m_0$, and so by a spectral sequence argument $H^i(f^{-1}(U),F(m))=H^i(U,f_*(F(m)))=0$ for $i>0$, since $f_*F(m)$ is coherent on $U$.
By (1) we have $f^*f_*(F(m))\to F(m)$ is surjective over $U$, and since $U$ is Stein, $f_*F(m)|_U$ is globally generated and hence so is $f^*f_*(F(m))|_{f^{-1}(U)}$. In particular, $F(m)|_{f^{-1}(U)}$ is globally generated.\\
(4) If $F$ is invertible, then by (1) we may assume that $F(m)$ is $f$-generated and hence $f$-nef over a neighborhood of $K$ for $m\geq m_0$. But then $F(m+1)$ is $f$-ample (as it is the tensor product of an $f$-nef and an $f$-ample line bundle). The very ampleness statement follows similarly.\\
(5) The inclusion $|(D+mL)_{|{S_U}}|\supset |D+mL|_{S_U}$ is immediate from the definitions.
Consider the short exact sequence
\[ 0\to {\mathcal{O}} _{X_U}(D-S)\to {\mathcal{O}} _{X_U}(D)\to {\mathcal{O}} _{S_U}(D|_{S_U})\to 0.\]
Twisting this sequence by ${\mathcal{O}} _{X_U}(mL)$ and then pushing forward by $f$, we obtain the following surjectivity from (3) (applied to $F={\mathcal{O}} _X(D-S)$):
\[
\xymatrixcolsep{3pc}\xymatrix{H^0(X_U, \mathcal{O}_{X_U}(D+mL))\ar@{->>}[r] & H^0(S_U, \mathcal{O}_{S_U}((D+mL)|_{S_U})). }
\]
Thus the reverse inclusion holds.
\end{proof}~\\
We will also need the following which is the analog of \cite[Lemma 2.28]{CL10}.
\begin{lemma} Let $\pi :X\to U$ be a projective morphism from a complex manifold to a Stein space.
Suppose that $D_1,\ldots , D_l\in {\rm Div }_{\mathbb{Q}} (X)$, $|D_i|_{\mathbb{Q}}\ne \emptyset$ for $1\leq i\leq l$ and let $V\subset {\rm Div }_{\mathbb{R}} (X)$
be the subspace spanned by the components of $D_1,\ldots , D_l$ and $\mathcal P$ the convex polytope spanned by $D_1,\ldots , D_l$.
Suppose that the ring $R(X,D_1,\ldots , D_l)$ is a finitely generated ${\mathcal{O}} _U$-algebra. Then
\begin{enumerate}
\item ${\mathbf{Fix}}$ extends to a rational piecewise affine function on $\mathcal P$, and
\item there exists a positive integer $k$ such that for every $D \in \mathcal P$ and every $m \in \mathbb N$, if $\frac{m}{k} D \in {\rm Div}(X)$, then ${\mathbf{Fix}}(D) = \frac{1}{m} {\rm Fix} |mD|$.
\end{enumerate}
\end{lemma}
\begin{proof}
See the proof of \cite[Lemma 2.28]{CL10}.
\end{proof}
\subsubsection{Resolutions of singularities}
\begin{theorem}[Log Resolution]\cite[Thm. 13.2, 1.10 and 1.6]{BM97}\cite[Thm. 2.11]{DH20}\label{thm:log-resolution}
Let $X\subset W$ be a relatively compact open subset of an analytic variety $W$ and $D$ a $\mathbb{Q}$-Cartier divisor on $X$. Then there exists a projective bimeromorphic morphism $f:Y\to X$ from a smooth variety $Y$ satisfying the following properties:
\begin{enumerate}
\item $f$ is a successive blow up of smooth centers contained in $X\setminus SNC(X, D)$,
\item $f^{-1}(SNC(X, D))\cong SNC(X, D)$, and
\item $\operatorname{Ex}(f)$ is a pure codimension $1$ subset of $Y$ such that $\operatorname{Ex}(f)\cup(f^{-1}_*D)$ has SNC support.
\end{enumerate}
\end{theorem}
\begin{remark}\label{r-log-resolution}
{ Note that if $\mathcal J\subset \mathcal O _X$ is a sheaf of ideals, then there exists a projective bimeromorphic morphism $f:Y\to X$ from a smooth variety $Y$ such that $\mathcal J\cdot \mathcal O _Y= \mathcal O _Y(-G)$ where $(Y,G+{\rm Ex}(f))$ is log smooth. To see this, simply blow up $\mathcal J$ to get $f_1:X_1\to X$ such that $\mathcal J\cdot \mathcal O _{X_1}= \mathcal O _{X_1}(-D)$ (note that by \cite[Theorem 1.10]{BM97} we can also achieve this step by a finite sequence of blow ups along smooth centers). Then apply Theorem \ref{thm:log-resolution} to obtain $g:Y\to X_1$ such that $(Y,g^{-1}_*D+{\rm Ex}(f))$ is log smooth, where $f:=f_1\circ g$.}
\end{remark}
\begin{lemma}\label{l-log-resolution}
Let $\pi :X\to U$ be a projective morphism from a smooth complex variety to a relatively compact Stein variety, $L$ a line bundle on $X$ and $V\subset |L|$ a non-empty linear series. Then there exists a projective bimeromorphic morphism $f:X'\to X$ such that ${\rm Fix}(f^*V)$ is a divisor with simple normal crossings and ${\rm Mob}(f^*V)$ is $\pi \circ f$-free.
\end{lemma}
\begin{proof}
Let $\mathcal V\subset H^0(X,L)$ be the vector space corresponding to the linear series $V$.
Let $\mathfrak b$ be the base ideal of $\mathcal V$ so that $\mathcal V\cdot \mathcal O_X\to L\otimes \mathfrak b$ is surjective.
Let $f:X'\to X$ be a resolution of $\mathfrak b$ (as in Remark \ref{r-log-resolution}) so that $\mathfrak b\cdot \mathcal O _{X'}=\mathcal O _{X'}(-F)$, where $F$ is a divisor with simple normal crossings. Then $\mathcal V\otimes \mathcal O _{X'}\to f^*L\otimes \mathcal O _{X'}(-F)$ is surjective, and thus $f^*L\otimes \mathcal O _{X'}(-F)$ is globally generated. In particular, ${\rm Fix}(f^*V)=F$ is a divisor with simple normal crossings and ${\rm Mob}(f^*V)$ is $\pi \circ f$-free.
\end{proof}
\subsection{Bertini's theorem}
In this subsection we will prove an analytic version of Bertini's theorem which will be useful in what follows. First we need the following definitions.
\begin{definition}
Let $X$ be a complex space. A subset $W\subset X$ is called \emph{analytically meager}, if there exist countably many locally analytic subsets $\{Z_i\}_{i\in \mathbb{N}}$ of $X$ of codimension $\>1$ such that $W\subset \cup_{i=1}^\infty Z_i$. Clearly, a countable union of analytically meager sets is analytically meager.
\end{definition}
\begin{definition}
Let $X$ be a complete metric space. A subset $M\subset X$ is called \textit{fat}, if there are countably many dense open subsets $\{U_i\}$ of $X$ such that $\cap_i U_i\subset M$. Clearly, countable intersections of fat sets are fat. Let $\mathcal{P}$ be a property. We say that \textit{sufficiently general} points of $X$ satisfy $\mathcal{P}$, if there exists a fat subset $M\subset X$ such that $x\in X$ satisfies $\mathcal{P}$ for all $x\in M$.
Note that, since $X$ is a complete metric space, by Baire's theorem any fat set is dense in $X$. From \cite[Remark II.3, Page 276]{Man82} we know also that if $M$ is a fat subset of $X$, then $X\setminus M$ is analytically meager.
\end{definition}
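\noindent For instance, if $X$ is a connected complex manifold endowed with a complete metric and $\{Z_i\}_{i\in\mathbb{N}}$ is a countable family of proper analytic subsets of $X$, then $M:=X\setminus \cup_{i} Z_i$ is a fat subset of $X$: each $X\setminus Z_i$ is a dense open subset and $\cap_i (X\setminus Z_i)\subset M$. In particular, sufficiently general points of $X$ avoid any fixed countable family of proper analytic subsets.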
\begin{remark}\label{rmk:sifficiently-general}
Let $X$ be a complex space and $\mathscr{L}$ a line bundle on $X$. Let $V\subset H^0(X, \mathscr{L})$ be a finite dimensional $\mathbb{C}$-subspace. By abuse of terminology we will say that a \textit{sufficiently general member} $D$ of the linear system $|V|$ satisfies property $\mathcal{P}$, if for a sufficiently general member $s\in V$, $D=\operatorname{Zero}(s)\subset X$ satisfies property $\mathcal{P}$.
\end{remark}
\begin{remark}\label{rmk:meager-set}
Note that, if $W\subset X$ is an analytically meager set, then $W$ is nowhere dense in $X$, i.e. the interior of the closure $\overline{W}$ is empty. Consequently, $X\setminus W$ is dense in $X$. Moreover, if $f:X\to Y$ is a surjective morphism between complex spaces and $W\subset Y$ is an analytically meager set, then $f^{-1}W$ is an analytically meager subset of $X$. Now let $g:X\to Y$ be a surjective continuous map between complete metric spaces and let $M\subset Y$ be a fat subset. By definition, $M$ contains a countable intersection of dense open subsets of $Y$, say $\cap U_i$. Then $Y\setminus \cap U_i$ is an analytically meager set, and thus $g^{-1}(Y\setminus \cap U_i)$ is also analytically meager in $X$. In particular, $X\setminus g^{-1}(Y\setminus \cap U_i)=\cap g^{-1}U_i$ is dense in $X$; since each $g^{-1}U_i$ is open and contains this dense set, each $g^{-1}U_i$ is a dense open subset of $X$. Therefore $g^{-1}M$ is a fat subset of $X$.\\
\end{remark}
\begin{theorem}\label{t-bertini+}
Let $\pi :X\to U$ be a projective morphism from a smooth complex variety to a Stein space. Let $D$ be a simple normal crossings divisor on $X$ and $L$ a $\pi$-generated line bundle on $X$. Then the following hold:
\begin{enumerate}
\item If $\operatorname{dim} X=n$, then there exist sections $s_0,\ldots , s_n\in H^0(X, L)$ generating $L$.
\item Let $V\subset H^0(X,L)$ be a finite dimensional $\mathbb C$-subspace such that $V$ generates $L$, i.e. $V\otimes _{\mathbb C}{\mathcal{O}}_X\to L$ is surjective. Then $(X, D+G)$ is log smooth for all sufficiently general members $G\in |V|$.
\end{enumerate}
\end{theorem}
\begin{proof}
(1) Since $L$ is $\pi$-generated and $U$ is Stein, $L$ is globally generated by its global sections on $X$, i.e. there is a surjection $H^0(X,L)\otimes \mathcal O _X\twoheadrightarrow L$. We claim that, for any $0\<k\<n$, we can pick sections $s_0,\ldots , s_k\in H^0(X, L)$ such that every component of the zero set $Z_k=Z(s_0,\ldots , s_k)$ has dimension $n-k-1$. Clearly, the claim holds for $k=0$. Proceeding by induction on $k$, assume that $Z_{k-1}$ is a finite union of subvarieties $\{V_j\}_{1\<j\<m}$ of dimension $n-k$. Since $L$ is globally generated on $X$, for every $V_j$ we can pick a section $t_j\in H^0(X, L)$ such that $t_j|_{V_j}\not\equiv 0$. Let $s_k$ be a general $\mathbb{C}$-linear combination of the $t_j$ for $j=1,2,\ldots, m$. Then $s_k\in H^0(X, L)$ does not vanish identically along any of the $V_j$, and hence $Z(s_k)\cap V_j$ is either empty or a finite union of subvarieties of $V_j$ of codimension $1$, i.e. of dimension $n-k-1$; the claim follows. Taking $k=n$, the common zero set of $s_0,\ldots , s_n$ is empty, i.e. these sections generate $L$.\\
(2) Let $Z\subset X$ be a positive dimensional stratum of $D$. Then $V|_Z:=\{s|_Z\; |\; s\in V\}\subset H^0(Z, L|_Z)$ generates $L|_Z$ globally and there is a surjection $\varphi:V\twoheadrightarrow V|_Z$ of vector spaces. Since $V$ (and hence also $V|_Z$) is a finite dimensional $\mathbb{C}$-vector space, fixing some norms on $V$ and $V|_Z$ we may assume that $\varphi$ is a surjective continuous linear transformation between two Banach spaces. Then by \cite[Theorem II.5]{Man82}, there exists a fat set $M\subset V|_Z$ such that the zero set $\operatorname{Zero}(s|_Z)\subset Z$ is smooth for all $s|_Z\in M$. Then from Remark \ref{rmk:meager-set} it follows that $\varphi^{-1}M$ is a fat subset of $V$. Let $K:=\ker(\varphi)$; since $V\setminus K\to V|_Z\setminus\{0\}$ is surjective, it follows that $\operatorname{Zero}(s)|_Z$ is smooth for all $s\in \varphi^{-1}M\setminus K$. Note that $V\setminus K$ is a dense open subset of $V$, since $K$ is a proper closed subspace of $V$; in particular, $\varphi^{-1}M\setminus K$ is a fat subset of $V$. Since there are only finitely many strata of $D$, by induction on the number of positive dimensional strata of $D$, it follows that there is a fat subset $N\subset V$ such that $(X, D+G)$ is log smooth for all $s\in N$ with $\operatorname{Zero}(s)=G$.
\end{proof}
\begin{lemma}\label{l-klt}
Let $f:X\to Y$ be a projective morphism of complex spaces such that $Y$ is a relatively compact Stein space, and let $L$ be an $f$-generated line bundle. Let $V\subset H^0(X, L)$ be a finite dimensional $\mathbb{C}$-subspace such that $L$ is globally generated by the sections of $V$. If $(X, B)$ is klt, then
for a sufficiently general member $D\in |V|$, $(X,B+tD)$ is klt for any $0\leq t<1$.
\end{lemma}
\begin{proof}
Passing to a log resolution, we may assume that $X$ is smooth and $B$ has simple normal crossings support. Then by Theorem \ref{t-bertini+}, for sufficiently general $D\in |V|$, $(X, B+tD)$ is log smooth, and the lemma follows.
\end{proof}
\subsection{Linear series}
Let $\pi:X\to U$ be a projective surjective morphism of normal analytic varieties such that $X$ is smooth, and let $D$ be an $\mathbb{R}$-divisor on $X$.
If $\pi _* {\mathcal{O}} _X(D)\ne 0$ and $B$ is a prime Weil divisor on $X$, we let $m_B(D)$ be the largest integer $m$ such that $\pi _* {\mathcal{O}} _X(D-mB)\to \pi _* {\mathcal{O}} _X(D)$ is an isomorphism (this can be computed on any open subset $V\subset U$ such that $V\cap \pi(B)\ne \emptyset$ cf. \cite[pg 97]{Nak04}; also note that by definition ${\mathcal{O}} _X(D)={\mathcal{O}} _X(\lfloor D\rfloor )$).
We define \[|D/U|=\{D'\sim _U D|D'\geq 0\}\qquad {\rm and}\qquad |D|=\{D'\sim D|D'\geq 0\}.\] Here $D'\sim _U D$ if $D-D'$ is a $\mathbb{Z}$-linear combination of principal divisors and Cartier divisors pulled back from $U$. Similarly, we say that $D'\sim _{\mathbb{R},U}D$ if $D-D'$ is an $\mathbb{R}$-linear combination of principal divisors and Cartier divisors pulled back from $U$. We let $|D/U|_{\mathbb{R}}:=\{D'\geq 0|D'\sim _{\mathbb{R},U}D\}$.
Assume now that $U$ is Stein.
\begin{lemma}\label{l-mD} Let $\pi :X\to U$ be a projective morphism from a normal variety $X$ to a Stein variety $U$, and let $D$ be an $\mathbb{R}$-divisor on $X$. If $\pi _* {\mathcal{O}} _X(D)\ne 0$, then $|D|\ne \emptyset $ and $|D/U|\ne \emptyset $. Moreover, for a prime Weil divisor $B$ on $X$ define $m_B|D|:={\rm max}\{t \geq 0\;|\;D'\geq tB\ {\rm for\ all}\ D'\in |D|\}$ and $m_B|D/U|:={\rm max}\{t \geq 0\;|\;D'\geq tB\ {\rm for\ all}\ D'\in |D/U|\}$. Then
\[ m_B(D)=m_B|D|=m_B|D/U|.\]
\end{lemma}
\begin{proof}
Since $U$ is Stein, $H^0(X, {\mathcal{O}}_X(D))\cong H^0(U, \pi _* {\mathcal{O}} _X(D))\ne 0$ and so $|D|\ne \emptyset $ and $|D/U|\ne \emptyset $.
Let $m_B=m_B(D)$. Since $\pi _* {\mathcal{O}} _X(D-m_BB)\hookrightarrow \pi _* {\mathcal{O}} _X(D)$ is an isomorphism, $H^0(X, {\mathcal{O}} _X(D))=H^0(X, {\mathcal{O}} _X(D-m_BB))$
and hence $m_B|D|\geq m_B$.
Since $\pi _* {\mathcal{O}} _X(D-(m_B+1)B)\hookrightarrow \pi _* {\mathcal{O}} _X(D)$ is not surjective, and $U$ is Stein,
this map is also not surjective on global sections, i.e. $H^0({\mathcal{O}} _X(D-(m_B+1)B))\to H^0({\mathcal{O}} _X(D))$
is not surjective so that $m_B|D|<m_B+1$, and hence $m_B|D|= m_B$.
Clearly $m_B|D|\geq m_B|D/U|$. If this inequality is strict, then there is a divisor $G\sim _U D$ such that ${\rm mult }_B(G)<m:=m_B|D|$.
We can then pick an open subset $V\subset U$ such that $V\cap \pi (B)\ne \emptyset$ and $G|_{X_V}\sim D|_{X_V}$. But then
the section corresponding to $G$ is not in the image of $\phi:\pi _* {\mathcal{O}} _X(D-mB)|_V\hookrightarrow \pi _* {\mathcal{O}} _X(D)|_V$. On the other hand, we have already seen that $m=m_B|D|=m_B$, and hence $\phi$ is an isomorphism. This is impossible and so $m_B|D|= m_B|D/U|$.
\end{proof}
We let \[{\rm Fix}|D/U|=\sum m_B|D/U|\cdot B,\qquad {\rm Mob}|D/U|=D-{\rm Fix}|D/U|.\]
Note that by what we have seen above, we have
\[{\rm Fix}|D/U|={\rm Fix}|D|, \mbox{ where }{\rm Fix}|D|=\sum m_B|D|\cdot B.\]
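\noindent For instance, let $\pi:X\to U$ be the blow up of a point $0$ in a Stein open subset $U\subset \mathbb{C}^2$, with exceptional divisor $E$. For any integer $m>0$, every section of ${\mathcal{O}} _X(mE)$ is the pull-back of a holomorphic function on $U$ (it is holomorphic on $X\setminus E$ and extends across $0$ by Hartogs extension), so that $|mE|=\{{\rm div}(\pi^*h)+mE\;|\;0\ne h\in {\mathcal{O}} _U(U)\}$. Taking $h=1$ we see that
\[{\rm Fix}|mE/U|={\rm Fix}|mE|=mE,\qquad {\rm Mob}|mE/U|=0,\qquad {\mathbf {Fix}}(E/U)=E.\]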
\begin{lemma} Let $\pi :X\to U$ be a projective morphism to a Stein variety, $X$ smooth and $D$ a divisor on $X$ such that $\pi _*{\mathcal{O}} _X(D)\ne 0$.
If $F={\rm Fix}|D/U|$ and $M={\rm Mob}|D/U|$, then ${\rm Fix}|M/U|=0$ and \[|D/U|=F+|M/U|,\qquad |D|=F+|M|.\]
\end{lemma}
\begin{proof}
Immediate consequence of Lemma \ref{l-mD}.
\end{proof}
It is easy to see that ${\rm Fix}|kmD/U|\leq k{\rm Fix}|mD/U|$ for any integers $k,m>0$. If $D\sim _{{\mathbb{Q}},U}D'\geq 0$, then we let \[{\mathbf {Fix}}(D/U):={\rm lim inf}\frac 1 k {\rm Fix}|kD/U|\] for all $k>0$ sufficiently divisible. Clearly \[{\mathbf {Fix}}(D):={\rm lim inf}\frac 1 k {\rm Fix}|kD|={\mathbf {Fix}}(D/U).\]
If $S\subset X$ is a smooth divisor, then we let $|D/U|_S\subset |D_{|S}/U|$ be the sub-linear series consisting of all divisors $D'|_S$, where $D'\in |D/U|$ and ${\rm Supp} D'$ does not contain $S$.
If $|D/U|_S\ne \emptyset$, we let ${\rm Fix}_S|D/U|:={\rm Fix}(|D/U|_S)$ and if $|kD/U|_S\ne \emptyset$ for some integer $k>0$, then we let \[{\mathbf {Fix}}_S(D/U):={\rm lim inf}\frac 1 k {\rm Fix}|kD/U|_S\] for all $k>0$ sufficiently divisible.
Similarly to what we have seen above, one can show that $|D/U|_S\ne \emptyset$ if and only if the homomorphism $\pi _* {\mathcal{O}} _X(D)\to \pi _* {\mathcal{O}} _S(D|_S)$ is non-zero. Since $U$ is Stein, this is in turn equivalent to the fact that
$|D|_S\ne \emptyset$. It then follows that ${\rm Fix}_S|D/U|={\rm Fix}_S|D|$, and
\[{\mathbf {Fix}}_S(D):={\rm lim inf}\frac 1 k {\rm Fix}|kD|_S={\mathbf {Fix}}_S(D/U).\]
If $f:X'\to X$ is a proper birational morphism of smooth varieties, $D$ an ${\mathbb{R}}$-Cartier divisor on $X$ and $E\geq 0$ an $f$-exceptional divisor, then $|f^*D/X|_{{\mathbb{R}}}+E=|f^*D+E/X|_{{\mathbb{R}}}$, and $|f^*D|_\mathbb{R}+E=|f^*D+E|_\mathbb{R}$.
If $|D/X|_{{\mathbb{R}}}\ne \emptyset$,
then define ${\mathbf B}(D/X)=\cap _{D'\in |D/X|_{{\mathbb{R}}}}{\rm Supp}(D')$.
\begin{lemma}\label{l-2.3} Let $\pi :X\to U$ be a projective morphism to a Stein variety, $X$ smooth and $D$ a divisor on $X$.
If $D$ is a ${\mathbb{Q}}$-divisor such that $|D/U|_{{\mathbb{R}}}\ne \emptyset$, then $|D/U|_{{\mathbb{Q}}}\ne \emptyset$, $|D|_{{\mathbb{Q}}}\ne \emptyset$ and
\[{\mathbf B}(D/U)=\cap _{D'\in |D/U|_{{\mathbb{Q}}}}{\rm Supp}(D')=\cap _{D'\in |D|_{{\mathbb{Q}}}}{\rm Supp}(D').\]
\end{lemma}
\begin{proof} It is easy to see that ${\mathbf B}(D/U)\subset \cap _{D'\in |D/U|_{{\mathbb{Q}}}}{\rm Supp}(D')\subset \cap _{D'\in |D|_{{\mathbb{Q}}}}{\rm Supp}(D')$.
By \cite[Lemma 2.3]{CL10}, it follows that ${\mathbf B}(D/U)= \cap _{D'\in |D/U|_{{\mathbb{Q}}}}{\rm Supp}(D')$. Finally, let $x\in X$ and $\nu :X'\to X$ be the blow up of $x$ and $E$ the corresponding exceptional divisor. Then $x\in \cap _{D'\in |D|_{{\mathbb{Q}}}}{\rm Supp}(D')$ if and only if $m_E|\nu ^* mD|>0$ for any $m>0$. Assume that $x\in \cap _{D'\in |D|_{{\mathbb{Q}}}}{\rm Supp}(D')$. Then by Lemma \ref{l-mD}, $m_E|\nu ^* mD/U|>0$ for any $m>0$, and hence $x\in \cap _{D'\in |D/U|_{{\mathbb{Q}}}}{\rm Supp}(D')$. This shows that $\cap _{D'\in |D/U|_{{\mathbb{Q}}}}{\rm Supp}(D')\supset \cap _{D'\in |D|_{{\mathbb{Q}}}}{\rm Supp}(D')$ and the claim follows.
\end{proof}
\begin{lemma}\label{l-lu} Let $\pi :X\to U$ be a projective morphism from a smooth connected complex variety to a Stein space, $\mathcal L={\mathcal{O}} _X(L)$ a line bundle on $X$ and $S$ a smooth divisor on $X$.
Then the following are equivalent:
\begin{enumerate}
\item $\pi _*\mathcal L\to \pi _*(\mathcal L|_S)$ is surjective,
\item $H^0(X,\mathcal L)\to H^0(S,\mathcal L|_S)$ is surjective or equivalently $|L|_S=|L_{|S}|$,
\item $|L/U|_S=|L_{|S}/U|$.
\end{enumerate}
\end{lemma}
\begin{proof}
(1) implies (2). Since $U$ is Stein, $H^0(U,\pi _*\mathcal L)\to H^0(U,\pi _*(\mathcal L|_S))$ is surjective, and hence so is
$H^0(X,\mathcal L)\to H^0(S,\mathcal L|_S)$.
(2) implies (1). Since $U$ is Stein, $\pi _*(\mathcal L|_S)$ is globally generated by sections of $H^0(U,\pi _*(\mathcal L|_S))\cong H^0(S,\mathcal L|_S)$. By assumption these sections lift to $H^0(U,\pi _*\mathcal L)\cong H^0(X,\mathcal L)$.
Thus $\pi _*\mathcal L\to \pi _*(\mathcal L|_S)$ is surjective.
(3) implies (1). Since $U$ is Stein, $\pi _*(\mathcal L|_S)$ is globally generated by sections of $H^0(U,\pi _*(\mathcal L|_S))\cong H^0(S,\mathcal L|_S)$. Fix $u\in U$ and $g^1_u,\ldots , g^k_u\in H^0(S,\mathcal L|_S)$
local generators of $\pi _*(\mathcal L|_S)$ at $u$. If $G^1_u,\ldots ,G^k_u\in |L_{|S}|$ are the corresponding divisors, then by assumption there are divisors $G^i\in |L+\pi ^*C^i|$ such that $G^i|_S=G^i_u$ and $C^i$ is Cartier on $U$.
Since the $C^i$ are Cartier, there is an open subset $u\in V\subset U$ such that $C^i|_V$ is principal, and hence $\pi ^*C^i|_{X_V}\sim 0$, where $X_V:=\pi^{-1}V$. But then $G^i|_{X_V}\sim L|_{X_V}$ and $(G^i|_{X_V})|_{S_V}=G^i_u|_{S_V}$, where $S_V=S\cap X_V$. This means that \[g^1_u|_{S_V},\ldots , g^k_u|_{S_V}\in {\rm im}\left( H^0(L|_{X_V})\to H^0(L|_{S_V})\right).\]
Since $g^1_u,\ldots , g^k_u\in H^0(S,\mathcal L|_S)$ are
local generators of $\pi _*(\mathcal L|_S)$ at $u$, it follows that $\pi _*\mathcal L\to \pi _*(\mathcal L|_S)$ is surjective at $u$. Since $u\in U$ is arbitrary, (1) holds.
(1) implies (3). It is clear that $|L/U|_S\subset |L_{|S}/U|$. Suppose that $G_S\in |L_{|S}/U|$, then we must show that $G_S=G|_S$ for some $G\in |L/U|$.
By definition, there is a Cartier divisor $C$ on $U$ such that $G_S\sim L_{|S}+(\pi ^* C)_{|S}$.
By our assumption, $\pi _*\mathcal L\to \pi _*(\mathcal L|_S)$ is surjective, and hence so is
$\pi _*\mathcal L(\pi ^* C)\to \pi _*(\mathcal L(\pi ^* C)|_S)$ (here we use the projection formula and the fact that ${\mathcal{O}} _U(C)$ is invertible).
Since $U$ is Stein, this induces a surjection on global sections and hence
$H^0(X,\mathcal L(\pi ^* C))\to H^0(S, \mathcal L(\pi ^* C)|_S)$ is surjective, i.e. $G_S=G|_S$ for some $G\in |\mathcal L(\pi ^* C)|$. Thus $G\sim _U L$, concluding the proof.
\end{proof}
\begin{lemma}\label{l-gu} Let $\pi :X\to U$ be a projective morphism from a smooth complex variety to a Stein space, $\mathcal L={\mathcal{O}} _X(L)$ a line bundle on $X$. For any point $x\in X$, we have that $\operatorname{Bs}(|L|)$ does not contain $x$ if and only if $\operatorname{Bs}(|L/U|)$ does not contain $x$, if and only if $\pi _* \mathcal L\to \pi _*(\mathcal L/\mathfrak m _x)$ is surjective.
\end{lemma}
\begin{proof}
Since $|L|\subset |L/U|$, it is clear that if $\operatorname{Bs}(|L|)$ does not contain $x$, then $\operatorname{Bs}(|L/U|)$ does not contain $x$.
Suppose now that $\operatorname{Bs}(|L/U|)$ does not contain $x$. So there is a divisor $0\<G\in |L/U|$ such that $x\not\in {\rm Supp}(G)$.
Since $G\sim L+\pi ^*C$, where $C$ is a Cartier divisor on $U$, we may find an open subset $\pi (x)\in V\subset U$ such that $C|_V$ is a principal divisor, i.e. $C|_V\sim 0$. But then $G|_{X_V}\sim L|_{X_V}$ and it follows that $\mathcal{L}|_{X_V}$ is globally generated at $x$. Thus
$ \mathcal L\to \mathcal L/\mathfrak m _x\cong \mathbb C _x$ is surjective, and hence so is $\pi _* \mathcal L\to \pi _*(\mathcal L/\mathfrak m _x)$, since $U$ is Stein.
Suppose now that $\pi _* \mathcal L\to \pi _*(\mathcal L/\mathfrak m _x)$ is surjective.
Since $U$ is Stein, \[H^0(X, \mathcal L)\cong H^0(U, \pi _* \mathcal L)\to H^0(U, \pi _*(\mathcal L/\mathfrak m _x))\cong H^0\left(\{x\}, \mathcal L/\mathfrak m _x\right)\cong \mathbb{C}_x\] is surjective, and hence $\operatorname{Bs}(|L|)$ does not contain $x$.
\end{proof}
\begin{lemma}\label{l-2.28}
Let $\pi :X\to U$ be a projective morphism from a smooth variety to a Stein space and let $D_1,\ldots , D_\ell \in {\rm Div}_\mathbb{Q}(X)$
be such that $|D_i|_\mathbb{Q} \ne \emptyset$ for each $i$. Let $V \subset {\rm Div}_{\mathbb{R}}(X)$ be the subspace spanned by the
components of $D_1,\ldots , D_\ell$, and let $\mathcal P \subset V$ be the convex hull of $D_1,\ldots , D_\ell$. Assume
that the ring \[R(X; D_1,\ldots , D_\ell):=\bigoplus_{(m_1,\ldots, m_\ell)\in \mathbb N ^\ell}H^0\left(X, {\mathcal{O}} _X\left(\sum m_iD_i\right)\right)\] is finitely generated. Then:
\begin{enumerate}
\item ${\rm Fix}$ extends to a rational piecewise affine function on $\mathcal P$;
\item there exists a positive integer $k$ such that for every $D \in \mathcal P$ and every $m \in \mathbb N$, if $\frac{m}{k} D \in {\rm Div}(X)$, then $\mathbf{Fix}(D) = \frac{1}{m} {\rm Fix} |mD|$.
\end{enumerate}
\end{lemma}
\begin{proof}
See the proof of \cite[Lemma 2.28]{CL10}.
\end{proof}
\subsection{K\"ahler classes}\label{subs:analytic-classes}
In this subsection we recall a characterization of K\"ahler classes which is well known to experts.
Since we were unable to find complete references in the literature, we include a detailed proof below.
We consider the following set-up. Let $(X, g)$ be a compact, normal complex space endowed with a Hermitian metric $g$. The objects we will work with in this subsection are introduced below.
\smallskip
\begin{definition} Let
$\displaystyle (A_i)_{i\in I}$ be a finite open covering of $X$ such that each subset $A_i$ is a local analytic subset of some open subset $\Omega_i\subset \mathbb{C}^{N_i}$. The space of forms of type $(p,q)$, denoted by ${\mathcal{C}}^{k}_{p,q}(X)$, is defined by local restrictions of forms of type $(p,q)$ which are $k$ times differentiable on the sets $\Omega_i$ above. Here $k$ is a positive integer or $\infty$. The definition of the space of currents on $X$ is then
completely parallel to the smooth case.
\end{definition}
\noindent For a more complete presentation we refer the reader to the first part of the article \cite{Dem85}.
\medskip
\noindent Let $\alpha\in {\mathcal{C}}^{\infty}_{1,1}(X)$ be a smooth $(1,1)$-form on $X$.
We assume that $\alpha$ is $\partial$- and $\bar{\partial}$-closed, and that moreover the following properties hold true.
\begin{enumerate}
\smallskip
\item[(1)] The class $\{\alpha\}$ is nef, i.e. there exists a family $(f_\varepsilon)_{\varepsilon> 0}\subset {\mathcal{C}}^{\infty}(X)$ such that
$$\alpha+ i\partial\bar\partial f_\varepsilon\geq -\varepsilon g$$
on $X$.
\smallskip
\item[(2)] The class $\{\alpha\}$ is big, i.e. there exists
a function $\tau\in L^1(X)$ such that
$$\alpha+ i\partial\bar\partial \tau\geq \varepsilon_0 g$$
as currents on $X$, where $\varepsilon_0> 0$ is a positive constant.
\smallskip
\item[(3)] Let $V\subset X$ be a positive dimensional (compact) reduced analytic subset. Then we have
$$\int_{V_{\rm reg}}\alpha^{\operatorname{dim}(V)}> 0.$$
\end{enumerate}
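\smallskip
\noindent Note that condition (3) does not follow from (1) and (2): for instance, if $\mu:\widehat S\to S$ is the blow up of a point on a compact K\"ahler surface $S$ with exceptional curve $E$, and $\omega_S$ is a K\"ahler form on $S$, then the class $\{\mu^\star\omega_S\}$ is nef and big, while $\displaystyle\int_E \mu^\star\omega_S=0$, so that property (3) fails along $E$ and the class is not K\"ahler.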
\smallskip
\noindent Then we show that the following holds true.
\begin{theorem}\label{thm:nef-big-to-kahler}
Let $X$ be a compact analytic normal variety and $\alpha\in {\mathcal{C}}^{\infty}_{1,1}(X)$
such that $\partial\alpha= 0, \bar{\partial}\alpha= 0$. We assume moreover that the
properties (1)-(3) above are satisfied.
Then $\alpha$ is a K\"ahler class, i.e. there exists a function $\varphi\in {\mathcal{C}}^\infty(X)$ such that
\begin{equation}\label{eq1}
\alpha+ i\partial\bar\partial \varphi\geq \varepsilon_1 g
\end{equation}
on $X$, where $\varepsilon_1>0$.
\end{theorem}
\noindent In particular, $X$ is a K\"ahler space in the sense adopted in
\cite{BG13} provided that $\alpha$ is locally in the image of the $\partial\bar\partial$ operator. But in any case we can construct the function $\varphi$ with the properties of \eqref{eq1}.
\smallskip
\subsubsection{Psh functions on complex spaces} We recall here a few basic facts concerning psh functions defined on normal complex spaces. Our main reference is the first section of the article \cite{Dem85}.
\smallskip
\noindent To start with, a \emph{quasi-psh} function $\phi:X\to [-\infty, \infty[$
is {by definition} given by the restriction to each $A_i$
of a quasi-psh function on $\Omega_i$, for all $i\in I$. A locally integrable function
$\psi:X\to [-\infty, \infty[$ is called \emph{weakly quasi-psh} if it is locally bounded from above and such that
\begin{equation}\label{eq8}
i\partial\bar\partial \psi\geq - Cg
\end{equation}
for some positive real constant $C> 0$. Note that the local boundedness hypothesis is automatic in the non-singular case, but this is no longer true in our actual context.
\smallskip
\noindent We quote next a result which plays a crucial role in what follows. Its proof
(cf. \cite{Dem85}, Theorem 1.7) relies on two fundamental facts: the desingularization theorem of Hironaka and the characterisation of psh functions by restrictions to holomorphic disks, due to Forn\ae ss--Narasimhan.
\begin{theorem}\label{wpsh}\cite[Theorem 1.7]{Dem85} Let $\psi$ be a weakly quasi-psh function defined on a normal compact complex space $X$. Then the function
\begin{equation}\label{eq9}
\psi^\star(x):= \limsup_{y\to x}\psi(y)
\end{equation}
is quasi-psh on $X$. Moreover, if $\displaystyle i\partial\bar\partial \psi\geq - C\gamma$ for some smooth form
$\gamma$ on $X$ then the Hessian of the function $\psi^\star$ in \eqref{eq9} has the same property.
\end{theorem}
\noindent The following statement is a direct consequence of Theorem \ref{wpsh}.
\begin{corollary}\label{coro1} Let $p: Y\to X$ be a modification, where $X$ and $Y$ are normal complex spaces. We assume that $\displaystyle p^\star\alpha+ i\partial\bar\partial \psi_Y\geq Cp^\star g$, where
$C$ is a real number. Then there exists a quasi-psh function $\psi_X:X\to [-\infty, \infty[$ such that
\begin{equation}\label{eq10}
\alpha+ i\partial\bar\partial \psi_X\geq Cg
\end{equation}
in the sense of currents on $X$.
\end{corollary}
\noindent Indeed, $\psi_X$ is obtained by taking the direct image of $\psi_Y$ and then applying the \emph{usc} regularisation procedure \eqref{eq9}. We notice that the direct image of $\psi_Y$ is automatically locally bounded from above.
\smallskip
\begin{proof}
The first step consists in constructing a (new) function $\tau$ with similar properties as in (2) above such that its singularities are concentrated along an analytic subset of $X$.
\noindent Let $\pi:\widehat X\to X$ be a desingularization of $X$. The pull-back of (2) shows that we have
\begin{equation}\label{eq2}
\Theta:= \pi^\star \alpha+ i\partial\bar\partial (\tau\circ \pi)\geq \varepsilon_0 \pi^\star g
\end{equation}
in other words, $\Theta$ is a closed $(1,1)$-current on $\widehat X$ bounded below by $\varepsilon_0\pi^\star g$. This implies that $\{\Theta\}$ contains a so-called K\"ahler current, that is to say a representative which is greater than a positive multiple of a Hermitian metric on $\widehat X$.
\noindent By Demailly's regularisation theorem (cf. \cite{Dem92}, main result), we can replace $\Theta$ by a cohomologous current
say $\Theta_1\in \{\Theta\}$ such that
\begin{equation}\label{eq3}
\Theta_1:= \pi^\star \alpha+ i\partial\bar\partial \varphi_1\geq \varepsilon_1 \widehat g, \qquad \Theta_1|_{\widehat X\setminus W}
\in {\mathcal{C}}^\infty(\widehat X\setminus W)\end{equation}
on $\widehat X$, where $W\subset \widehat X$ is a proper analytic subset.
\smallskip
\noindent The direct image $\pi_\star \Theta_1$ has the property (2) and it is non-singular on the complement of the analytic set $Y\subset X$
\begin{equation}\label{eq4}
Y:= X_{\rm sing}\cup \pi(W).
\end{equation}
In order to keep the notations as simple as possible, we assume from now on that $\tau$ in (2) is smooth in the complement of an analytic set $Y$.
\medskip
\noindent The next step consists in establishing the following simple statement, which will be used to argue by induction.
\begin{lemma}\label{ind}
Let $Z\subset X$ be any normal analytic subspace. Then the restriction $\alpha|_Z$
defines a $(1,1)$-form on $Z$ which satisfies the properties {\rm (1)-(3)}.
\end{lemma}
\begin{proof}
It is clear that $\alpha|_Z$ satisfies the properties (1) and (3). We show next that this is the case for (2) as well. By the existence of the current $\Theta_1$ as in \eqref{eq3} it follows that a further modification of the complex manifold $\widehat X$ is K\"ahler, see \cite[Theorem 0.7]{DP04}. We may assume that this is the case for $\widehat X$ itself. In particular, $X$ is in Fujiki's class $\mathcal{C}$.
\smallskip
Let $p_Z:\widehat Z\to Z$ be a desingularization of $Z$. Since $X$ is in Fujiki's class $\mathcal{C}$, by \cite[A, page 235]{Fuj83C}, $Z$ is also in the class $\mathcal{C}$. Therefore, passing to a higher desingularization we may assume that $\widehat{Z}$ is K\"ahler.
Then the class $p_Z^\star\{\alpha\}$ is nef, and $\displaystyle \int_{\widehat Z}(p_Z^\star\alpha)^d> 0$, where $d=\operatorname{dim} Z$. By \cite[Theorem 0.5]{DP04} it contains a K\"ahler current, whose direct image combined with Corollary \ref{coro1} allows us to conclude.
\end{proof}
\medskip
\noindent In this last step we
remove the singularities of $\pi_\star\Theta_1$ by induction. For this, we are using the gluing techniques as in \cite{Dem90} (the reader may also consult \emph{Complex Analytic and Differential Geometry} by J.-P. Demailly, book available on the author's website, pages 411-414). We have to face two types of difficulties:
\noindent $\bullet$ The space $Y$ may have several components.
\noindent $\bullet$ Even if $Y$ is irreducible, it may not be normal (which will cause trouble, since we intend to use induction).
\smallskip
\noindent In order to understand how it works, we first assume that $Y$ is irreducible and normal.
Lemma \ref{ind} plus the induction hypothesis show the existence of a function $\tau_Y\in {\mathcal{C}}^\infty(Y)$ such that
\begin{equation}\label{eq5}
\alpha|_Y+ i\partial\bar\partial \tau_Y\geq \varepsilon_2g|_Y.
\end{equation}
\noindent By the proof of Theorem 4 in \cite{Dem90} we can assume that there exists an open subset $Y\subset U\subset X$ such that \eqref{eq5} holds true on $U$. That is to say, there exists an extension $\widetilde \tau_Y\in {\mathcal{C}}^\infty(U)$
of $\tau_Y$ such that
\begin{equation}\label{eq11}
\alpha|_U+ i\partial\bar\partial \widetilde \tau_Y\geq \varepsilon_3g|_U
\end{equation}
for some $\varepsilon_3> 0$. Intuitively the construction of $\widetilde \tau_Y$ is clear: thanks to \eqref{eq5} the eigenvalues of the Hessian of $\tau_Y$ in the tangent directions of $Y$ are suitable, and we simply ``correct'' the normal directions as indicated in \emph{loc. cit.} In this process there is a loss of positivity involved (since one is using a partition of unity), but since $\varepsilon_2>0$, we can afford that.
\smallskip
\noindent Now we consider the regularized maximum function
\begin{equation}\label{eq6}
\varphi:= \max_{\rm reg}(\tau, \widetilde \tau_Y- C)
\end{equation}
(cf. \cite{Dem90}, part of the proof of Lemma 5)
where $C\gg 0$ is a large enough constant, such that $\varphi= \tau$ near the boundary of $U$. This is possible since $\tau$ is smooth on the complement of $Y$. On the other hand, we clearly have $\varphi= \widetilde\tau_Y- C$ in a neighborhood of $Y$, since
$\tau$ equals $-\infty$ when restricted to $Y$. Now the usual properties of the regularised
maximum of two functions (see especially \emph{loc. cit.}, page 287) show that we have \eqref{eq11}.
\medskip
\noindent In order to treat the general case, we formulate the following statement.
\begin{claim}\label{senilita}
Let $Y\subsetneq X$ be an analytic subset of $X$. Then there exists an open subset $U$ such that $Y\subset U\subset X$ and a function $\widetilde \tau_Y$ for which the property \eqref{eq11} is valid.
\end{claim}
\noindent Before explaining the arguments for the claim, we first remark that it would settle Theorem \ref{thm:nef-big-to-kahler}, by the maximum technique used in the particular case we have just treated above. We proceed in two steps.
\smallskip
\noindent $\bullet$ \emph{It is enough to establish the Claim \ref{senilita} in case of an analytic space $Y$ which is irreducible.} This is done by decomposing the set $Y$ as a union of irreducible analytic sets
\begin{equation}\label{eq7}
Y= Y_1\cup\dots \cup Y_N
\end{equation}
and applying the maximum procedure sketched above combined with induction on $N$. Although standard, we explain next the construction of
$(U, \widetilde \tau_Y)$ if $N=2$, i.e. we assume that $Y$ only has two components. For an arbitrary $N$ there are no additional arguments to be invoked.
\smallskip
\noindent Let $(U_1, \widetilde \tau_1)$ and $(U_2, \widetilde \tau_2)$ be the pairs corresponding to $Y_1$ and $Y_2$, respectively, such that \eqref{eq11} holds true.
By \cite[Lemma 5]{Dem90} there exists a quasi-psh function $v$ with log-poles along $Y_1\cap Y_2$
and smooth in the complement of this analytic set. We consider the function
\begin{equation}\label{eq12}
\widetilde \tau_1+ \varepsilon v
\end{equation}
where $0< \varepsilon\ll 1$ is fixed and small enough such that
\begin{equation}\label{eq13}
\alpha|_{U_1}+ i\partial\bar\partial \left(\widetilde \tau_1+ \varepsilon v \right)\geq \frac{1}{2}\varepsilon_3g|_{U_1}.
\end{equation}
This operation may seem counterproductive, since $\widetilde \tau_1$ is smooth and by adding the small multiple of
$v$ the resulting function becomes singular along the intersection $Y_1\cap Y_2$. Nevertheless, thanks to it we can conclude: let $W\subset U_1\cap U_2$ be an open subset of $X$ containing $Y_1\cap Y_2$. Let $C\gg 0$ be large enough such that we have
\begin{equation}\label{eq14}
\widetilde \tau_1+ \varepsilon v\geq \widetilde \tau_2- C
\end{equation}
on $\partial W$, the boundary of $W$. We fix such a constant $C$ and remark that the function
\begin{equation}\label{eq15}
\max_{\rm reg}(\widetilde \tau_1+ \varepsilon v, \widetilde \tau_2- C)
\end{equation}
defined on $U_1$ is smooth, its Hessian verifies an inequality similar to \eqref{eq13} and moreover it equals
$\widetilde \tau_2- C$ near $Y_1\cap Y_2$. By shrinking $U_1$ and $U_2$ we can combine (smoothly!) the function constructed in \eqref{eq15} with $\widetilde \tau_2- C$ and therefore obtain $(U, \widetilde \tau_Y)$.
\smallskip
\noindent $\bullet$ \emph{Induction.} We assume that Theorem \ref{thm:nef-big-to-kahler} is established in case of a normal analytic space of dimension smaller than $\operatorname{dim}(X)$ and that Claim \ref{senilita} is established for analytic sets $Z$ such that $\operatorname{dim}(Z)\leq \operatorname{dim}(Y)-1$.
\smallskip
\noindent Let $Y\subsetneq X$ be an irreducible analytic proper subset of $X$. Then there exists a modification $f: X_1\to X$ with the following properties.
\begin{enumerate}
\smallskip
\item[(i)] The analytic space $X_1$ is compact and normal.
\smallskip
\item[(ii)] The proper transform $Y_1\subset X_1$ of $Y$ is smooth, and the induced map $f|_{Y_1}:Y_1\to Y$ is generically finite.
\end{enumerate}
\noindent The restriction $\displaystyle f^\star\alpha|_{Y_1}$ of the $f$-inverse image of
$\alpha$ to $Y_1$ is still nef and big. Thus it contains a K\"ahler current
\begin{equation}\label{eq22}
f^\star\alpha|_{Y_1} + i\partial\bar\partial \psi_1\geq g_1|_{Y_1},
\end{equation} where the function $\psi_1$ can be assumed to have analytic singularities and $g_1$ is a Hermitian metric on $X_1$. In particular $\psi_1$ is smooth in the complement of a proper analytic subset $W_1\subsetneq Y_1$. We can also assume that
$W_1$ contains the analytic set in the complement of which the restriction of the map
$\displaystyle f|_{Y_1}: Y_1\to Y$ is a biholomorphism.
\smallskip
\noindent By modifying the function $\psi_1$ as in \cite{Dem90}, we infer the following: \emph{there exists an open set $\Lambda$ containing $Y_1\setminus W_1$ such that}
\begin{equation}\label{eq24}
f^\star\alpha + i\partial\bar\partial\psi_1\geq \frac{1}{2}g_1
\end{equation}
\emph{on} $\Lambda$. A more general version of this is established in the article \cite{CT15} pages 1181-1185.
\smallskip
\noindent Next, we consider the direct image
$W:= f(W_1)\subsetneq Y$ and we use the induction hypothesis: there exists an open subset $U_W\subset X$ and a smooth function $\tau_W: U_W\to \mathbb R$ such that
\eqref{eq11} holds true. By taking the inverse image via $f$ we get
\begin{equation}\label{eq23}
f^\star\alpha + i\partial\bar\partial (\tau_W\circ f)\geq \varepsilon_4 f^\star g
\end{equation}
pointwise on $f^{-1}(U_W)$.
\smallskip
\noindent By combining \eqref{eq23} and \eqref{eq24} we obtain a ${\mathcal{C}}^\infty$ function $\tau_1$ on an open subset $f^{-1}(U_Y)$, the inverse image of an open subset $U_Y$ containing $Y$. In other words, we have
\begin{equation}\label{eq25}
f^\star\alpha + i\partial\bar\partial\tau_1\geq \varepsilon_5 f^\star g
\end{equation}
on $f^{-1}(U_Y)$. The ``pinched'' open subset $\Lambda$ is involved in the gluing process, but this makes absolutely no difference.
\smallskip
\noindent The inequality \eqref{eq25} shows that the smooth function $\tau_1$ is constant on every positive dimensional fiber of $f$ over a point of $U_Y$. It therefore descends to $U_Y$ and Claim \ref{senilita} is established.
\medskip
\noindent As we have already mentioned, we can glue the function constructed in our claim
with $\tau$ obtained in the first part of the proof: in this way we obtain $\varphi$.
\end{proof}
\medskip
\noindent We also include the following result whose proof follows exactly as in the non-singular case, modulo the use of Theorem \ref{wpsh}.
\begin{lemma}\label{lem:arbitrary-nef-and-big}
Let $X$ be a compact K\"ahler analytic variety and $\alpha\in H^{1, 1}_{\partial\bar\partial}(X)$ is a nef class. Then $\alpha$ is a big class on $X$ if and only if $\alpha^n>0$, where $n=\operatorname{dim} X$.
\end{lemma}
\begin{proof}
Let $f:Y\to X$ be a resolution of singularities of $X$. Since $X$ is K\"ahler, the same holds for $Y$. Then $f^*\alpha$ is nef and $(f^*\alpha)^n=\alpha^n$ by the projection formula. By a quick direct image argument it follows that $\alpha$ is big if and only if $f^*\alpha$ is big. But the equivalence between the positivity of the top self-intersection and bigness for a nef class on a K\"ahler manifold is well-known \cite{DP04}.
\end{proof}
\begin{theorem}\label{thm:nef-restricts-to-pseff}
Let $X$ be a compact K\"ahler analytic variety and let $\alpha\in {\mathcal{C}}^{\infty}_{1,1}(X)$ be a
smooth $(1,1)$-form such that $\bar{\partial}\alpha=0, \partial\alpha= 0$. Then $\alpha$ is a nef class if and only if $\alpha|_Z$ is a pseudo-effective class for all irreducible analytic subvarieties $Z\subset X$.
\end{theorem}
\begin{proof} If $\alpha$ is nef, then the restriction $\alpha|_Z$ to any irreducible analytic subvariety $Z\subset X$ is also nef. It follows that $\alpha|_Z$ is pseudo-effective by the usual argument: we construct a closed positive current in the pullback of $\alpha|_Z$ on any non-singular model of $Z$ and then take the direct image.
For the other direction, we proceed as follows. Let $t={\rm inf}\{s\geq 0\;|\;\alpha+ s \omega\ \mbox{is K\"ahler}\}$; then $\alpha+ t \omega$ is nef but not K\"ahler. Suppose that $t>0$. Then $(\alpha+ t \omega)|_Z$ is big (and nef) for every $Z\subset X$ (including $Z=X$), and hence by Lemma \ref{lem:arbitrary-nef-and-big}, $(\alpha+t\omega)^{\operatorname{dim} Z}\cdot Z=((\alpha+ t \omega)|_Z)^{\operatorname{dim} Z}>0$. Then by Theorem \ref{thm:nef-big-to-kahler}, $\alpha+ t \omega$ is K\"ahler, which is easily seen to contradict the definition of $t$. Therefore $t=0$ and so $\alpha$ is nef.
\end{proof}
\begin{remark}
Theorem \ref{thm:nef-restricts-to-pseff} holds without the assumption that $X$ is K\"ahler,
by using the gluing procedure employed in the proof of Theorem \ref{thm:nef-big-to-kahler}.
\end{remark}
\begin{lemma}\label{lem:nef-pullback}
Let $f :X'\to X$ be a proper surjective morphism of normal compact K\"ahler varieties. Then a class $\alpha \in
H^{1,1}_{BC}(X )$ is nef if and only if $f ^*\alpha$ is nef.
\end{lemma}
\begin{proof}
If $\alpha $ is nef then it follows easily that $f ^*\alpha$ is nef. Suppose now that $f ^*\alpha$ is nef and in particular $f^*\alpha$ is pseudo-effective.
Let $t={\rm inf}\{s\geq 0\;|\;\alpha +s\omega \ \mbox{ is K\"ahler}\}$. Suppose that $t>0$, then $\alpha +t\omega$ is nef but not K\"ahler, and we claim that
$\int _V (\alpha +t\omega) ^{k}>0$ for any subvariety $V$ of dimension $k$.
It follows that conditions (1) and (3) of Theorem \ref{thm:nef-big-to-kahler} are satisfied by $\alpha +t\omega$, and condition (2) is immediate from Lemma \ref{lem:arbitrary-nef-and-big}.
Thus, by Theorem \ref{thm:nef-big-to-kahler}, $\alpha +t\omega$ is K\"ahler, a contradiction. In particular $t=0$ and $\alpha$ is nef.
We now prove the claim.
For any analytic subvariety $V\subset X$, let $V'$ be an irreducible component of $f^{-1}V$ dominating $V$ and let $F$ be a general fiber of $V'\to V$. Assume that $\operatorname{dim} V=k$ and $\operatorname{dim} V'=k+j$, let $\eta$ be a K\"ahler form on $X'$, and set $\lambda=\int _F\eta^j>0$. Then, by the projection formula, we have
\[\lambda\cdot\int _V (\alpha +t\omega) ^{k}=\int _{V'}f^*(\alpha +t\omega) ^{k}\wedge \eta ^j\geq \int _{V'}f^*(t\omega)^k\wedge \eta ^j=\lambda t^k\int _V \omega ^k >0\]
as $f^*\alpha$ is nef.
\end{proof}
Finally we extend \cite[Corollary 0.3]{DP04} to the singular case.
\begin{corollary}
Let $X$ be a compact normal K\"ahler variety, $\omega $ a K\"ahler form on $X$, and $\alpha \in H^{1,1}_{BC}(X)$, then
\begin{enumerate}
\item $\alpha $ is nef if and only if $\int _V\alpha ^k\wedge \omega ^{p-k} \geq 0$ for every analytic $p$-dimensional subvariety $V\subset X$ and for all $0<k\leq p$, and
\item $\alpha $ is K\"ahler if and only if $\int _V\alpha^k \wedge \omega ^{p-k} >0$ for every analytic $p$-dimensional subvariety $V\subset X$ and for all $0<k\leq p$.
\end{enumerate}
\end{corollary}
\begin{proof} The only if part is clear, so assume that $\int _V\alpha ^k \wedge \omega ^{p-k} \geq 0$ for any analytic subvariety $V\subset X$ with $p=\operatorname{dim} V$ and any $0<k\leq p$. Let $V\subset X$ be a proper subvariety, and let $\nu:\tilde V\to V$ be the normalization. If $\tilde \alpha =\nu ^*\alpha $ denotes the pull-back of $\alpha$, then it follows easily by induction on the dimension that $\tilde\alpha $ is nef. Let $f:X'\to X$ be a resolution of singularities and $V'\subset X'$ a subvariety such that $f(V')=V$. If $\nu ':\tilde V'\to V'$ is the normalization and $\alpha '=f^*\alpha$, then $\tilde \alpha'={\nu'}^* \alpha ' $ is nef as it is the pull-back of $\tilde \alpha$ via the induced map $\tilde V'\to \tilde V$.
It suffices to show that $\alpha _\epsilon:=\alpha +\epsilon \omega $ is nef for any $0<\epsilon \ll 1$. Clearly \[\int _{X'} f^*(\alpha _\epsilon^k \wedge \omega ^{n-k})=\int _X\alpha _\epsilon ^k \wedge \omega ^{n-k}\geq \epsilon^k \int _X \omega ^{n}>0.\]
Let $\omega '$ be a K\"ahler class on $X'$ and $\alpha _\epsilon ' :=f^*\alpha _\epsilon $, then $\omega _\delta :=f^*\omega +\delta \omega '$ is K\"ahler for $\delta>0$ and by continuity $\int _{X'} (\alpha _\epsilon') ^k \wedge \omega_\delta ^{n-k}>0$ for $0<\delta\ll \epsilon $.
Assume now that $V'\subset X' $ is a proper subvariety of dimension $p<\operatorname{dim} X'$. Then, since $\tilde \alpha'$ is nef (as observed above), we have
\[\int _{V'}(\alpha'_\epsilon)^k \wedge \omega_\delta ^{p-k}=\int _{\tilde V'}(\tilde \alpha'+\epsilon (f\circ {\nu}')^*\omega )^k \wedge ( \nu ')^*\omega_\delta ^{p-k}\geq 0.\]
By \cite[Corollary 0.3]{DP04}, $\alpha _\epsilon '$ is nef, and hence by Lemma \ref{lem:nef-pullback}, $\alpha _\epsilon $ is nef. This proves (1). To see (2), note that if $\alpha$ is K\"ahler then the stated inequalities clearly hold. For the reverse implication, simply observe that the K\"ahler cone coincides with the interior of the nef cone.
\end{proof}
\subsection{Kawamata-Viehweg, Base-Point-Free and Semiampleness Theorems}
The fundamental result in this context is Kawamata-Viehweg vanishing cf. \cite[Theorem 3.7]{Nak87} and \cite[Corollary 1.4]{Fuj13}.
\begin{definition}\label{def:f-nef-big}
Let $f:X\to Y$ be a proper surjective morphism of analytic varieties and let $L$ be a line bundle on $X$. Then $L$ is called $f$-nef-big, if $c_1(L)\cdot C\>0$ for all curves $C\subset X$ such that $f(C)=\operatorname{pt}$, and $\kappa(X/Y, L)=\operatorname{dim} X-\operatorname{dim} Y$ (see \cite[(B), Page 554]{Nak87}).
\end{definition}
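\noindent For instance, every $f$-ample line bundle $L$ is $f$-nef-big: $f$-ampleness gives $c_1(L)\cdot C>0$ for every curve $C$ contracted by $f$, and since $L$ is relatively ample we have $\kappa(X/Y, L)=\operatorname{dim} X-\operatorname{dim} Y$.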
The following version of the (relative) Kawamata-Viehweg vanishing theorem for a proper morphism between analytic varieties is proved in \cite[Theorem 3.7]{Nak87} and \cite[Corollary 1.4]{Fuj13}.
\begin{theorem}\cite[Theorem 3.7]{Nak87}\cite[Corollary 1.4]{Fuj13}\label{thm:kvv-original}
Let $\pi:X\to S$ be a proper surjective morphism from a complex manifold $X$ onto an analytic variety $S$. Let $H$ be a $\mathbb{Q}$-Cartier $\mathbb{Q}$-divisor on $X$ such that it is $\pi$-nef-big and $\{H\}$ has SNC support. Then $R^i\pi_*(\omega_X\otimes\mathcal{O}_X(\lceil H\rceil))=0$ for all $i>0$.
\end{theorem}
The following variant is more convenient for us.
\begin{theorem}\cite[Theorem 2.16]{DH20}\label{thm:relative-kvv}
Let $\pi:X\to S$ be a proper surjective morphism of analytic varieties. Let $\Delta\>0$ be a $\mathbb{Q}$-divisor on $X$ such that $(X, \Delta)$ is klt, and let $D$ be a $\mathbb{Q}$-Cartier integral Weil divisor on $X$ such that $D-(K_X+\Delta)$ is $\pi$-nef-big. Then
\[
R^i\pi_*\mathcal{O}_X(D)=0\qquad \text{ for all }\ i>0.
\]
\end{theorem}
\begin{theorem}[Base-Point Free Theorem]\cite[Theorem 4.8]{Nak87}\label{thm:relative-bpf}
Let $f: X \to Y$ be a proper surjective morphism between two normal analytic varieties and let $B\>0$ be a $\mathbb{Q}$-divisor on $X$ such that $(X, B)$ is klt. Let $H$ be an $f$-nef Cartier divisor on $X$ such that $H -(K_X +B)$ is $f$-nef and $f$-big. Then there exist a projective surjective morphism $g: Z \to Y$ from a normal
analytic variety $Z$, a proper surjective morphism $\phi: X \to Z$, and a $g$-ample
line bundle $\mathscr{L}$ on $Z$ such that $f=g\circ\phi$ and $\phi^*\mathscr{L}\cong \mathcal{O}_X(H)$.
\end{theorem}
\subsection{Relative cone and contraction theorems for projective morphisms}
Here we collect a cone theorem from \cite{Nak87}. Recall that if $f:X\to Y$ is a projective surjective morphism of analytic varieties and $W\subset Y$ is a compact subset of $Y$, then $Z_1(X/Y;W)$ is generated by curves $C\subset X$ such that $f(C)$ is a point in $W$. We say that two curves $C_1,C_2$ are numerically equivalent over $W$, $C_1\equiv _WC_2$, if $(C_1-C_2)\cdot \mathscr L=0$ for any line bundle $\mathscr L$ defined on a neighborhood of $f^{-1}(W)$. Then $N_1(X/Y;W):=Z_1(X/Y;W)\otimes _{\mathbb Z}\mathbb R/\equiv _W$.\\
We also define $\widetilde{Z}^1(X/Y; W)$ as the group of line bundles defined over some neighborhood of $W$ modulo the following equivalence relation: for $\mathscr{L}_1\in \Pic(f^{-1}U_1)$ and $\mathscr{L}_2\in \Pic(f^{-1}U_2)$, where $U_1$ and $U_2$ are open neighborhoods of $W$, $\mathscr{L}_1\equiv_W \mathscr{L}_2$ if $\mathscr{L}_1\cdot C=\mathscr{L}_2\cdot C$ for all curves $C\subset X$ such that $f(C)=\operatorname{pt}\in W$. Then $N^1(X/Y; W):=\widetilde{Z}^1(X/Y; W)\otimes_{\mathbb{Z}}\mathbb{R}$.
\begin{definition}[Property \textbf{P} and \textbf{Q}]\label{def:property-pq}\cite{Fuj22}
Let $f:X\to Y$ be a projective surjective morphism of analytic varieties and let $W\subset Y$ be a compact subset of $Y$. We say that $f:X\to Y$ and $W$ satisfy property \textbf{P}\ if the following hold:
\begin{enumerate}
\item[(P1)] $X$ is a normal analytic variety,
\item[(P2)] $Y$ is a Stein space,
\item[(P3)] $W$ is a Stein compact subset of $Y$, i.e. $W$ has a fundamental system of Stein open neighborhoods, and
\item[(P4)] $W\cap Z$ has finitely many connected components for any analytic set $Z$ defined on an open neighborhood of $W$.
\end{enumerate}
We will simply say that $f:X\to Y$ satisfies property \textbf{P}\ if $W$ is understood.\\
We say that $f:X\to Y$ satisfies property \textbf{Q}\ if
\begin{enumerate}
\item[(Q1)] $X$ is normal, and
\item[(Q2)] $X$ and $Y$ are both compact.
\end{enumerate}~\\
\begin{itemize}
\item We will say that a projective morphism $f:X\to Y$ and a compact subset $W\subset Y$ satisfy either property \textbf{P}\ or \textbf{Q}\ if either $f:X\to Y$ and $W\subset Y$ satisfy property \textbf{P}\ or $f:X\to Y$ satisfies property \textbf{Q}. Moreover, if only property \textbf{Q}\ is satisfied, then we will write $N^1(X/Y), N_1(X/Y)$, etc. to mean $N^1(X/Y;Y), N_1(X/Y;Y)$, etc.
\end{itemize}
\end{definition}
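\begin{remark}
For instance, if $Y$ is a Stein space, $X$ is normal and $W=\{y\}$ is a single point of $Y$, then $f:X\to Y$ and $W$ satisfy property \textbf{P}: every point of a complex space admits a fundamental system of Stein open neighborhoods, so (P3) holds, and $W\cap Z$ is either empty or the single point $y$ for any analytic set $Z$, so (P4) holds as well.
\end{remark}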
\begin{remark}\label{rmk:relative-NS}
By \cite[Chapter II. 5.19. Lemma]{Nak04}, if $f:X\to Y$ and $W\subset Y$ satisfy either property \textbf{P}\ or \textbf{Q}\, then $N^1(X/Y; W)$ (and hence also $N_1(X/Y;W)$) is finite dimensional over $\mathbb{R}$. Unfortunately, this result is only stated in the case that $X\to Y$ is a {\bf projective morphism}. By a result of Siu, \cite[Theorem 1]{Siu69}, it is known that property (P4) holds if and only if $\Gamma (W,\mathcal O _W)$ is noetherian. In particular, for any $w\in W$ there is a neighborhood $w\in V\subset Y$ such that $V$ satisfies (P3) and (P4).
\end{remark}
\begin{theorem}[Cone Theorem]\cite[Theorem 4.12]{Nak87}\label{thm:general-relative-cone}
Let $f: X \to Y$ be a projective surjective morphism of analytic varieties and let $W\subset Y$ be a compact subset satisfying either property \textbf{P}\ or \textbf{Q}.
Let $B\>0$ be a $\mathbb{Q}$-divisor on $X$ such that $(X, B)$ is klt. Then the following hold:
\begin{enumerate}
\item If $K_X+ B$ is not $f$-nef over $W$, then
\[\overline{NE}(X/Y; W)=\overline{NE}_{K_X+B\geq 0}(X/Y; W)+ \sum \mathbb R_+[l_i]\] where each
$l_i$ is an irreducible curve on $X$ whose class $[l_i]$ lies in $N_1(X/Y;
W)$. Furthermore, the collection of rays $\{\mathbb R_+[l_i]\}$ is locally finite and for any $R=\mathbb R_+[l_i]$, there
exists $L\in \widetilde{Z}^1(X/Y; W)$ such that
$R=\{\Gamma \in \overline{NE}(X/Y; W)\setminus \{0\}|(L\cdot \Gamma )=0\}$ and that $L$ is $f$-nef over $W$. Such an $L$ is called a supporting function of $R$
and $R$ is called an extremal ray over $W$ with respect to $K_X+B$.
\item For an extremal ray $R$, there exist an open neighborhood $U$ of $W$
and a proper surjective morphism $\phi :f^{-1}(U)\to Z$ over $U$ onto a normal variety
$Z$ such that \[\phi ( C) =\operatorname{pt} \qquad \mbox{ if and only if }\qquad [C]\in R\]
for any irreducible curve $C$ of $f^{-1}(U)$ which is mapped to a point of $W$. This $\phi$ is denoted by ${\rm cont}_R$ and called the contraction morphism associated with $R$.
\item $\phi = {\rm cont}_R$ has the following properties:
\begin{enumerate}
\item $-(K_X+B)|_{f^{-1}(U)}$ is $\phi$-ample.
\item Let $E$ be an irreducible component of ${\rm Ex}(\phi)$ of maximal dimension, $d=\operatorname{dim} E-\operatorname{dim} \phi(E)$ and $p\in \phi(E)$ a general point; then $E_p=E\cap \phi^{-1}(p)$ is covered by a family of compact rational curves $\{\Gamma_t\}_{t\in T}$ such that $\phi(\Gamma_t)=\operatorname{pt}$ for all $t\in T$ and $-(K_X+B)\cdot \Gamma_t\<2n$, where $n=\operatorname{dim} X$.
\item ${\rm Image} (\phi ^*: {\rm Pic}(Z)\to {\rm Pic}(f^{-1}(U)))$
$=\{ D \in {\rm Pic}(f^{-1}(U))\;|\; (D\cdot \Gamma )=0\ \forall\ \Gamma \in R\}$.
\item The following mutually dual sequences are exact.
\[0\to N_1(f^{-1}(U)/Z;g^{-1}(W))\to N_1(X/Y; W)\to N_1(Z/U; W)\to 0,\]
\[0\leftarrow N^1(f^{-1}(U)/Z;g^{-1}(W))\leftarrow N^1(X/Y; W)\leftarrow N^1(Z/U; W)\leftarrow 0.\]
Here $g: Z\to U$ is the structure morphism. In particular, $\rho (X/Y; W)= \rho (Z/U; W)+1$.
\end{enumerate}
\end{enumerate}
\end{theorem}
\begin{proof}
Everything here is in \cite[Theorem 4.12]{Nak87}, except claim 3(b), which follows from \cite[Theorem 4.2]{DO22}.
\end{proof}
Finally we prove a relative dlt cone theorem.
\begin{theorem}\label{thm:general-relative-dlt-cone}
Let $(X, \Delta)$ be a dlt pair, $f:X\to Y$ a projective surjective morphism of analytic varieties and let $W\subset Y$ be a compact set satisfying either property $\mathbf P$ or $\mathbf Q$. Assume that $X$ is $\mathbb{Q}$-factorial over $W$. Then there are countably many curves $\{C_i\}_{i\in I}$ such that $f(C_i)=\operatorname{pt}$, and
\[
\operatorname{\overline{NE}}(X/Y;W)=\operatorname{\overline{NE}}(X/Y;W)_{(K_X+\Delta)\>0}+\sum_{i\in I}\mathbb{R}^+\cdot[C_i].
\]
\end{theorem}
\begin{proof}
Fix an $f$-ample divisor $H$ on $X$. Then for any $n\in\mathbb{N}$ we can write $K_X+\Delta+\frac 1n H=K_X+(1-\varepsilon)\Delta+(\frac 1n H+\varepsilon \Delta)$, where $\frac 1n H+\varepsilon \Delta$ is $f$-ample for $\varepsilon\in\mathbb{Q}^+$ sufficiently small (depending on $n$). Note that $(X, (1-\varepsilon)\Delta)$ is a klt pair; here we use that $X$ is $\mathbb{Q}$-factorial over $W$ and that $(X,\Delta)$ is dlt. Thus by Theorem \ref{thm:general-relative-cone}, there are finitely many $(K_X+\Delta+\frac 1n H)$-negative extremal rays generated by curves $\{C_i\}_{i\in I_n}$ such that
\begin{equation}\label{eqn:perturbed-cone}
\operatorname{\overline{NE}}(X/Y;W)=\operatorname{\overline{NE}}(X/Y;W)_{(K_X+\Delta+\frac 1n H)\>0}+\sum_{i\in I_n}\mathbb{R}^+\cdot [C_i].
\end{equation}
Define $I:=\cup_{n\>1} I_n$. Then clearly $\operatorname{\overline{NE}}(X/Y;W)=\operatorname{\overline{NE}}(X/Y;W)_{(K_X+\Delta+\frac 1n H)\>0}+\sum_{i\in I}\mathbb{R}^+\cdot [C_i]$ for every $n\>1$. Note that we also have
\[\operatorname{\overline{NE}}(X/Y;W)_{(K_X+\Delta)\>0}=\cap_{n=1}^\infty \operatorname{\overline{NE}}(X/Y;W)_{(K_X+\Delta+\frac 1n H)\>0}.\]
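Indeed, since $H$ is $f$-ample, $\gamma\cdot H\>0$ for every $\gamma\in\operatorname{\overline{NE}}(X/Y;W)$, so
\[
\gamma\cdot (K_X+\Delta)\>0\implies \gamma\cdot \left(K_X+\Delta+\tfrac 1n H\right)\>0 \text{ for all } n,
\]
and conversely, if $\gamma\cdot (K_X+\Delta+\frac 1n H)\>0$ for all $n$, then letting $n\to\infty$ gives $\gamma\cdot (K_X+\Delta)\>0$.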
Therefore from \eqref{eqn:perturbed-cone} we have
\begin{align*}
\operatorname{\overline{NE}}(X/Y;W) &=\cap_{n=1}^\infty \left(\operatorname{\overline{NE}}(X/Y;W)_{(K_X+\Delta+\frac 1n H)\>0}+\sum_{i\in I}\mathbb{R}^+\cdot [C_i]\right)\\
&\supset \cap_{n=1}^\infty \left(\operatorname{\overline{NE}}(X/Y;W)_{(K_X+\Delta+\frac 1n H)\>0}\right)+\sum_{i\in I}\mathbb{R}^+\cdot [C_i]\\
&=\operatorname{\overline{NE}}(X/Y;W)_{(K_X+\Delta)\>0}+\sum_{i\in I}\mathbb{R}^+\cdot [C_i].
\end{align*}
Suppose now that the inclusion is strict, so that there is an element $v\in \cap_{n=1}^\infty \left(\operatorname{\overline{NE}}(X/Y;W)_{(K_X+\Delta+\frac 1n H)\>0}+\sum_{i\in I}\mathbb{R}^+\cdot [C_i]\right)$ not contained in
\[\operatorname{\overline{NE}}(X/Y;W)_{(K_X+\Delta)\>0}+\sum_{i\in I}\mathbb{R}^+\cdot [C_i].\]
Intersecting $\operatorname{\overline{NE}}(X/Y;W)$ with an appropriate affine hyperplane $\mathcal H$ we may assume that $\operatorname{\overline{NE}}(X/Y;W)\cap \mathcal H$ is compact and convex and $v\in \operatorname{\overline{NE}}(X/Y;W)\cap \mathcal H$. For each $n\>1$, we can write $v=v_n+w_n$, where $v_n\in \operatorname{\overline{NE}}(X/Y;W)_{(K_X+\Delta+\frac 1n H)\>0}\cap \mathcal H$ and $w_n\in \sum_{i\in I}\mathbb{R}^+\cdot [C_i]\cap \mathcal H$. By compactness, passing to a subsequence we may assume that the limits $v_\infty =\lim v_n$ and $w_\infty =\lim w_n$ exist, so that $v=v_\infty +w_\infty$. Since $\operatorname{\overline{NE}}(X/Y;W)_{(K_X+\Delta)\>0}= \cap_{n=1}^\infty \operatorname{\overline{NE}}(X/Y;W)_{(K_X+\Delta+\frac 1n H)\>0}$ is closed, $v_\infty\in \operatorname{\overline{NE}}(X/Y;W)_{(K_X+\Delta)\>0}\cap \mathcal H$.
Since $\overline {\sum_{i\in I}\mathbb{R}^+\cdot [C_i]}\cap \mathcal H$ is compact, $w_\infty\in\overline {\sum_{i\in I}\mathbb{R}^+\cdot [C_i]}\cap \mathcal H$.
By standard arguments (see the end of the proof of \cite[Theorem III.1.2]{Kol96}) one sees that $\operatorname{\overline{NE}}(X/Y;W)_{(K_X+\Delta)\>0}+{\sum_{i\in I}\mathbb{R}^+\cdot [C_i]}$ is closed and hence \[\overline {\sum_{i\in I}\mathbb{R}^+\cdot [C_i]}\subset \operatorname{\overline{NE}}(X/Y;W)_{(K_X+\Delta)\>0}+{\sum_{i\in I}\mathbb{R}^+\cdot [C_i]}.\]
Thus $w_\infty =v_0+w'_\infty$, where $v_0\in \operatorname{\overline{NE}}(X/Y;W)_{(K_X+\Delta)\>0}$ and $w'_\infty\in {\sum_{i\in I}\mathbb{R}^+\cdot [C_i]}$. Finally, since $v=(v_\infty +v_0)+w'_\infty$, we obtain the required contradiction.
\end{proof}~\\
\part{MMP for Projective Morphisms}
\section{Finite generation following Cascini-Lazi\'c}
In \cite{CL10} it is shown that adjoint rings with big boundaries on projective varieties are finitely generated.
In this section we will extend this result to the case of a projective morphism of analytic varieties.
\begin{theorem}\label{t-a} Let
$\pi :X\to U$ be a projective morphism of complex analytic varieties, where $X$ is a smooth variety with $\operatorname{dim} X=n$. Let $B_1, \ldots , B_k$
be $\mathbb Q$-divisors on $X$ such that $\lfloor B_i\rfloor = 0$ for all $i$, and such that the support of
$\sum _{i=1}^k B_i$ has simple normal crossings. Let $A$ be a $\pi$-ample $\mathbb Q$-divisor on $X$, and
denote $D_i = K_X + A + B_i$ for every $i$.
Then the adjoint ring
\begin{equation*}
R(X/U; D_1, \ldots , D_k) = \bigoplus _{(m_1,\ldots ,m_k)\in \mathbb N^k}
\pi _* \mathcal O _X \left(\lfloor \sum m_iD_i\rfloor \right)
\end{equation*}
is a locally finitely generated $\mathcal O _U$-algebra.
\end{theorem}
Note that if $(X,B)$ is klt and $K_X+A+B$ is relatively nef, then the finite generation of $R(X/U; K_X+A+B)$ follows from the base point free theorem, cf. \cite[Theorem 4.8, Corollary 4.9]{Nak87}.
The proof in \cite{CL10} is an induction on the dimension proving the two statements \cite[Theorem A and B]{CL10}.
We will begin by showing that \cite[Theorem B]{CL10} implies a similar result in our setting.
\begin{theorem}\label{t-b}
Let $\left(X, \sum ^p_{i=1} S_i\right)$ be a log smooth pair, where
$S_1, \ldots , S_p$ are distinct prime divisors and $\pi :X\to U$ is a projective morphism to a Stein variety $U$. Let $V = \sum ^p
_{i=1} \mathbb R\cdot S_i \subset {\rm Div}_{ \mathbb R} (X)$, and $A\>0$ be a $\pi$-ample $\mathbb{Q}$-divisor on $X$. Define
\[
\mathcal L(V ):=
\{B = \sum_{i=1}^p b_iS_i \in V\; |\; 0 \leq b_i \leq 1\ {\rm for \ all}\ i\}.
\]
Then
\[\mathcal E_A(V ):= \{B \in \mathcal L(V )\; :\; |K_X + A + B/U|_{\mathbb R} \ne \emptyset \}\]
is a rational polytope.
\end{theorem}
\begin{proof} By \cite[Theorem B]{CL10}
we know that Theorem \ref{t-b} holds in the projective case and hence we may assume that $\operatorname{dim} U>0$, so that
$\operatorname{dim} X_u\leq n-1$ for any $u\in U$.
We will first prove the claim assuming that every $S_i$ dominates $U$.
Let $U'\subset U$ be the biggest open subset such that $\left(X, \sum ^p_{i=1} S_i\right)\times _U U'$ is log smooth over $U'$; thus, denoting $\left(X', \sum ^{p}_{i=1} S'_i\right):=\left(X, \sum ^p_{i=1} S_i\right)\times _U U'$, the pair $(X_u,\sum_{i=1}^p S_{i,u})$ has simple normal crossings for any $u\in U'$.
Let $W_u$ be the subspace of ${\rm Div}_{\mathbb{R}}(X_u)$ spanned
by the irreducible components of $S_{i,u}$ and $V_u\subset W_u$ be the image of $V$ under the restriction map $r_u: {\rm Div}_{\mathbb{R}}(X)\to {\rm Div}_{\mathbb{R}}(X_u)$. Then $ \mathcal E _{A_u}(W_u)$ is a rational polytope, and hence so is $ \mathcal E _{A_u}(V_u)$ (since it is obtained by intersecting a rational polytope with a rational subspace).
Note that $r_u$ defines an isomorphism of $\mathbb{R}$-vector spaces $r_u:V\to V_u$. In what follows we will often identify $V$ and $V_u$.
For every $u\in U'$, let $B_u^1,\ldots, B_u^{k_u}\in r_u({\rm Div}_{\mathbb{R}}(X))$ be a set of ${\mathbb{Q}}$-divisors generating the rational polytope $\mathcal E _{A_u}(V_u)$.
Consider the set $\mathcal C^0=\{\mathbf B\}$ (resp. $\mathcal C^1=\{\mathbf B\}$) of finite subsets $\mathbf B=\{B^1,\ldots , B^k\}$ with $B^i\in V\cap{\rm Div }_{\mathbb{Q}} (X)$ such that \[U (\mathbf B):=\{u\in U'\;|\;\mathcal E _{A_u}(V_u)=\langle B_u^1,\ldots ,B_u^k\rangle\}\] is (resp. is not) uncountably Zariski dense. Here $\langle\mathbf B\rangle:=\langle B^1,\ldots ,B^k\rangle$ denotes the polytope spanned by $B^1,\ldots ,B^k$.
Note that $U'=\cup _{\mathbf B\in \mathcal C ^0\cup \mathcal C ^1}U(\mathbf B)$, where $\mathcal C ^0, \mathcal C ^1$ are countable as their elements are finite subsets of the countable set $V\cap {\rm Div }_{\mathbb{Q}} (X)$. Since $\cup _{\mathbf B\in \mathcal C ^1}U(\mathbf B)$ is contained in a countable union of proper closed analytic subsets, the set
\[U^0:=\cup _{\mathbf B\in \mathcal C ^0}U(\mathbf B)=U'\setminus \cup _{\mathbf B\in \mathcal C ^1}U(\mathbf B)\]
contains the complement of countably many proper closed analytic subsets of $U'$. In particular, $\mathcal C ^0$ is non-empty and for any
$\mathbf B\in \mathcal C ^0$, $U(\mathbf B)$ is uncountably Zariski dense.
Fix $\bar {\mathbf B}\in \mathcal C ^0$ and write $\bar {\mathbf B}=\{\bar B ^1,\ldots , \bar B ^k\}$.
For any $u\in U( \bar {\mathbf B} )$, we have that $\mathcal E _{A_u}(V_u)$ is the rational polytope generated by the $\mathbb{Q}$-divisors $\bar B ^1|_{X_u},\ldots , \bar B ^k|_{X_u}$ and there exists an integer $m=m(u)$ such that
$|m(K_{X_u}+A|_{X_u}+\bar B^i|_{X_u})|\ne \emptyset$ for all $1\leq i\leq k$. Since $U( \bar {\mathbf B} )$ is uncountably Zariski dense, it must contain an uncountably Zariski dense (in $U'$) set $W\subset U( \bar {\mathbf B} )$ such that $m(u)=\bar m$ is independent of $u\in W$.
Now observe that, by generic flatness, the upper semi-continuity theorem, and the cohomology and base-change theorem (see Theorem V.4.10 and Theorem III.4.12 in \cite{BS76}), there is a dense Zariski open subset $U^{\bar{m}}\subset U'$ such that $\pi$ is flat over $U^{\bar{m}}$ and
\[\pi_*{\mathcal{O}} _X(\bar m(K_X+A+\bar B^i))\otimes\mathbb{C}(u)\to H^0({\mathcal{O}} _{X_u}(\bar m(K_{X_u}+A|_{X_u}+\bar B^i|_{X_u})))\]
is an isomorphism for all $1\leq i\leq k$ and for all $u\in U^{\bar{m}}$.\\
Since $W$ is uncountably Zariski dense in $U'$, we have $ W\cap U^{\bar{m}}\ne\emptyset$, and it follows from the isomorphism above that $\pi_*\mathcal{O}_X(\bar{m}(K_X+A+\bar{B}^i))\otimes\mathbb{C}(u_0)\neq 0$ for any $u_0\in W\cap U^{\bar{m}}\subset U$. Then since $U$ is Stein, we have
\[\Gamma (X,{\mathcal{O}} _X(\bar m(K_X+A+\bar B^i)))=\Gamma (U, \pi_*{\mathcal{O}} _X(\bar m(K_X+A+\bar B^i)))\ne 0.\]
In particular, $|\bar m(K_X+A+\bar B^i)|\ne \emptyset$ and $|\bar{m}(K_{X_u}+A_u+\bar{B}^i_u)|\neq \emptyset$ for all $1\<i\<k$ and for all $u\in U^{\bar{m}}$.\\
This shows that $\bar{B}^i_{u'}\in\mathcal{E}_{A_{u'}}(V_{u'})$ for all $u'\in U^{\bar{m}}$. Since $\mathcal{E}_{A_{u'}}(V_{u'})$ is convex and $r_u^{-1}\left(\mathcal E _{A_u}(V_u)\right)=\langle\bar{\mathbf B}\rangle$ for $u\in W$, we deduce that
\begin{equation}\label{eqn:cone-comparison}
r_u^{-1}\left( \mathcal E _{A_u}(V_u)\right) \subset r_{u'}^{-1}\left( \mathcal E _{A_{u'}}(V_{u'})\right)\qquad \text{ for all } u\in W \mbox{ and for all } u'\in U^{\bar m}.
\end{equation}
Since $U^{\bar m}\cap U( {\mathbf B} )\ne \emptyset $ for any ${\mathbf B}\in \mathcal C ^0$, it follows that for $u'\in U^{\bar m}\cap U( {\mathbf B} )$ we have \[ \langle \bar{\mathbf B}\rangle=r_u^{-1}\left( \mathcal E _{A_u}(V_u)\right) \subset r_{u'}^{-1}\left( \mathcal E _{A_{u'}}(V_{u'})\right)=\langle {\mathbf B}\rangle.\]
By symmetry, we have that $\langle{\mathbf B}\rangle= \langle\bar {\mathbf B}\rangle$ and hence $\mathcal C ^0=\{\bar{\mathbf B}\}$. In particular, since each $\bar B^i\in \mathcal E _{A}(V)$ by the above and $\mathcal E _{A}(V)$ is convex, this shows that $\langle\bar {\mathbf B}\rangle\subset \mathcal E _{A}(V)$.
For the reverse inclusion, simply pick $B\in \mathcal E _{A}(V)$, then $\Gamma (X, {\mathcal{O}} _X(m(K_X+A+ B)))\ne 0$ for some $m>0$, and so
$\Gamma (X_u,{\mathcal{O}} _{X_u}(m(K_{X_u}+A_u+ B_u)))\ne 0$ for general $u\in U$. This means that $B_u\in \mathcal E _{A_u}(V_u)$ for general $u\in U(\bar {\mathbf B})$, and hence $B$ is contained in the polytope spanned by $\bar {\mathbf B}$.
Thus $\mathcal E _{A}(V)=\langle\bar {\mathbf B}\rangle$ is a rational polytope.
To complete the proof, we consider the case when $S_1,\ldots ,S_{p'}$ dominate $U$ and $S_{p'+1},\ldots , S_p$ do not dominate $U$ (and hence $\pi (S_i)\cap U'=\emptyset$ for $i=p'+1,\ldots , p$). It suffices to show that if $B = \sum_{i=1}^p b_iS_i$ and $B' = \sum_{i=1}^{p'} b_iS_i$, then $B\in \mathcal E_A(V )$ if and only if $B'\in \mathcal E_A(V )$. One direction is clear: if $B'\in \mathcal E_A(V )$ then $K_X + A + B'\sim _{{\mathbb{R}},U}D'\geq 0$ so that $K_X + A + B\sim _{{\mathbb{R}},U}D'+B-B'\geq 0$, and hence $B\in \mathcal E_A(V )$. Conversely, if $B\in \mathcal E_A(V )$, then $K_X + A + B\sim _{{\mathbb{R}},U}D\geq 0$ and so $K_{X_u} + A_u + B_u\sim _{{\mathbb{R}}}D_u\geq 0$ for all $u\in U''$, where $U''$ is the largest open subset of $U$ containing the points $u\in U$ such that $X_u\not \subset {\rm Supp}(D)$. Note that $B_u=B'_u$ for any $u\in U''$ and hence $K_{X_u} + A_u + B'_u\sim _{{\mathbb{R}}}D_u\geq 0$ for all $u\in U''$. Then arguing as above it follows that $K_{X} + A + B'\sim _{{\mathbb{R}},U}D^*\geq 0$ for some effective $\mathbb{R}$-divisor $D^*$ on $X$, and hence $B'\in \mathcal E_A(V )$.
\end{proof}
The rest of this section is devoted to the proof of Theorem \ref{t-a}.
We will proceed by induction and show that Theorems \ref{t-a}$_{n-1}$ and \ref{t-b}$_{n}$ imply Theorem \ref{t-a}$_{n}$ (here Theorem \ref{t-a}$_{n}$ means Theorem \ref{t-a} for varieties of dimension $\operatorname{dim} X=n$). Thus Theorem \ref{t-a} holds in all dimensions.
Unfortunately, we are unable to find a direct proof, and so we will follow closely the arguments of \cite{CL10}.
We will not repeat the details of each step of the corresponding proof in \cite{CL10}, rather we will emphasize the necessary changes
to the statements, the arguments and the references used.
As remarked above, \cite{CL10} works with $X$ projective.
We will instead assume that $\pi:X\to U$ is a projective morphism of normal analytic varieties where $U$ is Stein and relatively compact. If $(X,B)$ is a simple normal crossings pair, we will not assume (unless otherwise stated) that
$(X,B)$ is simple normal crossings over $U$.
There are three kinds of results that play a prominent role in the arguments of \cite{CL10}.
The extension theorems from \cite[Section 3]{CL10} rely mainly on Kawamata-Viehweg vanishing and hence generalize easily to our context (cf. Theorem \ref{thm:relative-kvv}).
The results about convex polytopes and diophantine approximation require no changes.
The results about the Zariski decomposition are (with one simple exception discussed below) not used in the proof of Theorem \ref{t-a}.
\subsection{Extension theorems}
As an immediate consequence of Kawamata-Viehweg vanishing, one obtains the following basic extension result corresponding to \cite[Lemma 3.1]{CL10}.
\begin{lemma}\label{l-3.1} Let $(X, B)$ be a log smooth pair of dimension $n$, where $B$ is
a $\mathbb Q$-divisor such that $\lfloor B\rfloor = 0$ and $\pi :X\to U$ is a projective morphism to a Stein variety. Let $A$ be a $\pi$-nef and $\pi$-big $\mathbb Q$-divisor.
\begin{enumerate}
\item[(i)] Let $S$ be a smooth prime divisor such that $S \not \subset {\rm Supp}\, B$. If $G \in {\rm Div}(X)$ is such that $G \sim _{\mathbb Q,U} K_X + S + A + B$, then $|G|_S=|G_{|S}|$.
\item[(ii)] Let $f : X \to Y$ be a bimeromorphic morphism of varieties projective over $U$, and let $V \subset X$ be an open set such that $f|_V$ is an isomorphism. Let $H'$ be a divisor on $Y$ which is very ample over $U$ and let $H = f ^*H'$. If $F \in {\rm Div}(X)$ is such that $F \sim _{\mathbb Q,U} K_X +(n+1)H +A+B$, then $|F |$ is basepoint free at every point of $V$.
\end{enumerate}
\end{lemma}
\begin{proof} Consider the short exact sequence
\[0\to \mathcal O _X(G-S)\to \mathcal O _X(G)\to \mathcal O _S(G|_S)\to 0.\]
By Kawamata-Viehweg vanishing (Theorem \ref{thm:relative-kvv}), we have $R^i\pi _* \mathcal O _X(G-S)=0 $
for all $i>0$, and hence a surjection $\pi _*\mathcal O _X(G)\to \pi _*\mathcal O _S(G|_S)$; this is equivalent to (i), by Lemma \ref{l-lu}.
The proof of (ii) proceeds by induction. Pick a point $x\in V$. Since $H'$ is very ample over $U$ and $U$ is Stein, $\mathcal O_X(H)\otimes \mathfrak m _x$ is generated by global sections. Pick elements $T_1,\ldots ,T_{n}\in |H\otimes \mathfrak m_x|$, and let $X_0=X$ and $X_i=T_1\cap \ldots \cap T_i$ for $1\leq i\leq n$; we may assume that $X_n=T_1\cap\ldots \cap T_n$ has a $0$-dimensional component supported at $x$.
For any $0\leq i\leq n-1$, we have short exact sequences
\[ 0\to {\mathcal{O}} _{X_i}((F-T_{i+1})|_{X_i})\to {\mathcal{O}} _{X_i}(F|_{X_i})\to {\mathcal{O}} _{X_{i+1}}(F|_{X_{i+1}})\to 0 .\]
Since $R^k\pi_*{\mathcal{O}} _X(F-lH)=0$ for $k>0$ and $0\leq l\leq n$ (by Kawamata-Viehweg vanishing, Theorem \ref{thm:relative-kvv} and by induction), it is easy to see that $R^k\pi _* {\mathcal{O}} _{X_i}((F-lH)|_{X_i})=0$ for $k>0$ and $0\leq l\leq n-i$ (cf. proof of \cite[Lemma 2.11]{Kaw99}). Thus the homomorphisms \[ \pi_*{\mathcal{O}} _X(F)\to \pi_*{\mathcal{O}} _{X_1}(F|_{X_1})\to \ldots \to \pi_*{\mathcal{O}} _{X_n}(F|_{X_n})\] are surjective.
Note that $\{x\}$ is an irreducible component of the support of ${\mathcal{O}} _{X_n}(F|_{X_n})$ and so there is a surjection ${\mathcal{O}} _{X_n}(F|_{X_n})\to {\mathcal{O}} _{X_n}(F|_{X_n})/\mathfrak m _x$. It follows that the evaluation map
$\Gamma ({\mathcal{O}} _X(F))\to {\mathcal{O}} _X(F) /\mathfrak m _x$ is also surjective, i.e. ${\mathcal{O}} _X(F)$ is generated at $x$.
\end{proof}
All results of \cite[Section 3]{CL10} follow similarly assuming that $\pi :X\to U$ is a projective morphism to a (relatively compact) Stein variety. Note that in this context, we consider $\pi$-ample, $\pi$-nef and $\pi$-big divisors
instead of ample, nef and big divisors, however we do not require that smooth varieties (resp. log smooth pairs) are relatively smooth, i.e. the corresponding morphism is not assumed to be smooth.
We will use Theorem \ref{thm:log-resolution} and Lemma \ref{l-log-resolution} for the existence of log resolutions, Theorem \ref{t-rel-ample} for useful facts about relatively ample divisors, Lemma \ref{l-klt} for a result about klt pairs, Theorem \ref{t-bertini+} for the required Bertini Theorem.
For the convenience of the reader, we reproduce the statements of Section 3 of \cite{CL10} with the appropriate modifications.
For ease of notation we will say that $\pi :X\to U$ is a
morphism to a relatively compact variety if it is the restriction of a morphism $\pi' :X'\to U'$ over a relatively compact open subset $U\subset U'$ so that $X=X'\times _{U'}U$.
\begin{lemma}\label{l-3.2}
Let $(X, S+B)$ be a pair of dimension $n$, where $X$ is smooth, $S$ is a smooth prime divisor and $B$ is
a $\mathbb Q$-divisor such that $S$ is not contained in the support of $B$, and $\pi :X\to U$ is a projective
morphism to a relatively compact Stein variety. Let $A$ be a $\pi$-nef and $\pi$-big $\mathbb{Q}$-divisor on $X$ and $D\in {\rm Div}(X)$ such that
$D\sim _{\mathbb{Q},U} K_X+S+A+B$ and $\Sigma \in |D_{|S}|$. Let $\Phi\in {\rm Div }_{\mathbb{Q}}(S)$ be such that the pair $(S,\Phi )$ is klt and $B_{|S}\leq \Sigma +\Phi$.
Then $\Sigma \in |D|_S$.
\end{lemma}
\begin{proof}
See \cite[Lemma 3.2]{CL10}. Note that we use Theorem \ref{thm:log-resolution} for the existence of log resolutions, and Lemma \ref{l-3.1}(i) in place of \cite[Lemma 3.1(i)]{CL10}.
\end{proof}
\begin{lemma}\label{l-3.3}
Let $(X, S+B+D)$ be a log smooth pair of dimension $n$, where $S$ is a prime divisor and $B$ is
a $\mathbb Q$-divisor such that $\lfloor B\rfloor =0$ and $S$ is not contained in the support of $B$, $D\geq 0$ is a ${\mathbb{Q}}$-divisor such that $D\wedge (S+B)=0$. Let $\pi :X\to Y$ be a projective
morphism to a relatively compact Stein variety $Y$, let $P$ be a $\pi$-nef ${\mathbb{Q}}$-divisor on $X$ and
$\Delta:= S+B+P$. Assume that $K_X+\Delta \sim _{{\mathbb{Q}},Y}D$. Let $k$ be a positive integer such that $kP$ and $kB$ are integral, and write $\Omega =(B+P )|_S$.
Then there is a $\pi$-very ample divisor $H$ such that for all divisors $\Sigma \in |k(K_S+\Omega )|$ and $U\in |H_{|S}|$, and for every positive integer $l$ we have \[l\Sigma +U\in |lk(K_X+\Delta)+H|_S.\]
\end{lemma}
\begin{proof}
The proof of \cite[Lemma 3.3]{CL10} holds verbatim. Here we use Theorem \ref{thm:log-resolution} for the existence of log resolutions. Keeping the same notation as in the proof of \cite[Lemma 3.3]{CL10}, we can pick a $\pi$-very ample divisor $H$ such that the divisors $D_j+H$ are $\pi$-ample and $\pi$-base point free by (1) and (4) of Theorem \ref{t-rel-ample} (note that then $|D_j+H|$ is also base point free, cf. Lemma \ref{l-gu}) and $|D_k+H|_S=|(D_k+H)_{|S}|$ by (5) of Theorem \ref{t-rel-ample}.
It is easy to see that $D_{r_{m-1}}+H+\delta B_m$ is $\pi$-ample for $0<\delta \ll 1$ by (4) of Theorem \ref{t-rel-ample}.
The pair $(S,\Phi=F|_S+(1-\epsilon)W)$ is klt by Lemma \ref{l-klt}; indeed, since $(X,S+F)$ is log smooth and $\lfloor F\rfloor=0$, it follows that $(S,F)$ is klt, and here $W$ is a sufficiently general member of a finite dimensional linear sub-system of the base point free linear system $|(D_{r_{m-1}}+H)_{|S}|$.
\end{proof}
The proof of the next result can also be extracted from the proof of \cite[Theorem 1]{Pau12}.
\begin{theorem}\label{t-3.4} Let $(X, S+B)$ be a log smooth pair of dimension $n$, where $S$ is a prime divisor, $B$ is a ${\mathbb{Q}}$-divisor such that $S\not \subset {\rm Supp}B$, and $\lfloor B\rfloor=0$, and $\pi :X\to U$ is a projective morphism to a relatively compact Stein variety. Let $A$ be a $\pi$-ample $\mathbb Q$-divisor and denote $\Delta =S+A+B$. Let $C\geq 0$ be a ${\mathbb{Q}}$-divisor on $S$ such that $(S,C)$ is canonical and $m>0$ an integer such that $mA$, $mB$, and $mC$ are integral.
Assume that there exists a positive integer $q\gg 0$ such that $qA$ is $\pi$-very ample, $S\not \subset {\rm Bs}|qm(K_X+\Delta +\frac 1m A)|$ and \[C\leq B|_S-B|_S\wedge \frac 1{qm}{\rm Fix}|qm(K_X+\Delta +\frac 1m A)|_S.\]
Then \[|m(K_S+A|_S+C)|+m(B|_S-C)\subset |m(K_X+\Delta )|_S.\]
In particular, if $|m(K_S+A|_S+C)|\ne \emptyset$, then $|m(K_X+\Delta )|_S\ne \emptyset$ and
\[{\rm Fix}|m(K_S+A|_S+C)|+m(B|_S-C)\geq {\rm Fix} |m(K_X+\Delta )|_S\geq m{\mathbf {Fix}}_S(K_X+\Delta).\]\end{theorem}
\begin{proof}
The proof of \cite[Theorem 3.4]{CL10} holds verbatim. Here we use Theorem \ref{thm:log-resolution} and Lemma \ref{l-log-resolution} for the existence of log resolutions, and Theorem \ref{t-bertini+} for the required Bertini Theorem.
\end{proof}
\begin{corollary}\label{c-3.5} Let $(X, S+B)$ be a log smooth pair of dimension $n$, where $S$ is a prime divisor, $B$ is a ${\mathbb{Q}}$-divisor such that $(S,B|_S)$ is canonical, $S\not \subset {\rm Supp}B$, and $\lfloor B\rfloor=0$, and $\pi :X\to U$ is a projective morphism to a relatively compact Stein variety. Let $A$ be a $\pi$-ample $\mathbb Q$-divisor and denote $\Delta =S+A+B$ and $m>0$ an integer such that $mA$, $mB$ are integral, and $S\not \subset {\rm Bs}|m(K_X+\Delta)|$. Let $\Phi _m=B|_S-B|_S\wedge \frac 1m {\rm Fix}|m(K_X+\Delta )|_S$. Then
\[ |m(K_S+A|_S+\Phi _m)|+m(B|_S-\Phi _m)=|m(K_X+\Delta )|_S.\]
\end{corollary}
\begin{proof}
Same as \cite[Corollary 3.5]{CL10}.
\end{proof}
\begin{lemma}
Let $(X, S)$ be a log smooth pair of dimension $n$, where $S$ is a prime divisor, and let $D$ be a ${\mathbb{Q}}$-divisor such that $S\not \subset {\mathbf B}(D)$. Let $\pi:X\to U$ be a projective morphism to a Stein variety and let $A$ be a $\pi$-ample $\mathbb{Q}$-divisor. Then
\[\frac 1 q {\rm Fix}|q(D+A)|_S\leq{\mathbf {Fix}}_S(D)\]
for any sufficiently divisible positive integer $q$.
\end{lemma}
\begin{proof}
Same as \cite[Lemma 3.6]{CL10}.
\end{proof}
\subsection{Proof of Finite Generation Theorem}
The key step in the proof of Theorem \ref{t-a} is to show that the restricted algebras are finitely generated (locally around every point $u\in U$).
In order to accomplish this we will need the following set up.
Let $(X,S+\sum _{i=1}^p S_i)$ be a log smooth pair, where $S,S_1, \ldots , S_p$ are distinct prime divisors and $\pi :X\to U$ is a projective morphism to a Stein space.
Let $V=\sum _{i=1}^p \mathbb R S_i\subset {\rm Div}_{\mathbb R}(X)$, let $A$ be a $\pi$-ample $\mathbb{Q}$-divisor and let $W\subset {\rm Div}_{\mathbb R}(S)$ be the subspace spanned by the components of $S_1|_S,\ldots ,S_p|_S$.
By Theorem \ref{t-b}, we know that $\mathcal E _{A|_S}(W)$ is a rational polytope. If $E_1,\ldots ,E_d$ are its vertices, then by induction on the dimension,
the ring $R(S/U;K_S+A|_S+E_1,\ldots , K_S+A|_S+E_d)$ is a locally finitely generated $\mathcal O _U$-algebra.
After shrinking $U$, we may assume that this ring is in fact a finitely generated $\mathcal O _U$-algebra.
For any ${\mathbb{Q}}$-divisor $E\in \mathcal E _{A|_S}(W)$ we let \[\mathbf F(E):={\mathbf{Fix}}(K_S+A|_S+E).\]
Recall that since $U$ is Stein, by Lemma \ref{l-2.3} we have ${\mathbf{Fix}}(K_S+A|_S+E)={\mathbf{Fix}}(K_S+A|_S+E/U)$.
By Lemma \ref{l-2.28}, $\mathbf F(E)$ extends to a rational piece-wise affine function on $\mathcal E _{A|_S}(W)$,
and there exists an integer $k>0$ such that for any $E\in \mathcal E _{A|_S}(W)$ and any integer $m>0$ such that $(m/k)A|_S$ and $(m/k)E$ are integral, we have \[ \mathbf F(E)=\frac 1m{\rm{Fix}}|m(K_S+A|_S+E)|.\]
The subset \[\mathcal F =\{ E\in \mathcal E _{A|_S}(W)\;|\; E\wedge \mathbf{F}(E)=0\}\subset \mathcal E _{A|_S}(W)\]
is defined by finitely many rational linear equalities and inequalities, and hence is a finite union of rational polytopes $\mathcal F=\cup _{i}\mathcal F _i$. For any $\mathbb{Q}$-divisor $B\in \mathcal B _A^S(V):=\{ B\in \mathcal L (V)\;|\; S\not \subset \mathbf {B}(K_X+S+A+B)\}$, we let
\[ {\mathbf F}_S(B):={\mathbf {Fix}}_S(K_X+S+A+B).\]
For any integer $m>0$ such that $mA$ and $mB$ are integral and $S\not \subset {\rm Bs}|m(K_X+S+A+B)|$, we let
\[ \Phi_m (B):=B|_S -B|_S \wedge \frac 1m {\rm Fix}|m(K_X+S+A+B)|_S,\]
\[ {\mathbf {\Phi}} (B):=B|_S -B|_S \wedge {\mathbf F}_S(B)=\limsup \Phi_m (B).\]
With this notation and assumptions, we have the following analog of \cite[Lemma 4.2]{CL10}.
\begin{lemma}\label{l-4.2} If $B\in \mathcal B _A^S(V)$, then $\Phi _m(B)\in \mathcal E _{A|_S}(W)$ and $\Phi _m(B)\wedge {\mathbf F}(\Phi _m(B))=0$.
Thus if $\mathcal B _A^S(V)\ne \emptyset$, then $\mathcal F \ne \emptyset$.
\end{lemma}
\begin{proof}
This follows by the proof of \cite[Lemma 4.2]{CL10}.
\end{proof}
The next result is the analog of \cite[Theorem 4.3]{CL10}.
\begin{theorem}\label{t-4.3} Let $\mathcal G$ be a rational polytope contained in the interior of $\mathcal L (V)$, and assume that $(S,G_{|S})$ is terminal for every $G\in \mathcal G$. If $\mathcal P =\mathcal G \cap \mathcal B _A^S(V)$, then
\begin{enumerate}
\item $\mathcal P$ is a rational polytope, and
\item $\mathbf \Phi$ extends to a piece-wise affine function on $\mathcal P$, and there exists a
positive integer $\ell$ with the property that $\mathbf \Phi( P)=\Phi_m( P)$ for every
$P\in \mathcal P$ and
every positive integer $m$ such that $mP/\ell$ is integral.
\end{enumerate}
\end{theorem}
\begin{proof}This follows by the proof of \cite[Theorem 4.3]{CL10}.
\end{proof}
\begin{theorem}\label{t-ra} Assume Theorem \ref{t-a} in dimension $n-1$. Let $\pi :X\to U$ be a projective morphism of complex analytic varieties, where $X$ is a smooth variety with $\operatorname{dim} X=n$. Let $S,S_1, \ldots , S_p$
be distinct prime divisors on $X$ such that $S+\sum _{i=1}^pS_i$ has simple normal crossings. Let $A$ be a $\pi$-ample $\mathbb Q$-divisor on $X$, $V=\sum_{i=1}^p \mathbb R S_i\subset {\rm Div}_{\mathbb R}(X)$, $B_1,\ldots , B_m \in \mathcal E_{S+A}(V)$ be ${\mathbb{Q}}$-divisors and
denote $D_i = K_X + S+A + B_i$ for every $i$.
Then the restricted algebra ${\rm res}_S R(X/U; D_1, \ldots , D_m)$
is a locally finitely generated $\mathcal O _U$-algebra.
\end{theorem}
\begin{proof}
This follows along the lines of the proof of \cite[Lemma 6.2]{CL10}.
\end{proof}~\\
\begin{proof}[Proof of Theorem \ref{t-a}]
Let $\mathcal P=\mbox{conv}(B_1,\ldots , B_k)\subset {\rm Div}_{\mathbb{R}} (X)$ be the polytope spanned by $B_i$ and $\mathcal R={\mathbb{R}}_+(K_X+A+\mathcal P)$. We may assume that $U$ is a relatively compact Stein space. It suffices to show that $R(X/U, \mathcal R) $ is locally finitely generated (cf. \cite[Lemma 2.27]{CL10}).
By Theorem \ref{t-b}, $\mathcal P_{\mathcal E}=\mathcal P\cap \mathcal E_A(V)$ is a rational polytope, where $V\subset {\rm Div}_{\mathbb{R}} (X)$ is the vector space spanned by the components of $B_1,\ldots , B_k$. Since $H^0(X,{\mathcal{O}} _X(K_X+A+D))=0$ for any divisor $D\in \mathcal P\setminus \mathcal P_{\mathcal E}$, it suffices to show that
$R(X/U, \mathcal R_{\mathcal E}) $ is locally finitely generated, where $\mathcal R_{\mathcal E}={\mathbb{R}}_+(K_X+A+\mathcal P_{\mathcal E})$. By Gordan's lemma (cf. \cite[Lemma 2.11]{CL10}) the monoid
$\mathcal R_{\mathcal E}\cap \operatorname{Div}(X)$ is finitely generated, so there are ${\mathbb{Q}}$-divisors $R_i=p_i(K_X+A+P_i)$, $1\leq i\leq \ell$, generating this monoid, where $p_i\in {\mathbb{Q}}_+$ and the $P_i$ are ${\mathbb{Q}}$-divisors with simple normal crossings support such that $\lfloor P_i\rfloor=0$. Since $P_i\in \mathcal E_A(V)$, we have $K_X+A+P_i\sim _{{\mathbb{Q}},U}G_i\geq 0$.
Replacing $B_1,\ldots , B_k$ by $P_1,\ldots , P_\ell$, we may assume that $K_X+A+B_i\sim _{{\mathbb{Q}},U}F_i\geq 0$ for all $i$.
Replacing $X$ by a log resolution (see Theorem \ref{thm:log-resolution}), we may assume that $\left(X,\sum (B_i+F_i)\right)$ is a simple normal crossings pair.
Consider now the subspace $W\subset {\rm Div }_{\mathbb{R}} (X)$ spanned by the components $S_1,\ldots , S_p$ of $\sum (B_i+F_i)$. Let $\mathcal T=\{(t_1,\ldots , t_k)\;|\;t_i\geq 0,\ \sum t_i=1\}$ and for any $\tau =(t_1,\ldots , t_k)\in \mathcal T$, we let
\[ B_\tau=\sum t_iB_i,\qquad F_\tau=\sum t_iF_i\sim _{{\mathbb{Q}},U}K_X+A+B_\tau.\]
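Here the relation $F_\tau\sim _{{\mathbb{Q}},U}K_X+A+B_\tau$ follows from $K_X+A+B_i\sim _{{\mathbb{Q}},U}F_i$ together with $\sum t_i=1$, since
\[
F_\tau=\sum t_iF_i\sim _{{\mathbb{Q}},U}\sum t_i(K_X+A+B_i)=K_X+A+B_\tau.
\]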
Consider the following rational polytopes for $1\leq i\leq p$,
\[\mathcal B =\{F_\tau +B\;|\;\tau \in \mathcal T,\ 0\leq B\in W,\ B_\tau+B\in \mathcal L (W)\}\subset W ,\]
\[\mathcal B_i=\{F_\tau+B \in \mathcal B
\;|\; S_i\subset \lfloor B_\tau+B\rfloor \}\subset W .\]
We also have rational polyhedral cones $\mathcal C=\mathbb R _+\mathcal B$, $\mathcal C_i=\mathbb R _+\mathcal B_i$ and monoids $\mathcal S =\mathcal C\cap {\rm Div}(X)$, $\mathcal S _i =\mathcal C_i\cap {\rm Div}(X)$.
Following the proof of \cite[Theorem 6.3]{CL10}, it suffices to show that
\begin{enumerate}
\item $\mathcal C =\cup _{i=1}^p\mathcal C _i$,
\item there exists an integer $M>0$ such that if $\sum \alpha _i S_i\in \mathcal C _j$ for some $j$ and some $\alpha _i\in \mathbb N$ with $\sum \alpha _i \geq M$, then $\sum \alpha _i S_i-S_j\in \mathcal C$, and
\item the rings ${\rm res} _{S_j}R(X/U,\mathcal S _j)$ are locally finitely generated for $1\leq j\leq p$.
\end{enumerate}
(1) Pick $0\ne G\in \mathcal C$. Then there exists $\tau \in \mathcal T$, $0\<B\in W$ and $r>0$ such that $B_\tau +B\in\mathcal{L}(W)$ and $G=r (F_\tau +B)$. Let \[ \lambda ={\rm max}\{t\geq 1\;|\;B_\tau+tB+(t-1)F_\tau \in \mathcal L (W)\},\] which exists since $G\ne 0$ forces some coefficient of $B+F_\tau$ to be positive,
and let $B'=\lambda B+(\lambda -1)F_\tau $; then \[\lambda G=\lambda r(F_\tau +B)=r(F_\tau +\lambda B+(\lambda -1)F_\tau)=r(F_\tau +B'),\] where $\lfloor B_\tau +B'\rfloor\ne 0$ and hence contains a component $S_{j_0}$ for some $1\<j_0\<p$. Thus $\lambda G\in \mathcal C _{j_0}$, and since $\mathcal C _{j_0}$ is a cone, $G\in \mathcal C _{j_0}$ as required.
(2) Fix $\epsilon >0 $ such that the coefficients of the $B_i$ are $\leq 1-\epsilon $, and hence
for any $\tau \in \mathcal T$ the coefficients of $B_\tau $ are also $\leq 1-\epsilon $. Now let $||\cdot ||$ be the sup norm on the vector space $W$, so that $||D||={\rm max}_i|a_i|$ for any $D=\sum_{i=1}^p a_iS_i\in W$. Since each set $\mathcal B_j$ is compact, there exists a constant $C>0$ such that $||\Psi||\leq C$ for any $\Psi \in \cup _{j=1}^p\mathcal B _j$. Define $M:=pC/\epsilon$.
Let $G=\sum _{i=1}^p\alpha _i S_i\in \mathcal C _j$, where $\sum _{i=1}^p\alpha _i \geq M$. Then
\[||G||={\rm max}\{\alpha _i\}\geq \frac{\sum _{i=1}^p\alpha _i}p\geq \frac M p=\frac C \epsilon . \]
Since $G\in \mathcal C _j$, there exists $r>0$ such that $G=r G'$ for some $G'\in \mathcal B _j$. Thus $||G'||\leq C$, and hence $r=\frac{||G||}{||G'||}\geq \frac 1 \epsilon$.
Since $G'\in \mathcal B _j$, we may write $G'=F_\tau +B$ where $\tau \in \mathcal T$, $0\leq B\in W$, $B_\tau +B\in \mathcal L (W)$ and $S_j\subset \lfloor B_\tau +B\rfloor$.
But then ${\rm mult}_{S_j}(B)=1-{\rm mult}_{S_j}(B_\tau )\geq \epsilon \geq 1/r$, so that
\[ G-S_j=r(F_\tau +B-\frac 1r S_j)\in \mathcal C.\]
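Indeed, $B-\frac 1r S_j\geq 0$ since ${\rm mult}_{S_j}(B)\geq 1/r$, and the coefficients of
\[
B_\tau +\left(B-\tfrac 1r S_j\right)
\]
still lie in $[0,1]$, so $F_\tau +B-\frac 1r S_j\in \mathcal B$ and hence $G-S_j\in \mathcal C$.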
(3) We pick generators $E_1,\ldots , E_l$ of $\mathcal S _j=\mathcal C_j\cap {\rm Div}(X)$. For any $i\in \{ 1, \ldots ,l\}$, there exist
$k_i\in \mathbb Q _{>0}$, $\tau _i\in \mathcal T \cap \mathbb Q ^k$,
$0\leq B_i\in W$ such that $B_{\tau _i}+B_i\in \mathcal L (W)$,
$S_j\leq \lfloor B_{\tau _i}+B_i\rfloor$ and $E_i=k_i(F_{\tau _i}+B_i)$. If $E'_i:=K_X+A+B_{\tau _i}+B_i$, then $E_i\sim_{\mathbb{Q},U} k_iE_i'$.
Now, ${\rm res}_{S_j}R(X/U;E'_1,\ldots , E'_l)$ is finitely generated by
Theorem \ref{t-ra} and hence ${\rm res}_{S_j}R(X/U;E_1,\ldots , E_l)$ is also finitely generated (cf. \cite[Lemma 2.25]{CL10}).
Finally the claim follows from the surjection
\[{\rm res}_{S_j}R(X/U;E_1,\ldots , E_l)\to {\rm res}_{S_j}R(X/U;\mathcal S _j).\]
\end{proof}~\\
\begin{corollary}\label{c-fg1}
Let $\pi :X\to U$ be a projective morphism of normal complex analytic varieties and let $(X,B)$ be a klt pair such that $K_X+B$ is $\pi$-big. Then $R(X/U, K_X+B):=\oplus_{m\>0}\pi_*\mathcal{O}_X(\lfloor m(K_X+B)\rfloor)$ is locally finitely generated over $U$. In particular, if $W\subset U$ is a compact subset, then after shrinking $U$ near $W$ suitably, $R(X/U, K_X+B)$ is finitely generated over $U$, and hence the log canonical model ${\rm Projan}R(X/U, K_X+B)\to U$ of $(X,B)$ over $U$ exists.
\end{corollary}
\begin{proof}
Working locally on $U$ we may assume that $U$ is a relatively compact Stein space. Since $K_X+B$ is $\pi$-big, we have $K_X+B\sim _{{\mathbb{Q}},U} A+N$, where $A$ is a $\pi$-ample $\mathbb{Q}$-divisor and $N\geq 0$. Let $f:Y\to X$ be a log resolution of $(X, B+N)$ as in Theorem \ref{thm:log-resolution}. Write $K_Y+\Gamma=f^*(K_X+B)+E$ such that $\Gamma\>0$, $E\>0$, $\Gamma\wedge E=0$, $f_*\Gamma=B$ and $f_*E=0$. Let $F\>0$ be an $f$-exceptional $\mathbb{Q}$-divisor such that $-F$ is $f$-ample; rescaling $F$, we may assume that $A'=f^*A-F$ is $(\pi\circ f)$-ample. Choose a rational number $0<\epsilon\ll 1$ such that $(Y, \Gamma+\epsilon f^*N+\epsilon F)$ is klt and
\[
(1+\epsilon)f^*(K_X+B)+E\sim_{\mathbb{Q},U} K_Y+\Gamma+\epsilon f^*N+\epsilon F+\epsilon A'.
\]
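Indeed, by the definitions above we have
\[
K_Y+\Gamma+\epsilon f^*N+\epsilon F+\epsilon A'=f^*(K_X+B)+E+\epsilon f^*(A+N),
\]
and $A+N\sim_{\mathbb{Q},U}K_X+B$, which gives the displayed relation.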
Thus from Theorem \ref{t-a} and \cite[Corollary 2.26]{CL10} it follows that $R(X/U, K_X+B)$ is locally finitely generated over $U$.\\
Moreover, if $W\subset U$ is a compact subset, then there exists a positive integer $m>0$ and finitely many open subsets $U_i\subset U$, $1\leq i\leq k$, such that $W\subset \cup_{i=1}^k U_i $ and $R(X_i/U_i, K_{X_i}+B_i)$ is generated in degrees $\<m$ for all $i=1,2,\ldots, k$, where $X_i=X\times _U U_i$ and $B_i=B|_{X_i}$.
The claim then follows replacing $U$ by $\cup_{i=1}^k U_i$.
\end{proof}
\section{Relative MMP for projective morphisms}
In this section we prove Theorems \ref{c-fg} and \ref{t-mmpscale}.\\
\begin{proof}[Proof of Theorem \ref{c-fg}]
We follow the ideas of \cite{Fuj15}. By the proof of \cite[Theorem 5.1]{Fuj15} (see also \cite[21.5]{Fuj22}), there is a projective morphism $g:Z\to Y$ from a complex manifold $Z$ such that $X\dasharrow Z$ is bimeromorphic to the Iitaka fibration of $K_X+B$ over $Y$, and $(Z,B_Z)$ is a log smooth klt pair with $B_Z\>0$ such that $K_Z+B_Z$ is big over $Y$ and
\[ \oplus_{m\geq 0} f_*\mathcal O _X(me(K_X+B))\cong \oplus_{m\geq 0} g_*\mathcal O _Z(me'(K_Z+B_Z))\]
for some integers $e,e'>0$. Then the result follows from Corollary \ref{c-fg1}.
\end{proof}~\\
\begin{proof}[Proof of Theorem \ref{t-mmpscale}] We are free to replace $U$ by arbitrarily small neighborhoods of $W$, see \cite[1.11]{Fuj22}.
If $K_X+B$ is nef over $W$, then there is nothing to prove.
Otherwise, by the Cone Theorem (cf. Theorem \ref{thm:general-relative-cone}), there is a negative extremal ray $R={\mathbb{R}} _+[l]$
and a divisor $L\in \widetilde{Z}^1(X/U;W)$ such that $R=\overline{NE}(X/U;W)\cap L^\perp$, where $L$ is nef over $U$.
Let $\phi={\rm cont}_R:X\to Z$ be the corresponding morphism (which is defined after possibly further shrinking $U$). If $\operatorname{dim} Z<\operatorname{dim} X$, this is a Mori fiber space. If $\operatorname{dim} Z=\operatorname{dim} X$ and $\phi$ contracts a divisor, then this is a divisorial contraction. In this case we let $(Z,\phi _* B)=(X_1,B_1)$ and we note that $(X_1,B_1)$ is klt and ${\mathbb{Q}}$-factorial near $W$. If on the other hand, $\operatorname{dim} Z=\operatorname{dim} X$ and $\phi$ is small, then by Corollary \ref{c-fg1}, $R(X/Z, K_X+B)$ is finitely generated and hence we obtain a small bimeromorphic map $\psi :X\dasharrow X_1:={\rm Projan}R(X/Z, K_X+B)$ (note that as $X\to Z$ is bimeromorphic, the bigness assumption is automatically satisfied). In this case we note that $(X_1,B_1=\psi _*B)$ is klt and ${\mathbb{Q}}$-factorial near $W$. We may now replace $(X,B)$ by $(X_1,B_1)$ and repeat the procedure. This proves (1).
Suppose now that we are running the MMP with scaling of a sufficiently $\pi$-ample $\mathbb{Q}$-divisor $A$.
This means that we have a sequence of flips and divisorial contractions
\[(X,B)=(X_0, B_0)\dasharrow (X_1,B_1)\dasharrow (X_2,B_2)\dasharrow \ldots,\] a $\pi$-ample divisor $A$ and a sequence of rational numbers $\lambda _0\geq \lambda _1\geq \lambda _2\geq \ldots \geq 0$ such that
$K_{X_i}+B_i+\lambda A_i$ is nef over $U$ for $\lambda _i\geq \lambda \geq \lambda _{i+1}$, where $A_i$ denotes the pushforward of $A$ to $X_i$.
It suffices to show that this sequence terminates locally over a neighborhood of any point of $W$.
We claim that there exists a constant $\epsilon>0$ such that we may assume that $B\geq \epsilon A$.
Indeed, in Case (2), first suppose that $B$ is $\pi$-big: then $B\sim _{{\mathbb{Q}},U}\delta A +E$, where $\delta >0$ and $E\geq 0$. Then for any rational $0<\gamma\ll 1$, $(X, B'=(1-\gamma)B+\gamma(\delta A+E))$ is klt, and $K_X+B'\sim_{\mathbb{Q}, U} K_X+B$. Thus the above MMP is also a $(K_X+B')$-MMP with the scaling of $A$, and $B'\>\gamma\delta A$. In this case we are done by replacing $B$ by $B'$ and setting $\epsilon=\gamma\delta$. On the other hand, if $K_X+B$ is $\pi$-big, then we can write $K_X+B\sim _{{\mathbb{Q}},U}\delta A +E$, where $\delta >0$ and $E\geq 0$. Then again for any rational $0<\gamma \ll 1$, $(X,B'=B+\gamma (\delta A +E))$ is klt and $K_X+B'\sim_{{\mathbb{Q}},U} (1+\gamma)(K_X+B)$. It follows that the above MMP is a $(K_X+B')$-MMP with the scaling of $(1+\gamma)A$. Since $B'\geq \gamma \delta A$, the claim follows by letting $\epsilon:=\frac {\gamma \delta}{1+\gamma}$. In Case (3) the claim holds since $K_X+B$ is not $\pi$-pseudo-effective, and therefore $K_X+B+\epsilon A$ is not $\pi$-pseudo-effective for some $0<\epsilon \ll 1$. In particular, $\lambda _i>\epsilon$ for all $i$. Replacing $B$ by $B+\epsilon A$ and $\lambda _i$ by $\lambda _i-\epsilon$, the claim follows.
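For instance, when $K_X+B$ is $\pi$-big one checks directly that, for any $\lambda\>0$,
\[
K_X+B'+(1+\gamma)\lambda A\sim_{\mathbb{Q},U}(1+\gamma)(K_X+B+\lambda A),
\]
so $K_{X_i}+B'_i+(1+\gamma)\lambda A_i$ is nef over $U$ exactly when $K_{X_i}+B_i+\lambda A_i$ is, and the two MMPs with scaling coincide.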
Fix a norm $||\cdot ||$ on $N^1 (X/U;W)$ and let $\lambda =\lim \lambda _i$.
We may pick $\pi$-ample ${\mathbb{Q}}$-divisors $H_1,\ldots ,H_r$ such that
\begin{enumerate}
\item $H_j\geq \epsilon A$ for some $0<\epsilon \ll 1$ and $1\leq j\leq r$,
\item $(X,H_j)$ is klt for $1\leq j\leq r$,
\item $||(B+\lambda A)- H_j||\ll 1$ for $1\leq j\leq r$,
\item if $\mathcal C =\mathbb R_+(K_X+B)+\sum _{j=1}^r\mathbb R_+(K_X+H_j)\subset {\rm Div}_{\mathbb{R}} (X)$, then $K_X+B+\lambda A$ is in the interior of $\mathcal C$, and the dimension of $\mathcal C$ equals $\operatorname{dim} N^1(X/U;W)$.
\end{enumerate}
By Theorem \ref{t-a}, $R(X/U, \mathcal C)$ is a locally finitely generated $\mathcal O _U$-algebra.
Arguing as in \cite[Theorem 6.5]{CL13}, the corresponding MMP with scaling terminates.
\end{proof}
\part{MMP in dimension $4$}
\section{Cone and Contraction Theorems}
\subsection{Cone and contraction theorems in dimension 3}
We begin by proving a unified cone theorem for $\mathbb{Q}$-factorial dlt pairs $(X, B)$ that works both when $K_X+B$ is pseudo-effective and when it is not. We will need the following lemma on the length of extremal rays.
\begin{lemma}\label{lem:lc-length}
Let $(X, \Delta)$ be a compact K\"ahler lc pair of dimension $n$. Let $\Delta_0\>0$ be a $\mathbb{Q}$-divisor such that $(X, \Delta_0)$ is klt. Let $R$ be a $(K_X+\Delta)$-negative extremal ray of $\operatorname{\overline{NA}}(X)$ and $f:X\to Y$ is the projective morphism contracting $R$, i.e. a curve $C\subset X$ is contracted by $f$ if and only if $[C]\in R$. Then there is a rational curve $\Gamma\subset X$ contained in a fiber of $f$ such that $R=\mathbb{R}^+\cdot[\Gamma]$ and
\[
0<-(K_X+\Delta)\cdot\Gamma\<2n.
\]
\end{lemma}
\begin{proof}
The same proof as in \cite[Theorem 3.8.1]{BCHM10} works using \cite[Theorem 4.2]{DO22} in place of \cite[Theorem 1]{Kaw91}.
\end{proof}
\begin{theorem}\label{thm:unified-cone-thm}
Let $(X,B)$ be a $\mathbb{Q}$-factorial compact K\"ahler $3$-fold dlt pair. Then there exists a countable collection of rational curves $\{C_i\}_{i\in I}$ such that $0<-(K_X+B )\cdot C_i\leq 6$ and
\[ \operatorname{\overline{NA}}(X)=\operatorname{\overline{NA}}(X)_{(K_X+B )\geq 0}+\sum _{i\in I}\mathbb{R}^+\cdot[C_i].\]
Moreover, if $\omega$ is a K\"ahler class, then there are only finitely many extremal rays $R_i=\mathbb{R}^+\cdot[C_i]$ satisfying $(K_X+B +\omega)\cdot C_i<0$ for $i\in I$.
\end{theorem}
\begin{proof}
The theorem is known when $K_X+B$ is pseudo-effective or when $X$ is projective. We may therefore assume that $K_X+B$ is not pseudo-effective and $X$ is not projective. In this case from \cite[Lemma 2.39]{DH20} it follows that the base of the MRC fibration $X\dasharrow Z$ has $\operatorname{dim} Z=2$. Let $\Omega$ be the collection of all K\"ahler classes $\omega$ such that $(K_X+B+\omega)\cdot F=0$, where $F$ is a curve corresponding to the general fiber of $X\dasharrow Z$. Then by \cite[Theorem 4.6]{DH20}, for every $\omega\in\Omega$, we can decompose the cone $\operatorname{\overline{NA}}(X)$ as the sum of the $(K_X+B+\omega)$-non-negative part and the $(K_X+B+\omega)$-negative extremal rays. Let $I_\omega$ be the set of all $(K_X+B+\omega)$-negative extremal rays of $\operatorname{\overline{NA}}(X)$ for $\omega\in\Omega$ and define $I:=\cup_{\omega\in\Omega}I_\omega$. Note that $I$ is a countable set, since there are only countably many numerically distinct curve classes on a compact K\"ahler space.\\
Now define a cone $N\subset N_1(X)$ as
\[
N:=\operatorname{\overline{NA}}(X)_{(K_X+B)\>0}+\sum_{i\in I} \mathbb{R}^+\cdot[\Gamma_i],
\]
where, for each $i\in I$, $\Gamma_i$ denotes a curve generating the corresponding extremal ray.
First we will show that $\operatorname{\overline{NA}}(X)=\overline{N}$. Clearly $\overline{N}\subset \operatorname{\overline{NA}}(X)$ holds, so assume that the reverse inclusion does not hold. Then there is a nef class $\alpha$ such that $\alpha \cdot \gamma >0$ for all $0\neq\gamma \in \overline{N}$ and $\alpha \cdot \gamma=0$ for some $\gamma\in\operatorname{\overline{NA}}(X)$. Clearly $\operatorname{\overline{NA}}(X)_{(K_X+B)\>0}$ is a closed sub-cone of $\operatorname{\overline{NA}}(X)$. Let $K$ be a compact slice of $\operatorname{\overline{NA}}(X)_{(K_X+B)\>0}$. Then there exists an $\epsilon >0$ such that $(\alpha -\epsilon(K_X+B))\cdot \gamma >0$ for all $0\neq\gamma \in K$. But then $\alpha-\epsilon(K_X+B)$ is strictly positive on $\operatorname{\overline{NA}}(X)\setminus\{0\}$, and so $\alpha'=\frac{1}{\epsilon} \alpha=K_X+B+\omega$, where $\omega:=\frac{1}{\epsilon}(\alpha-\epsilon(K_X+B))$ is strictly positive on $\overline {{\rm NA}}(X)$, and hence a K\"ahler class by \cite[Corollary 3.16]{HP16}. Replacing $\alpha$ by $\alpha'$ we may assume that $\alpha=K_X+B+\omega$ for some K\"ahler class $\omega$ such that $\alpha\cdot\gamma>0$ for all $0\neq \gamma\in\overline{N}$ and $\alpha\cdot\gamma=0$ for some $\gamma\in\operatorname{\overline{NA}}(X)$.\\
We may assume that the general fiber of the MRC fibration $X\dashrightarrow Z$ generates one of the extremal rays considered in the set $I$ above; let it be denoted by $R_0:=\mathbb{R}^+\cdot[\Gamma_{i_0}]$ for $i_0\in I$. Then by our construction we have $\alpha\cdot \Gamma_{i_0}=(K_X+B+\omega)\cdot\Gamma_{i_0}>0$. Choose $0<t<1$ such that $(K_X+B+t\omega)\cdot\Gamma_{i_0}=0$. Then from \cite[Theorem 4.6]{DH20} it follows that $\{0\}\neq\alpha^\bot\cap\operatorname{\overline{NA}}(X)$ is a $(K_X+B+t\omega)$-negative extremal face of $\operatorname{\overline{NA}}(X)$. Let $R$ be a $(K_X+B+t\omega)$-negative extremal ray contained in this face. Then $R=R_i$ for some $i\in I$ such that $R_i\in I_{t\omega}\subset I$. This is a contradiction, since $\alpha\cdot R>0$ by construction.\\
Next, using \cite[Lemma 6.1]{HP16} we will show that $N$ is a closed cone. We note that the proof of \cite[Lemma 6.1]{HP16} works with $K_X$ replaced by $K_X+B$. Observe that we only need to show that the intersection numbers $(K_X+B)\cdot\Gamma_i$ for $i\in I$ are all bounded by a fixed constant independent of $i$. To that end, we claim that $0<-(K_X+B)\cdot\Gamma_i\<6$ for all $i\in I$. Let $R_i=\mathbb{R}^+[\Gamma_i]$ for some $i\in I$. Then by our construction there is a K\"ahler class $\omega$ such that $(K_X+B+\omega)\cdot F=0$ for a general fiber $F$ of the MRC fibration $X\dashrightarrow Z$ and $R_i$ is a $(K_X+B+\omega)$-negative extremal ray. Then by \cite[Theorem 4.6]{DH20}, there is a nef supporting class $\beta=K_X+B+\eta$ of $R_i$, where $\eta$ is a K\"ahler class. By \cite[Theorem 1.7]{DH20} we can contract $R_i$; let $f:X\to Y$ be the contraction of $R_i$ such that $-(K_X+B)$ is $f$-ample. Then by Lemma \ref{lem:lc-length} there is a rational curve $C_i\subset X$ such that $f(C_i)=\operatorname{pt}$ and $-(K_X+B)\cdot C_i\<6$. Therefore $R_i=\mathbb{R}^+\cdot[C_i]$ and $-(K_X+B)\cdot C_i\<6$ and we are done by \cite[Lemma 6.1]{HP16}.\\
Finally, for any K\"ahler class $\omega$, if $(K_X+B+\omega)\cdot C_i<0$, then $\omega\cdot C_i<-(K_X+B)\cdot C_i\<6$. Hence by a Douady space argument there are finitely many extremal rays $R_i=\mathbb{R}^+\cdot[C_i]$ satisfying $(K_X+B+\omega)\cdot R_i<0$.
\end{proof}~\\
We deduce the non $\mathbb{Q}$-factorial version of this theorem below which is used throughout the article.
\begin{corollary}\label{cor:nQ-unified-cone}
Let $(X, B)$ be a compact K\"ahler $3$-fold dlt pair. Then there exists a countable collection of rational curves $\{C_i\}_{i\in I}$ such that $0<-(K_X+B)\cdot C_i\<6$ for all $i\in I$ and
\[
\operatorname{\overline{NA}}(X)=\operatorname{\overline{NA}}(X)_{(K_X+B)\>0}+\sum_{i\in I} \mathbb{R}^+\cdot [C_i].
\]
Moreover the following holds:
\begin{enumerate}
\item For any K\"ahler class $\omega$, there are only finitely many extremal rays $R_i:=\mathbb{R}^+\cdot[C_i]$ such that $(K_X+B+\omega)\cdot R_i<0$.
\item For any $(K_X+B)$-negative extremal ray $R=\mathbb{R}^+\cdot[C_{i}]$, there is a nef class $\alpha\in H^{1, 1}_{\operatorname{BC}}(X)$ such that $\alpha^\bot\cap\operatorname{\overline{NA}}(X)=R$ and $\alpha=K_X+B+\eta$ for some K\"ahler class $\eta$.
\end{enumerate}
\end{corollary}
\begin{proof}
Since $(X, B)$ is a dlt pair, there is a log resolution $\phi:Y\to X$ of $(X, B)$ such that $a(E, X, B)>-1$ for all exceptional divisors $E$ of $\phi$. Define $B_Y:=\phi^{-1}_*B+\operatorname{Ex}(\phi)$. Then running a $(K_Y+B_Y)$-MMP over $X$ as in \cite[Proposition 2.21]{DH20}, we may assume that there is a $\mathbb{Q}$-factorial dlt pair $(X', B')$ and a small projective bimeromorphic morphism $f:X'\to X$ such that $K_{X'}+B'=f^*(K_X+B)$. Then by the cone Theorem \ref{thm:unified-cone-thm} for $(X', B')$ there exist countably many rational curves $\{C'_i\}_{i\in I'}$ on $X'$ such that $0<-(K_{X'}+B')\cdot C'_i\<6$ for all $i\in I'$ and
\begin{equation}\label{eqn:cone-on-model}
\operatorname{\overline{NA}}(X')=\operatorname{\overline{NA}}(X')_{(K_{X'}+B')\>0}+\sum_{i\in I'} \mathbb{R}^+\cdot [C'_i].
\end{equation}
Now from \cite[Proposition 3.14]{HP16} it follows that $f_*\operatorname{\overline{NA}}(X')=\operatorname{\overline{NA}}(X)$. Let $C_i:=f(C'_i)\subset X$ for all $i\in I'$ such that $f(C'_i)\neq \operatorname{pt}$, and let $\{C_i\}_{i\in I}$ be the collection of all non-contracted curves. Applying $f_*$ to both sides of \eqref{eqn:cone-on-model} we claim that $\operatorname{\overline{NA}}(X)=\operatorname{\overline{NA}}(X)_{(K_{X}+B)\>0}+\sum_{i\in I} \mathbb{R}^{\>0}\cdot [C_i]$. Suppose not, i.e.
\begin{equation}\label{eqn:cone-equality-on-x}
\operatorname{\overline{NA}}(X)\supsetneq \operatorname{\overline{NA}}(X)_{(K_X+B)\>0}+\sum_{i\in I} \mathbb{R}^{\>0}\cdot [C_i].
\end{equation}
Then there exists a $(1, 1)$ class $\alpha\in N^1(X)$ such that $\alpha$ is strictly positive on the non-zero elements of the right-hand side of \eqref{eqn:cone-equality-on-x} and $\operatorname{\overline{NA}}(X)\cap \alpha_{\<0}\neq \emptyset$. Let $\omega$ be a K\"ahler class on $X$ and define $\lambda:=\inf \{t\>0: \alpha+t\omega \mbox{ is a K\"ahler class}\}$. Then $\beta:=\alpha+\lambda\omega$ is a nef class which is not K\"ahler, and consequently from \cite[Corollary 3.16]{HP16} it follows that $\beta^\bot \cap \operatorname{\overline{NA}}(X)\neq \{0\}$. In particular, $\beta^\bot\cap \operatorname{\overline{NA}}(X)$ is an extremal face of $\operatorname{\overline{NA}}(X)$; let us denote this face by $F$. Let $F'$ be the extremal face of $\operatorname{\overline{NA}}(X')$ defined by $f^*\beta$, i.e. $F':=(f^*\beta)^\bot\cap\operatorname{\overline{NA}}(X')$. Then from Lemma \ref{lem:cone-push-forward} it follows that $F'=f_*^{-1}(F)\cap\operatorname{\overline{NA}}(X')$. We claim that $K_{X'}+B'$ is negative on $F'\setminus f_*^{-1}(\mathbf{0})$, where $\mathbf{0}\in\operatorname{\overline{NA}}(X)$ is the zero vector. Indeed, if $\gamma'\in F'\setminus f_*^{-1}(\mathbf{0})$, then $f_*\gamma'\in F\setminus \{\mathbf{0}\}$ and thus $(\alpha+\lambda\omega)\cdot f_*\gamma'=0$. In particular, $\alpha\cdot f_*\gamma'=-\lambda\,\omega\cdot f_*\gamma'\<0$, so $f_*\gamma'\notin \operatorname{\overline{NA}}(X)_{(K_X+B)\>0}$ by the construction of $\alpha$, and hence $(K_X+B)\cdot f_*\gamma'<0$. Therefore by the projection formula we have $(K_{X'}+B')\cdot \gamma'<0$.\\
Now by the cone theorem on $(X', B')$ and \cite[Theorem 7.1]{DH20}, there must be a $(K_{X'}+B')$-negative extremal ray, say $R'=\mathbb{R}^{\>0}\cdot[C'_i]$, contained in $F'$ but not in $f_*^{-1}(\mathbf{0})$. Then $C_i=f(C'_i)\neq \operatorname{pt}$ is one of the curves in the collection $\{C_i\}_{i\in I}$ above and $(K_X+B)\cdot C_i<0$. But $[C_i]\in F=(\alpha+\lambda\omega)^\bot\cap \operatorname{\overline{NA}}(X)$ and this is a contradiction, since by our assumption $\alpha\cdot C_i>0$, and hence $(\alpha+\lambda\omega)\cdot C_i>0$.\\
Now $(1)$ is proved exactly as in Theorem \ref{thm:unified-cone-thm}. For the second part, write $R=\mathbb{R}^+\cdot[C_{i_0}]$ for some $i_0\in I$; from \cite[Lemma 6.1]{HP16} we see that $V=\operatorname{\overline{NA}}(X)_{(K_X+B)\>0}+\sum_{i\in I, i\neq i_0} \mathbb{R}^+[C_i]$ is a closed subcone of $\operatorname{\overline{NA}}(X)$; note that \cite[Lemma 6.1]{HP16} is only stated for $K_X$, but this was never used in the proof, and the exact same proof works for $K_X+B$. Then by \cite[Lemma 6.7(d)]{Deb01} there is a nef class $\alpha\in H^{1, 1}_{\operatorname{BC}}(X)$ such that $\alpha$ is strictly positive on $V\setminus \{\mathbf{0}\}$ and $\alpha^\bot\cap\operatorname{\overline{NA}}(X)=R$. Then scaling $\alpha$ appropriately we observe that $\alpha-(K_X+B)$ is strictly positive on $\operatorname{\overline{NA}}(X)\setminus \{\mathbf{0}\}$, and thus by \cite[Corollary 3.16]{HP16}, $\alpha-(K_X+B)=\eta$ is a K\"ahler class on $X$, i.e. $\alpha=K_X+B+\eta$.
\end{proof}~\\
The following lemma is taken from \cite{Wal18}.
\begin{lemma}\cite[Lemma 3.1]{Wal18}\label{lem:cone-push-forward}
Let $f:V\to W$ be a surjective linear transformation of finite dimensional vector spaces over $\mathbb{R}$. Suppose that $C_V\subset V$ and $C_W\subset W$ are closed convex cones of maximal dimension and $H\subset W$ is a vector subspace of codimension $1$. Assume that the following hold:
\begin{enumerate}
\item $f(C_V)=C_W$.
\item $C_W\cap H\subset \partial C_W.$
\end{enumerate}
Then $f^{-1}H\cap C_V\subset \partial C_V$ and also $f^{-1}H\cap C_V=f^{-1}(H\cap C_W)\cap C_V$.
\end{lemma}~\\
The following contraction theorem is a direct generalization of \cite[Theorem 7.2]{DH20}.
\begin{theorem}[Contraction Theorem]\label{thm:contraction-non-q-factorial}
Let $(X, B)$ be a compact K\"ahler $3$-fold klt pair, and $\alpha\in N^1(X)$ be a nef class such that $\alpha-(K_X+B)$ is nef and big.
Then there exists a proper morphism $f:X\to Y$ with connected fibers to a normal compact K\"ahler variety $Y$ with rational singularities and a K\"ahler class $\omega_Y\in N^1(Y)$ such that $\alpha=f^*\omega_Y$. In particular, if $\alpha-(K_X+B)$ is a K\"ahler class, then $f$ is projective.
\end{theorem}
\begin{proof}
Let $g:Z\to X$ be a small $\mathbb{Q}$-factorization of $X$ obtained by running an appropriate relative MMP on a log resolution of $(X, B)$ as in \cite[Proposition 2.21]{DH20}. Set $K_Z+B_Z:=g^*(K_X+B)$; then $g^*\alpha-(K_Z+B_Z)$ is nef and big. Thus by \cite[Theorem 1.7]{DH20}, there is a proper morphism $h:Z\to Y$ with connected fibers to a normal compact K\"ahler variety $Y$ with rational singularities and a K\"ahler class $\omega_Y\in N^1(Y)$ such that $g^*\alpha=h^*\omega_Y$. Now we will apply the rigidity lemma. Note that since $g$ is a projective morphism, the positive dimensional fibers of $g$ are covered by projective curves. Let $C\subset Z$ be a curve such that $g(C)=\operatorname{pt}$. Then by the projection formula $g^*\alpha\cdot C=0$. Thus $0=h^*\omega_Y\cdot C=\omega_Y\cdot h_*C$, and hence $C$ is contracted by $h$, as $\omega_Y$ is a K\"ahler class on $Y$. Therefore, by the rigidity lemma \cite[Lemma 4.1.13]{BS95}, there is a proper morphism $f:X\to Y$ such that $f\circ g=h$, and thus by pushing forward by $g$ it follows that $\alpha=f^*\omega_Y$.
Finally, if $\omega _X:=\alpha-(K_X+B)$ is a K\"ahler class, then $-(K_X+B)\equiv_f \omega_X$, and hence $-(K_X+B)$ is $f$-ample, thus $f$ is projective.
\end{proof}
The following variant is often useful in applications.
\begin{corollary}\label{cor:contraction-non-q-factorial}
Let $(X, B)$ be a compact K\"ahler $3$-fold dlt pair, and $\alpha\in N^1(X)$ is a nef class such that $\alpha-(K_X+B)$ is a K\"ahler class. Moreover, assume that $B=B_0+B'$, where $B_0\>0, B'\>0$ are $\mathbb{Q}$-divisors such that $K_X+B_0$ is $\mathbb{Q}$-Cartier and $(X, B_0)$ has klt singularities. Then there exists a proper morphism $f:X\to Y$ with connected fibers to a normal compact K\"ahler variety $Y$ with rational singularities and a K\"ahler class $\omega_Y\in N^1(Y)$ such that $\alpha=f^*\omega_Y$. In particular, if $\alpha-(K_X+B)$ is a K\"ahler class, then $f$ is projective.
\end{corollary}
\begin{proof}
Let $\alpha=K_X+B+\omega$, where $\omega$ is a K\"ahler class. Then for a sufficiently small $\varepsilon\in\mathbb{Q}^+$ we can write $\alpha=K_X+B_0+(1-\varepsilon)B'+(\omega+\varepsilon B')$ so that $\omega+\varepsilon B'$ is a K\"ahler class and $(X, B_0+(1-\varepsilon)B')$ is klt. In particular, $\alpha-(K_X+B_0+(1-\varepsilon)B')$ is a K\"ahler class, and thus by Theorem \ref{thm:contraction-non-q-factorial} there exists a projective morphism $f:X\to Y$ such that $\alpha=f^*\omega_Y$ for some K\"ahler class $\omega_Y$ on $Y$.
\end{proof}
\section{Termination of flips for effective pairs}
We will prove the termination of flips for effective pairs as in \cite{Bir07}. In order to do this, we first prove the existence of dlt models (local and global) and the ACC property for log canonical thresholds.\\
\begin{theorem}[Global dlt model]\label{thm:global-dlt-model}
Let $(X, B)$ be a compact K\"ahler lc pair of dimension $4$. Then there exists a $\mathbb{Q}$-factorial dlt pair $(X', B')$ and a projective bimeromorphic morphism $g:X'\to X$ such that $K_{X'}+B'=g^*(K_X+B)$.
\end{theorem}
\begin{proof}
Let $f:Y\to X$ be a log resolution of $(X, B)$.
Define $B_Y:=f^{-1}_*B+\operatorname{Ex}(f)$. Then using the cone theorem (Theorem \ref{thm:general-relative-dlt-cone}) we run a $(K_Y+B_Y)$-MMP over $X$. If $R=\mathbb{R}^+\cdot[C_i]$ is a $(K_Y+B_Y)$-negative extremal ray of $\operatorname{\overline{NE}}(Y/X)$, then from a standard argument using the rationality theorem as in \cite[Theorem 4.11]{Nak87} it follows that there is an $f$-nef line bundle $L$ on $Y$ such that $L-(K_Y+B_Y)$ is $f$-ample and $L^\bot\cap\operatorname{\overline{NE}}(Y/X)=R$. Write $L=K_Y+B_Y+H$ for some $f$-ample divisor $H$; then $L=K_Y+(1-\varepsilon)B_Y+(H+\varepsilon B_Y)$, where $H+\varepsilon B_Y$ is $f$-ample for $\varepsilon\in\mathbb{Q}^+$ sufficiently small. Note that $(Y, (1-\varepsilon)B_Y)$ is klt, and thus by Theorem \ref{thm:relative-bpf} there is a projective bimeromorphic morphism $\phi:Y\to Z$ over $X$ contracting the ray $R$. If $\phi$ is a small morphism, then the existence of the flip follows from Corollary \ref{c-fg1}. Note that from our construction it follows that at each step $(Y_i, B_{Y_i})$ of this MMP, the contracted locus is contained in $\lfloor B_{Y_i}\rfloor$. Since the log minimal model program is known in dimension $\<3$ due to \cite{DH20}, special termination holds and the above MMP terminates. Let $g:(X', B')\to (X, B)$ be the end result of this MMP. Then from the negativity lemma it follows that $K_{X'}+B'=g^*(K_X+B)$, where $(X', B')$ is a $\mathbb{Q}$-factorial compact K\"ahler dlt pair of dimension $4$.
\end{proof}~\\
We say that a complex space $X$ is \textit{relatively compact} if there is another complex space $Y$ such that $X$ is an open subspace of $Y$ and the closure $\overline{X}\subset Y$ is compact.
\begin{theorem}[Local dlt-model]\label{thm:local-dlt-model}
Let $(X, B)$ be a log canonical pair, where $X$ is a relatively compact Stein open subset of a K\"ahler variety, and let $W\subset X$ be a compact subset. Then, shrinking $X$ around $W$ if necessary, there exists a projective bimeromorphic morphism $f:Y\to X$ such that $K_Y+B_Y=f^*(K_X+B)$ and $(Y, B_Y)$ is a $\mathbb{Q}$-factorial dlt pair, where $B_Y:=f^{-1}_*B+\operatorname{Ex}(f)$.
\end{theorem}
\begin{proof}
The proof of \cite[Theorem 3.1]{KK10} works here with a few changes. Their proof uses ample divisors on $X$ and Bertini's theorem on a resolution of singularities of $X$. The ampleness assumption is replaced by the assumption that $X$ is Stein, and the required Bertini-type theorem follows from Theorem \ref{t-bertini+}. Additionally, \cite[Theorem 3.1]{KK10} uses \cite{BCHM10} to obtain a log terminal model of a klt pair by running a relative MMP for a projective morphism. We achieve the same here by Theorem \ref{t-mmpscale}.
\end{proof}
\begin{definition}\label{def:lct}
Let $(X, B)$ be an lc pair and $M\>0$ an $\mathbb{R}$-Cartier divisor. Then we define
\[
\operatorname{lct}(X, B; M):=\sup\;\{t\>0\;:\; (X, B+tM) \mbox{ is lc}\}.
\]
Now fix two sets $I\subset [0, 1]$ and $J\subset [0, \infty)$. Let $\mathfrak{I}_n(I)$ be the set of all lc pairs $(X, B)$, where $X$ is a K\"ahler variety of dimension $n$ (not necessarily compact) and the coefficients of $B$ belong to the set $I$. Then we define
\[
\operatorname{LCT}_n(I, J):=\{\operatorname{lct}(X, B; M)\; :\; (X, B)\in\mathfrak{I}_n(I)\},
\]
where the coefficients of $M$ belong to the set $J$.
\end{definition}
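For orientation, we recall two standard examples with $X$ smooth and $B=0$: if $M\subset X=\mathbb{C}^2$ is a smooth curve through the origin, then $\operatorname{lct}(X, 0; M)=1$, while for the cuspidal cubic $M=\{y^2=x^3\}\subset\mathbb{C}^2$ a computation on a log resolution gives
\[
\operatorname{lct}(\mathbb{C}^2, 0; M)=\frac{5}{6}.
\]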
\begin{theorem}\label{thm:acc-for-lct}
Fix a positive integer $n$, and sets $I\subset [0, 1]$ and $J\subset [0, \infty)$. If $I$ and $J$ are DCC sets, then $\operatorname{LCT}_n(I, J)$ satisfies the ACC.
\end{theorem}
\begin{proof}
By contradiction, assume that there is a strictly increasing sequence $\{c_i\}$, where $c_i=\operatorname{lct}(X_i, B_i; M_i)$. First assume that there is a component $S_i$ of $M_i$ which is an lc center of $(X_i, B_i+c_iM_i)$ for infinitely many $i$. Let the coefficients of $S_i$ in $B_i$ and $M_i$ be $b_{i1}$ and $m_{i1}$, respectively. Then we have $b_{i1}+c_im_{i1}=1$. Since the $b_{i1}$ and $m_{i1}$ are contained in DCC sets, by passing to a common subsequence we may assume that $\{b_{i1}\}$ and $\{m_{i1}\}$ are both monotonically increasing sequences. Then from $c_im_{i1}=1-b_{i1}$ we see that the LHS is a strictly increasing sequence (since $\{c_i\}$ is strictly increasing and $m_{i1}>0$), while the RHS is a monotonically decreasing sequence, a contradiction.\\
Thus, passing to a tail of the sequence $\{c_i\}$, we may assume that all lc centers of $(X_i, B_i+c_iM_i)$ contained in the support of $M_i$ have codimension at least $2$. Let $Z_i$ be a maximal lc center of $(X_i, B_i+c_iM_i)$ contained in ${\rm Supp}\, M_i$ for all $i$.
Next choose a relatively compact Stein open subset $U_i\subset X_i$ such that $U_i\cap Z_i\neq\emptyset$ and $Z_i|_{U_i}$ is still a maximal lc center of $(U_i, (B_i+c_iM_i)|_{U_i})$. Replacing $(X_i, B_i+c_iM_i)$ by $(U_i, (B_i+c_iM_i)|_{U_i})$, we may assume that $X_i$ is a relatively compact Stein space. Note that, shrinking $X_i$ further, we can pick a small open subset $V_i\subset X_i$ such that $V_i\cap Z_i\neq\emptyset$, $Z_i|_{V_i}$ is still a maximal lc center of $(V_i, (B_i+c_iM_i)|_{V_i})$, and additionally $\overline{V}_i\subset X_i$ holds. \\
Now let $f_i:Y_i\to U_i$ be a dlt model of $(U_i, (B_i+c_iM_i)|_{U_i})$ as in Theorem \ref{thm:local-dlt-model}. Then there is an exceptional divisor $E_i$ intersecting the strict transform of $M_i$ such that $f_i(E_i)=Z_i$. Now write
\[
K_{Y_i}+E_i+\Gamma_i=f_i^*(K_{X_i}+B_i+c_iM_i)
\]
so that $f_{i*}\Gamma_i=B_i+c_iM_i$.\\
Then by adjunction, $(E_i, \Theta_i)$ is a dlt pair, where $K_{E_i}+\Theta_i=(K_{Y_i}+E_i+\Gamma_i)|_{E_i}=f^*_i(K_{X_i}+B_i+c_iM_i)|_{E_i}$. Note that $\Theta_i$ has a component whose coefficient in $\Theta_i$ is of the form
\begin{equation}\label{eqn:adjunction-coefficient}
\frac{m-1+f+kc_i}{m},
\end{equation}
where $k, m\>1$ and $f\in D(I)$.\\
Now let $F_i$ be a general fiber of the induced morphism $f_{E_i}:=f_i|_{E_i}: E_i\to f_i(E_i)$. Then $F_i$ is projective, since $f_i$ is projective, and by adjunction we have $K_{F_i}+\Theta_{F_i}=(K_{E_i}+\Theta_i)|_{F_i}\equiv 0$. Note that $\Theta_{F_i}$ has a coefficient of the form \eqref{eqn:adjunction-coefficient}, and thus we arrive at a contradiction by Theorem 1.5 and Lemma 5.2 of \cite{HMX14}.
\end{proof}
\begin{theorem}\cite[Theorem 1.3]{Bir07}\label{thm:effective-termination}
Let $(X,B)$ be a dlt $4$-fold pair such that $K_{X}+B\sim_{\mathbb{Q}} D\>0$. Then any sequence $\{(X_i, B_i)\}$ of $(K_{X}+B)$-flips, where $X_i$ is K\"ahler for all $i$, terminates.
\end{theorem}
\begin{proof}
The same proof as in \cite{Bir07} works here, using Theorems \ref{thm:local-dlt-model} and \ref{thm:acc-for-lct}.
\end{proof}
\section{MMP for $\kappa(X, K_X+B)\>0$}
In this section we will prove Theorem \ref{thm:effective-dlt-mm}. First we prove the following easy lemma, which will allow us to perturb a nef (but not K\"ahler) class of the form $K_X+B+\omega$, where $\omega$ is a K\"ahler class, so that its null locus intersects $\operatorname{\overline{NA}}(X)$ precisely along an extremal ray.
\begin{lemma}[General and very general K\"ahler class]\label{lem:very-general-kahler} Let $V$ be a vector space (resp. a finite dimensional vector space) over $\mathbb{R}$ and $\mathcal{C}\subset V$ a cone in $V$ which is not contained in any hyperplane.
Let $V^*$ denote the dual space of $V$, and fix a finite collection (resp. a countable collection) of dual vectors $\{C_i\}_{i\in I}$ in $V^*$ such that $\omega\cdot C_i:=C_i(\omega)>0$ for all $\omega\in \mathcal{C}$. Additionally, assume that $C_i\neq \lambda C_j$ for any $i\neq j\in I$ and $\lambda \in\mathbb{R}$. Fix an element $D\in V$. Then there is a finite (resp. countable) union of hyperplanes $\mathcal{H}\subset V$ such that if $\omega\in \mathcal{C}\setminus \mathcal{H}$, then for any $t\in \mathbb{R}$, $(D+t\omega)\cdot C_i=0$ for at most one $i\in I$.
\end{lemma}
\begin{proof}
Since $C_i\neq \lambda C_j$ for any $i\neq j$ and $\lambda\in \mathbb{R}$, $\langle C_i, C_j\rangle^\bot$ is a codimension $2$ linear subspace of $V$. Define for $i\neq j$
\[
\mathcal{H}(C_i, C_j):=\{\omega\in \mathcal{C}\;|\; (D+t\omega)\in \langle C_i, C_j\rangle^\bot \mbox{ for some } t\in\mathbb{R}\}.
\]
Then $\mathcal{H}(C_i, C_j)$ is contained in some hyperplane. Indeed, if $D+t\omega\in \langle C_i, C_j\rangle^\bot$, then $t\omega\in \langle C_i, C_j\rangle^\bot-D$, and hence $\omega$ is contained in the linear subspace spanned by $\langle C_i, C_j\rangle^\bot-D$, which is contained in a hyperplane.\\
Define $\mathcal{H}:=\cup_{i\neq j} \mathcal{H}(C_i, C_j)$. Thus $\mathcal{H}$ is contained in a finite (resp. countable) union of hyperplanes, and for any $\omega\in\mathcal{C}\setminus\mathcal{H}$ it follows from our construction above that $D+t\omega\not\in \langle C_i, C_j\rangle^\bot$ for all $i\neq j\in I$ and any $t\in \mathbb{R}$; in particular, for any $t\in \mathbb{R}$, $(D+t\omega)\cdot C_i=0$ for at most one $i\in I$.
\end{proof}~\\
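To illustrate Lemma \ref{lem:very-general-kahler} in the simplest (purely hypothetical) setting, take $V=\mathbb{R}^2$, let $\mathcal{C}$ be the open positive quadrant, let $C_1, C_2\in V^*$ be the two coordinate functionals, and fix $D\neq\mathbf{0}$. Then $\langle C_1, C_2\rangle^\bot=\{\mathbf{0}\}$, so
\[
\mathcal{H}=\mathcal{H}(C_1, C_2)=\{\omega\in\mathcal{C}\;|\;D+t\omega=\mathbf{0}\mbox{ for some } t\in\mathbb{R}\}\subset\mathbb{R}\cdot D,
\]
a single line (hyperplane) in $V$; for any $\omega\in\mathcal{C}$ off this line, $D+t\omega$ never vanishes, hence lies on at most one of the two coordinate axes, i.e. $(D+t\omega)\cdot C_i=0$ for at most one $i$.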
In the following we show that if $(X, B)$ is a dlt pair such that $K_X+B\sim_{\mathbb{Q}} M\>0$ and ${\rm Supp}\, M\subset \lfloor B\rfloor$, so that the loci of all $(K_X+B)$-negative extremal contractions are contained in $\lfloor B\rfloor$, then $(X, B)$ has a minimal model.
\begin{theorem}\label{thm:special-effective-dlt-mmp}
Let $(X, S+B)$ be a $\mathbb{Q}$-factorial compact K\"ahler dlt pair of dimension $4$ such that $\lfloor S+B\rfloor=S$ and $K_X+S+B\sim_{\mathbb{Q}} D$ for some effective $\mathbb{Q}$-divisor $D\>0$. Assume that ${\rm Supp} (D)\subset S$. Then there exists a finite sequence of flips and divisorial contractions
\[
\xymatrixcolsep{3pc}\xymatrix{\phi:X=X_0\ar@{-->}[r] & X_1\ar@{-->}[r] &\cdots \ar@{-->}[r] & X_n}
\]
such that $K_{X_n}+S_n+B_n$ is nef, where $S_n+B_n=\phi _*(S+B)$.
\end{theorem}
\begin{proof}
If $K_X+S+B$ is nef, then we are done, and so we will assume that $K_X+S+B$ is not nef.
Note that the set of all curves in $X$ corresponds to countably many classes of curves $\{C_i\}_{i\in I}$ in $N_1(X)$, by \cite[Lemma 4.4]{Toma16}. So we can choose a very general K\"ahler class $\omega\in H^{1,1}_{\operatorname{BC}}(X)$ as in Lemma \ref{lem:very-general-kahler}.
Define
\[\lambda:=\inf\{t\>0\;|\;K_X+S+B+t\omega \mbox{ is K\"ahler}\}.\]
Replacing $\omega $ by $\lambda\omega$, we may assume that $K_X+S+B+\omega$ is nef but not K\"ahler. Then $K_X+S+B+\omega\equiv D+\omega$ is a nef and big class but not K\"ahler. We make the following claim.
\begin{claim}\label{clm:passing-to-a-component}
There exists an irreducible component $T$ of $S$ and a curve $\Gamma =C_i$ for some $i\in I$ such that $(K_X+S+B+\omega )\cdot \Gamma=0$, $T\cdot \Gamma<0$, and
$(K_X+S+B+\omega )\cdot C_{j}>0$ for any $j\ne i\in I$.
In particular, $K_T+B_T+\omega_T:=(K_X+S+B)|_T+\omega|_T$ is nef but not K\"ahler.
\end{claim}
\begin{proof}[Proof of Claim \ref{clm:passing-to-a-component}]
Since $K_X+S+B+\omega$ is nef and big but not K\"ahler, by Theorem \ref{thm:nef-big-to-kahler} there exists a subvariety $V\subset X$ such that $((K_X+S+B+\omega)|_V)^{\operatorname{dim} V}=0$, i.e. $((D+\omega)|_V)^{\operatorname{dim} V}=0$. By Lemma \ref{lem:arbitrary-nef-and-big}, it follows that $(D+\omega)|_V$ is not a big class on $V$. In particular, $V$ is contained in the support of $D$, and hence there is an irreducible component, say $T$, of $S$ such that $V\subset T$. Clearly, $K_T+B_T+\omega_T:=(K_X+S+B)|_T+\omega|_T$ is nef but not K\"ahler. Then by Corollary \ref{cor:nQ-unified-cone}, the $(K_T+B_T)$-negative extremal face $F:=(K_T+B_T+\omega_T)^\bot\cap \operatorname{\overline{NA}}(T)$ is generated by finitely many curve classes, say $[\Sigma_1],\ldots, [\Sigma_r]$, i.e. $F=\langle\Sigma _{1},\ldots , \Sigma _r\rangle$.
Then $\mathbb R\cdot [\Sigma _{1}]=\mathbb R\cdot [C_i]\subset N_1(X)$ for some $i\in I$.
Since $\omega$ is very general and $K_X+S+B+\omega$ is nef, from Lemma \ref{lem:very-general-kahler} it follows that $(K_X+S+B+\omega)\cdot C_i=0$ and $(K_X+S+B+\omega )\cdot C_{j}>0$ for all $j\ne i\in I$. Therefore $\mathbb R_{\geq 0}\cdot[\Sigma _k] =\mathbb R_{\geq 0}\cdot [C_i]$ in $N_1(X)$ for all $1\leq k\leq r$. Let $\Gamma =C_{i}$. Since $(D+\omega)\cdot\Gamma=0$ and $\omega\cdot\Gamma>0$, we have $D\cdot \Gamma<0$; in particular, there is an irreducible component $T'$ of $S$ such that $T'\cdot \Gamma<0$. It then follows that \[(K_{T'}+B_{T'}+\omega|_{T'})\cdot \Gamma=(K_X+S+B+\omega)\cdot \Gamma=(D+\omega)\cdot\Gamma=0,\] and thus $K_{T'}+B_{T'}+\omega|_{T'}$ is nef but not K\"ahler. Thus, replacing $T$ by $T'$, we may assume that $(K_T+B_T+\omega|_T)\cdot\Gamma=0$ and $T\cdot\Gamma<0$. In particular, $T\cdot \Sigma_k<0$ and thus $\Sigma_{k}\subset T$ for all $1\<k\<r$.
\end{proof}
By Corollary \ref{cor:contraction-non-q-factorial}, there exists a projective morphism $\varphi:T\to W$ contracting the $(K_T+B_T)$-negative extremal face $F=(K_T+B_T+\omega_T)^\bot\cap\operatorname{\overline{NA}}(T)$.
Here $W$ is a normal compact K\"ahler variety, and there is a K\"ahler class $\omega_W$ on $W$ such that $K_T+B_T+\omega|_T=\varphi^*\omega_W$.
Also, recall that the face $F$ is generated by the classes of finitely many curves $\Sigma_1,\ldots,\Sigma_r\subset T$ such that $T\cdot\Sigma_i<0$ for all $i=1,\ldots, r$, and a curve $C\subset T$ is contracted by $\varphi$ if and only if its class $[C]\in F$. Thus
\[
{\rm NE}(T/W)=\operatorname{\overline{NE}}(T/W)=\left\{\sum_{i=1}^ra_i\Sigma_i\;|\; a_i\>0 \mbox{ for all } i=1, 2,\ldots, r \right\},
\]
and hence from \cite[Proposition 4.7(3)]{Nak87} it follows that $\mathcal{O}_T(-mT)$ is $\varphi$-ample, where $m>0$ is the Cartier index of $T$ in $X$.
By \cite[Proposition 7.4]{HP16} there exists a proper bimeromorphic morphism $f:X\to Y$ to a normal compact analytic variety $Y$ such that $f|_T=\varphi$ and $f|_{X\setminus T}$ is an isomorphism. From the discussion above it follows that the face $F$ of $\operatorname{\overline{NA}}(T)$ corresponds to a $(K_X+S+B)$-negative extremal ray $R=\mathbb{R}_{\>0}\cdot[\Gamma]$, where $\Gamma=C_i$. Moreover, we know that $(K_X+S+B+\omega)\cdot \Gamma=0$, and thus $-(K_X+S+B)$ is $f$-nef-big. Then $Y$ has rational singularities by Lemma \ref{lem:rational-singularities}. From \cite[Lemma 3.3]{HP16} it follows that $\rho(X/Y):=\operatorname{dim}_{\mathbb{R}}H^{1,1}_{\operatorname{BC}}(X)-\operatorname{dim}_{\mathbb{R}}H^{1,1}_{\operatorname{BC}}(Y)=1$. An immediate consequence of this is that $K_X+S+B+\omega=f^*\omega_Y$ for some $(1, 1)$ class $\omega_Y$ on $Y$.
Clearly $\omega_Y$ is nef and big. If $V$ is a subvariety of $Y$ of positive dimension, then we claim that $(\omega _Y|_V)^{\operatorname{dim} V}>0$. If $V\subset W$, then let $\lambda=\int _F \omega ^{d}>0$, where $F$ is a general fiber of $f^{-1}(V)\to V$ and $d=\operatorname{dim} F$. Then by the projection formula
(see e.g. \cite[Corollary 4.5]{Nicol})
\begin{equation*}
\begin{split}
\lambda\cdot \int _V (\omega _Y)^{\operatorname{dim} V}= \int _{f^{-1}(V)}(f^* \omega _Y)^{\operatorname{dim} V}\wedge\; \omega ^d &= \int _{\varphi ^{-1}(V)}(\varphi ^* \omega _W )^{\operatorname{dim} V}\wedge\; \omega ^d\\
&= \lambda\cdot \int _V\omega _W ^{\operatorname{dim} V}>0.
\end{split}
\end{equation*}
If $V\not \subset W$, then let $V'$ be the strict transform of $V$. If
$V'$ is not contained in the support of $D$, then clearly $(K_X+S+B+\omega)|_{V'}=(D+\omega)|_{V'}$ is big (and nef), and so $(\omega _Y|_V)^{\operatorname{dim} V}=((D+\omega)|_{V'})^{\operatorname{dim} V}>0$ by Lemma \ref{lem:arbitrary-nef-and-big}. On the other hand, if $V'$ is contained in a component, say $T'\ne T$, of the support of $D$, then $(K_X+S+B+\omega)|_{T'}=K_{T'}+B_{T'}+\omega _{T'}$, where $(T',B_{T'})$ is dlt, $\omega _{T'}=\omega|_{T'}$
and \[K_{T'}+B_{T'}+\omega _{T'}=(K_X+S+B+\omega)|_{T'}=(f^*\omega _Y)|_{T'}=
(f|_{T'})^*\omega _{W'},\] where $\omega _{W'}=\omega _Y|_{W'}$. By Corollary \ref{cor:contraction-non-q-factorial}, there is a contraction $g:T'\to \bar W$ such that
$K_{T'}+B_{T'}+\omega _{T'}\equiv g^* \omega _{\bar W}$, where $\omega _{\bar W}$ is a K\"ahler class on $\bar W$. The curves $\Gamma$ contracted by $g$ are precisely the curves in $T'$ such that $\Gamma \cdot (K_X+S+B+\omega)=\Gamma \cdot (K_{T'}+B_{T'}+\omega _{T'})=0$.
But these are also the curves contracted by $f$ and so by the rigidity lemma (see \cite[Lemma 4.1.13]{BS95}) it follows that $W'=\bar W$. Thus
\[(\omega _Y|_V)^{\operatorname{dim} V}=((K_X+S+B+\omega)|_{V'})^{\operatorname{dim} V}=((K_{T'}+B_{T'}+\omega _{T'})|_{V'})^{\operatorname{dim} V}=(\omega _{\bar W}|_V)^{\operatorname{dim} V}>0.\]
Then from Theorem \ref{thm:nef-big-to-kahler} it follows that $\omega_Y$ is a K\"ahler class, and hence $Y$ is a K\"ahler variety.\\
Now if $f$ is a divisorial contraction, then by a similar argument as in the projective case one can show that $Y$ is $\mathbb{Q}$-factorial and $(Y, S_Y+B_Y)$ has dlt singularities, where $S_Y+B_Y:=f_*(S+B)$.\\
If $f:X\to Y$ is a flipping contraction, then by Corollary \ref{c-fg1} the flip $f':X'\to Y$ exists, and again as in the algebraic case it follows that $X'$ is $\mathbb{Q}$-factorial and $(X', S'+B')$ has dlt singularities, where $S'+B':=\phi_*(S+B)$ and $\phi:X\dashrightarrow X'$ is the induced bimeromorphic map.\\
Finally, the termination of flips follows from special termination in this case, since all the contracted curves are contained in ${\rm Supp}(D)$ and ${\rm Supp}(D)\subset S=\lfloor S+B\rfloor$. Note that special termination holds here, since the MMP in dimension $\<3$ is known due to \cite{DH20}.
\end{proof}
\begin{remark}\label{rmk:cone-theorem}
The above proof essentially gives a cone theorem in dimension $4$ under the given hypothesis. More precisely, with the same hypothesis as in Theorem \ref{thm:special-effective-dlt-mmp}, if $K_X+S+B$ is not nef, then there exist countably many rational curves $\{C_i\}_{i\in I}$ in $X$ such that
$0<-(K_X+S+B)\cdot C_i\<6$ and $\operatorname{\overline{NA}}(X)=\operatorname{\overline{NA}}(X)_{(K_X+S+B)\>0}+\sum_{i\in I}\mathbb{R}^+\cdot[C_i]$.
\end{remark}
\begin{remark}\label{rmk:choice-of-epsilon}
Let $(X, \Delta)$ be a $\mathbb{Q}$-factorial compact K\"ahler $4$-fold dlt pair and $C$ an effective $\mathbb{Q}$-divisor. Fix a positive real number $t>0$ and let $\Lambda$ be the countable set indexing \textit{all} $(K_X+\Delta)$-negative curve classes $[\Gamma_i]$ on $X$ such that $-(K_X+\Delta)\cdot \Gamma_i\<6$, $C\cdot\Gamma_i>0$ and $(K_X+\Delta+tC)\cdot\Gamma_i>0$. Let $m>0$ be the smallest positive integer such that $m(K_X+\Delta)$ and $mC$ are both Cartier. Then the intersection numbers $(K_X+\Delta)\cdot \Gamma_i$ and $C\cdot\Gamma_i$ are all contained in the set $\frac{1}{m}\mathbb{Z}$ for all $i\in \Lambda$. Moreover, since $0<-(K_X+\Delta)\cdot\Gamma_i\leq 6$ for all $i\in \Lambda$, the numbers $(K_X+\Delta)\cdot \Gamma_i$ are contained in a finite set, say $\mathcal{K}$. Then $(\mathcal{K}+\frac{t}{m}\mathbb{N})\cap\mathbb{R}_{>0}$ is a DCC set and hence has a positive minimum, say $\gamma>0$. Then we can choose a sufficiently small rational number $\epsilon\in\mathbb{Q}^+$ such that
\begin{equation}\label{eqn:fixing-epsilon}
0<\epsilon<\frac{t\gamma}{\gamma+6}.
\end{equation}
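For a purely illustrative instance: if $m=2$ and $t=1$, then $\mathcal{K}\subset\{-\frac{k}{2}\;|\;1\<k\<12\}$ and every element of $(\mathcal{K}+\frac{1}{2}\mathbb{N})\cap\mathbb{R}_{>0}$ is a positive half-integer, so $\gamma\>\frac{1}{2}$; since $\gamma\mapsto\frac{t\gamma}{\gamma+6}$ is increasing, the bound \eqref{eqn:fixing-epsilon} is then implied by
\[
0<\epsilon<\frac{1\cdot\frac{1}{2}}{\frac{1}{2}+6}=\frac{1}{13}.
\]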
\end{remark}~\\
The following theorem allows us to run the MMP with scaling in certain cases. This result is the technical heart of the proof of Theorem \ref{thm:effective-dlt-mm} below.
\begin{theorem}\label{t-scale}
Let $(X,\Delta=S+B)$ be a $\mathbb{Q}$-factorial compact K\"ahler $4$-fold dlt pair. Assume that there is an effective $\mathbb{Q}$-divisor $C\geq 0$ and effective $\mathbb{R}$-divisors $D ,D'\geq 0$, and a positive real number $\alpha >0$ such that
\begin{enumerate}
\item $K_X + \Delta+C$ is nef,
\item $K_X + \Delta \sim _{\mathbb R}D$,
\item $D=\alpha C +D'$, and ${\rm Supp}(D')\subset S$.
\end{enumerate}
Then we can run a $(K_X+\Delta)$-MMP with scaling of $C$ and it terminates with a log terminal model $\phi:X\dashrightarrow Y$ (see Definition \ref{def:log-terminal-and-log-minimal-model}) such that $K_Y + \phi_*\Delta$ is nef.
\end{theorem}
\begin{proof}
Let \[t:={\rm inf}\{s\geq 0\;|\;K_X+\Delta +sC\ {\rm is \ nef}\}.\]
Then $0\leq t\< 1$. If $t=0$, then we are done; otherwise, by Theorem \ref{thm:nef-restricts-to-pseff}, there is a subvariety $V\subset X$ such that $(K_X+\Delta +(t-\epsilon)C)|_V$ is not pseudo-effective for any $t\geq \epsilon >0$. We fix $\epsilon$ satisfying the following:
\begin{equation}\label{eqn:choosing-epsilon}
0<\epsilon<\min\left\{t+\alpha, \frac{t\gamma}{\gamma+6}, \frac{1}{6m^2+1}\right\},
\end{equation}
where $\gamma \in\mathbb{R}^+$ and $m\in\mathbb{Z}^+$ are as in Remark \ref{rmk:choice-of-epsilon} above.
Since
\[ K_X+\Delta +(t-\epsilon )C=\frac {t+\alpha-\epsilon }{t+\alpha}( K_X+\Delta +tC)+\frac {\epsilon }{t+\alpha}(K_X+\Delta -\alpha C)\]
and $\frac {t+\alpha-\epsilon }{t+\alpha}>0$, it follows that $(K_X+\Delta -\alpha C)|_V\equiv D'|_V$ is not pseudo-effective. Since the support of $D'$ is contained in $S$, $V$ is contained in an irreducible component, say $T$, of $S$.
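The identity displayed above is elementary: on the right-hand side the coefficients of $K_X+\Delta$ sum to $\frac{t+\alpha-\epsilon}{t+\alpha}+\frac{\epsilon}{t+\alpha}=1$, while the coefficient of $C$ is
\[
\frac{(t+\alpha-\epsilon)t-\epsilon\alpha}{t+\alpha}=\frac{(t-\epsilon)(t+\alpha)}{t+\alpha}=t-\epsilon.
\]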
Also, note that $t(K_X+\Delta+(t-\epsilon)C)=(t-\epsilon)(K_X+\Delta+tC)+\epsilon(K_X+\Delta)$. Thus $(K_X+\Delta)|_V$ is not pseudo-effective; in particular, $(K_T+\Delta _T )|_V$ is not pseudo-effective, and hence $K_T+\Delta_T$ is not nef, where $K_T+\Delta _T:=(K_X+\Delta )|_T$. Let $\{\Gamma_i\}_{i\in I}$ be the countable collection of rational curves generating the $(K_T+\Delta_T)$-negative extremal rays of $\operatorname{\overline{NA}}(T)$ as in Corollary \ref{cor:nQ-unified-cone}. We make the following claim.
\begin{claim}\label{clm:zero-at-a-curve}
$(K_X+\Delta+tC)\cdot \Gamma_i=0$ for some $i\in I$.
\end{claim}
\begin{proof}[Proof of Claim \ref{clm:zero-at-a-curve}]
To the contrary, assume that $(K_X+\Delta +tC)\cdot \Gamma_i>0$ for all $i\in I$. We claim that there is a $\delta>0 $ such that $(K_X+\Delta +tC)\cdot \Gamma _i\geq \delta$ for all $i\in I$. To see this, recall that $m\>1$ is the smallest positive integer such that $m(K_X+\Delta)$ and $mC$ are both Cartier; then $(K_X+\Delta)\cdot \Gamma_i$ and $C\cdot \Gamma_i$ belong to $\frac 1m \mathbb Z$ for all $i\in I$.
Since $0>(K_X+\Delta )\cdot \Gamma_i=(K_T+\Delta _T)\cdot \Gamma_i\geq -6$ by Corollary \ref{cor:nQ-unified-cone}, the intersection numbers
$(K_X+\Delta)\cdot \Gamma_i$ are contained in a finite set $\mathcal K\subset\frac{1}{m}\mathbb{Z}$.
But then, since $t>0$ is a fixed number, the set $( \mathcal K+\frac tm\cdot \mathbb N)\cap \mathbb R _{>0}$ is a DCC set and hence has a positive minimum $\delta>0$, i.e. $(K_X+\Delta+tC)\cdot\Gamma_i\>\delta$ for all $i\in I$.\\
Now comparing with Remark \ref{rmk:choice-of-epsilon} we see that $I\subset \Lambda$, and hence $\delta\>\gamma$. Then from our choice of $\epsilon>0$ in equation \eqref{eqn:choosing-epsilon}, it follows that
\[
0<\epsilon<\frac{t\gamma}{\gamma+6}\<\frac{t\delta}{\delta+6}.
\]
Thus, using the identity $t(K_T+\Delta_T+(t-\epsilon)C|_T)=(t-\epsilon)(K_T+\Delta_T+tC|_T)+\epsilon(K_T+\Delta_T)$, we have \[t(K_T+\Delta _T+(t-\epsilon )C|_T)\cdot \Gamma_i\geq (t-\epsilon)\delta +\epsilon (K_T+\Delta _T)\cdot \Gamma_i\geq (t-\epsilon)\delta -6\epsilon=t\delta-\epsilon(\delta+6)>0\]
for all $i\in I$, and if $\eta\in \overline {\rm NA}(T)_{K_T+\Delta _T\geq 0}$, then
\[t(K_T+\Delta _T+(t-\epsilon )C|_T)\cdot \eta = (t-\epsilon)(K_T+\Delta _T+tC|_T)\cdot \eta +\epsilon (K_T+\Delta _T)\cdot \eta \geq 0.\]
Therefore, by the cone theorem on $T$ (see Corollary \ref{cor:nQ-unified-cone}), $K_T+\Delta _T+(t-\epsilon )C|_T$ intersects every class $\eta \in \overline {\rm NA}(T)$ non-negatively. Then by \cite[Proposition 3.6]{HP16}, $K_T+\Delta _T+(t-\epsilon )C|_T$ is nef, which is a contradiction.
\end{proof}~\\
Now let $\{R_j\}_{j\in J_T}$ be the countable set of all $(K_T+\Delta_T)$-negative extremal rays which are spanned by rational curves $\Gamma_j\subset T$ as in Corollary \ref{cor:nQ-unified-cone}, for some component $T$ of $S=\lfloor \Delta \rfloor$. Let $J'_T\subset J_T$ be the subset indexing the $(K_T+\Delta_T+tC|_T)$-trivial rays, and set $J':=\cup _{T\in S}J'_T$ and $J:=\cup _{T\in S}J_T$, where $T\in S$ means $T$ is a component of $S$. By the above claim, $J'\ne \emptyset$. Let $\bar R_j\subset N_1(X)$ be the image of $R_j$ for $j\in J$, and let $\mathcal C\subset N_1(X)$ be the cone corresponding to the image of $\sum_{T\in S}{\operatorname{\overline{NA}}}(T)$. Note that since ${\operatorname{\overline{NA}}}(T)={\operatorname{\overline{NA}}}(T)_{K_T+\Delta _T\geq 0}+\sum _{j\in J_T}R_j$ for each component $T\subset S$, we have $\mathcal C=\mathcal C _{K_X+\Delta \geq 0}+\sum _{j\in J}\bar R_j$. Moreover, $\mathcal C \subset {\operatorname{\overline{NA}} }(X)$ and
\[\{ \bar R_j\; |\; j\in J'\}\subset (K_X+\Delta +tC)^\perp\cap {\operatorname{\overline{NA}} }(X).\]
Let $\omega \in N^1(X)$ be a very general K\"ahler class as in Lemma \ref{lem:very-general-kahler}, and
\[\lambda:={\rm inf}\{l>0\;|\;(-tC+l\omega ) \cdot \bar R_j\geq 0\ \mbox{ for all } j\in J'\}.\]
\begin{claim}\label{clm:unique-ray} There is a unique ray $ \bar R_{j'}$ for some $j'\in J'$ such that $(-tC+\lambda\omega)\cdot \bar R_{j'}=0$.\end{claim}
\begin{proof} Since $\omega $ is very general in $N^1(X)$ as in Lemma \ref{lem:very-general-kahler}, it suffices to show that there is one such ray.
By definition of $\lambda$, for each $n\>1$, there is a $j'_n \in J'$ such that $(-tC+(\lambda-1/n)\omega)\cdot\Gamma_{j'_n}<0$, where $\bar{R}_{j'_n}=\mathbb{R}^+\cdot[\Gamma_{j'_n}]$. Then we have
\[(K_T+\Delta_T +(\lambda/2)\omega )\cdot \Gamma_{j'_n}= (K_X+\Delta+(\lambda/2)\omega)\cdot\Gamma_{j'_n}=
((\lambda/2)\omega-tC)\cdot \Gamma_{j'_n}<0\]
for all $n>\frac{2}{\lambda}$. By the cone theorem (Corollary \ref{cor:nQ-unified-cone}) there are only finitely many $(K_T+\Delta_T +(\lambda/2)\omega)$-negative extremal rays and so the $\Gamma _{j'_n}$ correspond to only finitely many distinct numerical equivalence classes in $N_1(T)$, and hence in $N_1(X)$.
Thus, there is an index $j'\in J'$ such that $(-tC+(\lambda-1/n)\omega)\cdot\Gamma_{j'}<0$ for infinitely many $n>0$, and hence $(-tC+\lambda\omega)\cdot\Gamma_{j'}\leq 0$. Then from our construction of $\lambda$ above it follows that $(-tC+\lambda\omega)\cdot\Gamma_{j'} =0$.
\end{proof}
Re-scaling $\omega$, we may assume that $(-tC+\omega )\cdot \bar R_{j'}=0$ and $(-tC+\omega )\cdot \bar R_{j}>0$ for all $R_j\ne R_{j'}$, $j\in J'$. Now recall that $m\>1$ is the smallest positive integer such that $m(K_X+\Delta)$ and $mC$ are both Cartier.
\begin{claim}\label{clm:nef-but-not-kahler} For any $0< \varepsilon \ll 1$, the class
$\alpha_{\varepsilon} :=K_X+\Delta +(1-\varepsilon) tC+\varepsilon \omega \in N^1(X)$ is nef but not K\"ahler.
\end{claim}
\begin{proof}
We begin by showing that $ \mathcal C \subset (\alpha_\varepsilon)_{\geq 0}$, or equivalently, that $\alpha _\varepsilon |_T$ is nef for all components $T$ of $S$.
Write
\[\alpha_\varepsilon=(1-\varepsilon)(K_X+\Delta + tC)+\varepsilon (K_X+\Delta +\omega ).\]
It then follows that, if $ \mathcal C $ is not contained in $ (\alpha_\varepsilon)_{\geq 0}$, then there is a $(K_X+\Delta +\omega)$-negative extremal ray $\bar R_j$ for some $j\in J$ such that $\alpha_\varepsilon\cdot \bar R_j<0$.
Note that this set of rays, say indexed by the set $\Lambda'$, is finite by Corollary \ref{cor:nQ-unified-cone} (applied to each component $T$ of $S$). So, in particular, we may assume that there exists a $\gamma' >0$ such that if $j\in\Lambda'$ and $(K_X+\Delta + tC)\cdot \bar R_j>0$, then $(K_X+\Delta + tC)\cdot \Gamma _j>\gamma'$, where $\bar R_j=\mathbb R _{\geq 0}[\Gamma _j]$.
But then $\alpha _\varepsilon \cdot \Gamma _j\geq (1-\varepsilon)\gamma'-6\varepsilon>0$ for $\varepsilon <\gamma' /(6+\gamma')$, which is a contradiction.
Therefore, we may assume that $(K_X+\Delta + tC)\cdot \bar R_j=0$ for all $j\in\Lambda'$, i.e. $\Lambda'\subset J'$.
But then, by Claim \ref{clm:unique-ray}, $(-tC+\omega)\cdot \bar R_j\geq 0$ for all $j\in\Lambda'$, and so
\[\alpha _\varepsilon \cdot \bar R_j=(K_X+\Delta + tC)\cdot \bar R_j+\varepsilon(-tC+\omega)\cdot \bar R_j\geq 0,\]
which contradicts the fact that $\alpha_{\varepsilon}\cdot\bar R_j<0$ for all $j\in\Lambda'$.
Thus $\alpha _\varepsilon |_T$ is nef for all components $T$ of $S$.
Now if $\alpha _\varepsilon$ is not nef on $X$, then by Theorem \ref{thm:nef-restricts-to-pseff} there is a subvariety $V\subset X$ such that $\alpha _\varepsilon |_V$ is not pseudo-effective. Since $\alpha _\varepsilon=K_X+\Delta +(1-\varepsilon)t C+\varepsilon \omega$ and $\omega $ is K\"ahler, $(K_X+\Delta +(1-\varepsilon)t C) |_V$ is not pseudo-effective. Observe that
\[
K_X+\Delta+(1-\varepsilon)tC=\frac{(1-\varepsilon)t+\alpha}{t+\alpha}(K_X+\Delta+tC)+\frac{\varepsilon t}{t+\alpha}(K_X+\Delta-\alpha C)
\]
and thus $(K_X+\Delta-\alpha C)|_V\equiv D'|_V$ is not pseudo-effective. Since ${\rm Supp}\, D'\subset S$, it follows that there is a component $T$ of $S$ such that $V\subset T$. In particular, $\alpha_{\varepsilon}|_T$ is not nef, contradicting the fact that $\alpha_{\varepsilon}|_T$ is nef for every component $T$ of $S$, as proved above.
\end{proof}~\\
From what we have proved above it follows that $\mathcal C \cap (\alpha_\varepsilon)^\bot=\bar R_{j'}$ for the unique $j'\in J'$ as in Claim \ref{clm:unique-ray}. Thus $\bar R_{j'}\subset \alpha_\varepsilon^\bot\cap\operatorname{\overline{NA}}(X)$. Note that a priori we do not know whether this inclusion is an equality. However, we have the following:
\[\alpha:=\frac 1 \varepsilon \alpha _\varepsilon =K_X+\Delta +\omega+\frac{1-\varepsilon}\varepsilon(K_X+\Delta +tC)=K_X+\Delta +\omega_\varepsilon,\]
where
\begin{enumerate}
\item $\omega _\varepsilon:=\omega+\frac{1-\varepsilon}\varepsilon(K_X+\Delta +tC) $ is K\"ahler,
\item $\alpha$ is nef, and
\item $\alpha ^\perp \cap \mathcal C =\bar R _{j'}\subset \alpha ^\perp \cap \operatorname{\overline{NA}}(X)$.
\end{enumerate}
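Indeed, the identity for $\alpha$ above is just the convex decomposition $\alpha_\varepsilon=(1-\varepsilon)(K_X+\Delta+tC)+\varepsilon(K_X+\Delta+\omega)$ divided by $\varepsilon$:
\[
\frac{1}{\varepsilon}\alpha_\varepsilon=\frac{1-\varepsilon}{\varepsilon}(K_X+\Delta+tC)+(K_X+\Delta+\omega),
\]
and (1) holds since $\omega$ is K\"ahler and $K_X+\Delta+tC$ is nef.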
Then we have
\[R_{j'}\subset F:=(\alpha |_T)^\perp \cap \operatorname{\overline{NA}}(T)\]
for some component $T$ of $S$.\\
Note that this inclusion could be strict; nevertheless, from Corollary \ref{cor:nQ-unified-cone} it follows that $F$ is spanned by a finite collection
of $(K_T+\Delta _T)$-negative extremal rays $\{R_j\}_{j\in J''}$ such that $(K_T+\Delta _T+tC|_T)\cdot R_j=0$, i.e. $J''\subset J'$. Note that $R_{j'}$ is one of these extremal rays. By Corollary \ref{cor:contraction-non-q-factorial}, there exists a projective contraction $\varphi:T\to W$ to a normal compact K\"ahler variety $W$ contracting the face $F$ such that $\alpha |_T =\varphi ^*\alpha _W$, where $\alpha _W$ is a K\"ahler class on $W$. Let $R_j$ be generated by the curve $\Sigma_j\subset T$ and $J''=\{1,2,\ldots, r\}$, i.e. $R_j=\mathbb{R}^+\cdot[\Sigma_j]$ for all $j=1,2,\ldots, r$. Then by our construction $(K_X+\Delta+\omega_\varepsilon)\cdot\Sigma_j=0$ for all $j=1,2,\ldots, r$. Note that $R_{j'}=\mathbb{R}^+\cdot[\Sigma_{j'}]$, where $\Sigma_{j'}=\Sigma_j$ for some $j\in\{1,2,\ldots, r\}$. Let $\bar R_{j'}$ be the image of $R_{j'}$ in $N_1(X)$ and write $\bar R_{j'}=\mathbb{R}^+\cdot[\Gamma_{j'}]\subset \operatorname{\overline{NA}}(X)$. Then $\mathbb{R}^+\cdot[\Sigma_{j'}]=\mathbb{R}^+\cdot[\Gamma_{j'}]$ in $N_1(X)$. Now recall that, since $\omega$ is very general (and hence so is $\omega_\varepsilon$), $(K_X+\Delta+\omega_\varepsilon)\cdot \Gamma_{j'}=0$ and $(K_X+\Delta+\omega_\varepsilon)\cdot\Gamma_j>0$ for all $j\neq j'\in J'$. Therefore
\begin{equation}\label{eqn:num-equivalent-on-x}
\mathbb{R}^+\cdot [\Sigma_j]=\mathbb{R}^+\cdot[\Sigma_{j'}]=\mathbb{R}^+\cdot[\Gamma_{j'}] \mbox{ in } N_1(X)\mbox{ for all } j=1,2,\ldots, r.
\end{equation}
Next we claim that $\mathcal{O}_T(-mT)$ is $\varphi$-ample. First observe that
\[\mbox{NE}(T/W)=\operatorname{\overline{NE}}(T/W)=\left\{\sum_{j=1}^r a_j[\Sigma_j]\; |\; a_j\>0 \mbox{ for all }j\right\}. \]
Therefore by \cite[Proposition 4.7(3)]{Nak87} it is enough to show that $-T\cdot \Sigma_j>0$ for all $j\in J''$.
Now let $\Sigma\subset T$ be a curve in a fiber of $\varphi$ such that $\Sigma$ is not contained in ${\rm Supp}(S-T)$. Then there are real numbers $a_j\geq 0$ for all $j\in J''$ such that $[\Sigma]=\sum _{j\in J''}a_j[\Sigma_j]$ in $N_1(T)$. Now recall that $tC\cdot\Sigma_j=-(K_X+\Delta)\cdot\Sigma_j=-(K_T+\Delta_T)\cdot\Sigma_j>0$, and thus $D'\cdot\Sigma_j<0$ for all $j\in J''$. Write $D'=bT+D''$ such that $b>0$ and $D''$ doesn't contain $T$ as a component. Then $(bT+D'')\cdot \Sigma=\sum _{j\in J''}a_j(D'\cdot \Sigma_j)<0$, and hence $T\cdot \Sigma<0$, since $D''\cdot \Sigma\>0$ by construction of $\Sigma$. But from equation \eqref{eqn:num-equivalent-on-x} it follows that $\mathbb{R}^+\cdot[\Sigma_j]=\mathbb{R}^+\cdot[\Sigma]$ for all $j=1, 2,\ldots, r$. Hence $T\cdot\Sigma_j<0$ for all $j=1,2,\ldots, r$.\\
Then by \cite[Proposition 7.4]{HP16}, $\varphi$ extends to a projective bimeromorphic morphism $\phi:X\to Y$ to a normal compact analytic variety $Y$ such that $\phi|_T=\varphi$. Note that by construction $-(K_X+\Delta)$ is $\phi$-ample. Then from Lemma \ref{lem:rational-singularities} it follows that $Y$ has rational singularities. Consequently, by Lemma \ref{l-HP16} we have $\alpha =\phi^*\omega _Y$ for some $(1, 1)$ class $\omega _Y$ on $Y$.
Clearly $\omega_Y$ is nef and big. Following the arguments of Theorem \ref{thm:special-effective-dlt-mmp}, it follows that if $V$ is a subvariety of $Y$ of positive dimension, then $(\omega _Y|_V)^{\operatorname{dim} V}>0$ as long as $V$ is contained in $W$ or in the image of the support of $D'$ or not contained in the image of the support of $D$.
Thus, we may assume that $V'$, the strict transform of $V$, is contained in the support of $D$ but not in the support of $D'$. Then we write
\[\alpha _\varepsilon =K_X+\Delta+(1-\varepsilon)tC+\varepsilon \omega =(1-\lambda)(K_X+\Delta-\alpha C) +\lambda (K_X+\Delta +tC)+\varepsilon \omega, \]
where $\lambda =\frac {(1-\varepsilon)t+\alpha}{\alpha +t}$, so that $0<\lambda < 1$.
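Indeed, with this choice of $\lambda$ one checks directly that
\[
1-\lambda=\frac{\varepsilon t}{\alpha+t}\qquad\mbox{and}\qquad \lambda t-(1-\lambda)\alpha=\frac{t\big((1-\varepsilon)t+(1-\varepsilon)\alpha\big)}{\alpha+t}=(1-\varepsilon)t,
\]
which gives the displayed decomposition of $\alpha_\varepsilon$.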
Since $(K_X+\Delta-\alpha C)|_{V'}\equiv D'|_{V'}\geq 0$, $(K_X+\Delta +tC)|_{V'}$ is nef and $\omega|_{V'}$ is K\"ahler, it follows that $\alpha _\varepsilon |_{V'}$ is big, and so $\omega _Y|_V$ is also big.
Then from Theorem \ref{thm:nef-big-to-kahler} it follows that $\omega_Y$ is a K\"ahler class, and hence $Y$ is a K\"ahler variety.
In particular, $\operatorname{Null}(\alpha)=\operatorname{Ex}(\phi)$. Also, observe that from the discussion above it follows that a curve $\Gamma\subset X$ is contracted by $\phi$ if and only if $\mathbb{R}^+\cdot[\Gamma]=\mathbb{R}^+\cdot[\Gamma_{j'}]=\bar R_{j'}$ in $N_1(X)$. Thus it follows that $\alpha^\bot\cap\operatorname{\overline{NA}}(X)=\bar R_{j'}$, and hence from Lemma \ref{l-HP16} again it follows that $\rho(X/Y)=\operatorname{dim}_{\mathbb{R}}H^{1,1}_{\operatorname{BC}}(X)-\operatorname{dim}_{\mathbb{R}}H^{1,1}_{\operatorname{BC}}(Y)=1$.\\
Now if $\phi:X\to Y$ is a divisorial contraction, then we replace $(X, \Delta)$ by $(Y, \phi_*\Delta)$. Note that $K_Y+\phi_*\Delta+t\phi_*C$ is nef on $Y$. If $\phi$ is a flipping contraction, then the flip $\phi':X'\to Y$ exists by Corollary \ref{c-fg1}. Let $\psi:X\dashrightarrow X'$ be the induced bimeromorphic map. Then from a standard argument it follows that $(X',\psi_*\Delta)$ is a $\mathbb{Q}$-factorial dlt pair, $K_{X'}+\psi_*(\Delta+tC)$ is nef (as $(K_X+\Delta +tC)\cdot R_{j'}=0$), $K_{X'}+\psi_*\Delta\equiv\psi_*D$ and $\psi_* D=(\alpha/t)\psi_*(tC)+\psi_*D'$, where the support of $\psi_*D'$ is contained in the support of $\psi_* S$.
Therefore, replacing
\[X,\Delta , S,B,C,D,D',\alpha\qquad {\rm by}\qquad X',\psi_*\Delta ,\psi_*S,\psi_*B,\psi_*(tC),\psi_*D,\psi_*D',\frac \alpha t,\]
the hypotheses still hold and we may repeat the procedure. In this way we obtain a sequence of flips and divisorial contractions for the $(K_X+\Delta)$-MMP with scaling of $C$. Since $K_X+\Delta \sim_\mathbb{Q} D\geq 0$, this procedure terminates after finitely many steps by Theorem \ref{thm:effective-termination}.\\
\end{proof}~\\
\begin{lemma}\label{lem:extracting-divisor}
Let $(X, B)$ be a compact K\"ahler lc pair of dimension $4$ and $\{E_i\}_{i\in I}$ is a finite set of exceptional divisors over $X$ with $a(E_i, X, B)\<0$ for all $i\in I$. Then there exists a $\mathbb{Q}$-factorial dlt pair $(X', B')$ and projective bimeromorphic morphism $f:X'\to X$ such that the following holds:
\begin{enumerate}
\item $K_{X'}+B'=f^*(K_X+B)$.
\item Every $E_i$ is an $f$-exceptional divisor, and every $f$-exceptional divisor $F$ satisfies either $F=E_i$ for some $i\in I$ or $a(F, X, B)=-1$.
\end{enumerate}
\end{lemma}
\begin{proof}
Let $g:Y\to X$ be a log resolution of $(X, B)$ which extracts all the divisors $\{E_i\}_{i\in I}$. Let $\{F_j\}_{j\in J}$ be the set of all $g$-exceptional divisors, and let $J'\subset J$ be such that $\{F_j\}_{j\in J'}=\{E_i\}_{i\in I}$. We define $B_Y:=g^{-1}_*B-\sum_{j\in J'}a(F_j, X, B)F_j+\sum_{j\in J\setminus J'}F_j$. Observe that $B_Y\>0$ is an effective divisor and
\[
K_Y+B_Y=g^*(K_X+B)+\sum_{j\in J\setminus J'}(1+a(F_j, X, B))F_j.
\]
Now we run a $(K_Y+B_Y)$-MMP over $X$ as in the proof of Theorem \ref{thm:global-dlt-model} and obtain a $\mathbb{Q}$-factorial dlt pair $(X', B')$ such that $K_{X'}+B'$ is nef over $X$. Let $f:X'\to X$ be the induced bimeromorphic morphism. Then from the negativity lemma it follows that $K_{X'}+B'=f^*(K_X+B)$.
\end{proof}~\\
\begin{definition}
Let $X$ be a normal variety and $D=\sum a_iD_i$ an $\mathbb{R}$-divisor. Then we define $D^{\<1}:=\sum a'_iD_i$, where $a'_i=\min\{a_i, 1\}$. \\
\end{definition}
\begin{proof}[Proof of Theorem \ref{thm:effective-dlt-mm}] We closely follow the proof of \cite[Proposition 3.4]{Bir10} using Theorem \ref{t-scale} as our main technical tool for running MMP with scaling.\\
Let $(W, \Delta)$ be a log pair, i.e. $\Delta\>0$ is a $\mathbb{Q}$-divisor such that $K_W+\Delta$ is $\mathbb{Q}$-Cartier. We will call $(W, \Delta)$ an effective pair if there exists an effective $\mathbb{Q}$-Cartier divisor $D\>0$ such that $K_W+\Delta\sim_\mathbb{Q} D$. We will denote such a pair by the triple $(W, \Delta, D)$. Let $\mathcal{M}$ be the collection of all $4$-dimensional triples $(X, B, M)$ such that $(X, B)$ is a $\mathbb{Q}$-factorial dlt pair with $K_X+B\sim_\mathbb{Q} M\>0$ and $(X, B)$ does not admit a log minimal model. Let $\theta(X, B, M)$ be the number of components $P$ of $M$ such that ${\rm mult}_P(B)<1$. Pick $(X, B, M)\in\mathcal{M}$ such that $\theta(X, B, M)$ is minimal. If $\theta(X, B, M)=0$, then ${\rm Supp}\, M\subset \lfloor B\rfloor$, and thus by Theorem \ref{thm:special-effective-dlt-mmp}, $(X, B)$ has a log minimal model (in fact a log terminal model); hence $(X, B, M)\not\in \mathcal{M}$, a contradiction. So assume that $\theta(X, B, M)>0$. Let $f:Y\to X$ be a log resolution of the pair $(X, B+M)$. Let $E$ be the reduced sum of all exceptional divisors of $f$. Then $(Y, B_Y:=f^{-1}_*B+E)$ is a log smooth dlt pair and
\[
M_Y:=(K_Y+B_Y)-f^*(K_X+B)+f^*M\sim_\mathbb{Q} K_Y+B_Y.
\]
Note that $M_Y\>0$ is an effective divisor, since $(X, B)$ is dlt. Moreover, the components of $M_Y$ are either the components of $f^{-1}_*M$ or $f$-exceptional divisors, and
\begin{equation}\label{eqn:theta}
\theta(Y, B_Y, M_Y)=\theta(X, B, M).
\end{equation}
Observe that, if $(Y, B_Y)$ has a log minimal model, then $(X, B)$ also has a log minimal model (see \cite[Remark 2.6(i)]{Bir10}). Therefore, replacing $(X, B, M)$ by $(Y, B_Y, M_Y)$, we may assume that $(X, B+M)$ is a log smooth pair. Define $\alpha>0$ as follows:
\[
\alpha:=\min\{t>0\;:\; \lfloor (B+tM)^{\<1}\rfloor\neq \lfloor B\rfloor\}.
\]
Note that $\alpha$ is a rational number, since $B$ and $M$ are $\mathbb{Q}$-divisors. We can write $(B+\alpha M)^{\<1}=B+C$, where $C$ is an effective $\mathbb{Q}$-divisor such that ${\rm Supp}\; C\subset {\rm Supp} M$. Moreover, we can write $\alpha M=C+M'$ such that ${\rm Supp} M'\subset {\rm Supp} \lfloor B\rfloor$, and $C=\alpha M$ outside of ${\rm Supp} \lfloor B\rfloor$. In particular, ${\rm Supp} M\subset {\rm Supp} (B+C)$.\\
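As a purely hypothetical illustration of this decomposition: if $B=S$ for a prime divisor $S$ and $M=S+Q$ with $Q\neq S$ prime, then the coefficient of $S$ in $(B+tM)^{\<1}$ is capped at $1$ while that of $Q$ equals $t$, so $\alpha=1$, $C=Q$ and $M'=S$; in particular, ${\rm Supp}\, M'\subset\lfloor B\rfloor$ and $C=\alpha M$ outside of ${\rm Supp}\, \lfloor B\rfloor$, as claimed.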
Now observe that we have $K_X+B+C\sim_{\mathbb{Q}} M+C$, where $(X, B+C)$ is a log smooth dlt pair and $\theta(X, B+C, M+C)<\theta(X, B, M)$. Therefore, by the minimality of $\theta$, it follows that $(X, B+C)$ has a log minimal model, say $(Y, B_Y+C_Y+E)$, where $\phi:X\dashrightarrow Y$ is the induced bimeromorphic map and $E$ is the sum of all exceptional divisors of $\phi^{-1}$. If $D$ is a divisor on $X$, we will denote $\phi_*D$ by $D_Y$ from now on.
Observe that $K_Y+B_Y+E\sim_{\mathbb{Q}} M_Y+E$, where $M_Y:=\phi_*M$, since $K_X+B\sim_{\mathbb{Q}} M$. Moreover, since $\alpha M=C+M'$ on $X$ for some $\mathbb{Q}$-divisor $M'\>0$ with ${\rm Supp}\, M'\subset \lfloor B\rfloor$, it follows that $M_Y+E=(\frac{1}{\alpha}M'_Y+E)+\frac{1}{\alpha}C_Y$ with ${\rm Supp} (M'_Y+E)\subset \lfloor B_Y+E\rfloor$. Then the hypotheses of Theorem \ref{t-scale} are satisfied and we can run a $(K_Y+B_Y+E)$-MMP with the scaling of $C_Y$; by Theorem \ref{t-scale}, this MMP terminates with $Y\dashrightarrow Y'$ such that $K_{Y'}+B_{Y'}+E_{Y'}$ is nef.
Note that this is a nef model of $(X,B)$; however, it is not clear whether it is a log minimal model of $(X, B)$ or not, since the strict inequality $a(P,X,B)< a(P,Y',B_{Y'}+E_{Y'})$ does not necessarily hold for every divisor $P$ on $X$ exceptional over $Y'$.
Let \[\mathcal T=\{t\in [0,1]\; |\; K_X+B+tC \mbox{ has a log minimal model}\}.\]
Note that using the minimality of $\theta(X, B, M)$ we have already shown above that $(X, B+C)$ has a log minimal model, i.e. $1\in \mathcal T$. Now our goal is to show that $0\in \mathcal T$.
For any $0<t\in \mathcal T$, let $\phi_t:X\dasharrow Y_t$ be a log minimal model for $K_X+B+tC$ such that $K_{Y_t}+B_t+E_t+tC_t$ is nef. Proceeding as above, we run a $(K_{Y_t}+B_t+E_t)$-MMP with the scaling of $tC_t$ as in Theorem \ref{t-scale}.
Since $a(P,X,B+tC)< a(P,Y_{t},B_{t}+tC_t+E_{t})$ for any divisor $P$ on $X$ exceptional over $Y_{t}$, we also have that $a(P,X,B+t'C)< a(P,Y_{t},B_{t}+t'C_t+E_{t})$ for any divisor $P$ on $X$ exceptional over $Y_{t}$ and $0\leq t-t'\ll 1$.
But then, this MMP with the scaling of $tC_t$ also yields a log minimal model for $K_X+B+t'C$ for $0\leq t-t'\ll 1$. Thus $[t', t]\subset\mathcal T$.
Let $\tau ={\rm inf}\{t\in \mathcal T \}$. By what we have seen above, if $\tau \in \mathcal T$, then $\tau =0$ and we are done. Suppose therefore that $\tau \not \in \mathcal T$ and $t_k\in \mathcal T$ is a strictly decreasing sequence with $\lim t_k=\tau$; we will derive a contradiction.
For each $k\>1$, let $(Y_{t_k}, B_{t_k}+t_kC_{t_k}+E_{t_k})$ be a log minimal model of $(X, B+t_kC)$ whose existence is guaranteed by the definition of $\mathcal T$. Then we get a nef model $(Y'_{t_k}, B'_{t_k}+E'_{t_k}+\tau C'_{t_k})$ of $(X, B+\tau C)$ by running a $(K_{Y_{t_k}}+B_{t_k}+E_{t_k}+\tau C_{t_k})$-MMP with the scaling of $(t_k-\tau)C_{t_k}$ as in Theorem \ref{t-scale}.
Let $D\subset X$ be a divisor contracted by $X\dasharrow Y'_{t_k}$, then by the arguments in Step 5 of the proof of \cite[Proposition 3.4]{Bir10}, we have \[a(D,X,B+t_kC)<a(D,Y'_{t_k},B'_{t_k}+\tau C'_{t_k}+E'_{t_k}).\]
Passing to a subsequence of the $t_k$, we may assume that $X\dasharrow Y'_{t_k}$ contracts a fixed set of components of the support of $B+C$.
By \cite[Claim 3.5]{Bir10} we have that
\[a(D,Y'_{t_k},B'_{t_k}+\tau C'_{t_k}+E'_{t_k})=a(D,Y'_{t_{k+1}},B'_{t_{k+1}}+\tau C'_{t_{k+1}}+E'_{t_{k+1}})\]
for every divisor $D$ over $Y'_{t_k}$ and for all $k\>1$.
It then follows that
\[a(D,X,B+\tau C)=\lim a(D,X,B+t_kC) \leq a(D,Y'_{t_k},B'_{t_k}+\tau C'_{t_k}+E'_{t_k}).\]
This is not yet a log minimal model, because we need the inequality to be strict for every divisor $D$ on $X$ exceptional over $Y'_{t_k}$. To remedy this, it suffices to construct a bimeromorphic model $\nu:Y^\sharp\to Y'_{t_k}$ which extracts exactly the divisors $D$ on $X$ exceptional over $Y'_{t_k}$ such that
$a(D,X,B+\tau C)=a(D,Y'_{t_k},B'_{t_k}+\tau C'_{t_k}+E'_{t_k})$ holds. Note that $a(D,X,B+\tau C)\leq 0$ and $(Y'_{t_k},B'_{t_k}+\tau C'_{t_k}+E'_{t_k})$ is lc, so this can be done by Lemma \ref{lem:extracting-divisor}. Let $K_{Y^\sharp}+B_{Y^\sharp}+\tau C_{Y^\sharp}=\nu ^*(K_{Y'_{t_k}}+B'_{t_k}+\tau C'_{t_k}+E'_{t_k})$ such that $\nu_*B_{Y^\sharp}=B'_{t_k}+E'_{t_k}$; then $({Y^\sharp},B_{Y^\sharp}+\tau C_{Y^\sharp})$ is a $\mathbb Q$-factorial dlt pair and $a(D,X,B+\tau C)<a(D,{Y^\sharp},B_{Y^\sharp}+\tau C_{Y^\sharp})$ for every divisor $D$ on $X$ exceptional over $Y^\sharp$. Therefore $X\dasharrow Y^\sharp$ is a log minimal model of $(X,B+\tau C)$. Thus, we have shown that $\tau \in \mathcal T$, which is a contradiction.
\end{proof}~\\
\begin{corollary}\label{cor:klt-ltm}
Let $(X, B)$ be a $\mathbb{Q}$-factorial compact K\"ahler plt pair of dimension $4$. Then $(X, B)$ has log terminal model.
\end{corollary}
\begin{proof}
This follows from Theorem \ref{thm:effective-dlt-mm} and Lemma \ref{lem:lmm-to-ltm}.
\end{proof}
\section{MMP for Semi-stable pairs}
The main result of this section is Theorem \ref{thm:ss-mmp}. We start with various definitions and establish necessary results first.
\begin{definition}\label{def:klt-semi-stable-pair}
Let $f:X\to T$ be a proper surjective morphism from a normal K\"ahler variety $X$ to a smooth curve $T$ and $W\subset T$ a compact subset. Let $B\>0$ be an effective $\mathbb{Q}$-divisor on $X$. We say that $(X, B/T;W)$ is a \textit{semi-stable klt pair} if $(X,X_w+B)$ is plt for any $w\in W$. It is well known that this implies (and is in fact equivalent to) the following conditions:
\begin{enumerate}
\item the fibers $X_w$ of $f$ are all reduced, irreducible and normal,
\item ${\rm Supp} B$ does not contain any fiber $X_w$, and
\item $K_X+B$ is $\mathbb{Q}$-Cartier and $(X_w, B_w)$ is klt, where $B_w:=B|_{X_w}$.
\end{enumerate}
\end{definition}
By abuse of notation, we will occasionally omit $W$ and simply say that $f:(X, B)\to T$ is a semi-stable klt pair to mean that $(X, B/T; W)$ is a semi-stable klt pair.
We wish to run a relative MMP for $K_X+B$ over $T$ in a neighborhood of $W$ (so we will repeatedly replace $T$ by an appropriate neighborhood of $W$). We will say that $K_X+B$ is nef over $W$ if $K_{X_w}+B_w=(K_X+B)|_{X_w}$ is nef for every $w\in W$.\\
\begin{definition}\label{def:Neron-Severi-group}
Let $f:X\to T$ be a proper morphism from a normal analytic variety $X$ to a smooth curve $T$ such that every fiber of $f$ is an irreducible and reduced normal complex space. Let $W\subset T$ be a fixed compact subset and $U\subset T$ an open neighborhood of $W$.\\
If $\tau$ is a real closed bi-dimension $(1, 1)$ current on $X_u$ for some $u\in U$, then for any real closed $(1, 1)$ form $\eta$ on $f^{-1}U $ with local potentials, we define
\[
\tau (\eta):=(\iota_{u,*}\tau)(\eta)=\tau (\eta|_{X_u}),
\]
where $\iota_u:X_u\hookrightarrow X$ is the closed embedding.\\
We define $N_1(X/T;W)$ to be the vector space generated by the real closed bi-dimension $(1, 1)$ currents $\tau$ on $X_w$ as $w$ varies in $W$, modulo the following equivalence relation: \[ \tau_1\equiv \tau_2 \mbox{ if and only if } \tau_1(\alpha)=\tau_2(\alpha) \] for all classes $\alpha\in H^{1,1}_{\rm BC}(X_{U})$, for some open neighborhood $U\subset T$ of $W$ such that $X_U=f^{-1}U\supset f^{-1}W$.
We define $\operatorname{\overline{NA}}(X/T, W)\subset N_1(X/T, W)$ to be the closed cone generated by the classes of closed positive currents.
We also define $N^1(X_U/U, W)$ as the vector space generated by the classes $\alpha\in H^{1,1}_{\operatorname{BC}}(X_U)$ modulo the following equivalence relation:
\[
\alpha_1\equiv\alpha_2 \mbox{ if and only if } [\tau](\alpha_1)=[\tau](\alpha_2)
\]
for $\tau$ real closed bi-dimension $(1, 1)$ currents on $X_w$ for all $w\in W$.
Note that if $U\supset U'$ are open subsets containing $W$, then there is a natural restriction map $N^1(X_U/U, W)\to N^1(X_{U'}/U', W)$. Finally let $N^1(X/T, W):=\varinjlim_{W\subset U} N^1(X_U/U, W)$.\\
We also define $\Pic(X/T, W)$ as the direct limit of $\Pic(f^{-1}U)$, where $W\subset U\subset T$ is an open neighborhood of $W$, i.e.
\[
\Pic(X/T, W):=\varinjlim_{W\subset U} \Pic(f^{-1}U).
\]
\end{definition}
\begin{remark}\label{rmk:infinite-dim}
We note that $N^1(X/T, W)$ and $N_1(X/T, W)$ could be infinite-dimensional vector spaces over $\mathbb{R}$, since $X$ and $T$ are not assumed to be compact here.
\end{remark}
\subsection{Relative cone theorem for $4$-folds} We now prove a weak form of the relative cone theorem for proper morphisms $f:X\to T$ from a K\"ahler variety to a curve. We say that a form $\omega$ or a class $\omega\in N^1(X/T; W)$ is relatively nef (resp. relatively K\"ahler) if $\omega _t:=\omega |_{X_t}$ is nef (resp. K\"ahler) for any $t\in T$.
\begin{lemma}\label{l-douady}
Let $f:X\to T$ be as above, let $\omega$ be a relatively K\"ahler form, and let $W\subset T$ be a compact subset. Fix $M>0$ and let $\{C_i\}_{i\in I}$ be the set of $f$-vertical curves such that $f(C_i)\subset W$ and $C_i\cdot \omega \leq M$.
Then the $C_i$ belong to finitely many families of curves.
\end{lemma}
\begin{proof}
Let $\eta$ be a K\"ahler form on $X$. Then for each $t\in W$ there exists an $\epsilon_t>0$ such that $(\omega -\epsilon_tf^*\eta)|_{X_t}$ is a K\"ahler form on $X_t$. It follows that $(\omega -\epsilon_t f^*\eta )|_{X_s}$ is K\"ahler for any $s$ in a neighborhood of $t$. Since $W$ is compact, we may pick an $\epsilon >0$ such that $(\omega -\epsilon f^*\eta )|_{X_t}$ is K\"ahler for every $t$ is a neighborhood of $W$.
Then, as $(\omega-\epsilon\eta)\cdot C_i>0$ for each $i\in I$,
\[
\eta\cdot C_i<\frac 1 \epsilon \omega \cdot C_i\leq \frac{M}{\epsilon}.
\]
By \cite[Theorem 5.5]{Tom21}, the curves $C_i$ belong to finitely many families. Note that \cite{Tom21} is applicable here because $(\eta, \eta^2,\ldots, \eta^{\operatorname{dim} X})$ can be taken as a degree system, the collection $\mathfrak C$ is the set of structure sheaves $\mathcal{O}_{X_t}$ of the fibers $X_t$ for $t\in T$, and $\mathfrak F$ is the collection of structure sheaves $\mathcal{O}_C$ of curves $C\subset X$ contained in the fibers of $f$.
\end{proof}
The following result gives a weak form of relative cone theorem for semi-stable klt pairs.
\begin{theorem}\label{thm:weak-cone0} Let $f:X\to T$ be a proper surjective morphism from a K\"ahler $4$-fold $X$ to a curve $T$ such that $f_*\mathcal{O}_X=\mathcal{O}_T$. Let $W\subset T$ be a compact subset and let $(X,B/T; W)$ be a semi-stable klt pair. Fix a K\"ahler form $\omega$ on $X$.
Then there are finitely many classes of curves $\{C_i\}_{i\in J}$ ($J$ is a finite set) over $W$ such that $0> C_i\cdot (K_X+B)\geq -6$ and for each $t\in W$
\[\overline{\rm NA}(X_t)=\overline{\rm NA}(X_t)_{(K_{X_t}+B_t+\omega_t)\geq 0}+\sum _{i\in J}\mathbb R ^+[C_i].\]
Suppose now that $K_{X_t}+B_t+\omega _t$ is nef for all $t\in W$, where $\omega _t:=\omega |_{X_t}$ is K\"ahler for all $t\in W$. Let \[\lambda :=\inf\{s\geq 0\;|\; K_{X_t}+B_t+s\omega _t \mbox{ is nef for all } t\in W\}.\]
If $\lambda >0$, then there are finitely many classes of curves $\{C_i\}_{i\in I}$ ($I\subset J$) over $W$ which satisfy the following properties:
\begin{enumerate}
\item $C_i\subset X_t$ for some $t\in W$, and $\mathbb R _{\geq 0}[C_i]$ is a $(K_{X_{t}}+B_{t})$-negative extremal ray of $\operatorname{\overline{NA}}(X_t)$ such that $(K_{X_{t}}+B_{t}+\lambda \omega _{t})\cdot C_i=0$,
\item if $C\subset X_t$ is a curve such that $(K_{X_t}+B_t+\lambda \omega _t)\cdot C=0$ for some $t\in W$, then $[C]\equiv \sum_{i\in I} c_i[C_i]$ in $H^{1,1}_{\operatorname{BC}}(X)$ for some $c_i\in \mathbb R_{\geq 0}$,
\item if $\omega \in N^1(X/T,W)$ is general, then $|I|=1$ (i.e. we may assume that there is a unique such class $[C_i]\in N_1(X/T,W)$).
\end{enumerate}
\end{theorem}
\begin{proof}
By Corollary \ref{cor:nQ-unified-cone}, for any $t\in W$ there are finitely many $(K_{X_t}+B_t+\omega_t)$-negative extremal rays, generated by curves $C_i$, $i\in J_t$, with $0> C_i\cdot (K_{X_t}+B_t)=C_i\cdot (K_X+B)\geq -6$.
Let $J=\cup _{t\in W}J_t$. Since $\omega \cdot C_i=\omega _t\cdot C_i<-(K_{X_t}+B_t)\cdot C_i\leq 6$, it follows from Lemma \ref{l-douady} that $J$ is finite. This proves the first statement.
Suppose now that $K_{X_t}+B_t+\omega _t$ is nef for all $t\in W$. Define the set \[\Lambda:=\{t\in W\; |\; K_{X_t}+B_t+\lambda\omega_t \mbox{ is nef but not K\"ahler} \}\subset T.\]
Then $\Lambda\ne \emptyset$, as otherwise arguing as in the proof of Lemma \ref{l-douady} above, one sees that $K_X+B+\lambda \omega $ is relatively K\"ahler over a neighborhood of $W$, which contradicts the definition of $\lambda$.
For any $t\in \Lambda$, we have $F_t:=(K_{X_{t}}+B_{t}+\lambda\omega_{t})^\bot\cap\operatorname{\overline{NA}}(X_{t})\neq \{0\}$ by \cite[Corollary 3.16]{HP16}. Moreover, from Corollary \ref{cor:nQ-unified-cone} it follows that $F_t$ is generated by finitely many classes of curves, each of which generates a $(K_{X_{t}}+B_{t})$-negative extremal ray.
Let $\Gamma:=\{C\subset X_t\;|\; t\in \Lambda,\ C \mbox{ generates a } (K_{X_t}+B_t)\mbox{-negative extremal ray such that } (K_{X_t}+B_t+\lambda\omega_t)\cdot C=0 \}$.
Then for a curve $C\in \Gamma$ we have $C\subset X_t$ for some $t\in\Lambda$, and
\[
\omega\cdot C=\omega_{t}\cdot C=\frac{-1}{\lambda}(K_{X_{t}}+B_{t})\cdot C\<\frac{6}{\lambda}.
\]
By Lemma \ref{l-douady} the curves in $\Gamma$ belong to finitely many families, and hence correspond to finitely many numerical classes. This proves (1).\\
For (2), let $C\subset X_t$ be a curve such that $(K_{X_t}+B_t+\lambda\omega_t)\cdot C=0$ for some $t\in W$. Then $[C]\in F_t$, and by Corollary \ref{cor:nQ-unified-cone} and part (1) above it follows that there is a subset $I'\subset I$ such that $F_t$ is generated by the curves $C_i$ for $i\in I'$. In particular, $[C]=\sum c_i[C_i]$ in $H^{1,1}_{\operatorname{BC}}(X_t)$ for some $c_i\in\mathbb{R}_{\geq 0}$, and hence also in $H^{1,1}_{\operatorname{BC}}(X)$.\\
For (3), notice that by part (1) the classes of $(K_{X_t}+B_t)$-negative extremal rays $[C_i]\in N_1(X/T,W)_{(K_{X}+B)\leq 0}$ form a finite set, and so for a general $\omega\in N^1(X/T,W)$ as in Lemma \ref{lem:very-general-kahler} we may assume that there is a unique such class.
\end{proof}
\begin{definition}
We say that $(X,B)$ is a minimal model over $W$ if $K_X+B$ is nef over $W$. If, possibly replacing $T$ by an appropriate neighborhood of $W$, there is a morphism $g:X\to Z$ over $T$ such that $\operatorname{dim} X>\operatorname{dim} Z$ and $-(K_X+B)$ is ample on each fiber of $g$, then we say that $g$ is a Mori fiber space over $W$.
We say that $(X/T;W)$ is $\mathbb Q$-factorial if: (i) every Weil divisor $D$ defined over a neighborhood of $W$ is $\mathbb{Q}$-Cartier over a (possibly smaller) neighborhood of $W$, and (ii) $(\omega_X^{\otimes m})^{**}$ is a line bundle over a neighborhood of $W$ for some $m\>1$.
\end{definition}~\\
We will use the following variant of \cite[Lemma 3.3]{HP16}. The main point here is that $X$ and $Y$ are not assumed to be compact. The proof is similar to that of \cite{HP16}; however, we reproduce it here for the convenience of the reader.
\begin{lemma}\cite[Lemma 3.3]{HP16}\label{l-HP16}
Let $f:X\to Y$ be a proper bimeromorphic morphism between normal complex spaces in Fujiki's class $\mathcal C$ with rational singularities.
Then we have an injection
\[f^*:H^{1,1}_{\rm BC}(Y)=H^1(Y,\mathcal H_Y)\hookrightarrow H^1(X,\mathcal H_X) =H^{1,1}_{\rm BC}(X)\]
such that ${\rm Im}(f^*)=\{\alpha \in H^1(X,\mathcal H_X)\;|\; \alpha\cdot C=0 \mbox{ for all curves } C\subset X \mbox{ s.t. } f(C)=\operatorname{pt}\}$.
\end{lemma}
\begin{proof} Note that we are not assuming that $X$ and $Y$ are compact, and so it is not clear that $H^1(X,\mathcal H_X)\to H^2(X,\mathbb R)$ and $H^1(Y,\mathcal H_Y)\to H^2(Y,\mathbb R)$ are injective. However, we still have a commutative diagram similar to \cite[Eqn. (5), page 224]{HP16}:
\begin{equation}\label{eqn:vertical-class}
\xymatrixcolsep{3pc}\xymatrixrowsep{3pc}\xymatrix{
0\ar[r] & H^1(Y, \mathcal{H}_Y)\ar[r]\ar[d] & H^1(X, \mathcal{H}_X)\ar[r]^{\varphi}\ar[d]^{\psi} & H^0(Y, R^1f_*\mathcal{H}_X)\ar[d]^{\cong}\\
0\ar[r] & H^1(Y, \mathbb{R})\ar[r] & H^1(X, \mathbb{R})\ar[r]^{\varphi'} & H^0(Y, R^2f_*\mathbb{R})
}
\end{equation}
Suppose now that $\alpha \in H^1(X,\mathcal H_X)$ is such that $\alpha \cdot C=0$ for all curves $C\subset X$ with $f(C)=\operatorname{pt}$. Then from the claim $(\star)$ in the proof of \cite[Thm. 12.1.3, page 649]{KM92} it follows that $(\varphi'\circ\psi)(\alpha)=0$. Therefore from the diagram above it follows that there exists $\beta\in H^1(Y, \mathcal{H}_Y)$ such that $\alpha=f^*\beta$.
\end{proof}~\\
\begin{lemma}\label{lem:rational-singularities}
Let $f:X\to Y$ be a proper morphism of normal analytic varieties with $f_*\mathcal{O}_X=\mathcal{O}_Y$. Let $B\geq 0$ be an effective $\mathbb{Q}$-divisor such that $K_X+B$ is $\mathbb{Q}$-Cartier. Assume that one of the following conditions holds:
\begin{enumerate}
\item[(i)] $(X, B)$ is klt and $-(K_X+B)$ is $f$-nef-big.
\item[(ii)] $(X, B)$ is dlt, $K_X$ is $\mathbb{Q}$-Cartier and $-(K_X+B)$ is $f$-ample.
\end{enumerate}
Then $Y$ has rational singularities.
\end{lemma}
\begin{proof}
If we are in case (ii), then since $B$ is $\mathbb{Q}$-Cartier, perturbing the coefficients of $B$ slightly we may assume that $(X, B)$ is klt. The rest of the argument of the proof of \cite[Lemma 2.41]{DH20} then goes through. We note that in \cite[Lemma 2.41]{DH20}, $X$ is assumed to be compact and $\mathbb{Q}$-factorial, but neither assumption is necessary for the proof. Moreover, it is also assumed in \cite[Lemma 2.41]{DH20} that when $(X, B)$ is klt, then $-(K_X+B)$ is $f$-nef and $f$-big, which is stronger than $f$-nef-big; however, this does not affect the proof, since the necessary relative vanishing theorem holds for $f$-nef-big divisors by \cite[Theorem 2.16]{DH20}.
\end{proof}~\\
\begin{proposition}\label{pro:extremal-ray-contraction}
Let $(X, B/T; W)$ be a $\mathbb{Q}$-factorial semi-stable klt pair of dimension $4$. Let $R=\mathbb{R}^+\cdot[\Gamma]$ be a $(K_X+B)$-negative extremal ray of $\operatorname{\overline{NA}}(X,B/T; W)$ generated by a curve $\Gamma\subset X$. Assume that the contraction of $R$ exists, i.e. there is an open neighborhood $U$ of $W$ and a projective morphism $g:f^{-1}U\to Z$ over $U$ such that a (compact) curve $C\subset f^{-1}U$ which maps to a point $f(C)\in W$ is contracted by $g$ if and only if $[C]\in R$. Let $h:Z\to U$ be the induced morphism. Then the following hold:
\begin{enumerate}
\item We have the following exact sequences:
\begin{equation}\label{eqn:N^1}
\xymatrixcolsep{3pc}\xymatrix{0\ar[r] & N^1(Z/U, W)\ar[r] & N^1(f^{-1}U/U, W)\ar[r]^-{\alpha\mapsto \alpha\cdot\Gamma} & \mathbb{R}\ar[r] & 0}
\end{equation}
\begin{center}
and
\end{center}
\begin{equation}\label{eqn:Pic}
\xymatrixcolsep{3pc}\xymatrix{0\ar[r] & \Pic(Z/U, W)\ar[r] & \Pic(f^{-1}U/U, W)\ar[r]^-{L\mapsto L\cdot\Gamma} & \mathbb{Z}\ar[r] & 0. }
\end{equation}
\item If $g$ is a divisorial contraction, then $(Z/U, W)$ is a $\mathbb{Q}$-factorial semi-stable klt pair.
\item If $g$ is a flipping contraction with flip $g':V\to Z$, then $(V/U, W)$ is a $\mathbb{Q}$-factorial semi-stable klt pair.
\end{enumerate}
\end{proposition}
\begin{proof}
First note that using \cite[Proposition 1.4]{Nak87} we may replace $U$ by a smaller open neighborhood of $W$ and assume that $-(K_X+B)|_{f^{-1}U}$ is $g$-ample. Thus by Lemma \ref{lem:rational-singularities}, $Z$ has rational singularities. The exactness of the sequence \eqref{eqn:N^1} follows from Lemma \ref{l-HP16}. Next, let $L$ be a line bundle on $f^{-1}U$ such that $L\cdot \Gamma=0$. Then $L\cdot C=0$ for all curves in the fibers of $g$; in particular, $L|_{g^{-1}(z)}$ is nef for all $z\in h^{-1}W$, and hence $(L-(K_X+B))|_{g^{-1}(z)}$ is ample for all $z\in h^{-1}W$. Then again from \cite[Proposition 1.4]{Nak87} it follows that $L-(K_X+B)$ is $g$-ample over a neighborhood of $h^{-1}W$. Since $h:Z\to U$ is proper and flat (as $U$ is a smooth curve), and hence both an open and a closed morphism, shrinking $U$ suitably near $W$ we may assume that $L-(K_X+B)$ is $g$-ample.\\
Next, for the exactness of the sequence \eqref{eqn:Pic}, it suffices to show that if $L\cdot\Gamma=0$, then $L\cong g^*M$ for some line bundle $M$ on $Z$. Since $g_*L$ is unique, it is enough to show, locally on $Z$, that $g_*L$ is a line bundle and that $L\cong g^*M_Z$ for some line bundle $M_Z$ on an appropriate open subset of $Z$. So we may assume that $Z$ is Stein. Then $L$ is given by a Cartier divisor (since $g$ is projective), and hence by the base-point free theorem as in \cite[Theorem 4.8]{Nak87} and the rigidity lemma \cite[Lemma 4.1.13]{BS95} it follows that $L\cong g^*M_Z$ for some line bundle $M_Z$ on $Z$. Then by the projection formula, $g_*L\cong M_Z$ is a line bundle, as required. This shows the exactness of the sequence \eqref{eqn:Pic}.\\
Now assume that $g$ is a divisorial contraction. Then from a standard argument using \eqref{eqn:Pic} it follows that $(Z/U, W)$ is $\mathbb{Q}$-factorial. Moreover, by \cite[Corollary 3.43]{KM98}, $(Z, g_*B)$ has klt singularities in this case. Then by inversion of adjunction $(X, X_w+B)$ has dlt singularities for any $w\in W$. Note that $R=\mathbb{R}^+\cdot[\Gamma]$ is also a $(K_X+X_w+B)$-negative extremal ray, and hence by \cite[Corollary 3.44]{KM98} $(Z, Z_w+B_Z)$ has dlt singularities, where $Z_w+B_Z=g_*(X_w+B)$. Thus $(Z, B_Z/U; W)$ is a semi-stable klt pair.\\
If $g$ is a flipping contraction, let $g': V\to Z$ be the flip. Then again from a standard argument it follows that $(V, B'/U; W)$ is a $\mathbb{Q}$-factorial semi-stable klt pair, where $B':=\phi_*B$ and $\phi:f^{-1}U\to V$ is the induced bimeromorphic morphism.
\end{proof}~\\
\begin{lemma}\label{lem:n1-pushforward}
Let $(X, B/T; W)$ be a $\mathbb{Q}$-factorial semi-stable klt pair of dimension $4$, and let $\phi:X\dashrightarrow X'$ be either a $(K_X+B)$-flip or a divisorial contraction over $T$. Then $\phi_*:N^1(X/T;W)\to N^1(X'/T;W)$ is well defined and surjective.
\end{lemma}
\begin{proof}
First assume that $\phi$ is a divisorial contraction. Then $\phi$ is a morphism and $E=\operatorname{Ex}(\phi)$ is a divisor such that $-E$ is $\phi$-ample. Let $\alpha \in N^1(X/T; W)$, and choose $\lambda \in \mathbb R$ such that $(\alpha+\lambda E)\cdot R=0$, where $R=\mathbb{R}^+\cdot[\Gamma]$ is the $(K_X+B)$-negative extremal ray of $\operatorname{\overline{NA}}(X/T; W)$ contracted by $\phi$. Then by Proposition \ref{pro:extremal-ray-contraction}, $\alpha+\lambda E=\phi^*\alpha'$ for a uniquely determined $\alpha'\in N^1(X'/T; W)$, and so we define $\phi_*\alpha :=\alpha'$. Then clearly for any $\beta\in N^1(X'/T; W)$ we have $\phi_*(\phi^*\beta)=\beta$, and hence $\phi_*:N^1(X/T; W)\to N^1(X'/T; W)$ is surjective.\\
Now assume that $\phi:X\dashrightarrow X'$ is a flip. Let $g:X\to Z$ be the flipping contraction and let $g':X'\to Z$ be the flipped contraction. We claim that $\phi_*:N^1(X/T;W)\to N^1(X'/T;W)$ is well defined, and in fact an isomorphism in this case. Indeed, for any $\alpha \in N^1(X/T; W)$ we can find a $\lambda \in \mathbb R$ such that $(\alpha+\lambda (K_X+B))\cdot R=0$, where $R$ is the $(K_X+B)$-negative extremal ray of $\operatorname{\overline{NA}}(X/T; W)$ contracted by $g$. But then $\alpha+\lambda (K_X+B)=g^* \beta$ for some uniquely determined $\beta \in N^1(Z/T; W)$. We then define $\phi _* \alpha:=g'^*\beta -\lambda (K_{X'}+B')$. The surjectivity of $\phi_*$ follows exactly as in the divisorial contraction case. Suppose now that $\phi_*\alpha=0$. If $\lambda\neq 0$, then $K_{X'}+B'=\frac{1}{\lambda} g'^*\beta$, a contradiction, since $-(K_{X'}+B')$ is $g'$-ample while $g'^*\beta$ is $g'$-trivial. If $\lambda=0$, then $g'^*\beta=0$, so $\beta=0$ and hence $\alpha=g^*\beta=0$. Therefore, $\phi_*$ is also injective.
\end{proof}~\\
\begin{lemma}\label{l-psef}
Let $(X,B/T;W)$ be a semi-stable klt pair. Then the following are equivalent.
\begin{enumerate}
\item $\kappa (K_{X_t}+B_t)\geq 0$ for all $t\in W$.
\item $\kappa (K_{X_t}+B_t)\geq 0$ for very general $t\in W$.
\item $W\subset {\rm Supp}f_*\mathcal O _X(m(K_X+B))$ for some $m>0$.
\item For every positive constant $\mu >0$, $K_{X_t}+B_t+\mu \omega _t$ is pseudo-effective for very general $t\in W$.
\end{enumerate}
\end{lemma}
\begin{proof}
(1) clearly implies (2).
(2) implies (3). Since the supports of $f_*\mathcal O _X(m(K_X+B))$ are closed subsets of $W$, it suffices to show that for a very general point $w\in W$ there is an integer $m>0$ such that $w\in {\rm Supp}f_*\mathcal O _X(m(K_X+B))$. Assume that $\kappa (K_{X_t}+B_t)\geq 0$ for very general $t\in W$. Let $T'\subset W$ be the set of $t\in W$ for which $\kappa (K_{X_t}+B_t)\geq 0$ and for each $t\in T'$, let $m(t)>0$ be the smallest positive integer such that $m(t)(K_{X_t}+B_t)$ is Cartier and $H^0(X_t, m(t)(K_{X_t}+B_t))\neq 0$. Write $m(t)(K_{X_t}+B_t)\sim M(t)\geq 0$ for some effective Cartier divisor $M(t)$ for each $t\in T'$. Since $T'$ is the complement of countably many analytic subsets, it follows that for any $w\in W$ there is a subset $T''\subset T'$ such that $w$ is an accumulation point of $T''$ and $m(K_{X_t}+B_t)\sim M(t)$ for all $t\in T''$ for some positive integer $m$ independent of $t\in T''$. Therefore, from Grauert's theorem (see \cite[Theorem III.4.7]{GPR94}) it follows that $w\in {\rm Supp}f_* \mathcal O _X(m(K_X+B))$. This concludes the proof that (2) implies (3).
(3) implies (1). Suppose that $W\subset {\rm Supp}f_*\mathcal O _X(m(K_X+B))$. Then for any $t\in W$, there is an open subset $t\in V\subset T$ and an effective divisor $D_V$ on $X_V$ such that $m(K_{X_V}+B_V)\sim _V D_V$. Discarding vertical components of $D_V$ we may assume that $D_V$ contains no fibers, and hence $m(K_{X_t}+B_t)\sim D_t:=D_V|_{X_t}$ for every $t\in V$; in particular, $\kappa (K_{X_t}+B_t)\geq 0$ for all $t\in W$.\\
(2) clearly implies (4) and hence it suffices to show that (4) implies (2).
So, suppose that for every $\mu >0$, $K_{X_t}+B_t+\mu \omega _t$ is pseudo-effective for very general $t\in W$.
Let $W_k=\{t\in W\;|\; K_{X_t}+B_t+\frac 1k \omega _t\ {\rm is\ pseudo-effective}\}$; then $W_k$ contains the complement of countably many points, and hence so does $W_\infty =\cap _{k\geq 1}W_k$.
But then $K_{X_t}+B_t$ is pseudo-effective for any $t\in W_\infty$. By \cite[Theorem 1.1]{DO22}, $\kappa (K_{X_t}+B_t)\geq 0$.
\end{proof}~\\
Now we are ready to prove the existence of minimal models for a semi-stable klt pair $(X, B/T;W)$ when $K_X+B$ is effective over $W$, i.e. when any of the equivalent conditions of Lemma \ref{l-psef} hold.
\begin{theorem}\label{thm:ss-pseff-mmp}
Let $f:(X,B)\to T$ be a
semi-stable klt pair of dimension $4$ and $W\subset T$ a compact subset. If $(X/T;W)$ is $\mathbb{Q}$-factorial and $K_X+B$ is effective over $W$, then we can run the $(K_X+B)$-MMP over a neighborhood of $W$ in $T$ which ends with a minimal model over $W$.
\end{theorem}
\begin{proof}
Suppose that $K_X+B$ is not nef over $W$. Choose a K\"ahler class $\omega$ on $X$ such that $K_{X_t}+B_t+\omega_t$ is nef for all $t\in W$, where $\omega_t:=\omega|_{X_t}$.
We may assume that $\omega$ is {general} in $N^1(X/T;W)$.
Let \[\lambda:=\inf\{s\geq 0\;|\; K_{X}+B+s\omega \mbox{ is nef over } W\}.\] By Theorem \ref{thm:weak-cone0}, there exists a $(K_{X_t}+B_t)$-negative extremal ray $R_t=\mathbb R _{\geq 0}[C]\subset N_1(X_t)$ on $X_t$ for some $t\in W$ such that $(K_{X_t}+B_t+\lambda \omega _t)\cdot C=0$, and if $C'\subset X_{t'}$ is a $(K_{X}+B+\lambda\omega)$-trivial curve for some $t'\in W$, then $[C']\in R:=\mathbb R _{\geq 0}[C]\subset \operatorname{\overline{NA}}(X/T;W)$.
Replacing $\lambda \omega $ by $\omega$, we may assume that $K_{X_t}+B_t+ \omega_t$ is nef for all $t\in W$ and $K_X+B+\omega$ supports the extremal ray $R\subset N_1(X/T;W)$.
Note that $K_X+B+ \omega$ may cut out $(K_{X_t}+B_t)$-negative faces $F_t$ from multiple or even all fibers $X_t$ with $t\in W$.
By Theorem \ref{thm:contraction-non-q-factorial}, there is an extremal contraction $g_t:X_t\to Z_t$ for the face $F_t\subset \operatorname{\overline{NA}}(X_t)$.
By \cite[Proposition 11.4]{KM92}, this extends to a contraction $g:X_U\to Z_U$ over a neighborhood $U$ of $t\in T$, where $X_U=X\times _TU$ (we note that $X_t,Z_t$ are compact, $g_{t,*}{\mathcal{O}} _{X_t}={\mathcal{O}} _{Z_t}$ and $R^1g_{t,*}{\mathcal{O}} _{X_t}=0$, as $Z_t$ has rational singularities). Note that $X_U\to Z_U$ is a surjective morphism of normal varieties with connected fibers which contracts precisely the set of curves $C\subset X_t$ for some $t\in U$ such that $[C]\in R\subset N_1(X/T;W)$. Suppose that $U,U'\subset T$ are two such open subsets. Then over $U\cap U'$, $X_U\to Z_U$ and $X_{U'}\to Z_{U'}$ are isomorphic, since they are both surjective morphisms of normal varieties with connected fibers which contract identical subsets (see the rigidity lemma in \cite[Lemma 4.1.13]{BS95}).
Thus these contractions glue together to give a projective contraction $g:X\to Z$ over $T$.
Note that if $\operatorname{dim} Z_t<\operatorname{dim} X_t$ for some $t\in T$, then from the flatness over $T$ it follows that $\operatorname{dim} Z<\operatorname{dim} X$, which is impossible as $K_X+B$ is pseudo-effective. In particular, $g$ is bimeromorphic.
If $g$ is a divisorial contraction, then we replace $X$ with $Z$ and $B$ with $g_*B$. If $g$ is a flipping contraction, then the flip $g^+:X^+\to Z$ exists by Corollary \ref{c-fg1}, and we replace $X$ by the flip $X^+$.
Note that by construction, for every $t\in W$ we have that $K_{X_t}+B_t+\omega _t$ is nef, and by Corollary \ref{cor:contraction-non-q-factorial}, $K_{X_t}+B_t+\omega _t=g_t^*\omega _{Z_t}$ for some K\"ahler form $\omega _{Z_t}$ on $Z_t$. Since $-(K_X+B)$ is $g$-nef-big, by Lemma \ref{lem:rational-singularities}, $Z$ has rational singularities. By Lemma \ref{l-HP16}, $K_X+B+\omega =g^*\alpha$ for some class $\alpha$ on $Z$.
Since $(g^*\alpha)|_{X_t}=g_t^*\omega _{Z_t}$ for every $t\in W$, it follows that $\alpha $ is K\"ahler over $W$ (see e.g. the proof of Theorem \ref{thm:special-effective-dlt-mmp}).\\
If $X\to Z$ is a flipping contraction, then since $K_{X^+}+B^+$ is ample over $Z$, it follows that $(g^+)^*\alpha +\epsilon(K_{X^+}+B^+)$ is K\"ahler over $W$.\\
Termination of flips follows from Theorem \ref{thm:effective-termination}; however, termination of divisorial contractions is not immediately clear, as $N^1(X/T;W)$ may be infinite dimensional. But observe that if $X\to Z$ is a divisorial contraction, then the exceptional divisor $E$ dominates $T$, and so $\rho (X_t)>\rho (Z_t)$ for general $t\in T$. Therefore there are no infinite sequences of divisorial contractions.
\end{proof}~\\
Next we prove the existence of a Mori fiber space when $K_X+B$ is not effective over $W$.
\begin{theorem}\label{thm:ss-non-pseff-mmp}
Let $(X,B/T;W)$ be a $\mathbb{Q}$-factorial semi-stable klt pair of dimension $4$, where $W\subset T$ is a compact subset. If $K_X+B$ is not effective over $W$ (see Lemma \ref{l-psef}), then we can run a $(K_X+B)$-MMP over a neighborhood of $W$ which ends with a Mori fiber space.
\end{theorem}
\begin{proof}
Throughout the proof we will repeatedly shrink $T$ in a neighborhood of $W$ without further mention.
The existence of flips and divisorial contractions here works exactly as in Theorem \ref{thm:ss-pseff-mmp}, and so we will only discuss the termination of flips below.
To see termination, we proceed as follows. First, by inversion of adjunction, $(X,X_t+B)$ is dlt for any $t\in T$. Moreover, it is easy to see that any $(K_X+B)$-MMP over $T$ is also a $(K_X+X_t+B)$-MMP over $T$ for a fixed $t\in T$, and thus by special termination the flipping locus is disjoint from $X_t$ after finitely many steps. Note also that any divisorial contraction must induce a nontrivial morphism on $X_t$ for general $t\in T$, and hence decreases its Picard number $\rho (X_t)$. Therefore, we may assume that there are no divisorial contractions after finitely many steps of this minimal model program.
We fix a point $t_0\in T$, and from now on we will assume that any $(K_X+B)$-MMP over $T$ is disjoint from the fixed fiber $X_{t_0}$, regardless of which MMP we run. In particular, the flipping loci do not dominate the base curve $T$, and hence the flipping curves for any given flip are contained in finitely many fibers of $f$.
Since there are at most countably many flips for any given $(K_X+B)$-MMP over $T$, it follows that, for very general $t\in T$, any finite sequence of steps of a $(K_X+B)$-MMP over $T$ will induce an isomorphism on a neighborhood of $X_t$.
By contradiction assume that flips do not terminate for any $(K_X+B)$-MMP over $T$. Let $\omega$ be a K\"ahler class on $X$ such that $K_X+B+\omega$ is K\"ahler over $W$. We first discuss the strategy of our proof without full technical details. The idea is as follows. We run a minimal model program with the scaling of $\omega$: $X=X^1\dasharrow X^2\dasharrow \ldots \dasharrow X^n$. As we have observed above, this MMP is disjoint from a very general fiber $X_s$ and from any given fiber $X_t$ for $n\gg 0$. It follows that there is a sequence of fibers $X_{t_i}\cong X^i_{t_i}$ containing a flipping curve for $X^i\dasharrow X^{i+1}$. Let $C_i\subset X_{t_i}$ be a curve whose isomorphic image in $X^i_{t_i}$ is a flipping curve of $X^i\dashrightarrow X^{i+1}$; we will identify $C_i$ with its image in $X^i_{t_i}$. Suppose that $(K_{X^i}+B^i+\lambda _i\omega ^i)\cdot C_i=0$, where $\lambda _1\geq \lambda _2\geq \ldots $ are the nef thresholds. By Lemma \ref{l-psef}, $\lim \lambda _i=\mu >0$, as $K_X+B$ is not effective over $W$, and so
\[\omega \cdot C_i=\omega ^i\cdot C_i=\frac{-1}{\lambda _i}(K_{X^i_{t_i}}+B^i_{t_i})\cdot C_i\leq \frac{6}{\mu}.\] By Lemma \ref{l-douady}, these $C_i\subset X$ belong to finitely many families and so must be contained in finitely many fibers. This is a contradiction, and hence the sequence of flips terminates. Unfortunately, several technical issues arise in making this argument precise. Since we do not have a cone theorem here, it is not clear whether for each $i$ there is a \textit{unique} $(K_{X^i}+B^i)$-negative extremal ray $R_i$ of $\operatorname{\overline{NA}}(X/T; W)$ such that $(K_{X^i}+B^i+\lambda_i\omega^i)\cdot R_i=0$.
However, this can be achieved as long as each $\omega ^i$ is general in $N^1(X/T; W)$, and so at each step it suffices to perturb the given K\"ahler class. Thus we end up with a sequence of K\"ahler classes $\omega_{i} =\omega _{i-1}+\epsilon _i\alpha _i$ such that $\alpha _i$ is general in $N^1(X/T; W)$ and $0<\epsilon _i\ll 1$. This is discussed in detail below.\\
As mentioned above, we will run a $(K_X+B)$-MMP over $W$ with scaling of a sequence of general K\"ahler classes $\omega_i$. This means that:
{\it There exists a sequence $X=X^1\dasharrow X^2\dasharrow \cdots \dasharrow X^n$ of $(K_X+B)$-flips and divisorial contractions over $W$ and real numbers $\lambda _1> \lambda _2> \cdots > \lambda _n>0$ satisfying the following properties:
\begin{enumerate}
\item $\omega_i:=\omega_{i-1}+\epsilon_i \alpha_i$, $\omega_1=\omega$, where $\alpha_i\in N^1(X/T; W)$ is a general class and $0<\epsilon_i\ll 1$ for all $i\geq 1$. In particular, we may assume that $\omega+2(\omega _i-\omega)$ and $K_X+B+\omega_i$ are both K\"ahler over $W$ for $i\geq 1$.
\item $\lambda_i:=\inf\{s\geq 0\;:\; K_{X^i}+B^i+s\omega_i^i\mbox{ is nef over }W\}$.
\item For each $i\geq 1$, $(K_{X^i}+B^i+\lambda_i\omega^i_i)^\bot\cap \operatorname{\overline{NA}}(X^i/T; W)=R_i$ is an extremal ray. Moreover,
there is a point $w_i\in W$ and a curve $C_i\subset X^i_{w_i}$ spanning the ray $R_i$.
\item $K_{X^i}+B^i+t\omega ^i_i$ is K\"ahler over $W$ for $0<t-\lambda_i \ll 1$.
\item There is a positive integer $n\geq 1$ such that
there is a morphism $X^n\to Z^n$ over $W$ such that $-(K_{X^n}+B^n)$ is relatively ample over $Z^n$ and $K_{X^n}+B^n+\lambda _n\omega ^n_n$ is relatively trivial over $Z^n$.
\end{enumerate}}
Note that this MMP is still disjoint from the fiber $X_{t_0}$.
We explain the details of running this MMP below.
Let $X:=X^1$ and $\lambda _0=1$.
Suppose that $\phi ^{i-1}:X^1\dasharrow X^{i-1}$ has already been constructed so that properties (1-4)$^{i-1}$ are satisfied. In particular, by (3-4)$^{i-1}$ we have that
$K_{X^{i-1}}+B^{i-1}+t\omega^{i-1}_{i-1} = \phi ^{i-1}_*(K_X+B+t\omega_{i-1})$ is K\"ahler
for $0< t- \lambda _{i-1}\ll 1$ and $(K_{X^{i-1}}+B^{i-1}+\lambda_{i-1}\omega^{i-1}_{i-1})^\bot\cap \operatorname{\overline{NA}}(X^{i-1}/T; W)=R_{i-1}$ is an extremal ray spanned by a curve $C_{i-1}$. If $R_{i-1}$ defines a Mori fiber space, then we are done.
Otherwise, by what we argued above, we may assume that we have a flip, say $\psi ^{i-1}:X^{i-1}\dasharrow X^i$.
If $g^{i-1}:X^{i-1}\to Z^{i-1}$ and $h^i:X^i\to Z^{i-1}$ are the corresponding flipping and flipped contractions, then arguing as in the proof of Theorem \ref{thm:ss-pseff-mmp}, $\eta_{Z^{i-1}}:=g^{i-1}_*(K_{X^{i-1}}+B^{i-1}+\lambda_{i-1}\omega^{i-1}_{i-1})$ is K\"ahler over $W$. Since $\rho (X^i/Z^{i-1})=1$ and $K_{X^i}+B^i$ is ample over $Z^{i-1}$, it follows that $-\omega^{i}_{i-1}$ is K\"ahler over $Z^{i-1}$. Then for $0<\delta \ll 1$ we have
\[K_{X^{i}}+B^{i}+(\lambda_{i-1}-\delta)\omega^{i}_{i-1}=\psi ^{i-1}_*(K_{X^{i-1}}+B^{i-1}+(\lambda_{i-1}-\delta)\omega^{i-1}_{i-1})=(h^i)^*\eta_{Z^{i-1}}-\delta \omega^{i}_{i-1},\] which is K\"ahler over $W$.
Note that since $N^1(X/T;W)\to N^1(X^i/T;W)$ is surjective by Lemma \ref{lem:n1-pushforward}, and since $\alpha_i \in N^1(X/T;W)$ is a general class, its pushforward $\alpha^i_i \in N^1(X^i/T;W)$ is also general. In particular, $\omega^i_i=\omega^i_{i-1}+\epsilon _i\alpha^i_i$ is a general class in $N^1(X^i/T; W)$.
Since $0<\epsilon _i\ll 1$, we may assume that \[K_{X^i}+B^i+(\lambda_{i-1}-\delta)\omega^{i}_{i}=K_{X^i}+B^i+(\lambda_{i-1}-\delta)\omega^{i}_{i-1}+\epsilon _i(\lambda_{i-1}-\delta)\alpha^{i}_{i}\] is K\"ahler over $W$.
Let $\lambda _i:=\inf\{s\geq 0\;:\; K_{X^i}+B^i+s\omega_i^i\mbox{ is nef over }W\}$. Clearly property (2)$^i$ is satisfied. Since $0<\epsilon _i\ll 1$ and \[K_X+B+\omega _i=K_X+B+\omega _{i-1}+\epsilon _i \alpha _i\qquad {\rm and}\qquad \omega +2(\omega_i -\omega )=\omega+2(\omega_{i-1} -\omega )+2\epsilon _i \alpha _i,\] property (1)$^{i-1}$ implies property (1)$^{i}$.
To see (3)$^i$ we proceed as follows.
We write \begin{equation}\label{eqn:kahler-scaling}
K_{X^i}+B^i+\lambda _i\omega _i^i=\frac 1{m+1}\left(K_{X^i}+B^i+m\left(K_{X^i}+B^i+\left(\frac {m+1} m\right)\lambda _i\omega _i^i \right)\right).
\end{equation}
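For the reader's convenience, note that \eqref{eqn:kahler-scaling} is a formal identity: expanding the right-hand side gives
\[\frac 1{m+1}\Big((m+1)(K_{X^i}+B^i)+m\cdot\Big(\frac{m+1}m\Big)\lambda _i\omega _i^i\Big)=K_{X^i}+B^i+\lambda _i\omega _i^i.\]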
For $m\gg 0$, $\lambda _i<\lambda _i\left(\frac {m+1} m\right)\leq \lambda _{i-1}-\delta$, and hence $K_{X^i}+B^i+\lambda _i\left(\frac {m+1} m\right)\omega ^i_i$ is K\"ahler over $W$.
From Theorem \ref{thm:weak-cone0} it easily follows that the face $F=(K_{X^i}+B^i+\lambda_i\omega^i_i)^\bot\cap\operatorname{\overline{NA}}(X^i/T;W)$ is generated by finitely many classes of curves.
Since $\omega _i^i$ is general in $N^1(X^i/T; W)$, it follows that $(K_{X^i}+B^i+\lambda_i\omega^i_i)^\bot\cap \operatorname{\overline{NA}}(X^i/T; W)=R_i$ is an extremal ray spanned by a curve $C_i\subset X^i_{w_i}$ for some $w_i\in W$, and so (3)$^i$ holds.\\
To see (4)$^i$, simply note that the sum of a nef class and a K\"ahler class is K\"ahler, and hence $K_{X^i}+B^i+t\omega ^i_i$ is K\"ahler over $W$ for $\lambda _{i-1}-\delta \geq t>\lambda _i$ and $\delta>0$.
Finally, we must show that the process terminates after finitely many steps.
We claim that $\lim\lambda_i>0$. By contradiction assume that $\lim \lambda _i=0$.
For a very general $t\in T$, we have $X_t\cong X^i_t$ for all $i\geq 1$ (as discussed above).
By Lemma \ref{l-psef}, there exists a $\mu >0$ such that $K_{X_t}+B_t+\mu \omega _t$ is not pseudo-effective for very general $t\in T$.
Since \[K_{X_t}+B_t+\mu \omega _t = K_{X_t}+B_t+\lambda _i(\omega_i)_t+(\mu \omega _t-\lambda _i(\omega_i)_t) \] and $\mu \omega _t-\lambda _i(\omega_i)_t$ is K\"ahler for $i\gg 0$ (as $\lim\lambda_i=0$), it follows that $K_{X_t}+B_t+\lambda _i(\omega_i)_t$ is not pseudo-effective for $i\gg 0$.
Since
\[K_{X_t}+B_t+\lambda _i(\omega_i)_t = K_{X^i_t}+B^i_t+\lambda _i(\omega^i_i)_t\]
is nef (for $t\in T$ very general), this is the required contradiction. So $\lim \lambda _i=\lambda>0$.
Now for a fixed point $w_0\in W$, let $C_{w_0}\subset X_{w_0}$ be a flipping curve of the above MMP. Note that every step of the above MMP is also a step of the $(K_X+B+X_{w_0})$-MMP over $W$. Thus by special termination, after finitely many steps the flipping locus of the above MMP is disjoint from the fiber $X_{w_0}$. So after passing to a subsequence we may assume that for each $i\geq 1$, $t_i\in W$ is a point such that the fiber $X_{t_i}$ contains a flipping curve of the above MMP for the very first time. Consequently, $X=X^1\dashrightarrow X^i$ is an isomorphism over a neighborhood of $t_i$; in particular, $X_{t_i}\cong X^i_{t_i}$. Let $C_i\subset X^i_{t_i}$ be a flipping curve of the above MMP as in Theorem \ref{thm:weak-cone0}. Then identifying $C_i$ with its image in $X_{t_i}$ we get
\[ (K_{X}+B+\lambda _i\omega _i)\cdot C_i=(K_{X^i}+B^i+\lambda _i\omega^i_i )\cdot C_i=(K_{X^i_{t_i}}+B^i_{t_i}+\lambda _i(\omega ^i_i)_{t_i})\cdot C_i=0.\]
Since $\lambda _i\geq \lambda >0$, and $2\omega_i-\omega =\omega +2(\omega_i-\omega )$ is K\"ahler, it follows that
\[\omega \cdot C_i\leq 2\omega_i \cdot C_i=2(\omega^i_i)_{t_i}\cdot C_i=\frac{-2}{\lambda _i}(K_{X^i_{t_i}}+B^i_{t_i})\cdot C_i\leq \frac {12}{\lambda},\] and so by Lemma \ref{l-douady}, the curves $\{C_i\}_{i}$ belong to finitely many families of curves on $X$ (over $W$). Consequently, the curves $\{C_i\}_i$ are contained in finitely many fibers $X_{t_1},\ldots , X_{t_k}$, where $t_i\in W$, and hence by special termination this sequence of flips must terminate, a contradiction. Therefore, we may assume that $K_{X^m}+B^m+\lambda_m \omega^m_m$ is nef for some $m\geq 1$, and there is a Mori fiber space $X^m\to Z$ over $T$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:ss-mmp}]
This follows from Theorems \ref{thm:ss-pseff-mmp} and \ref{thm:ss-non-pseff-mmp}.
\end{proof}
\bibliographystyle{hep} |
2009.03197 | \section{Introduction}
For the High-Luminosity Upgrade of the Large Hadron Collider, the
\mbox{ATLAS}~\cite{ATLAS} Inner Detector will be replaced with a new, all
silicon Inner Tracker (ITk), composed of a pixel tracker~\cite{TDRp}
and a strip tracker~\cite{TDRs}.
The main component of the ITk strip tracker is the module, comprising
a silicon strip sensor, multiple custom readout chips mounted on an
electronic circuit, called a hybrid, and a powerboard. In the central
region of the ITk strip detector, the four barrel layers comprise
11,000 modules mounted on staves such that the sensors are arranged
parallel to the beam axis (see figure~\ref{fig:intro_stave}). The two end-caps in the
forward region are constructed from six disks supporting a total of 7,000 modules
mounted on petals such that the sensors are arranged orthogonal to the
beam axis (see figure~\ref{fig:intro_petal}).
Modules in the strip tracker barrel and end-caps were designed to
contain the same materials and components, which have the same
functionality, but different geometries. Only two types of sensors are
used in the barrel region, whereas six sensor geometries are
required for hermetic coverage of the end-cap. Here, only modules
designed for the barrel region are presented.
\begin{figure}
\centering
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/Stave.JPG}
\caption{Stave for the ITk strip tracker barrel: thirteen modules are
arranged in one row}
\label{fig:intro_stave}
\end{subfigure}
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=0.3\linewidth]{figure/Petals.jpg}
\caption{Petal for the ITk strip tracker end-caps: six modules are
arranged in six rings}
\label{fig:intro_petal}
\end{subfigure}
\caption{Support structures with modules for the ITk strip tracker
barrel (composed of staves) and end-caps (consisting of petals). Sensor strips on staves are aligned parallel to the beam axis modulo a \unit[26]{mrad} stereo angle on either side of the stave whilst strips on petals are arranged perpendicular to the beam axis with a \unit[20]{mrad} stereo angle implemented into the sensors themselves. An individual module is indicated in white, with sensor strip implants oriented perpendicular to the hybrids on each module segment. The end-of-substructure card (see~\cite{TDRs}) of each
structure is indicated in yellow. In the outer three rings of the end-cap so-called split modules are implemented due to the limited area of \unit[6]{inch} silicon wafers so that each ring module contains two silicon strip modules.}
\label{fig:intro_structures}
\end{figure}
An extensive prototyping program was conducted in preparation for the
production of 11,000 barrel modules at ten construction sites in the
US, UK and China. The aim of the prototyping program was to
develop realistic tests of the concepts for tooling and assembly,
readout software and testing procedures; hence, the prototype modules use
readout chips, sensors and other components similar to those foreseen
to be used in production.
\section{Components}
\label{sec:comp}
In the central region of the ITk strip tracker (barrel), two versions of
modules are used:
\begin{itemize}
\item short strip (SS) modules in the inner two barrel layers, where
each sensor strip has a length of about \unit[2.5]{cm}
\item long strip (LS) modules in the outer two barrel layers, with
sensor strip lengths of about \unit[5]{cm}
\end{itemize}
Despite their different strip lengths, both module types have similar
sizes, which are determined by the size of the silicon strip
sensor (about \unit[$10\times10$]{cm$^2$} each). Therefore, strips are
arranged in two rows on LS sensors and in four rows on SS sensors (the terms row and segment are used interchangeably throughout this manuscript),
where each row consists of 1280 signal strips and two unconnected edge strips (see
figures~\ref{fig:sensor_LS} and~\ref{fig:sensor_SS}). Accordingly, LS modules require 2560 readout channels (corresponding to 10 ABC130
readout chips with 256 channels each) and SS modules 5120
(corresponding to 20 ABC130 readout chips).
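As a sanity check of this bookkeeping, the following minimal Python sketch (ours, purely illustrative; the variable names are not from any ITk software) reproduces the channel and chip counts:
\begin{verbatim}
# Channel/chip bookkeeping for barrel modules (illustrative).
CHANNELS_PER_ABC130 = 256
SIGNAL_STRIPS_PER_ROW = 1280

for name, rows in (("LS", 2), ("SS", 4)):
    channels = rows * SIGNAL_STRIPS_PER_ROW
    chips = channels // CHANNELS_PER_ABC130
    print(name, channels, "channels ->", chips, "ABC130 chips")
# LS 2560 channels -> 10 ABC130 chips
# SS 5120 channels -> 20 ABC130 chips
\end{verbatim}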
\begin{figure}
\begin{subfigure}{.47\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/LSsensor.JPG}
\caption{\mbox{ATLAS17LS} sensor with two segments containing long
strips with a length of about {\unit[5]{cm}} each and two rows of bond pads per segment.}
\label{fig:sensor_LS}
\end{subfigure}
\begin{subfigure}{.05\textwidth}
\hfill
\end{subfigure}
\begin{subfigure}{.47\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/SSsensor.JPG}
\caption{\mbox{ATLAS12SS} sensor with four short strip segments (strip
length about {\unit[2.5]{cm}}) and five rows of bond pads per
segment.}
\label{fig:sensor_SS}
\end{subfigure}
\caption{\mbox{ATLAS} barrel long strip and short strip sensors used for the
construction of ABC130 barrel modules. Sensor strips are oriented
horizontally, with each segment comprising 1282 sensor
strips. The vertical lines seen here are rows of the bond pads, the only large-scale feature in the strip area discernible by eye.}
\end{figure}
Despite requiring different numbers of readout channels and chips,
electronic components for barrel modules were designed to be compatible with both
sensor geometries. Flexible circuit boards supporting ABC130
readout chips, called hybrids (section~\ref{comp:hyb}), were designed,
with one hybrid required per two strip segments. An SS module uses two such hybrids,
an X-type and a Y-type version, whereas
an LS module uses only one X-type hybrid (see
figures~\ref{fig:module_LS} and~\ref{fig:module_SS}). Flex circuit boards called powerboards (section~\ref{comp:pb}), which support a
DCDC power converter, high voltage switch and a monitoring chip, match
both LS and SS module layouts, thereby minimising the number of
components to be designed, tested and qualified for production.
\begin{figure}
\begin{subfigure}{.47\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/LSmodule.png}
\caption{ABC130 LS barrel module on an LS test frame: one X-type
hybrid is mounted at the border between LS strip segments
with the powerboard mounted on the same segment.}
\label{fig:module_LS}
\end{subfigure}
\begin{subfigure}{.05\textwidth}
\hfill
\end{subfigure}
\begin{subfigure}{.47\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/SSmodule.JPG}
\caption{ABC130 SS module on an SS test frame: one X-type hybrid and
a Y-type are mounted at the borders between two short strip
segments with one powerboard between them.}
\label{fig:module_SS}
\end{subfigure}
\caption{ABC130 long-strip and short-strip modules.}
\end{figure}
\subsection{Sensors}
\label{subsec:comp_sensors}
Short strip modules for the ABC130 barrel module program were
constructed using \mbox{ATLAS12} barrel sensors~\cite{ATLAS12}, a prototype
version of the sensors to be used in the \mbox{ATLAS} ITk developed from the
predecessor \mbox{ATLAS07} sensors~\cite{ATLAS07}. The sensors are fabricated from 6-inch floatzone wafers in a single-sided process.
The sensors have a nominal thickness of $\unit[310\pm20]{\upmu\text{m}}$ with a maximum thickness variation of \unit[10]{$\upmu$m} across the sensor area.
After dicing, \mbox{ATLAS12} sensors have a size of $\unit[96.7\times96.6]{\text{mm}^2}$.
Compared to \mbox{ATLAS07} sensors, the dead space in periphery of the sensor was reduced from approximately \unit[1]{mm} to \unit[500]{$\upmu$m} per edge.
Each ATLAS12SS sensor consists of four segments with 1282 strip implants each, where the first and last strip serve as field shaping strips. The strips have a length of $\unit[23.9]{\text{mm}}$ and a strip pitch of \unit[74.5]{$\upmu$m}.
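These numbers are mutually consistent, as the following short Python sketch (ours, for illustration only) shows: the strip pitch times the number of implants, and the segment length times the number of segments, both roughly reproduce the active area left after subtracting the dead periphery from the diced sensor size.
\begin{verbatim}
# Consistency check of the quoted ATLAS12SS geometry (illustrative).
pitch_mm, n_strips = 74.5e-3, 1282     # per segment
strip_mm, n_segments = 23.9, 4
die_mm, edge_mm = 96.7, 0.5            # diced size, dead edge

print(n_strips * pitch_mm)             # ~95.5 mm across the strips
print(n_segments * strip_mm)           # ~95.6 mm along the strips
print(die_mm - 2 * edge_mm)            # ~95.7 mm active width
\end{verbatim}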
In order to cope with the high-radiation environment of the ITk, strip sensors are made from p-doped bulk material with n$^{+}$-doped strip implants. The bulk remains p-doped after radiation damage; therefore, the sensor depletion zone grows from the strip implant side towards the backside, allowing for significant signal collection even when the sensor is operated underdepleted due to radiation damage at the end-of-life fluence.
Each n$^{+}$-doped strip implant is connected to an n-doped implant ring surrounding all strip implants (bias ring) to
hold all strip implants at the same potential during operation. The
bias ring is surrounded by another n-doped implant ring (guard ring)
and a p-doped implant ring (edge ring) laid out next to
the dicing edge. Figure~\ref{fig:sensorlayout} shows an overview of the different sensor design features. This edge ring prevents the depleted region,
evolving from the bias ring, from extending to and along the dicing edge between the edge ring and the p-doped backplane (held at high voltage), and is needed to prevent an early breakdown~\cite{Diodes}. Detailed studies of the electrical properties of the \mbox{ATLAS12} sensors can be found in~\cite{HOMMELS}.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{figure/Sensorlayout.pdf}
\caption{Detail image of an ATLAS barrel sensor and its design features}
\label{fig:sensorlayout}
\end{figure}
With increasing radiation damage, the Si/SiO$_2$ passivation layer on the sensor surface
will experience a build-up of defects and suffer from the surface damage of the ionising dose, which could lead to short-circuiting of the n$^{+}$-strip implants. This is prevented using an inter-strip isolation technique based on p-stop traces, which was chosen out of several options tested on \mbox{ATLAS07} devices.
The sensor design also includes a protection structure for the AC coupling of the strips against beam splashes, a so-called gated Punch Through Protection~\cite{PTP}.
The gated PTP design of the \mbox{ATLAS12} sensors extends the strip implant below the bias resistor, leaving a \unit[20]{$\upmu$m} gap to the bias rail; the gap is covered by an extended sheet of the bias rail, which results in a hard breakdown across this gap in case of an excessive potential.
Another novelty of the \mbox{ATLAS12} is the staggered design of the bond pads, which matches the four rows of bond pads on ABC130 ASICs (see section~\ref{sec:ABC}) to facilitate wire bonding. The bond pad design of the sensor mirrors the bond pad arrangement of the ABC130 ASICs, which results in a four-row bonding process, where each subsequent row increases in height.
During the development of \mbox{ATLAS12} sensors, it was discovered that they were sensitive to humidity~\cite{humidity}: sensor breakdown, indicated by a high leakage current, was observed at ambient humidity levels for reverse bias voltages below the nominal operating voltage of \unit[-500]{V}. Therefore, a protocol was established over the course of the ABC130 barrel module program, which required minimising sensor exposure to higher humidity levels to prevent early breakdowns:
\begin{itemize}
\item storage of sensors and modules at a maximum of \unit[10]{\%} relative humidity
\item sensor tests to be performed at a maximum relative humidity of \unit[20]{\%}
\item minimisation of the time sensors spend outside of dry storage, e.g. during assembly
\end{itemize}
In addition to the construction of short strip modules, several long strip modules were
constructed using \mbox{ATLAS17LS} sensors~\cite{ATLAS17LS}, which were developed after \mbox{ATLAS12} sensors to prototype the long strip geometry. In contrast to the \mbox{ATLAS12}, the \mbox{ATLAS17LS} sensors are slightly larger with dimensions
of $\unit[98.0\times97.6]{\text{mm}^2}$, utilising the full usable area of a 6-inch wafer. ATLAS17LS sensors have a strip pitch of \unit[75.5]{$\upmu$m} and two long strip segments with 1280 \unit[48.3]{mm} strips each.
Additionally, the wafer layout included new test structures and updated fiducial marks for spatial referencing for the \mbox{ATLAS17LS} design. The fabrication of \mbox{ATLAS17LS} sensors used split batches to test options for alternative passivation and a non-standard active depth~\cite{ATLAS17LS}.
\subsection{Readout chips}
\label{subsec:comp_chips}
\subsubsection{ABC130}
\label{sec:ABC}
Each ABC130 chip \cite{ABC130Spec} provides the initial data acquisition and readout chain for up to 256
sensor strips. Submitted in June 2013, it is the second generation of the \mbox{ATLAS} Strips readout family of
custom Application Specific Integrated Circuits (ASICs) since the ABCD~\cite{DabrowskiABCD}, which was used for the SemiConductor Tracker (SCT) readout. The ABC130 follows the
ABCN-25~\cite{DabrowskiABCN25}, which implemented ABCD in a new process, with some improvements, but kept a similar architecture. The ABC130 is the next member of this ``\mbox{ATLAS} Binary Chip'' family, and its
suffix is from its implementation in IBM's (now GLOBALFOUNDRIES') CMOS8RF\_DM \unit[130]{nm} technology. The die
has a size of \unit[$6.8\times7.9$]{mm$^2$} with the wide side meant to be oriented orthogonally to the
direction of the sensor strips and along the edge of the hybrid circuit board. With these dimensions, it allows for bonding of the input pads to the sensor strip pitch, while still allowing space for decoupling capacitors to be
placed between chips.
The first significant change from ABCN-25 is that the smaller feature size allowed a doubling of the number of readout channels per chip. The front-end input pads are
arranged in a novel configuration of four staggered
rows of 64 pads each (see figure \ref{fig:abc130_photo}) for wire bonding to
the AC sensor pads (see section~\ref{subsec:MA}). Ground pads at either end of
each row provide for a sensor ground reference (HV decoupling and guard
ring). The pitch of \unit[119]{$\upmu$m} is chosen to allow direct bond connection from
the available pad sizes to the sensor pitch.
These pads are arranged so that one ASIC can be connected to two rows of
strips on the sensor, with the edge of the ABC130 placed close to the
boundary. The connections from both strip rows to the ASIC amplifier channels are interleaved, which provides a powerful performance cross-check in case of problems.
These two rows are referred to as odd (running away from the ASIC) and even
(running under the ASIC).
The odd strips are also connected by long bonds that reach over the
top of those for the even strips (see figure \ref{fig:sensorwires}).
Power and signal connections are restricted to the other three sides of the die
and are wire-bonded to the hybrid circuit board (see section~\ref{subsec:assem_hyb}).
\begin{figure}
\begin{subfigure}{.50\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/abc130_pads.png}
\caption{Pad layout of ABC130\_0 die (the ABC130\_1 die is a superset, see section~\ref{para:abc130_dig_io}).}
\label{fig:abc130_pads}
\end{subfigure}
\begin{subfigure}{.02\textwidth}
\hfill
\end{subfigure}
\begin{subfigure}{.46\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/abc130_photo.png}
\caption{Photograph of ABC130 die showing I/O and four rows of front-end pads}
\label{fig:abc130_photo}
\end{subfigure}
\caption{ABC130 die layout}
\end{figure}
Another substantial change over the ABCN-25 is in the readout system. It was extensively updated and new trigger levels were added in order to raise the trigger rate from \unit[100]{kHz} to \unit[500]{kHz}, and to include the regional readout concept~\cite{TDRs}. The new architecture has three main stages:
\begin{itemize}
\item First, the inputs are sampled from the front-ends on every cycle of the
LHC Bunch Crossing (BC) clock (\unit[40.079]{MHz}), and put into a synchronous
pipeline (the ``L0 buffer''). This allows an external process (the L0 trigger)
up to \unit[6]{$\upmu$s} to choose which crossings to read out, with \unit[1]{BC} = \unit[25]{ns};
\item When the L0 accept (L0A) arrives at the ABC130 (a fixed period from
the original BC), the appropriate data is copied to the ``L1 buffer'';
\item The final readout command (either R3 or L1A, described below) can then be received up to
\unit[512]{$\upmu$s} later, and refers to a specific location in the buffer.
\end{itemize}
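The latencies quoted above fix the pipeline depths in units of bunch crossings; the following Python sketch (ours, assuming one pipeline entry per BC, as an illustration rather than the actual buffer implementation) makes the arithmetic explicit:
\begin{verbatim}
# Pipeline depths implied by the quoted latencies (illustrative).
BC_NS = 25                       # one bunch crossing in ns
l0_latency_ns = 6 * 1000         # max L0 decision time (6 us)
l1_window_ns = 512 * 1000        # max wait for R3/L1A (512 us)

print(l0_latency_ns // BC_NS)    # 240 BCs of L0 buffer depth
print(l1_window_ns // BC_NS)     # 20480 BCs addressable after L0A
\end{verbatim}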
This architecture allows collection of data into the L1 buffer at a higher
rate than the output bandwidth allows, as not all data might be selected for
read out. The Regional Readout Request (R3) trigger is designed to be
acted on by a small proportion of modules, selectable at the HCC-level (see section~\ref{sec:HCC}), based on
where that module is in the detector, and provides fast readout of data that
can provide input to the L1 Trigger system~\cite{TDRs}. This proportion is
expected to be no more than \unit[10]{\%} of the strip tracker on average.
The L1A is then used to read out full information for the required BCs.
All digital signalling is carried out using SLVS~\cite{SLVS} differential I/O
between ABC130s and an HCC130 (see section~\ref{sec:HCC}), in a bi-directional daisy-chain fashion (see
figure~\ref{fig:abc130_daisychain}). This allows for the failure of individual
ASICs as the readout direction from downstream ASICs in the chain can be reversed.
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{figure/abc130_daisychain.pdf}
\caption{ABC130s are connected in daisy-chains of 5 ASICs, with each end of the chain connected to an HCC130 to allow bidirectional access to the chain in the event one of the ABC130s fails~\cite{TDRs}.}
\label{fig:abc130_daisychain}
\end{figure}
\paragraph{Analogue Front-End}
The ABC130 analogue front-end block consists of 256 independent input channels
accepting negative signals from the AC-coupled n-type strips of the ITk
strip sensor. The architecture and performance of the individual front-end
channels are detailed in~\cite{Kaplon2012FrontEE}. Each channel's preamplifier
is designed around a single-ended buffered telescopic cascode with an NMOS
input transistor. The input transistor is enclosed with active feedback built
with a PMOS transistor biased in saturation. This feedback scheme allows for
full control of the DC potential at the preamplifier output and permits the
use of a very power-efficient shaper stage with a single-ended input. This
particular configuration of the input stage was originally designed for the
p$^{+}$\,on\,n sensors intended for the \mbox{ATLAS} tracker upgrade and was later
modified for the negative-going signal of the current n$^{+}$\,on\,p sensor.
On the ABC130, the input current from the sensor strip to the input channel
modulates the transconductance of the feedback transistor and causes
degradation of the noise performance and gain of that stage. Figure~\ref{fig:NoisevsCint} shows a comparison of the noise performance for negative and positive signals. These issues, as well as degradation of the
noise performance after irradiation, have been addressed in the new design
implemented for the ABCStar~\cite{ABCStarSpec}.
The preamplifier input stage has been optimised for an input capacitance of \unit[5]{pF}, covering the expected range of input capacitance of the short strip (SS) sensors, but also operates effectively with the higher capacitance of long strip (LS) sensors as well as the different end-cap sensor configurations. This includes the expected parameter variation up to the maximum lifetime irradiation of the modules. The inputs are protected with non-silicided NMOS thin oxide devices, whose width was chosen as a tradeoff between ESD protection and the parasitic capacitance added to each channel (\unit[$\sim0.4$]{pF}). The circuit was designed to withstand a \unit[1.5]{kV} Human Body Model event and a \unit[0.6]{A} Transmission Line Pulse (\unit[5]{ns} rise time, \unit[100]{ns} duration). In addition, a series \unit[22]{$\Omega$} resistor is used to improve the response to the Charged Device Model, a tradeoff between protection and added noise. Inputs can be left unconnected without affecting the performance of any other channel.
The effective channel gain is \unit[80]{mV/fC} at nominal bias currents and process parameters with a signal response peaking time of around \unit[22]{ns}. The Full Width at Half Maximum (FWHM) of the response is around \unit[35]{ns} and the overall shaping function is close to a second order CR-RC filter. In terms of frequency response, if the AC-coupling between booster and shaper as well as the limitation of the preamplifier bandwidth are neglected, the front-end channel can be approximated by a bandpass filter with a center frequency of about \unit[15]{MHz} and a roll-off of around \unit[20 or 40]{dB/decade} for frequencies below or above, respectively. The time walk of the discriminator is below \unit[16]{ns} for nominal threshold settings and \unit[50]{\%} of a minimum ionising particle’s (MIP) signal after the total expected radiation dose.
This timing performance guarantees correct data association to a given BC for the worst
case of signal charge sharing, where the charge is shared equally between neighbouring
strips, and at the end of lifetime of the experiment. The response linearity
is better than \unit[5]{\%} for signal charges from 0 to \unit[-4]{fC}, and
better than \unit[15]{\%} from 0 to \unit[-8]{fC}. Expected noise is
\unit[850]{e$^-$} for SS sensors, and \unit[1150]{e$^-$} for LS sensors.
Double-pulse resolution for a \unit[-3.5]{fC} signal followed by a
\unit[-3.5]{fC} signal is \unit[$\leq75$]{ns}, and maximum recovery time for
a \unit[-80]{fC} signal followed by a \unit[-3.5]{fC} signal is \unit[200]{ns}.
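For orientation, the quoted equivalent noise charges can be converted to voltage noise at the discriminator input using the nominal \unit[80]{mV/fC} gain; the following Python sketch (ours, purely illustrative) performs the conversion:
\begin{verbatim}
# ENC (electrons) -> rms noise voltage at 80 mV/fC (illustrative).
E_FC = 1.602e-4            # one electron charge in fC
GAIN_MV_PER_FC = 80

for name, enc in (("SS", 850), ("LS", 1150)):
    q_fc = enc * E_FC
    print(name, round(q_fc, 3), "fC ->",
          round(q_fc * GAIN_MV_PER_FC, 1), "mV rms")
# SS 0.136 fC -> 10.9 mV rms
# LS 0.184 fC -> 14.7 mV rms
\end{verbatim}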
For a \unit[-1]{fC} signal, the gain sensitivity to power supply voltage is
\unit[$<1$]{\%} per \unit[100]{mV}. The chip features a low dropout (LDO) linear
voltage regulator that, in addition to improving the rejection of power supply noise at the front-end and providing an accurate voltage to the chip independent of any voltage drops on the hybrid, allows the analogue core operating voltage to be set
to within \unit[$\pm20$]{mV} of the target voltage of \unit[1.2]{V}. In
addition, the preamplifier input transistor and feedback bias currents can
be tuned to compensate for process variation with internal 5-bit Digital-to-Analogue Converters (DACs) referenced to an internal bandgap circuit (\unit[$592\pm40$]{mV}).
A common threshold level is set with an on-chip 8-bit DAC and is distributed to
the 256 input channel discriminators by means of current mirrors. Due to
process variation, there is about \unit[20]{mV} rms of threshold spread
between channels, so a 5-bit TrimDAC is provided for each channel. The
magnitude (range of the DAC) of this tuning can also be adjusted.
In this way, the
inter-channel threshold spread can be reduced into the single-millivolt
range (see section~\ref{chartests}).
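A toy model of this trimming step is sketched below (ours; the step size and the trimming algorithm are assumptions for illustration, not the ABC130 calibration procedure). Each channel's threshold offset is cancelled by the nearest 5-bit trim code, leaving a residual spread dominated by the DAC step:
\begin{verbatim}
# Toy per-channel threshold trimming with a 5-bit TrimDAC.
import random, statistics

random.seed(0)
STEP_MV, BITS = 4.0, 5                               # assumed range setting
offsets = [random.gauss(0, 20) for _ in range(256)]  # ~20 mV rms

def trim(offset_mv):
    code = round(offset_mv / STEP_MV) + (1 << (BITS - 1))
    code = max(0, min((1 << BITS) - 1, code))        # clip to 0..31
    return offset_mv - (code - (1 << (BITS - 1))) * STEP_MV

residuals = [trim(o) for o in offsets]
print(statistics.pstdev(offsets))    # ~20 mV before trimming
print(statistics.pstdev(residuals))  # ~1 mV after trimming
\end{verbatim}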
The output of the threshold comparators is then sampled on the rising
edge of the BC clock and shifted into the L0 Buffer (FIFO). A mask register is
supplied to force a zero into the pipeline and allow skipping of any noisy
channels.
Each channel includes the ability, selectable by a calibration enable bit, to inject a tunable
calibration pulse to simulate a strip ``hit''.
This capability can be used to calibrate the performance of the full tracker or of a single module on a per-strip level, or as a Built-In Self-Test (BIST) function for the inputs during testing of
wafers.
Each channel receiving a calibration pulse is connected to a \unit[60]{fF} \unit[$\pm1$]{\%} capacitor
(\unit[$\pm$10]{\%} over full production skew) through a CMOS switch. The injected charge is defined by a voltage set using an 8-bit calibration DAC. A fixed-width calibration pulse (8BC $\approx$
\unit[200]{ns}), generated by a chip-control command, activates a chopper circuit that applies the voltage to provide a controlled amount of charge (0 to \unit[-9]{fC}) to the input of each channel where the calibration pulse is enabled. The polarity of the
calibration pulse is also controllable by a bit in the control registers. The relative phase of the
calibration pulse can be varied using a programmable strobe delay circuit from 0 to \unit[80]{ns} so the
position of the pulse relative to the BC can be tuned for optimal results.
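The injection arithmetic is straightforward; the sketch below (ours, assuming the 8-bit DAC spans the full 0 to \unit[-9]{fC} range linearly, which is an illustrative assumption) gives the full-scale voltage and the charge per DAC code:
\begin{verbatim}
# Calibration charge injection arithmetic (illustrative).
C_CAL_FF = 60.0          # injection capacitor in fF
Q_MAX_FC = 9.0           # maximum injected charge magnitude in fC
CODES = 255              # 8-bit DAC

v_full_scale_mv = Q_MAX_FC / C_CAL_FF * 1000   # 150 mV
q_per_code_fc = Q_MAX_FC / CODES               # ~0.035 fC
print(v_full_scale_mv, q_per_code_fc, q_per_code_fc / 1.602e-4)
# -> 150 mV full scale, ~0.035 fC (~220 electrons) per code
\end{verbatim}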
\paragraph{Power and Ground}
The ABC130 has independent digital and analogue power domains, each with its own power (DVDD and AVDD) and ground (GNDD and GNDA) pad connections, and each has its own on-chip programmable Low-DropOut (LDO) regulator that can be used to provide the required regulated \unit[$+1.2$]{V} core voltage. Options were also included on the chip to allow for the application of the core voltages using external connections. By providing a sufficient number of power pads connected to the outputs of the LDOs (VDDD and VDDA) that are normally connected to decoupling capacitors, these could also safely be used as power inputs if the LDO's input pads are not being powered. Furthermore, a voltage-controlled high current shunt circuit was included to allow for series powering of the chips. All of these modes of powering the ABC130 were tested, and it was decided to use the LDOs as voltage regulators to provide core voltages for both the analogue and digital portions of the chip when used on modules.
As part of the front-end pads array, there are four ground pads on each end to provide a ground reference for the sensor's HV decoupling and guard ring. Furthermore, there are three special sets of ground pads: analogue ground pads specifically for the front-end (GNDIT), and one pad each for the digital and analogue ESD circuit returns. On modules, all of these are wire-bonded to the respective digital or analogue ground planes of the hybrid.
The LDOs can be controlled by programming registers: each has its own Control
Enable Bit and a register field that allows them to be tuned to
\unit[$1.20\pm0.02$]{V} in 16 steps. If the Control
Enable Bit is not set, then
the output voltages of the LDOs (VDDD and VDDA) applied to the chip's core are
the voltages applied to their inputs (DVDD and AVDD) minus the minimal drop
across the LDOs. The chip is fully functional in this state; however, the LDOs
should be tuned to \unit[1.2]{V} for proper operation during data taking. In
addition to the programmable control, the chip has dedicated pads that can be
used to disable the LDOs in case the chip is to be powered by externally
provided core voltages or using the shunt circuit. The default state on power
on with no connection to the pads is to have the LDOs disabled but controllable.
The nominal pre-irradiation current at \unit[$+1.2$]{V} is \unit[40]{mA} for the digital portion of the chip, and \unit[70]{mA} for the analogue circuitry. However, due to the Total Ionising Dose current ``bump'' (TID bump, see section~\ref{sec:TIDBUMP}) experienced by this CMOS technology, the amount of digital current drawn by the chip will increase by a factor $\mathcal{O}(\unit[100]{\%})$
with increasing radiation dose before falling back to near pre-irradiation levels as the dose moves out of the TID bump range (around \unit[1]{Mrad}). To allow data taking to be consistent before, through, and beyond the TID bump operating region, the shunt circuit can be used to draw the difference between the expected maximum TID Bump current, and the current being drawn by the chip at the current TID. As the current increases through the TID Bump, the shunt current can be reduced so the overall current remains constant. Similarly, the shunt current can be increased again as the TID bump current begins to decrease, again maintaining a constant operating temperature and current draw as the TID increases, and helping to ensure comparable results for measurements taken throughout the irradiated operating regions of this CMOS technology. For the next chip generation, the ABCStar, a procedure for its pre-irradiation has been developed to pass the TID bump before the ASICs are assembled into modules. The shunt circuit is disabled by tying the Shunt Control analogue input to ground.
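The compensation scheme can be summarised as keeping the sum of chip and shunt current at a fixed budget; the Python sketch below (ours; the bump curve is a toy parametrisation, not measured ABC130 behaviour) illustrates the idea:
\begin{verbatim}
# Constant-current TID-bump compensation with the shunt (toy model).
import math

I_NOMINAL_MA = 40.0                  # pre-irradiation digital current
I_BUDGET_MA = 2.0 * I_NOMINAL_MA     # budget at the assumed bump peak

def chip_current(dose_mrad):
    # toy bump peaking near 1 Mrad; shape is illustrative only
    return I_NOMINAL_MA * (1 + math.exp(-math.log10(dose_mrad) ** 2))

for dose in (0.01, 0.1, 1.0, 10.0, 100.0):
    i_chip = chip_current(dose)
    i_shunt = max(0.0, I_BUDGET_MA - i_chip)
    print(dose, "Mrad:", round(i_chip, 1), "mA chip,",
          round(i_shunt, 1), "mA shunt")
\end{verbatim}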
\paragraph{Digital Input and Output}
\label{para:abc130_dig_io}
There are two types of digital I/O pads on the ABC130: low-voltage
single-ended CMOS I/Os for low speed signals (LVCMOS~\cite{LVCMOS}), and
high-speed differential SLVS I/O with a nominal \unit[600]{mV} common-mode
voltage and \unit[400]{mV} differential voltage (SLVS~\cite{SLVS}).
Generally, static I/O uses the low-voltage CMOS single-ended signalling, and
all clocks and data I/O use high-speed differential SLVS signalling. Most
LVCMOS I/O is left without wire-bonds when assembled onto a module, with the
exception of the RSTB, and a
5-bit Chip ID (see section~\ref{subsec:assem_hyb}). Clock and command lines are implemented in a common-bus multi-drop configuration on the hybrids. Any command communications to the ABC130s contain a Chip ID and are only acted on
by the chip whose ID matches the one in the command (with the exception that
ID = 31 is a broadcast address and all ABC130s must respond to those
commands). Similarly, all packets output by an ABC130 include its Chip ID to
allow any packets it generates to be associated with that particular chip.
Chip IDs need only be unique within a group of ABC130 ASICs read out by the
same HCC.
In addition to the LVCMOS signals used during operation on a module, a number
of other pads were provided as experimental features, as risk mitigation, or
to assist in testing die before dicing the manufactured wafer of dice. These
include:
\begin{itemize}
\item pads to disable the digital and/or analogue LDOs (active high with CMOS pull-downs)
\item a Termination Enable pad (active high with a CMOS pull-down) that can be used to provide on-chip \unit[75]{$\Omega$} (\unit[82]{$\Omega$} max.) termination for the SLVS receivers
\item the ``abc up'' pad (active high with CMOS pull-down) that can be used to invert the sense of the internal reset tree
\item 5 pads to implement a JTAG~\cite{JTAG} test interface (Scan\_Enable, SDI\_CLK, SDI\_BC, SDO\_CLK, and SDO\_BC).
\end{itemize}
The ABC130 will operate properly with any or all of these pads left unconnected.
All dynamic operations of the chip use high-speed differential SLVS I/O (each logical signal has both a positive and negative pad to provide differential input, output, or I/O as appropriate):
\begin{itemize}
\item two clock inputs, BC and RCLK (Readout CLocK)
\item two command and trigger inputs, COM\_L0 and L1\_R3
\item a set of bi-directional data and flow-control signals: one set for the ``left'' side of the chip, DATAL and XOFFL; and one set for the ``right'' side of the chip, DATAR and XOFFR
\end{itemize}
The \unit[40]{MHz} nominal differential BC clock is provided to all the ABC130s
on a module via the HCC130 and is used to trigger sampling of the
front-end inputs.
The BC is also used as the clock for both the COM\_L0 and L1\_R3
differential Dual-Data Rate (DDR) inputs with effective input data rates of 2
times BC (\unit[80]{Mbps} nominal). Each input is split into two \unit[40]{Mbps}
signals. On the rising edge of BC, the COM\_L0 is latched as the command data
stream to the ABC130s; and on the falling edge of BC, that signal is latched
as the L0 trigger. Similarly, the L1 trigger
data stream on L1\_R3 is latched on the rising edge of BC, and the R3 trigger
data stream is latched on the falling edge of BC. Finally, the HCC130
provides the differential RCLK signal at up to four times the rate of the BC
(\unit[160]{MHz} nominal) that is used to clock data on the DATAL/R digital
readout pads and the XOFFL/R flow-control pad signals.
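The DDR demultiplexing can be sketched as follows (Python, illustrative; each stream is represented as a list of (rising, falling) sample pairs):
\begin{verbatim}
# Each 80 Mbps DDR input carries two 40 Mbps streams: one latched on the
# rising edge of BC and one on the falling edge.
def demux_ddr(pairs):
    rising  = [r for r, _ in pairs]
    falling = [f for _, f in pairs]
    return rising, falling

com_l0_samples = [(1, 0), (0, 1), (1, 1)]            # illustrative values
commands, l0_triggers = demux_ddr(com_l0_samples)    # COM on rise, L0 on fall
l1_triggers, r3_triggers = demux_ddr([(0, 1), (1, 0)])  # L1 rise, R3 fall
print(commands, l0_triggers)                         # [1, 0, 1] [0, 1, 1]
\end{verbatim}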
The bi-directional signals are configured in pairs, so that when DATAL is an
output, DATAR is an input and vice versa. Similarly when DATAL is an output,
XOFFL is an input and vice versa as in table~\ref{tab:abc130_bidirections}.
A single configuration register bit determines whether the ABC130 is
operating in a ``right to left'' mode or in a ``left to right'' mode.
When configured as outputs, the differential output current of the drivers is
programmable between \unit[1]{mA} and \unit[7]{mA}, in 8 steps.
\begin{table}[htbp]
\centering
\begin{tabular}{l|l|l|l}
\textbf{Signal} & \textbf{Side of ASIC} & \textbf{I/O in L-R} & \textbf{I/O in R-L} \\
\hline
DATAL & Left & Input & Output \\
DATAR & Right & Output & Input \\
XOFFL & Left & Output & Input \\
XOFFR & Right & Input & Output \\
\end{tabular}
\caption{Bidirectional signals, the side of the ASIC on which they are located, and their I/O direction in ``left to right'' (L-R) and ``right to left'' (R-L) modes}
\label{tab:abc130_bidirections}
\end{table}
When in ``right to left'' mode, DATAR is an input receiving data from the
neighbouring chip to the right, which is forwarded through to the DATAL
output. XOFFL is an input (receiving flow-control signalling from the
neighbour to the left) and XOFFR is an output (providing flow-control to the
neighbour to the right). When in ``left to right'' mode, the data,
flow-control, and I/O directions are reversed.
The ABC130s on a module are
connected in a daisy-chain with the DATAR and XOFFR on one chip connected to
the DATAL and XOFFL of the next chip to its ``right''.
The farthest ``left'' and farthest ``right'' ends of the daisy-chain are
connected to the HCC130 (see section~\ref{sec:HCC}), which can
receive data and/or provide flow-control signals from either of the ends of
the daisy-chain. This architecture allows part of the daisy-chain to be
configured as ``left to right'' and the other part as ``right to left'' to
handle the case of a single failed ABC130 anywhere in the daisy-chain. All
chips to the ``left'' of a faulty ABC130, if any, are configured in the
``right to left'' direction; and all chips to its ``right'', if any, are
configured in the ``left to right'' direction. This way, maximal physics data
can be read out from a partially operational module (see
figure~\ref{fig:abc130_daisychain}).
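The fault-routing rule can be sketched as follows (Python, illustrative; chain positions are indexed from the ``left'', and the direction chosen for a fully working chain is an arbitrary assumption here):
\begin{verbatim}
# All chips to the "left" of a failed ABC130 run right-to-left; all
# chips to its "right" run left-to-right.
def directions(n_chips, failed_index=None):
    if failed_index is None:
        return ["R-to-L"] * n_chips          # assumed default, healthy chain
    return ["R-to-L" if i < failed_index else
            "L-to-R" if i > failed_index else "failed"
            for i in range(n_chips)]

print(directions(10, failed_index=3))
\end{verbatim}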
The ABC130 was manufactured on a Multi-Project Wafer (MPW) run along with an unrelated chip. There were two
different versions of the ABC130 in each reticle on the wafer: the ABC130\_0 and the ABC130\_1, which
were identical in function except the ABC130\_1 additionally had experimental circuitry for a Fast
Cluster Finder (FCF) and additional pads to support that functionality. The FCF was designed to provide
prompt, BC-synchronous cluster position data to an external device that could be used to correlate
clusters between tracking layers and pass high $p_{\textrm{T}}$ (transverse momentum) coincidences to a trigger processing
unit. Ultimately, this functionality was not used on the modules built with ABC130 chips, and was not
tested during wafer testing either. The operation of this circuitry is beyond the scope of
this article, but can be found in the ABC130 Specification~\cite{ABC130Spec}.
\paragraph{Chip operation}
\label{sec:ABCop}
For normal operation, after the chip is reset, the registers on each ABC130 are initialised using the command stream of the COM\_L0 DDR input with values that have been determined to provide nominally tuned settings, and all relevant mode bits are set to put the chip into the desired operational configuration. These settings include:
\begin{itemize}
\item the LDO tuning value required to provide \unit[1.2]{V} core voltages to the analogue and digital circuit domains
\item all of the front-end control DACs and TrimDACs to correct for process and inter-channel variation
\item the channel mask registers to disable any known faulty input channels
\item the required SLVS driver currents
\item and the threshold value.
\end{itemize}
Setting the threshold to an optimal value for each ABC130 - which is distributed to all of its 256 input channels, each of which is fine-tuned by the per-channel TrimDACs - is critical, as it determines the front-end's sensitivity to signals from the sensor strips it is reading out, and its susceptibility to noise from the sensor and the front-end circuitry. The threshold can be set based on the requirements of a particular data-taking session and determines the ``hit'' rate (which can include both signal and noise), and thus the maximum data transmission bandwidth from the module during operation. There are features that can limit the maximum data rate of the ABC130s, but using these results in the discarding of potential hits (see below).
The decision to record a hit or no-hit is taken on the rising edge of the BC
clock. The state of all 256 input channels of every ABC130 on a module will be
sampled into its L0 Buffer, a 256-deep FIFO (one 256-bit input vector per
entry). The state of each channel is formed by the logical AND of its input
comparator output (reading out the sensor strip it is wire-bonded to) and the
inverse of the associated Mask Register bit (a 1 bit in a Mask Register will
force its associated channel to always read as the 0, or ``no hit'', state).
Each of these 256-bit input vectors is pushed from the front-end onto the L0
Buffer FIFO along with the value of an 8-bit, command-resettable, BC counter
(BCID).
Because this process is continuous, a sample will
remain in the L0 Buffer to be read out for a maximum of \unit[6.387]{$\upmu$s}
before falling off the far end of the FIFO.
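The per-BC sampling rule and the FIFO residence time can be illustrated with a short sketch (Python; 4 channels shown for brevity, and a nominal \unit[25]{ns} BC period is assumed, the quoted \unit[6.387]{$\upmu$s} corresponding to the actual LHC clock):
\begin{verbatim}
from collections import deque

l0_buffer = deque(maxlen=256)        # oldest entries fall off the far end

# Channel state = comparator output AND NOT mask bit; the 256-bit vector
# is pushed onto the FIFO together with the 8-bit BCID.
def sample(comparators, mask_bits, bcid):
    state = [c & (m ^ 1) for c, m in zip(comparators, mask_bits)]
    l0_buffer.append((bcid & 0xFF, state))

sample([1, 0, 1, 1], [0, 0, 1, 0], bcid=42)
print(l0_buffer[-1])     # (42, [1, 0, 0, 1]) - masked channel reads 0
print(256 * 25e-9)       # ~6.4e-06 s maximum residence at 40 MHz
\end{verbatim}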
The next step is to capture the data from the L0 buffer into the L1 buffer.
To save input vectors for possible later readout, an L0 (first-level)
trigger accept (L0A) needs to be issued; it is signalled by a logic-level 1
on the L0 stream of the COM\_L0 DDR input.
When an L0A is received, one ``event'' from the L0 Buffer is transferred to the
L1 Buffer. The Latency is the fixed number of BCs between when the front-end
is sampled and when an L0A to store that sample is received by the module
from the trigger system. It is configured by setting the 8-bit Latency value
in the chip's control register set, and specifies the address in the L0
Buffer of the centre of a 3-BC-long ``event''.
As shown in figure~\ref{fig:abc130_event_xfer}, an entry in the L1 Buffer consists of three 256-bit
memory blocks, which will be used to store the L0 Buffer entries for:
the previous bunch crossing, the bunch crossing of interest, and the next
bunch crossing. All three of the
entries copied to the L1 Buffer (each comprising the 256-bit input vector and
the associated 8-bit BCID) are further tagged with an
8-bit Local L0 IDentifier Counter (Local\_L0ID Counter) value. This whole event
is stored at the address in the L1 Buffer specified by the Local\_L0ID Counter
after it is incremented by one. Like the BC Counter, the Local\_L0ID Counter
is settable to a known initial value (usually \$FF) when data taking begins,
so that the trigger system and the ABC130s are synchronized and the first
L0 Trigger writes into location 0 of the L1 Buffer. Since the L1
Buffer can store 256 entries, for a \unit[500]{kHz} L0 Trigger rate, for
example, it can hold the data for an event for \unit[512]{$\upmu$s} before it
is overwritten with another event.
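A sketch of this addressing (Python, illustrative; the wrap-around follows from the 8-bit counter):
\begin{verbatim}
# The Local_L0ID counter starts at $FF and is incremented before each
# write, so the first L0A stores its event at location 0; entries are
# overwritten after 256 further L0As.
local_l0id = 0xFF
l1_buffer = [None] * 256

def on_l0a(event):
    global local_l0id
    local_l0id = (local_l0id + 1) & 0xFF
    l1_buffer[local_l0id] = (local_l0id, event)

on_l0a("first event")
print(l1_buffer[0])      # (0, 'first event')
print(256 / 500e3)       # 0.000512 s entry lifetime at a 500 kHz L0 rate
\end{verbatim}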
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{figure/abc130_event_xfer.pdf}
\caption{Transfer of event from L0 Buffer to L1 Buffer on receipt of L0 Trigger on COM\_L0 input.}
\label{fig:abc130_event_xfer}
\end{figure}
The final stage is to read out the event data from the L1 buffer.
To read out the physics data of an event, an L1 or R3 Trigger is issued by the trigger system via the
L1\_R3 DDR input (through the HCC130). Each of the trigger commands consists
of a three-bit header - 110 for L1 and 101 for R3 - followed by the 8-bit L0ID value of the event to read
out. The L0ID value sent with the trigger is simply the number of L0As sent
since the Local\_L0ID counter was reset, and corresponds to the memory
address of the L1 Buffer that will be read out.
Depending on whether an L1 or R3 Trigger was issued, the event is sent to the
L1-DCL (Data Compression Logic), or the R3-DCL, which generate a sequence of
fixed length data packets.
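Decoding a trigger command can be sketched as follows (Python, illustrative; the bit order of the L0ID field, most significant bit first, is an assumption):
\begin{verbatim}
# A trigger command is a 3-bit header (110 = L1, 101 = R3) followed by
# the 8-bit L0ID addressing the L1 Buffer entry to read out.
def parse_trigger(bits):
    kind = {(1, 1, 0): "L1", (1, 0, 1): "R3"}[tuple(bits[:3])]
    l0id = int("".join(map(str, bits[3:11])), 2)
    return kind, l0id

print(parse_trigger([1, 0, 1,  0, 0, 0, 0, 0, 1, 0, 1]))   # ('R3', 5)
\end{verbatim}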
The R3-DCL produces only a single output packet, and is intended to provide a
quick snapshot of whether or not any clusters were detected for a particular
bunch crossing event. Conversely, the L1-DCL produces a comprehensive output of all cluster data for the relevant event where the hit data matches a specified
pattern, and can result in many packets being queued for transmission.
Both DCLs find clusters of hits by searching one set of 128 channels first,
followed by the other set of 128 channels. A cluster never ``wraps'' from one
set to the other, so clusters are found only between strips in the same
sensor region. The two sets are recorded in the output packet as channels
0-127 and 128-255.
For an R3 Trigger, the R3-DCL will generate a single output packet flagging
whether there are no hits, some hits (1-4), or many hits (more than 4).
The R3-DCL can be configured through the
``EN\_01'' bit in the configuration registers to either define a ``hit'' by looking for a hit only in the L1 Buffer block corresponding to the selected BC
(level mode), or to look for a level change from 0 to 1 between the previous
BC and the selected input vector (edge mode).
The R3-DCL only registers clusters with hits in at most three channels;
larger clusters are ignored. The location of the first hit
is reported for clusters with width of 1 or 2, and the location of the central
strip when the cluster width is 3.
The R3-DCL will report the locations of a maximum of 4
hit clusters per event, and will set an overflow bit in the output packet if
there are more than 4 valid
clusters~\cite{ABC130Spec}. The format of the R3 packet is detailed in
figure~\ref{fig:abc130_r3_packet}.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{figure/abc130_r3_packet.pdf}
\caption{Format of Regional Readout Request (R3) ABC130 Output Packet; the number in parentheses indicates the corresponding number of bits.}
\label{fig:abc130_r3_packet}
\end{figure}
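These clustering rules can be condensed into a short sketch (Python, illustrative, for one 128-channel set; the packet formatting itself is omitted):
\begin{verbatim}
# Contiguous runs of hits up to 3 channels wide form clusters (wider
# runs are ignored); width 1-2 report the first channel, width 3 the
# centre; at most 4 clusters are reported, any more set the overflow bit.
def r3_clusters(hits):
    positions, overflow, i = [], False, 0
    while i < len(hits):
        if hits[i]:
            j = i
            while j < len(hits) and hits[j]:
                j += 1
            width = j - i
            if width <= 3:
                pos = i + 1 if width == 3 else i
                if len(positions) < 4:
                    positions.append(pos)
                else:
                    overflow = True
            i = j
        else:
            i += 1
    return positions, overflow

hits = [0] * 128
hits[10] = hits[11] = 1               # width 2 -> reports channel 10
hits[20] = hits[21] = hits[22] = 1    # width 3 -> reports centre, 21
print(r3_clusters(hits))              # ([10, 21], False)
\end{verbatim}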
For an L1 Trigger, the L1-DCL reports clusters in one of two formats as selected by the ``mcluster'' bit
in the configuration registers: either L1-3BC mode where information on all 3 recorded BCs are reported; or L1-1BC mode, where only cluster patterns are reported (see figure~\ref{fig:abc130_l1_packets}).
For both modes, a compression mode can be configured to choose which clusters
to report based on the pattern of bits in the 3 recorded BCs. There are
two patterns for use during normal data taking, X1X (level) and 01X (edge), where the X indicates ``don't care''.
A further ``any hit'' mode, intended for detector alignment, matches any of
the patterns 1XX, X1X, or XX1. A final XXX mode is intended only for chip
testing. Clusters are scanned for in the two sets of 128 channels and formed
according to the selected mode.
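The four patterns can be written as predicates over the three recorded BCs (a Python sketch; arguments are the bits for the previous, selected, and next BC):
\begin{verbatim}
PATTERNS = {
    "X1X": lambda p, c, n: c == 1,              # level mode
    "01X": lambda p, c, n: (p, c) == (0, 1),    # edge mode
    "any": lambda p, c, n: 1 in (p, c, n),      # "any hit" (alignment)
    "XXX": lambda p, c, n: True,                # chip testing only
}

print(PATTERNS["01X"](0, 1, 0))   # True  - a 0 -> 1 transition
print(PATTERNS["01X"](1, 1, 0))   # False - a level, not an edge
\end{verbatim}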
When a hit is found that matches the selected pattern, that bit forms the
first bit of the cluster and its location is used to report the start of the
cluster in the packet. For the L1-3BC mode, that location is used as the
channel address reported in the packet. The address is followed by the 3 bits
for the hit on that channel (from the 3 recorded BCs) and by the
corresponding 3 bits for each of the next three channels (whether or not they
have any hits in them). The DCL then moves to the following channel to check
for a new cluster start. Because 4 channels are reported per L1-3BC packet, a
total of 64 packets could potentially be created if every 4th channel had a
hit that started a cluster.
For the L1-1BC mode, up to three clusters can be reported per packet, each
comprising the 8-bit cluster start location and the one-bit hit status of
each of the following three channels (3 bits), based on whether that channel
matched the hit criteria. As in the L1-3BC mode, the next cluster is searched
for after the last bit of the previously reported 4 channels of cluster data.
Because 3 clusters can be reported in each packet, up to 22 L1-1BC packets
could be generated if every 4th channel had a hit that started a cluster.
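These packet-count limits follow from simple arithmetic, as this short check illustrates:
\begin{verbatim}
import math

channels = 256
clusters_max = channels // 4            # a cluster start every 4th channel
print(clusters_max)                     # 64 L1-3BC packets (1 cluster each)
print(math.ceil(clusters_max / 3))      # 22 L1-1BC packets (3 clusters each)
\end{verbatim}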
Due to the potentially large number of packets generated for an L1 Trigger
event with many hits, the possibility of saturating the readout with noise or
hits from the sensors when the thresholds are set low needs to be managed
carefully. If a large number of hits is expected, the L1 Trigger rate needs
to be controlled carefully to ensure no loss of data in those situations. A
further feature of the ABC130 allows the number of packets generated to be
capped at some specified number less than 64 for L1-3BC mode or 32 for L1-1BC
mode. While this cap could result in data loss, the assumption is that such
high-occupancy data would not be useful.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{figure/abc130_l1_packets.pdf}
\caption{Format of Second Level (L1) Trigger ABC130 Output Packets; the number in parentheses indicates the corresponding number of bits.}
\label{fig:abc130_l1_packets}
\end{figure}
As packets are created by the DCL, they are pushed onto the appropriate
flow-controlled FIFOs and are then pulled from the FIFOs and serialized
according to
their relative priority. In addition to FIFOs for the L1-DCL and R3-DCL,
there is a
FIFO to queue configuration register reads, and a separate FIFO to output the
reading of a special high-priority status register (Register \$3F). These are
output in order of highest to lowest priority: high-priority register reads,
R3-DCL, L1-DCL, and then regular register reads at the lowest priority.
Furthermore, packets that are being transmitted through the ABC130 from an
adjoining ABC130 or HCC130 are interleaved into the output data stream based on
the setting of 4 Pry (priority) bits in the configuration registers. If there
are packets from the internal data sources to send, Pry sets the number of
through-packets that might be forwarded before one internally generated packet
needs to be sent.
Thus, if Pry is set to 0, then a through-packet will
always be sent before any internally generated packets. If Pry is set to 8, for
example, then 8 through packets (if present) will be forwarded before sending
the next internally generated packet. If the through-packet FIFO (which
is 4 deep) is about to be filled, the XOFF signal is asserted to the
upstream chip to prevent the FIFO from overfilling and through-packets
from being lost. Similarly, if the internal FIFOs are about
to fill up, the blocks that are sending data to them receive internal flow-control
signalling and must stop operation until FIFO space is available for them to
continue sending (see figure~\ref{fig:abc130_outmux} on the FIFO and priority
structure).
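The serialisation order can be sketched as a simple fixed-priority arbiter (Python, illustrative; the FIFO names are hypothetical):
\begin{verbatim}
# Highest to lowest: high-priority register reads, R3-DCL, L1-DCL,
# regular register reads.
PRIORITY = ("hp_reg", "r3", "l1", "reg")

def next_packet(fifos):
    for name in PRIORITY:
        if fifos[name]:
            return name, fifos[name].pop(0)
    return None

fifos = {"hp_reg": [], "r3": ["r3pkt"], "l1": ["l1pkt"], "reg": ["regpkt"]}
print(next_packet(fifos))  # ('r3', 'r3pkt') - R3 beats L1 and register reads
\end{verbatim}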
\begin{figure}
\centering
\includegraphics[scale=0.7]{figure/abc130_outmux.png}
\caption{ABC130 Packet Output Multiplexer with Priority Control}
\label{fig:abc130_outmux}
\end{figure}
\paragraph{Wafer testing}
Since \mbox{ATLAS} ITk strip tracker modules have up to 12 ABC-style chips connected to one HCC, if even one chip were to fail due to a manufacturing defect during module testing, it would require a risky and complex re-work effort to attempt to recover the module. While the data-flow architecture allows for the routing of data around a failed ABC130, 256 channels of data would still be lost. If that repair process failed or damaged the module, that one failed chip could result in the costly loss of an entire hybrid or module. As such, each ABC130 die undergoes an extensive testing and characterization process while still on the wafer using specialized wafer probing equipment and custom software. The ABC130s on the wafer are categorized into good dice (grade A), dice usable in the lab or in the event of a parts shortage (grade B), and bad dice (which can be used as mechanical samples for assembly testing and tooling development).
ABC130 wafer testing was conceptualised, developed, and implemented using a commercial semi-automated wafer-probing system, a commercially manufactured custom probe card, a custom interface card and a commercial off-the-shelf (COTS) FPGA development board. The use of COTS components resulted in a much faster test system development compared to the custom electronics used for wafer testing during the construction of the \mbox{ATLAS} Semiconductor Tracker~\cite{wafer_testing}.
The wafer test software was integrated with the module test software suite: the ITk Strips DAQ (ITSDAQ, formerly SCTDAQ), see section~\ref{sec:software}. In the tests run on each ABC130 die on every wafer, the wafer is manually placed on the wafer-probing station's platen and positioned using the probe-station's microscope's digital camera. Once the wafer is aligned, the wafer test software can step automatically between each of the ABC130s on the wafer and run all necessary tests on it.
The tests begin with basic integrity checks that look for gross failures of the die in terms of power-supply currents and ensure proper contact between the probe card and all of the die's pads. The tests then conduct a number of further sanity checks including:
\begin{itemize}
\item setting registers to default values and verifying that the power supply currents change appropriately
\item tuning the chip's LDOs and front-end DACs
\item scanning all other DACs
\item a series of digital tests to check the functioning of the digital portion of the chip
\item a series of complex functional tests to verify the chip's proper operation from front-end to data output
\end{itemize}
These final tests include tuning the chip's strobe delay value to provide optimal stimulus using the built-in calibration pulses, and then running a comprehensive 3-point gain test where each channel's front-end response to three different calibration pulse charges (\unit[0.5]{fC}, \unit[1.0]{fC}, and \unit[1.5]{fC}) is plotted to determine the gain response and noise level of all the input channels (see section~\ref{chartests}). These tests also verify the functioning of triggering blocks, the L0 and L1 Buffers, the cluster finding and sparsification blocks, and the data transmission I/O blocks. The data is analysed in real time by ITSDAQ and the parts undergo a preliminary categorisation at that phase. Further analysis is performed offline on the data produced by the wafer probing routines where comparative analyses are also done between dice and between wafers, and the die categorisation can be updated as needed at that phase.
In preparation for the next generation chip, the ABCStar, and to ramp up towards full production testing, a second wafer test site was established. Whereas previously a system primarily developed in-house had been used for wafer testing, the second test site was established at an outside company to use their commercial, fully automated, wafer-probing stations and associated industrial test software infrastructure.
\subsubsection{HCC130}
\label{sec:HCC}
The Hybrid Chip Controller, HCC130, the first \mbox{ATLAS} strips prototype
chip controller, was submitted for fabrication in August 2014. The
99-pad \unit[$4.7\times2.96$]{mm$^2$} ASIC (see
figure~\ref{fig:HCC130PadFrame}) was designed to provide the
interface between the hybrid-mounted front-end ABC130 and the off
detector electronics through the GBTx~\cite{GBTX} using LVDS-like low
voltage differential drivers and receivers. It also contained an
early version of the Autonomous Monitor that was functionally validated
and eventually moved to the AMAC ASIC (see section~\ref{sec:AMAC}). HCC130 receives the
\unit[40]{MHz} bunch crossing (BC) clock and two custom-protocol control
signals from multi-drop buses driven by the
GBTx. Each of these control signals carries two independent
logical streams time-multiplexed into one.
Data of all types sent from each module are transmitted point to
point by the HCC130 to the GBTx at 160 or \unit[320]{Mbps}.
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{figure/HCC130PadFrame24April2014}
\caption{HCC130 Pad Ring and floorplan}
\label{fig:HCC130PadFrame}
\end{figure}
The L0\_COM physical control signal encodes L0, a
beam synchronous trigger that stores the ABC130 pipeline delayed data
into a 256-word deep data buffer from which data are requested for
readout. The second logical channel of L0\_COM signal is a
priority based variable length Command (COM) protocol that provides
control to set operational modes in both the HCC130 and ABC130 ASICs
and initiates requests for data from internal registers in the HCC130
and ABC130 ASICs. The difference in naming with respect to the ABC130
reflects a subtle difference in function: the HCC130 modifies the COM signal
before forwarding it to the ABC130s, masking out commands intended for
ABC130s attached to other HCC130s on the same bus.
The R3s\_L1 physical signal provides two triggers used to request readout of
data from the ABC130. The L1 is broadcast directly to the ABC130s, requesting
readout of data from one of the 256 memory locations of the ABC130's L1 Buffer.
In addition to the L0ID identifying the memory location to read out, the R3s
signal contains an extra 14 bits, used to propagate the signal only to
HCC130s with matching addresses. In this mode only addresses 2-29 are
available. If the address matches, the mask bits are stripped and the
remainder is broadcast to the ABC130s.
The HCC130 utilises a copy of the CERN ePll block designed
for the GBTx~\cite{GBTX} to generate low jitter, 40, 80, 160 and \unit[320]{MHz}
clocks using the incoming multi-drop \unit[40]{MHz} BC as a
reference. The ePll is used internally on the HCC130 and provides the
hybrid ABC130s with a regenerated, programmable, phase delayed
\unit[40]{MHz} clock for phasing event data properly within the beam
crossings and related pipeline control. It also provides the ABC130s
on the hybrid with a selectable 80 or \unit[160]{MHz} data clock to
drive the serial loops used for data readout. The HCC130 can collect data
from the hybrid through any of its four serial receivers attached
to either end of the two hybrid readout loops. Corresponding XOFF signals
allow for flow-control to the ABC130s. This readout technique
provides contingency for single and multiple chip failures in either
of the two ABC130 serial data loops.
Once on the HCC130, a priority encoder ensures an even flow of data from
the two ends of each loop and that R3 data are sent to the DAQ system
with the highest priority. A data concentrator merges data from the
two loops. A detailed HCC130 functional block description is shown in
figure~\ref{fig:HCC130BlockDiag}.
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{figure/HCCBlockD.png}
\caption{HCC130 block diagram}
\label{fig:HCC130BlockDiag}
\end{figure}
\subsubsection{AMACv1a}
\label{sec:AMAC}
The Autonomous Monitoring And Control ASIC (AMAC) was submitted for
fabrication in August 2016. It was the first successful prototype of
the radiation tolerant, ten bit precision analogue monitor ASIC and was
constructed using two identical seven channel Autonomous Monitor
blocks originally housed in the HCC130. The AMACv1a pad frame is
shown in figure~\ref{fig:AMACv1aDie}: it has 62 bond pads and a die size
of \unit[$2.7\times2.8$]{mm$^2$}. An internal ring oscillator
provides a near-\unit[40]{MHz} clock to control the autonomous monitor
functions, and the I2C protocol is used for control and readout. The AMACv1a
monitors 14 independent module level parameters: voltages,
temperatures and sensor leakage current. A clock driven state machine
controls a switched capacitor stepped integration ramp to create a
common reference for the Wilkinson style ADC. Each integration step
increases the reference by \unit[1]{mV} and increments a ten-bit counter,
which is reset to 0 at the start of each 1023-step ramp cycle. Each of the 14 monitored parameters
is translated into a voltage between 0 and \unit[1]{V} and compared
with the ramp that covers the same range. When the reference ramp
exceeds the value of the measured parameter for two consecutive ramp
steps the counter value is recorded in a register and compared with
pre-programmed upper and lower limits. Out-of-limit values are
flagged and - if enabled - can switch the state of four logic outputs
that may be wired to LV or HV supply controls. Measured values are
updated and stored locally once per millisecond. They may be read out
remotely through the I2C interface.
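The conversion logic can be illustrated with a short simulation (a Python sketch; the latch condition of two consecutive ramp steps follows the description above):
\begin{verbatim}
# Sketch of the Wilkinson-style conversion: a 1 mV/step reference ramp
# and a ten-bit counter; the count is latched once the ramp has exceeded
# the measured voltage for two consecutive steps.
def wilkinson(v_meas_volts, steps=1023, step_mv=1.0):
    above = 0
    for count in range(steps):
        ramp_mv = count * step_mv
        above = above + 1 if ramp_mv > v_meas_volts * 1000.0 else 0
        if above == 2:
            return count
    return steps

print(wilkinson(0.250))   # ~252 counts for a 250 mV input
\end{verbatim}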
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{figure/AMACv1aBondPadDie.pdf}
\caption{AMAC v1a Pad Ring and floorplan}
\label{fig:AMACv1aDie}
\end{figure}
\subsection{Hybrids}
\label{comp:hyb}
Readout chips for ABC130 barrel modules are mounted on flexible, radiation-hard and low-mass polyimide circuits (called hybrids), which were developed to carry ten
ABC130 readout chips and one HCC130 readout chip each (see section~\ref{sec:comp}).
Hybrids provide the following electrical functionalities:
\begin{itemize}
\item Multi-drop external clock and control are connected to the hybrid
\item On-hybrid internal clock and control are distributed to the ABC130 chips
\item All ASICs on the hybrid are connected to a common ground and power domain
\item Hybrid front-end data is returned to the high-level readout via the end-of-substructure card (see figure~\ref{fig:intro_stave})
\end{itemize}
The hybrid readout topology groups the ten ABC130 readout chips
into two daisy-chains of five chips each. Each chain can be read
out in either direction by the HCC.
ABC130 barrel modules are assembled by gluing hybrids with readout
chips directly onto the surface of silicon strip sensors (see
section~\ref{subsec:MA}). The circuit layout has been optimised to
minimise electrical interference into the sensitive analogue front-end electronics or sensor strips. This has been achieved by the use of a single power
and return domain with partitioning of analogue and digital circuitry to
mitigate common-impedance coupling of the analogue and digital
signalling. Furthermore, fast digital signals within close
proximity of the analogue front-end are routed as differential
striplines to take advantage of the shielding effect of the return
planes.
In addition to ABC130 and HCC130 readout chips, hybrids are equipped
with two NTC thermistors, of which one is used to monitor the hybrid
temperature during operation. The second one is part of a
temperature interlock system required during the burn-in of
hybrids as part of their quality control.
In order to ensure a maximum yield, hybrids were designed to utilise
standard manufacturing processes with long-term
reliability. Tracks and gap sizes are about
$\unit[100]{\upmu\text{m}}$, vias (plated laser drilled holes) have a
hole diameter of about $\unit[150]{\upmu\text{m}}$ and lands of
$\unit[350]{\upmu\text{m}}$, which ensures uniform plating and
therefore reliable contacts through vias. Barrel hybrids comprise three copper layers ($\unit[18-35]{\upmu\text{m}}$
thickness) between polyimide dielectrics ($\unit[50]{\upmu\text{m}}$
layer thickness), resulting in a total hybrid thickness of
approximately $\unit[300]{\upmu\text{m}}$.
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{figure/P1040392.JPG}
\caption{ABC130 barrel hybrid panel with four X-type hybrids, four
Y-type hybrids and two test coupons.}
\label{fig:panel}
\end{figure}
Hybrids are produced on panels (glass-reinforced epoxy laminate
sheets) which hold four X- and four Y-type hybrids (see
figure~\ref{fig:panel}) as well as two test coupons per panel, which
are used for hybrid manufacturing quality control:
\begin{itemize}
\item testing of via reliability by via chain resistance measurements
\item monitoring trace etching quality by testing DC resistance of test traces
\item testing quality of surfaces for wire bonding by performing wire bonding pull tests
\end{itemize}
Hybrid panels are equipped with vacuum holes and landing pads for
assembling hybrids and readout chips (see
section~\ref{subsec:assem_hyb}) as well as traces and connectors for
the electrical testing of fully assembled hybrids (see
section~\ref{subsec:test_hyb}).
\subsection{Powerboard}
\label{comp:pb}
The powerboard fulfils three purposes on the ITk Strip Module:
\begin{enumerate}
\item DCDC regulation of the \unit[11]{V} input to \unit[1.5]{V} to supply the ASICs on the hybrid
\item High-voltage switching of up to \unit[-500]{V} to reverse bias the sensor
\item Control of low-voltage and high-voltage supply, as well as monitoring of voltages, currents, and temperatures
\end{enumerate}
The DCDC regulation is achieved by the FEAST chip~\cite{feast}, a
radiation hard custom ASIC developed by the CERN electronics group for
various experiments and their detectors. The FEAST employs a
buck-converter style switching regulator, which requires an external
inductance. For the powerboard, this inductance is an air-core solenoid
coil with a nominal inductance of \unit[545]{nH} and DC resistance of
\unit[35]{m$\Omega$}. It is required to be an air-core coil as the
detector will be placed in a \unit[2]{T} solenoid field, which precludes the application of ferrite cores.
Due to the shape and characteristics of an air-core solenoid, during operation the coil emits RF noise, which could be picked up by the silicon strips underneath and around the powerboard. Therefore, the whole DCDC circuit is enclosed by
a shield formed by a specific copper layer in the PCB underneath and a
\unit[100]{$\upmu$m} thick aluminium shield-box soldered on top with continuous seams.
Switching control of the high-voltage is gained via a GaNFET
transistor switch, by routing the high-voltage supplied to a module
onto the powerboard through the switch. This routing also allows a
low-pass RC-filter to be placed at the output of the high-voltage line,
which is connected to the silicon sensor. To switch the GaNFET, a
voltage of more than \unit[2]{V} between gate and source is needed, as the
source of the transistor is at high voltage once the circuit is closed. An AC signal at a frequency of \unit[100]{kHz} and amplitude of
\unit[3.3]{V} is AC-coupled into the high-voltage domain. It is then amplified
and rectified via a quadruple charge pump circuit, which generates the
necessary gate voltage with respect to the current source potential.\\
Both the low-voltage and high-voltage domain are controlled by the
AMAC chip, which has been designed
specifically for the usage on the powerboard. It can generate an
enable signal for the FEAST chip to turn on the power to the
\unit[1.5]{V} domain and also generates the AC signal to switch the
high-voltage on or off. Furthermore, the AMAC chip features
multi-channel ADCs to measure multiple operation critical values:
\begin{itemize}
\item input and output voltage and current
\item FEAST internal temperature (via an internal PTAT circuit)
\item powerboard temperature (via NTC thermistors)
\item hybrid temperature (via NTC thermistors, for later powerboard versions)
\item and silicon sensor leakage current
\end{itemize}
It also contains logic to set an upper and lower boundary on
the monitored values and if these limits are violated it can interlock
the low-voltage or high-voltage of the module.
A picture of powerboard v2 can be seen in
figure~\ref{fig:powerboard_v2}, which shows the main components of the
powerboard. On this specific version of the powerboard the AMAC is
powered via two commercial linear regulators, in the next version of
the powerboard these will be replaced by a rad-hard linear regulator,
the LinPOL12V, which will also be used in the final production version
of the powerboard.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{figure/powerboard_v2_loaded.jpg}
\caption{Photograph of a fully assembled powerboard v2. The main components of the design are indicated, as well as the interfaces.}
\label{fig:powerboard_v2}
\end{figure}
\section{Module construction}
Each ABC130 barrel module consists of one sensor (see
section~\ref{subsec:comp_sensors}), one or two hybrids (see
section~\ref{comp:hyb}) with ten ABC130 chips and one HCC130 chip each
(see section~\ref{subsec:comp_chips}) and one powerboard (see
section~\ref{comp:pb}). Modules are assembled in a defined series of
steps optimised for early defect detection to avoid the use of
low-grade electronics on good quality sensors:
\begin{enumerate}
\item gluing readout chips to hybrids and powerboards (see
sections~\ref{subsec:assem_hyb} and~\ref{subsec:assem_pb})
\item electrical connection of readout chips to hybrids and powerboards using an automated wire bonding process
\item tests of electrical functionalities of hybrids and powerboards (see sections~\ref{subsec:test_hyb} and~\ref{subsec:test_pb})
\item gluing tested hybrid(s) to sensor (see section~\ref{subsec:MA})
\item electrical connection of hybrid(s) and sensor using an automated wire bonding process
\item electrical tests of module (see section~\ref{subsec:test_mod})
\item gluing tested powerboard to sensor (see section~\ref{subsec:MA})
\item electrical tests of module (see section~\ref{subsec:test_mod})
\end{enumerate}
Components are mechanically and thermally connected using adhesives,
which achieves the low material budget required in the \mbox{ATLAS}
tracker~\cite{TDRs}. Each hybrid and module is assembled in a manual
process using custom designed precision tooling (see section~\ref{subsec:assem_hyb}) including a stencil to ensure a reliable glue coverage and thickness between components. After each gluing step, the glue thickness between components is checked by performing metrology measurements. Since the stenciling process ensured a consistent glue volume, glue
layer thicknesses outside the specified range were
found to lead to lower quality wire bonds:
\begin{itemize}
\item thick glue layers, corresponding to low glue coverage under components, resulted in
insufficiently supported bond pads and therefore weak wire bonds
\item thin
glue layers led to glue covering wire bond pads and prevented
electrical connections between bond pads and attached wire bonds and
thereby caused electrical failures
\end{itemize}
Additionally, glue layers with insufficient height between hybrids or powerboards and sensors led to glue spreading towards the sensor guard ring area, which was found to result in early sensor breakdowns~\cite{Cole}.
\subsection{Hybrid assembly}
\label{subsec:assem_hyb}
For the construction of hybrids, ten ABC130 readout chips and one HCC130
readout chip are glued onto an X- or Y-type flex in a series of manual
steps that use precision tooling for positioning.
Prior to assembly, the involved components were tested for electrical functionality:
\begin{itemize}
\item circuits on the flex were tested by the manufacturer
\item ABC130 ASICs were probed on a full wafer of chips (see section~\ref{sec:ABC})
\item most HCC130 ASICs were probed, but after a high success rate
during initial tests (\unit[97]{\%}) and technical difficulties
with the test setup, tests of individual ASICs were eventually
stopped
\end{itemize}
All components were handled in a cleanroom environment using vacuum
tools to avoid contaminations prior to wirebonding (see below).
In order to be populated with ASICs, a panel with hybrid flexes was
positioned on a vacuum chuck with vacuum holes under each hybrid
flex. Vacuum was applied in order to flatten hybrid flexes and provide
controlled glue heights between ASICs and hybrids. Positions of ASICs
on hybrids were controlled using matching positions of precision holes
and locating pins in the assembly tooling (see
figure~\ref{fig:hyb_panel}).
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{figure/Hybrid_gluing.JPG}
\caption{Two hybrid flexes on a panel with pads for ABC130 chips
(yellow) and one HCC130 (orange). Precision cut holes (blue) are used
to position tools for population with ASICs. Landing pads (cyan) are
used as height reference during ASIC population.}
\label{fig:hyb_panel}
\end{figure}
ASICs were first positioned in a dedicated chip tray with cutouts for each
ABC130 ASIC (see figure~\ref{fig:chiptray}), which aligned each ASIC
with respect to locating holes matching those in hybrid panels. ASICs
were picked up using a vacuum pick-up tool (see
figure~\ref{fig:pickuptool}) with individual vacuum pedestals for each
ABC130 chip and locating pins to align the tool in the chip tray.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{figure/tools/chiptray.JPG}
\caption{Chip tray to align ABC130 chips in positions matching ASIC
positions on hybrids. Alignment holes matching the alignment holes on
panels are used to position a vacuum pick-up tool over the chips.}
\label{fig:chiptray}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{figure/tools/pickuptool.JPG}
\caption{Vacuum pick-up tool to pick up ABC130 chips to be mounted on
a hybrid: pedestals matching ABC130 chip positions hold ASICs in
place and are positioned on a hybrid using alignment pins matching
alignment holes in corresponding tools. Adjustable landing feet are
used to set the correct glue height between ASICs and
hybrid. Landing feet are electrically insulated from the tool so
that a contact between landing feet and pads can be checked by
testing the corresponding resistance.}
\label{fig:pickuptool}
\end{figure}
Barrel pick-up tools were designed to be height-adjustable: each landing foot consisted of a precision metal sphere glued into a fine-thread screw, which could be adjusted to increase or reduce the height of the pick-up tool over a hybrid or sensor. Contact measurements of adjusted pick-up tools showed that landing feet achieved height settings accurate to within \unit[10-20]{$\upmu$m} with respect to ASIC pickup areas, in addition to a required ASIC pickup area flatness of $\unit[\pm5]{\upmu\text{m}}$.
In order to implement a continuous assembly and testing process for hybrids, a UV cure epoxy (Loctite 3525) was chosen after a study of several alternatives \cite{glue-paper}. It replaced the previously used silver epoxy (Tra-Duct 2902). The UV cure epoxy was dispensed on each landing pad on the hybrid using an automated glue dispenser: a combined volume of
\unit[2.0]{mg} was dispensed in a five-dot pattern matching position
indicators on each landing pad (see figure~\ref{fig:hyb_panel}) with
\unit[0.4]{mg} per glue dot. After dispensing the glue, the pick-up
tool holding ASICs was placed on top of the hybrid.
The intended glue height between flexes and ASICs was achieved by constructing dedicated
landing feet on the pick-up tool to a height that, when placed on the landing pads of the hybrid panel, would ensure the required gap between ASICs and hybrid surface. The target glue thickness for this gluing step was \unit[80 $\pm$ 40]{$\upmu$m} at the beginning of the project, but was later increased to \unit[120 $\pm$ 40]{$\upmu$m}, which was found to produce more reliable results.
A brass weight was placed on top of
the pick-up tool to ensure a good contact of the landing feet on the
panel. Afterwards, UV LEDs (Edison Opto Federal 3535 UV Series~\cite{UVLEDs}) were placed next to the hybrid to shine
UV light into the gap between ASICs and hybrid for \unit[10]{min},
which was found to be sufficient to fully cure the glue underneath each
ASIC.
For historic reasons (the HCC chip equivalent from the ABCN-25 chip set was not mounted on hybrids~\cite{stave}, but on module test frames), ASIC pick-up tools for the ABC130 chip set did not include a vacuum pick-up area for the HCC130 chip. Therefore, HCC130 ASICs
were glued onto hybrid flexes without dedicated tooling and placed by
hand. For this gluing step, a silver-filled epoxy glue (Tra-Duct 2902)
was used, as its higher viscosity facilitated the manual assembly
process.
After each hybrid assembly, the height of each ASIC was measured to
track the achieved glue height and reliability of the gluing
process. Figure~\ref{fig:hyb_thick} shows height measurement results
for a range of hybrids: all hybrids were found to have glue thicknesses well within the specified range.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{figure/Thicknesses.pdf}
\caption{Thicknesses of glue layers between ASICs and hybrid flexes
measured on 16 hybrids. Over the course of the project, the target
glue thickness was increased from {$\unit[80\pm40]{\upmu\text{m}}$}
(indicated by horizontal stripes) to
{$\unit[120\pm40]{\upmu\text{m}}$} (indicated by vertical stripes).}
\label{fig:hyb_thick}
\end{figure}
After populating a hybrid with ASICs, each ASIC is electrically
connected to the hybrid using wire bonding: a \unit[25]{$\upmu$m}
aluminium wire (with \unit[1]{\%} silicon content) is fed through a
bond wedge, pressed down on the ASIC pad with a force $\mathcal{O}(\unit[10]{\text{cN}})$ while simultaneously applying ultrasonic vibrations and thereby welding the wire to the metal pad underneath.
Using the same process,
the bond wire is afterwards attached to an electroless nickel immersion gold (ENIG) plated pad on the
hybrid side and cut off. Figure~\ref{fig:bond_ABC} shows the wire
bonding scheme for wire bonds between ASIC and hybrid (back-end
bonds).
\begin{figure}
\centering
\includegraphics[width=0.6\linewidth]{figure/ABC_bonding.png}
\caption{Wire bonding scheme for electrical connections between ABC130
ASICs and hybrid (back-end bonds). Fiducials (F- and +-shaped)
around the ASIC corners are used for the automated alignment of a
pre-programmed wire bonding routine to an ASIC on a hybrid. After
attaching hybrids to a sensor, ASICs are connected to sensor strips
using four rows of bond pads (left ASIC side).}
\label{fig:bond_ABC}
\end{figure}
Wire bonds are placed using a program that contains position
information, loop shapes and bonding parameters for all wire bonds on
the hybrid. Prior to using the program on a hybrid, it is aligned to
the positions on hybrid and chips using fiducials (see
figure~\ref{fig:bond_ABC}).
In order to read out ten identical ABC130 chips per HCC130 and two HCC130
chips on a short strip module, each ASIC is assigned an individual ID
number using bond pads on dedicated address
fields. Figure~\ref{fig:HCC_ID} shows the wire bonding scheme for an
HCC130 ASIC with its address field.
\begin{figure}
\centering
\includegraphics[width=0.6\linewidth]{figure/HCC_ID.png}
\caption{Wire bonding scheme for an HCC130 chip on an X-type hybrid
(mirrored for Y-type hybrids). The address field supports five wire
bonds in order to assign ASIC ID numbers from 0 to 31.}
\label{fig:HCC_ID}
\end{figure}
In order to assign an ID number to a chip, bond pads on the address
field are connected to a hybrid ground pad using wire bonds. Address
fields consist of several numbered fields (e.g. ID0 to ID4 on an HCC),
where each field corresponds to a binary digit (i.e. $2^0$ to $2^4$ on an
HCC). If a bond wire is attached to an address field bond pad, the
corresponding binary digit is set to 0. The five ID pads on an HCC130
therefore allow ID numbers from 0 to 31 to be assigned. The ten ABC130 chips
on the same hybrid are assigned IDs 16 to 25 sequentially, while HCC130 ID
numbers can be assigned freely, as long as the HCC130 IDs within a group of
13 or 14 modules read out together (i.e. up to 28 HCC130 chips) are unique.
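A Python sketch of the resulting encoding (illustrative; \texttt{chip\_id} is not part of any production tooling):
\begin{verbatim}
# Each ID field defaults to 1; a wire bond to ground sets its binary
# digit to 0.  The ID is the weighted sum of the unbonded pads.
def chip_id(bonded):            # bonded[i] is True if pad IDi is wired
    return sum((0 if b else 1) << i for i, b in enumerate(bonded))

print(chip_id([False] * 5))                          # 31 (no bonds)
print(chip_id([True, False, False, False, True]))    # 14 = 2+4+8
\end{verbatim}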
The wire bonding process is performed with the hybrid being located in
the panel where it was manufactured. During wire bonding, the hybrids
are held in place and flattened by applying vacuum to the hybrid
backplane.
\subsection{Electrical tests of hybrids and modules}
\label{subsec:test_elec}
\subsubsection{Physical DAQ system}
ABC130 objects are controlled and read out using ITSDAQ. The ITSDAQ system comprises a PC running a software
component and FPGA-based hardware to handle the digital logic
interfaces and time-critical functions. Commercial hardware and
standards-based protocols are used wherever possible; this reduces
cost, and also complexity in some cases. The connection from the PC to
the FPGA is via standard Ethernet. Custom interface boards are
manufactured to match the various ASIC and module connector
configurations, and these are plugged into the FPGA-board via
(standards-based) connectors provided; most commonly an FPGA
Mezzanine Card (FMC) connector and Digilent's VHDCI-based standard, VMOD.
For the FPGA hardware, commercial educator-focused ``development''
boards have been selected as these are relatively low cost, widely
available and have a product lifetime of many years. The Digilent
Atlys~\cite{DigilentAtlys} and Digilent Nexys Video~\cite{DigilentNV}
have been used widely.
The custom electronics needed to connect the development boards to
ABC130-based objects take the form of ``Interface-Boards''. By using a
range of Interface-Boards, a common FPGA-board is adapted to the wide
range of objects under test.
Figure~\ref{fig:itsdaq_overview_structure_incl_hw} shows a block
diagram of the DAQ functional components, along with an example of a
real setup (without the PC).
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{figure/itsdaq_overview_structure_incl_hw.pdf}
\caption{The main components of the ITSDAQ system - as a block diagram
and a photo of an example setup showing a Nexys Video with FMC
Interface Board connected to a Single-chip Board.}
\label{fig:itsdaq_overview_structure_incl_hw}
\end{figure}
\subsubsection{PC/firmware interface}
\label{subsub:phy_daq_sys}
Communication with ITSDAQ hardware is via blocks of data transferred
in network packets. Initially raw Ethernet was used, but this has been
superseded by the UDP/IP protocol, as it allows easier management of
the interface.
It has been found to be sufficiently reliable on point-to-point links.
The same packet format is used in both directions.
ITSDAQ network packets contain one or more opcode-blocks (``opcodes'')
formatted using a custom protocol. Table~\ref{tab:itsdaq_net_pkt_fmt}
shows the opcode wrapper protocol that forms the payload portion of
UDP packets. In the raw-Ethernet case, the first field (magic number)
is used as the standard Ethernet Type value (effectively defining a
new type of packet) and the rest is the payload. A sequence number is
provided to track packet loss. The Opcode sub-system is detailed in section~\ref{subsub:opcodes}.
\begin{table}[htbp]
\begin{tabular}{l|l|l}
\textbf{Field} & \textbf{Size} & \textbf{Description} \\
\hline
Magic number & \unit[16]{bit} & 0x876n. In raw-Ethernet mode \\
& & ~this becomes the Type field \\
Sequence number & \unit[16]{bit} & Defined at source. Software can use anything,\\
& & ~firmware uses a counter \\
Packet length & \unit[16]{bit} & Length in bytes of entire packet, \\
& & ~including trailer (CRC) \\
Opcode count & \unit[16]{bit} & Number of opcodes in the packet \\
Opcode 0 & & \\
... & & \\
Opcode n & & \\
... & & \\
Trailer & \unit[16]{bit} & CRC \\
\end{tabular}
\caption{ITSDAQ network packet format}
\label{tab:itsdaq_net_pkt_fmt}
\end{table}
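As an illustration, the wrapper fields of table~\ref{tab:itsdaq_net_pkt_fmt} can be packed with Python's \texttt{struct} module. This is a sketch only: the big-endian byte order and the low nibble of the magic number (shown as \texttt{n} in the table) are assumptions, and the CRC trailer is omitted.
\begin{verbatim}
import struct

# Sketch: pack the four 16-bit wrapper fields (byte order assumed
# big-endian; the low nibble of the magic number is illustrative,
# and the CRC trailer is not computed here).
def packet_header(seq, length_bytes, n_opcodes, magic=0x8760):
    return struct.pack(">HHHH", magic, seq, length_bytes, n_opcodes)

print(packet_header(seq=1, length_bytes=24, n_opcodes=2).hex())
# -> '8760000100180002'
\end{verbatim}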
Debugging communication (and operation) can be aided using Wireshark
software, especially if using the ITSDAQ protocol
dissector provided with ITSDAQ software.
\subsubsection{Firmware structure}
\label{subsec:firmware_struct}
The firmware needs to be compatible with multiple FPGAs, networking
chip sets, board-clock frequencies and devices under test. The firmware
is structured to cope with variations in board layout, FPGA family and
the devices under test. To aid this, the firmware is split into 3
distinct parts (see also figure
\ref{fig:itsdaq_firmware_overview_struct}):
\begin{itemize}
\item \textbf{Network + Clocks:} FPGA board specific; Interfaces to whatever
network chip set is supplied on the board, and generates \unit[40]{MHz} and related clocks from the local oscillator.
\item \textbf{Main/Core:} Generic; FPGA and device agnostic, the same code is used for
all builds (but does have compile time options).
\item \textbf{DIO:} specific to FPGA, Interface Board and device under test (DUT); Handles physical connections (pinout of
interface board, physical IO types – LVDS, CMOS, pullups etc. and FPGA
primitives – ISERDES, IODELAY2 etc.)
\end{itemize}
The core firmware provides many functions, including: control signal
encode, front-end data-format decoder, histogrammer, sequencer,
trigger generation (oscillator, random, structured bursts), I2C and
other slow interface controllers.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{figure/itsdaq_firmware_overview_struct.pdf}
\caption{Overview of the ITSDAQ firmware structure showing generic and
device specific parts, and clock domains.}
\label{fig:itsdaq_firmware_overview_struct}
\end{figure}
Firmware configuration and control signals are distributed as needed
around the FPGA using the ``Opcode Bus'' (see figure~\ref{fig:itsdaq_firmware_overview_struct}). Some of these are
responsible for sending serial streams to the DUT. Data returned from
the DUT is often multiplexed as a pair of streams sharing onto a
single line, implying 2 logical streams per physical link. Each stream
is allocated a dedicated ``Readout Unit'', which has a packet detector,
decoder, histogrammer and FIFO. A large multiplexor funnels all the
data received into a single connection to the network interface block.
Note that \unit[640]{Mbps} deserialisers are used in the firmware regardless
of data rate. This allows both simpler coarse delay setting and
clock-rate-independent decoding of multiple software-selectable
rates.
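This oversampling scheme can be sketched as follows (a Python illustration; in practice the phase offset would be chosen during the coarse delay setting):
\begin{verbatim}
# Sketch of rate-independent decoding with a fixed 640 Mbps deserialiser:
# a stream at r Mbps appears as runs of 640/r identical samples, so the
# decoder keeps every (640/r)-th sample after a coarse phase offset.
def downsample(samples, rate_mbps, phase=0):
    step = 640 // rate_mbps
    return samples[phase::step]

oversampled = [1]*4 + [0]*4 + [1]*4   # a 160 Mbps pattern seen at 640 Mbps
print(downsample(oversampled, 160))   # [1, 0, 1]
\end{verbatim}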
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{figure/itsdaq_firmware_control_readout_struct.pdf}
\caption{Control and readout firmware structure showing multiple
readout units.}
\label{fig:itsdaq_firmware_control_readout_struct}
\end{figure}
\subsubsection{Opcode sub-system}
\label{subsub:opcodes}
Opcodes are the means of communicating directly with the various
functional blocks inside the firmware (and the firmware with the
software). They allow transfers of blocks of data to firmware
``handlers''. All handlers are connected to a common ``opcode bus''
and pull blocks of data addressed to them. All opcode handlers will
send a response when addressed - any data that may have been
requested, or just an acknowledge to signal operation complete.
Examples of opcode handlers are: register block (allows writing to a
traditional register space), status block (returns a block of status
words), two-wire (interface to various slow control protocols) and
raw-Signal (send payload contents as a serial stream).
An ``opcode'' consists of an opcode-ID, a sequence-number and
a payload-size field, along with an (optional) data payload - see
table~\ref{tab:itsdaq_opcode_fmt}.
\begin{table}[htbp]
\begin{tabular}{l|l|l}
\textbf{Field} & \textbf{Size} & \textbf{Description} \\
\hline
OpcodeID & \unit[16]{bit} & Specifies which opcode-handler is \\
& & ~being addressed \\
OC-Sequence No. & \unit[16]{bit} & Generated at source \\
& & ~Replies have same as initiating opcode \\
Payload Size & \unit[16]{bit} & Payload-data length in bytes (0 is valid) \\
Payload Data & 0-725 $\times$ \unit[16]{bit} & Composition format unique to each \\
& & ~opcode-type (OpcodeID) \\
\end{tabular}
\caption{ITSDAQ opcode format}
\label{tab:itsdaq_opcode_fmt}
\end{table}
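An individual opcode block can be packed analogously (same sketch assumptions as for the packet wrapper above):
\begin{verbatim}
import struct

# Sketch: opcode-ID, sequence number and payload size (in bytes, 0 is
# valid) followed by the optional payload.
def opcode(opcode_id, seq, payload=b""):
    return struct.pack(">HHH", opcode_id, seq, len(payload)) + payload

print(opcode(0x0001, seq=7).hex())                       # empty payload
print(opcode(0x0002, seq=8, payload=b"\x12\x34").hex())  # 2-byte payload
\end{verbatim}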
\subsubsection{Software}
\label{sec:software}
The software part of ITSDAQ is primarily developed to run on PCs
running Linux (for example CentOS 7). A Microsoft Windows version was
also maintained for most of the period in question. It relies on
ROOT~\cite{ROOT} for histogramming and fitting for
analysis and for the graphical user interface.
The software is used to collect data from the ASICs in various
conditions, which usually involves scanning over particular settings
of the ASIC registers and recording the data response.
A basic test thus involves the following steps (see the sketch after the list):
\begin{enumerate}
\item Load full configuration to ASICs
\item Set parameter under test
\item Send trigger pattern
\item Record data response
\item Repeat from 3 until number of triggers complete
\item Repeat from 2 until all parameter values scanned
\end{enumerate}
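A minimal sketch of this scan loop in Python (the helper names are hypothetical; ITSDAQ itself is a ROOT-based package):
\begin{verbatim}
import random

class FakeASIC:                        # hypothetical stand-in for hardware
    def load_configuration(self):      # step 1
        pass
    def set_parameter(self, value):    # step 2
        self.value = value

def send_trigger_pattern():            # step 3 (placeholder)
    pass

def record_hits():                     # step 4 (placeholder response)
    return random.randint(0, 1)

def run_scan(asics, values, n_triggers):
    hist = {v: 0 for v in values}
    for asic in asics:
        asic.load_configuration()
    for value in values:               # step 6: loop over parameter values
        for asic in asics:
            asic.set_parameter(value)
        for _ in range(n_triggers):    # step 5: loop over triggers
            send_trigger_pattern()
            hist[value] += record_hits()
    return hist

print(run_scan([FakeASIC()], values=range(3), n_triggers=10))
\end{verbatim}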
A wide variety of trigger patterns is available so that (for instance)
charge can be injected into the front-end with particular timing. A
non-exhaustive list includes the addition of different reset commands,
sending multiple triggers (and potentially recording data from only
one), sending register read commands and sending arbitrary patterns (see section~\ref{sec:ABCop}).
Recording the response data normally involves decoding the pattern of
hit strips and building a hit map accumulated for all trigger patterns
with the same parameter setting. Additional modes include recording
the raw bit stream sent by the ASICs, and extracting particular parts
of responses (for instance chip ID, or the address or value in a
register read).
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{figure/ASIC-Sensor-mapping.png}
\caption{Front-end wire bonding scheme mapping sensor strip numbers and front-end channel numbers (not to scale). The 256 ABC130 readout channels are split into two streams of 128 channels each, with each stream corresponding to strips located on one sensor segment.}
\label{fig:bond_map_frontend}
\end{figure}
Though the ABC130 ASICs have 256 channels, half of these are bonded to
strips running under the ASIC and half are bonded to strips running away from
the ASIC (see figure~\ref{fig:bond_map_frontend}). Thus a pair of histograms is produced for occupancy plots for each of these sets of channels. Where plots show 128 channels per ASIC, an arbitrary choice between these sets has been made.
\subsubsection{Characterisation Tests}
\label{chartests}
In order to characterise both hybrids and modules electrically, a
sequence of tests is performed. This starts with digital tests where
the response data is expected to be either all or nothing.
Following this are a series of analogue tests with a variable
response. As the hit decision is binary, analogue values are extracted
by sending and accumulating data from multiple triggers.
The current set of digital tests is as follows:
\begin{itemize}
\item Capture HCC and ABC IDs\\
This function tests whether communication with all ASICs is possible. If successful, the ID numbers assigned to each ABC130 and HCC130 ASIC are read out.
\item NMask\\
This diagnostic test changes the setting of the mask register and uses the
send mask feature to produce a deterministic pattern on the output.
\end{itemize}
The analogue tests mostly involve using the charge self-injection function of
the ABC130 (see section~\ref{sec:ABC}). This involves sending a particular command to the ASIC, followed
by an L0 trigger. This simulates a strip hit using the discharge of an
internal capacitor via a timed pulse. The timing
of this pulse (the ``strobe'') within a clock period can be adjusted by passing
it through some number of buffers. The pattern of strips into which charge is injected can
be changed arbitrarily, which allows different patterns of
neighbouring strips to be injected independently. In this case, an
additional loop is applied so that charge is injected into all strips when
integrated over the full scan. How many strips are enabled at each step is
configurable, as is the number of triggers in each loop.
The current set of analogue tests is as follows:
\begin{itemize}
\item Strobe Delay \\
Before using the calibration injection for
other tests an appropriate delay value is chosen. The correct
setting varies between ASICs due to process variations and over
time due to sensitivity to conditions such as temperature. During
a Strobe Delay scan, a charge of approximately \unit[4]{fC}
is injected into each readout channel, which
is subsequently read out repeatedly at a readout threshold of
approximately \unit[2]{fC}. The varying parameter is the delay in
the injection strobe (between the clock edge and the pulse
generation), over the full range of potential delays (6 bits
representing approximately \unit[80]{ns}).
The compression mode is set to detect the edge of the pulse, so
this finds a window of strobe delay units in which the injected
charge is registered in a particular clock (see
figure~\ref{fig:StrobeDelay}). The correct setting, for subsequent tests,
is chosen based
on the timing of the edges of this window. The delay is set for
each individual ASIC at \unit[57]{\%} of the distance from the
rising edge. This value was selected based on a more detailed scan
of the pulse shape and the noise and gain at different delay values.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{figure/StrobeDelay.pdf}
\caption{Example of a Strobe Delay scan for a hybrid with ten readout
chips. For a range of delay settings, signals are injected into each
channel 32 times per setting. Increasing delay means moving
from the falling edge (where channels are not aligned) to the rising
edge (where channels are more sharply aligned) of the pulse. Each
channel registers all injected signals over a range of about 26 DAC
units. The delay for each ASIC is set at {\unit[57]{\%}} of the
pulse width from its rising edge.}
\label{fig:StrobeDelay}
\end{figure}
\item Three Point Gain\\
The response of the amplifier for each
readout channel is measured using a sequence of threshold scans,
where a different charge is injected for each. During the threshold
scan, the discriminator threshold is varied. For each injected
charge, the resulting distribution is expected to be a step function, which becomes an ``S-curve'' (a complementary error function) due to smearing from noise effects (see
figure~\ref{fig:TPG1}). The shape and slope of the S-curve can be
used to determine the noise and Vt50 of each input channel (see figure~\ref{fig:TPG2}; a sketch of this extraction is given after this list). Vt50 describes the mean amplifier response
for an injected charge, i.e. the point of the curve where
\unit[50]{\%} of readout triggers lead to a hit being registered. By performing
threshold scans for different input charges (e.g. \unit[0.5]{fC},
\unit[1.0]{fC} and \unit[1.5]{fC}), the relation between input
charge and readout threshold can be mapped and each readout
channel's gain be determined using a linear fit (see
figure~\ref{fig:TPG3}). Additionally, the offset of the gain
function of each channel is determined. Figure~\ref{fig:TPG4} shows
the resulting noise distribution for all ASICs on one hybrid. This
gain can be used to convert the output noise (the measured width of
the S-curve) into the input-referred noise (the derived noise at the input of
the amplifier), which is then reported in electrons (of equivalent
noise charge).
\begin{figure}
\centering
\begin{subfigure}{.48\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/TPG1.pdf}
\caption{S-curve obtained from a threshold scan of a readout channel.}
\label{fig:TPG1}
\end{subfigure}
\begin{subfigure}{.02\textwidth}
\hfill
\end{subfigure}
\begin{subfigure}{.48\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/TPG2.pdf}
\caption{First derivative of an S-curve with Vt50 and noise.}
\label{fig:TPG2}
\end{subfigure}
\begin{subfigure}{.48\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/TPG3.pdf}
\caption{Gain of individual readout channel, from Vt50 measurements.}
\label{fig:TPG3}
\end{subfigure}
\begin{subfigure}{.02\textwidth}
\hfill
\end{subfigure}
\begin{subfigure}{.48\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/TPG4.pdf}
\caption{Noise of channels from all ASICs on one hybrid (about {\unit[400]{ENC}}).}
\label{fig:TPG4}
\end{subfigure}
\caption{Examples from a Three Point Gain measurement of one hybrid:
measurement of the S-curve of an individual channel, mean of
noise and Vt50 for that channel, gain calculation from measurements
at different input charges and noise distribution for one hybrid.}
\label{fig:TPG}
\end{figure}
It should be noted that while threshold tests and their analysis are
performed based on ASIC DAC counts (referring to bit register
settings), the corresponding threshold (measured in mV) does not
increase linearly with DAC counts over the full range of thresholds (see figure~\ref{fig:ABC130_thresh}) and is only converted into mV during
the last step of the analysis.
\item High statistics Three Point Gain\\
While the standard Three
Point Gain performed on hybrids (see section~\ref{subsec:test_hyb})
is sufficient to identify dead channels, the uncertainties of
parameters derived from a fit of the obtained S-curve (see
figure~\ref{fig:TPG1}) depend on its statistics. Increasing the
number of triggers used for the measurement of an S-curve leads to
more reliable channel characteristics (see
figure~\ref{fig:modtest_highstats}).
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figure/ThreePG_stats.pdf}
\caption{Comparison of channel noise measured for one hybrid on a
barrel module from Three Point Gains with high (1000 triggers per
threshold) and low (50 triggers per threshold) statistics. Noise
fluctuations were found to be reduced by increasing the number of applied
triggers, revealing increased noise in certain module regions (caused by insufficient powerboard shielding, see
section~\ref{sec:selectModule:PBEffect}) that is hidden by the overall
noise fluctuations in a measurement with low statistics.}
\label{fig:modtest_highstats}
\end{figure}
Due to the time consumption of full threshold scans with high
statistics, modules were first tested with low statistics threshold scans
to check their overall functionality before performing an
additional high statistics Three Point Gain.
\item Trim Range \\
During the operation of a module, readout
thresholds are not set for individual channels, but for full readout
chips. An operating threshold is chosen to be as low as possible
while keeping the noise occupancy low ($\unit[<1]{\%}$). S-curves
from different channels from the same module show a large spread
over the threshold range (see figure~\ref{fig:modtest_notrim}),
which makes the selection of an operating threshold less efficient,
as a threshold with less than \unit[1]{\%} noise occupancy for all
channels leads to a wide range of distances between operating
threshold and Vt50 point.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figure/NoTrim.pdf}
\caption{S-curves for 100 sequential channels on an ABC130 module
without trimming: the positions of their Vt50 points are distributed
over a range of {\unit[26]{DAC counts}}.}
\label{fig:modtest_notrim}
\end{figure}
In order to ensure a uniform response of all module channels to which
the same readout threshold is applied, S-curve positions can be
shifted in the threshold range. While an efficiency curve cannot be
moved towards lower thresholds, it can be moved towards higher
thresholds by adding an offset, which has to be
determined per channel, to the pedestal. In order to find a threshold to which a
majority of channels can be trimmed, a scan over the TrimDAC values and
the Trim Range is performed. A chosen charge is injected into the front-ends
and the trims adjusted so that the thresholds align for a particular
chip-level threshold. The Trim Range (the scale of the trim changes) is
chosen to allow tuning as fine as possible
while including as many channels as possible. All channels
on a chip are then trimmed by adding their tuned offset values to their
thresholds, leading to a uniform
distribution of Vt50 on all channels of the same chip (see
figure~\ref{fig:modtest_trim}).
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figure/Trim.pdf}
\caption{{\unit[50]{\%}}-efficiency point (Vt50) of channel S-curves
before and after performing a Trim Range: before trimming, Vt50s of
channels show a large spread. After trimming, channels on the same
readout chip show a flat distribution of Vt50 at a higher value,
which is achieved by adding to the pedestal of S-curves with low
Vt50.}
\label{fig:modtest_trim}
\end{figure}
\item Noise Occupancy\\
The noise occupancy test records channel
occupancy over many triggers with no injected charge. This is carried out for a series
of thresholds in order to extract the noise curve of the pedestal.
A variety of options are provided for the timing and number of the
triggers.
\item Response Curve \\
In the response curve test, the
correspondence between input charge and threshold is characterised beyond the linear
regime of the Three Point Gain test (see
figure~\ref{fig:ABC130_RC}), i.e. for higher input charges and corresponding
thresholds,
where the relationship becomes non-linear. This uses ten threshold
scans over a range of input charges up to \unit[6]{fC}.
Although the correspondence between DAC count
and threshold voltage is approximately linear over a large range of
these settings, an option is provided to apply a correction
based on a simulation of the relationship
(see figure~\ref{fig:ABC130_thresh}), for greater accuracy.
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{figure/Thresholds.pdf}
\caption{Calibrated relation between ABC130 threshold setting and
corresponding physical threshold: the corresponding threshold
increase is linear for low thresholds, but becomes non-linear for
high thresholds.}
\label{fig:ABC130_thresh}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{figure/cal13_notrim.pdf}
\caption{Response Curve for an average of all readout channels on an
ABC130 readout chip: the relation between input charge and readout
threshold is linear for low thresholds and becomes non-linear for
high thresholds. The input charge range covered in a Three Point
Gain scan (performed at {\unit[1]{fC}}) is highlighted in orange.}
\label{fig:ABC130_RC}
\end{figure}
Performing a Response Curve allows the ABC130 threshold
setting, physical readout threshold and input charge to be related.
\end{itemize}
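The S-curve and gain extraction referred to in the Three Point Gain item above can be sketched as follows (a simplified illustration with hypothetical numbers, not the actual analysis software; the conversion of \unit[1]{fC} to about 6242 electrons is standard):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

def s_curve(threshold, vt50, sigma):
    """Occupancy versus threshold: a step smeared by Gaussian noise."""
    return 0.5 * erfc((threshold - vt50) / (np.sqrt(2.0) * sigma))

def fit_s_curve(thresholds, occupancies):
    """Fit one channel's threshold scan; returns (Vt50, output noise)."""
    p0 = (np.median(thresholds), 5.0)
    (vt50, sigma), _ = curve_fit(s_curve, thresholds, occupancies, p0=p0)
    return vt50, sigma

# Hypothetical Vt50 values (mV) from scans at three injected charges:
charges = np.array([0.5, 1.0, 1.5])            # fC
vt50s = np.array([55.0, 95.0, 135.0])          # mV
gain, offset = np.polyfit(charges, vt50s, 1)   # gain in mV/fC

# Input-referred noise in electrons of equivalent noise charge:
output_noise_mv = 4.0                          # fitted S-curve width
E_PER_FC = 6242.0                              # electrons per fC
input_noise_enc = output_noise_mv / gain * E_PER_FC
\end{verbatim}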
\subsection{Hybrid tests}
\label{subsec:test_hyb}
Flexes with ABC130 ASICs and an HCC130 chip can be tested for electrical
functionality in order to identify nonfunctional hybrids prior
to the assembly of hybrids, powerboards and sensor into a module. In
order to test assembled hybrids, hybrids are electrically connected to
a hybrid panel, which serves as a test structure using wire bonds to
supply power and read out data.
Each panel provides test positions for eight hybrids, which can be
tested in parallel, provided that the HCC130 on each hybrid has been
assigned an HCC130 ID different from those of the other seven hybrids.
A test sequence for hybrids includes the following steps (described in
detail in section~\ref{subsec:test_elec}):
\begin{enumerate}
\item Capture HCC and ABC IDs
\item Strobe Delay
\item Three Point Gain
\end{enumerate}
A hybrid is only mounted on a sensor if it has passed all stages of
electrical testing.
\subsection{Powerboard assembly}
\label{subsec:assem_pb}
The powerboard v2 is produced in a thin FR4 based stack-up and loaded
with SMDs in a typical reflow process. Due to its shape, the DCDC
inductor is loaded manually, as is the shield box enclosing the DCDC
circuit. Special attention is given to fully sealing the shield box with
a continuous solder seam to avoid leakage of radiated noise. Once
proper SMD loading has been verified in a visual inspection, the bare
die ASICs, the AMAC and HVmux, can be glued to the powerboard v2 and
wire bonded.
For testing, the powerboard is temporarily wire bonded to a test
carrier PCB, which can be seen in
figure~\ref{fig:pbv2_on_test_carrier}. This carrier can then be connected
to test equipment to test the functionality of the powerboard before
loading it onto a module. The test carrier also provides passive
cooling, which is necessary to run the DCDC circuit at high load
current without overheating the FEAST chip.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{figure/pbv2_on_test_carrier.jpg}
\caption{Photograph of a powerboard v2 mounted to the test carrier PCB.}
\label{fig:pbv2_on_test_carrier}
\end{figure}
\subsection{Powerboard tests}
\label{subsec:test_pb}
To test the functionality of the powerboard before loading it onto the
module and to calibrate the AMAC the following tests are performed:
\begin{itemize}
\item AMAC communication test: ensures reliable communication with the AMAC
\item LV turn on/off: ensures functionality of switching \unit[1.5]{V}
of DCDC to hybrid on/off
\item HV turn on/off: ensures functionality of switching HVmux to
supply high voltage to a sensor
\item DCDC efficiency: measures DCDC efficiency for load currents from
0 to \unit[4]{A}
\item $V_\text{in}$ calibration: varies input voltage, which is
measured by the AMAC
\item $I_\text{out}$ calibration: varies output load, which is measured
by the AMAC via an amplifier measuring the voltage drop over the
inductor in the output $\pi$-filter
\item HV sense calibration: varies the current sourced by the HV power supply, which is measured by a current-to-voltage converter in the
AMAC
\item Temperature calibration: varies the output load to change the
temperature of the powerboard, which is measured via the thermistor
inside the shield volume and the temperature sensor within the FEAST chip
\end{itemize}
All of the test results are saved and a calibration is derived for
each monitored value. During the testing of 100 powerboard v2 boards, only
hard failures were observed, typically caused by errors during
SMD reflow or dead ASICs, as chips were not tested before loading.
\subsection{Sensor tests}
\label{subsec:test_se}
Sensors are electrically tested upon reception from the vendor, and additional tests are carried out after shipment to module assembly sites.
Reverse bias leakage current (IV) characteristics are determined by raising the bias voltage in \unit[10]{V} steps, observing a \unit[10]{s} delay before reading the current. An example IV curve is plotted in figure~\ref{fig:MA:IV}: the leakage current is well behaved up to~\unit[-1000]{V}.
Reverse bias capacitance curves are measured by an LCR meter using a frequency between \unit[1 and 5]{kHz} and \unit[100]{mV} amplitude. The full depletion voltage ($V_{D}$) is extracted from the intersection of two straight line fits to the
curve of $1/C^2$ versus voltage: one line is fitted to the linear
slope below $V_{D}$, the other is fitted to
the flat section above $V_{D}$.
The depletion voltage is indicated by the red arrow shown in
figure~\ref{fig:MA:CV}.
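The $V_{D}$ extraction described above can be sketched as follows (assumed procedure with synthetic numbers, not the actual analysis code):
\begin{verbatim}
import numpy as np

def depletion_voltage(v_rise, y_rise, v_flat, y_flat):
    """Intersect two line fits to 1/C^2 vs bias voltage.

    v_rise, y_rise: points on the rising slope below V_D;
    v_flat, y_flat: points on the flat section above V_D.
    """
    a1, b1 = np.polyfit(v_rise, y_rise, 1)
    a2, b2 = np.polyfit(v_flat, y_flat, 1)
    return (b2 - b1) / (a1 - a2)

# Synthetic example where the slope changes at 370 V:
v1 = np.array([50.0, 100.0, 200.0, 300.0])
v2 = np.array([400.0, 500.0, 700.0, 900.0])
vd = depletion_voltage(v1, 0.01 * v1, v2, np.full_like(v2, 3.7))
print(vd)  # ~370
\end{verbatim}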
\begin{figure}
\begin{subfigure}{.47\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/sensors/iv_VPX12518-W669_sensorcurve.pdf}
\caption{Sensor leakage current measurement (IV)}
\label{fig:MA:IV}
\end{subfigure}
\begin{subfigure}{.04\textwidth}
\hfill
\end{subfigure}
\begin{subfigure}{.47\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/sensors/cv_VPX12518-W669_sensorcurve.pdf}
\caption{Sensor bulk capacitance measurement (CV)}
\label{fig:MA:CV}
\end{subfigure}
\caption{IV and CV curves for \mbox{ATLAS12SS} sensor. The sensor depletion voltage is derived from the CV measurement (indicated by red arrow).}
\end{figure}
The depletion voltage was extracted for all sensors and found to be \unit[-370]{V} on average for the investigated sensors. An overview for a subset of 68 \mbox{ATLAS12} sensors is shown in figure~\ref{fig:depletions}.
\begin{figure}
\centering
\includegraphics[width=0.6\linewidth]{figure/A12VfdHisto.pdf}
\caption{Depletion voltages determined from bulk capacitance measurements performed on 68 \mbox{ATLAS12} sensors. An average depletion voltage of {\unit[$368\pm10$]{V}} was determined.}
\label{fig:depletions}
\end{figure}
Full strip test measurements were carried out on the barrel sensors, whereby
\unit[-10]{V} and \unit[-100]{V} are successively applied across the strip metal and
bias rail to check for short circuits and oxide pinholes, respectively. An LCR
meter (\unit[1]{kHz}, \unit[100]{mV}) is used to determine the
R$_{\text{bias}}$ and C$_{\text{coupling}}$ values of the AC circuit formed by
the bias resistor and capacitance between the strip implant and strip metal.
The specification for the sensor bias resistance is \unit[1.5$\pm$0.5]{M$\Omega$} and the strip AC coupling capacitance is required to be \unit[$>$20]{pF/cm}. The measurements discussed above were compared against these values, and the number of channels outside these specifications was found to be on average 5 out of 1280 channels (with the specification requiring a minimum of \unit[98]{\%} good channels).
Figure~\ref{fig:badchannels} shows an overview of the number of bad channels per sensor found on 100 sensors.
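A per-channel specification check of this kind might look as follows (hypothetical helper with illustrative values; the strip length and limits follow the text above):
\begin{verbatim}
def is_bad_channel(r_bias_mohm, c_coupling_pf, strip_length_cm):
    """Flag a strip outside the R_bias or coupling-capacitance spec."""
    r_ok = 1.0 <= r_bias_mohm <= 2.0               # 1.5 +/- 0.5 MOhm
    c_ok = c_coupling_pf / strip_length_cm > 20.0  # > 20 pF/cm
    return not (r_ok and c_ok)

# A nominal 2.5 cm short strip passes:
print(is_bad_channel(1.5, 60.0, 2.5))  # False
\end{verbatim}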
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{figure/BadChannels.pdf}
\caption{Number of bad channels per sensor found during individual strip tests of 100 \mbox{ATLAS12SS} sensors.}
\label{fig:badchannels}
\end{figure}
In addition, the number of pinholes was required to be at most seven per 1280 sensor strips, and pinholes were required not to form a cluster of eight or more on an individual sensor segment.
\subsection{Module assembly \label{subsec:MA}}
Successfully tested hybrids, powerboards and sensors were assembled into modules in a gluing process similar to the assembly of hybrids:
First, sensors are aligned on a precision vacuum jig using three alignment pins (see figure~\ref{fig:modulejig}). After positioning the sensor, vacuum is applied to the sensor backside to hold the sensor in position and keep it flat during the assembly process.
\begin{figure}
\centering
\includegraphics[width=0.6\linewidth]{figure/tools/modulejig.JPG}
\caption{Vacuum jig for module assembly: a sensor is positioned with respect to alignment holes by pushing it against three alignment pins located around the central vacuum area. Hybrids are placed on top of the sensor using the same hybrid pick-up tool also used for hybrid assembly, which is positioned using precision alignment holes in the vacuum jig.}
\label{fig:modulejig}
\end{figure}
Hybrids had to be mounted on the sensor surface before the powerboard, as the shield box on the powerboard prevents the positioning of the hybrid pick-up tool at the correct height. At this stage, hybrids were still located on individual positions on hybrid panels, where they were populated with ASICs and tested. Prior to assembly into modules, the wire bonds connecting hybrids to the power and data lines of a panel needed to be removed.
Hybrids are then lifted from the hybrid panel using a hybrid pick-up tool (see figure~\ref{fig:pickuptool}), which is located using the hybrid panel alignment holes (see figure~\ref{fig:hyb_panel}) and thereby ensures the correct hybrid position with respect to the pick-up tool alignment pins.
A two-component epoxy (Epolite FH-5313) was used to attach hybrids and
powerboards to sensors. Over the course of the barrel module programme, an
extensive study was conducted to investigate potential epoxy glues for module
assembly~\cite{Cole}, which yielded two additional candidates: Eccobond F-112 and Polaris
PL-5313. Polaris PL-5313, the successor of Epolite FH-5313, was
chosen as the baseline for module assembly during production and Eccobond
F-112 as the alternative.
During the ABC130 barrel module programme, eight modules each were assembled with F-112 and PL-5313 adhesives, while the majority of modules were constructed using Epolite FH-5313.
Since no difference was observed for modules assembled
with these three adhesives, they will not be distinguished further in the text.
The mixed adhesive was applied to the hybrid backside using a glue stencil mounted on the hybrid pick-up tool (see figure~\ref{fig:stencil}). In order to ensure a sufficient glue viscosity for the use of a stencil, a waiting time of up to \unit[20]{min} was used between mixing both epoxy components and applying the glue.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{figure/tools/stencil.JPG}
\caption{Glue stencil used to apply glue to the backside of a hybrid before mounting it on a sensor. A continuous glue line under all ABC130 ASICs was used to ensure good mechanical support for wirebonding and a good thermal connection. Alignment holes in the stencil frame allow the stencil to be positioned on the hybrid using the pick-up tool alignment pins.}
\label{fig:stencil}
\end{figure}
After dispensing the adhesive, the stencil was removed and the hybrid positioned on the sensor and held in place using a brass weight. A stencil thickness of \unit[250]{$\upmu$m} was chosen to produce corresponding glue layers. By positioning the hybrid pick-up tool on dedicated landing pads in the module assembly jig, the glue layer between hybrids and sensor was compressed to a thickness that was set using adjustment screws on the pick-up tool. The target glue thickness between hybrid and sensor was chosen to be \unit[$120 \pm 40$]{$\upmu$m}, which corresponded to a \unit[50]{\%} thickness compression (assuming a \unit[100]{\%} stencil fill factor) and therefore a doubling of the glue area for good mechanical support below the hybrid.
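The quoted doubling of the glue area follows from volume conservation, assuming incompressible glue and a \unit[100]{\%} stencil fill factor:
\begin{equation}
t_{\text{stencil}}\, A_{\text{stencil}} = t_{\text{glue}}\, A_{\text{glue}}
\quad\Rightarrow\quad
A_{\text{glue}} = \frac{\unit[250]{\upmu\text{m}}}{\unit[125]{\upmu\text{m}}}\, A_{\text{stencil}} \approx 2\, A_{\text{stencil}}.
\end{equation}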
A minimum curing time of \unit[6]{hours} was used before turning off the vacuum holding hybrid and sensor at a defined distance.
Since glue spreading over the sensor bias ring was found to cause early sensor breakdowns in several cases~\cite{Cole}, the sensor current was monitored after individual gluing steps.
ABC130 barrel modules can be tested (see section~\ref{subsec:test_mod}) without a powerboard attached to the module. An additional module test was therefore performed between hybrid and powerboard attachment, so that the impact of mounting a powerboard on a module could be studied directly (see section~\ref{sec:selectModule:PBEffect}).
In order to test the module performance, each ABC130 readout channel was connected to a silicon sensor strip using aluminium wire wedge bonding. Front-end bonds (i.e. wire bonds connecting the analogue ABC130 readout channels to the sensor) were drawn in four rows arranged in layers (see figure~\ref{fig:sensorwires}). Out of the four rows of staggered sensor bond pads, the lower two rows (64 wires each) were attached to the inner strip segment of an SS module (or the sensor segment located beneath hybrid and powerboard of LS modules), the upper two rows were connected to the outer segment of an SS sensor (or sensor segment without hybrid and powerboard on an LS module), see figures~\ref{fig:module_SS} and~\ref{fig:module_LS}.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{figure/sensorwires.JPG}
\caption{Front-end wire bonds connecting the readout channels of ABC130 readout chips and sensor strips: wire bonds are arranged in four layers, with the lower two rows attached to the sensor segment located below the readout chips and the upper two rows attached to the neighbour sensor segment.}
\label{fig:sensorwires}
\end{figure}
Unlike hybrids, powerboards could not be picked up using vacuum pick-up tools, as the high density of components mounted on the powerboard did not leave enough space for reliable vacuum connections. Powerboards were therefore held along the PCB edges using a width-adjustable tool that could be tightened around the powerboard edges using screws (see figure~\ref{fig:pbtool}).
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{figure/tools/pbtool.JPG}
\caption{3D printed holder to be fastened around the edges of a powerboard for positioning on a module. Four screws along the powerboard were used to close or open the tool and account for variations in the PCB width. In order to avoid squeezing the shield box mounted on the powerboard, the corresponding shape was carved out of the holder (right side).}
\label{fig:pbtool}
\end{figure}
After picking up the powerboard and fixing it in position, glue was applied to the powerboard's backside (see figure~\ref{fig:pbwithglue}). Powerboards were attached to sensors using the same two-component epoxy used between hybrids and sensors: Epolite FH-5313.
Afterwards, the powerboard was placed between two hybrids on an SS module (see figure~\ref{fig:pbmounting}) or next to one hybrid on an LS module, with a gap of about \unit[1]{mm} between hybrid and powerboard edge to facilitate wire bonding.
\begin{figure}
\centering
\begin{subfigure}{.47\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/tools/pbholder.JPG}
\caption{Glue layer on backside of an upside down powerboard mounted in a powerboard holder.}
\label{fig:pbwithglue}
\end{subfigure}
\begin{subfigure}{.04\textwidth}
\hfill
\end{subfigure}
\begin{subfigure}{.47\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/tools/pbonmodule.JPG}
\caption{Powerboard held in powerboard holder positioned between hybrids on a sensor.}
\label{fig:pbmounting}
\end{subfigure}
\caption{Assembly of a powerboard between two hybrids on a sensor.}
\end{figure}
Similar to the tools used to mount hybrids on sensors, powerboard gluing tools can be adjusted to set the target glue height of \unit[$120 \pm 40$]{$\upmu$m} prior to assembly (see figure~\ref{fig:pbjig}).
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{figure/tools/pbjig.JPG}
\caption{Full powerboard gluing tool assembly: while the module with hybrids is held in position using a vacuum jig, a powerboard with glue is positioned above it. The powerboard is held in a clamp, which is aligned with respect to hybrid and sensor edges using micrometer screws on the holding frame. Screws on the frame allow the glue height between powerboard and sensor to be set to the target height.}
\label{fig:pbjig}
\end{figure}
After positioning a powerboard on a module, vacuum pressure was maintained throughout a curing time of at least six hours.
Since powerboard assembly tools were still under development during the ABC130 barrel module programme, some of the modules were assembled before the tools became available. In these instances, modules were assembled by placing powerboards on the module by hand.
During the ABC130 barrel module programme, the method of holding a powerboard along its edges for assembly was found to lead to uneven glue thicknesses in the case of warped powerboards or irregularities along the powerboard edges. Subsequent versions of the powerboard pick-up tool were therefore designed to use vacuum pick-up pins (similar to hybrid pick-up tools). Additionally, the powerboard design for the next generation of readout ASICs was modified to increase the size of areas suitable for vacuum pick-up.
\subsubsection{Sensor metrology after module assembly}
For the attachment of modules onto support structures, modules are required to have a bow of no more than $\unit[^{+150}_{-50}]{\upmu\text{m}}$, where positive numbers refer to the module centre being below the edges.
While sensors themselves typically conform with this envelope upon delivery, the assembly process of modules, during which sensors and PCBs are held flat using vacuum tools, can affect the overall sensor bow. In order to monitor the impact of gluing on the sensor shape, dedicated metrology measurements were performed at different stages of module assembly (see figures~\ref{fig:bow1} to~\ref{fig:bow4}): using a white light interferometer system, the absolute sensor height was measured for a fine grid of inspection points (see figure~\ref{fig:bow1}). The sensor shape was mapped based on the measured heights and the sensor bow was calculated based on a fit through the sensor plane.
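The bow calculation from the height grid can be sketched as follows (assumed analysis with synthetic data; a least-squares plane stands in for the fit through the sensor plane):
\begin{verbatim}
import numpy as np

def sensor_bow(x, y, z):
    """Fit a plane z = a*x + b*y + c and return the residual range."""
    A = np.column_stack([x, y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    residuals = z - A @ coeffs
    return residuals.max() - residuals.min()

# Synthetic grid: a tilted sensor with a slight parabolic bow (mm):
gx, gy = np.meshgrid(np.linspace(-48, 48, 9), np.linspace(-48, 48, 9))
x, y = gx.ravel(), gy.ravel()
z = 0.002 * x + 1e-5 * (x**2 + y**2)
print(sensor_bow(x, y, z))
\end{verbatim}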
\begin{figure}
\centering
\begin{subfigure}{.47\textwidth}
\centering
\includegraphics[width=0.9\linewidth]{figure/Bow1.pdf}
\caption{Absolute height measurement grid on sensor}
\label{fig:bow1}
\end{subfigure}
\begin{subfigure}{.04\textwidth}
\hfill
\end{subfigure}
\begin{subfigure}{.47\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/SS13_b.pdf}
\caption{Overall sensor shape after tilt correction}
\label{fig:bow2}
\end{subfigure}
\begin{subfigure}{.47\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/SS13_a1.pdf}
\caption{Sensor shape after attaching first hybrid}
\label{fig:bow3}
\end{subfigure}
\begin{subfigure}{.04\textwidth}
\hfill
\end{subfigure}
\begin{subfigure}{.47\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/SS13_a2.pdf}
\caption{Sensor shape after attaching second hybrid}
\label{fig:bow4}
\end{subfigure}
\caption{Height measurements performed on sensor at different stages of module assembly.}
\end{figure}
After each measurement, the overall sensor shape and maximum height deviations were calculated. Examples for four modules are shown in table~\ref{tab:bows}.
\begin{table}
\centering
\begin{tabular}{c|c|c|c}
& \multicolumn{3}{c}{Maximum difference, $[\upmu\text{m}]$} \\
Module & Sensor & First hybrid & Second hybrid \\
\hline
SS12 & 76 & 83 & 103 \\
SS13 & 103 & 101 & 93 \\
SS14 & 84 & 85 & 95 \\
SS15 & 96 & 119 & 98 \\
\end{tabular}
\caption{Distance between maximum and minimum height deviation from the sensor plane determined in optical measurements for four short strip modules.}
\label{tab:bows}
\end{table}
Measurements confirmed that the attachment of hybrids to a sensor affected the sensor shape and changed the distance between the maximum and minimum deviation from the sensor plane by up to \unit[27]{$\upmu$m}. All modules were found to be within the specifications at all stages of assembly.
\subsection{Module tests}
\label{subsec:test_mod}
For electrical tests, modules are connected to dedicated test frames (see figure~\ref{fig:testframe}), through which power is supplied and data is read out.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{figure/Testframe.JPG}
\caption{ABC130 short strip barrel module on a test frame PCB: wire bonds connect hybrids and powerboards to the test frame for data readout (bottom side) and power supply (top side). The sensor rests on a ledge surrounding a cutout in the test frame, through which high voltage and cooling are applied to the sensor backplane.}
\label{fig:testframe}
\end{figure}
In order to apply high voltage to the sensor backside, test frames have a cutout that allows a direct contact between the sensor backside and a testing jig. During testing, vacuum holes hold the sensor on the high voltage testing jig. Additionally, a cooling loop embedded in the testing jig is used to maintain a constant sensor temperature during testing. Condensation on the cooled module is prevented by flushing the test setup with dry air or nitrogen.
The powering concept of ABC130 modules allows the testing of directly powered hybrids as well as hybrids powered through a powerboard (see figure~\ref{fig:powerbonds}).
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figure/powerbonds1.JPG}
\caption{Power supplying bonds on a short strip module with a powerboard: current is supplied from the test frame to the powerboard through bond wires (right side). Hybrids are powered through wire bonds from the powerboard (left side).}
\label{fig:powerbonds}
\end{figure}
Since the attachment of hybrids and the attachment of a powerboard are separate steps in the construction of modules, each of which can lead to sensor damage (causing e.g. an early sensor breakdown), modules were tested both after hybrid attachment and after powerboard attachment, permitting a comparison of the module performance at both stages (see section~\ref{sec:selectModule:PBEffect}).
For the full electrical test of a module, the tests performed on electrical hybrids (Capture HCC and ABC IDs, Strobe Delay and Three Point Gain - see section~\ref{subsec:test_elec}) are repeated with a fully depleted sensor. Results from the Three Point Gain are used to grade each individual module channel (see table~\ref{tab:3pg_grades}) and determine the number of good channels on a module.
\begin{table}
\centering
\begin{tabular}{c|c|c}
& Gain & Noise \\
\hline
\multirow{2}{*}{High} & \textbf{high gain} & \textbf{high noise} \\
& $\unit[>125]{\%}$ average chip gain & $\unit[>115]{\%}$ average chip noise \\
\hline
\multirow{2}{*}{Low} & \textbf{low gain} & \textbf{dead} \\
& $\unit[<75]{\%}$ average chip gain & \unit[0]{ENC}
\end{tabular}
\caption{Criteria for grading of module channels based on results from Three Point Gain.}
\label{tab:3pg_grades}
\end{table}
In addition to the test sequence above, more tests can be performed for a full module characterisation:
\begin{itemize}
\item High statistics Three Point Gain
\item Trim Range
\item Noise Occupancy
\item Response Curve
\end{itemize}
These tests provide a complete characterisation of a module such that its quality can be
graded.
\section{Selected module test results}
The aim of the ABC130 barrel module programme was to allow tests of the proposed procedures for assembly, readout and cooling concepts of both modules and integrated structures. Due to the extent of the programme, it was possible to gather statistics on assembly yields and test results and to validate component designs for subsequent component generations.
This section summarises some of the module test results obtained in electrical tests of modules as well as individual components.
In addition to the investigations of individual electrical characteristics presented in the following, performance evaluations of full electrical modules were conducted in particle beams at the DESY-II and CERN SPS facilities~\cite{testbeam}.
\subsection{Dependence on number of strobed channels and triggers}
\label{nstrobe_ntrig}
During an internal charge injection test (either three point gain or response curve), two important parameters can be changed in order to optimise the accuracy of the extracted results and the time that the measurement takes. These are the number of triggers (readout requests after charge injections) per threshold value and the number of channels in the ABC130 chip strobed simultaneously.
Figure~\ref{fig:nTrig} shows the noise extracted at \unit[1.5]{fC} from a response curve test as a function of the number of triggers included in the scans. As can be seen, the noise is underestimated when a small number of triggers is used, but plateaus as the number is increased. This is because, with a small number of triggers, the low occupancy tail in the S-curve is underpopulated, which results in S-curve fits returning a narrower noise profile. As the number of triggers is increased, the tail is better populated, resulting in increased, and more correct, noise measurements. This effect has been verified using Monte Carlo simulation (a sketch of such a check is given below).
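A Monte Carlo check of this kind can be sketched as follows (an illustration of the described effect, not the cited study itself; all numbers are synthetic):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

rng = np.random.default_rng(1)
TRUE_VT50, TRUE_SIGMA = 100.0, 5.0
thresholds = np.arange(80.0, 121.0)

def s_curve(t, vt50, sigma):
    return 0.5 * erfc((t - vt50) / (np.sqrt(2.0) * sigma))

def fitted_sigma(n_triggers):
    """Draw binomial occupancies from the true S-curve and refit."""
    p = s_curve(thresholds, TRUE_VT50, TRUE_SIGMA)
    occ = rng.binomial(n_triggers, p) / n_triggers
    (_, sigma), _ = curve_fit(s_curve, thresholds, occ,
                              p0=(TRUE_VT50, TRUE_SIGMA))
    return sigma

# With few triggers the sparse tails tend to bias the fitted noise low:
for n in (50, 1000):
    print(n, np.mean([fitted_sigma(n) for _ in range(100)]))
\end{verbatim}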
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{figure/nTriggers.pdf}
\caption{Calculated input noise as a function of the number of triggers per scan used in a response curve measurement for both bonded and unbonded channels on a short strip module.}
\label{fig:nTrig}
\end{figure}
Figure~\ref{fig:nStrobed} shows the input noise, gain and output noise extracted at \unit[1.5]{fC} from a response curve test as a function of the number of channels strobed (charge injected) at once, compared to those extracted when all 256 channels are strobed simultaneously. The extracted input noise can be seen to reduce as the number of strobed channels decreases. In the response curve analysis, the input noise is calculated as the ratio of the measured output noise (in mV) and the gain (in mV/fC) extracted from the response curve. As such, it is useful to look at the output noise and gain in addition to the input noise. It is seen that the measured gain increases with a reducing number of strobed channels whilst the output noise remains constant.
This measurement shows that the charge injection circuitry within the chip does not manage to inject as much charge as expected when strobing many channels at once. This results in a decreased measured gain. In turn, the decreased gain results in an increased calculated input noise, and thereby explains the decreased noise observed with a reduced number of strobed channels. It can also be seen that increasing the capacitive load on the channels by increasing the strip length increases the size of the observed effect.
\begin{figure}
\centering
\begin{subfigure}{.6\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/nStrobed_noise.pdf}
\caption{Input noise}
\label{fig:nStrobed_innse}
\end{subfigure}
\begin{subfigure}{.6\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/nStrobed_gain.pdf}
\caption{Gain}
\label{fig:nStrobed_gain}
\end{subfigure}
\begin{subfigure}{.6\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/nStrobed_outNoise.pdf}
\caption{Output noise}
\label{fig:nStrobed_outnse}
\end{subfigure}
\caption{Input noise, gain and output noise as a function of the number of strobed (charge injected) channels. Results are shown as a ratio compared to the case when all 256 channels are strobed simultaneously. Results for 2.5 and \unit[5.0]{cm} strips are taken on an ABC130 module whilst the \unit[1.0]{cm} strip result uses a mini-sensor connected to an ABC130 hybrid with a single chip.}
\label{fig:nStrobed}
\end{figure}
As shown above, the extracted noise depends on both the number of triggers per scan and the number of channels strobed simultaneously. During module testing, a choice must therefore be made between speed of test, preferring low numbers of triggers and high numbers of strobed channels, and accuracy of the extracted input noise and gain, preferring high numbers of triggers and low numbers of strobed channels. As a result, the decision was made that for quality control module testing, 192 triggers and 256 strobed channels would be the default, whilst for the detailed module or chip characterisation tests shown below, 1024 triggers and 16 strobed channels are preferred to ensure that measurements are taken in the plateau of both distributions. Unless otherwise stated, all results below were obtained with the high number of triggers and low number of strobed channels required to get an accurate measure of input noise and gain.
\subsection{Module noise and strip capacitance}
The capacitive load at the input of the ABC130's amplifiers is the leading cause of the increased module noise (before irradiation) compared to the hybrid noise value. Therefore, the ABC130 chip set and modules have been fully characterised by measuring noise and gain as a function of load capacitance using the internal charge injection circuitry. This has been done on 32 channel prototype chips~\cite{Kaplon2012FrontEE}, on single chip test boards with either capacitor or mini sensor loads and on short strip and long strip barrel modules. In the case of the prototype, the front end was initially designed for positive signals rather than the negative signals coming from n-on-p sensors, and so results for both positive and negative signals are shown. In addition, results from an irradiated module are included, which will be discussed further in section~\ref{subsec:irradModules}. All non-prototype measurements shown were taken using the standard measurement configuration described in section~\ref{nstrobe_ntrig}.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{figure/FullNoisePlot.pdf}
\caption{Input noise as a function of capacitance for FE prototype measurements with positive polarity (inverted triangles and blue fit) and negative polarity (triangles and light green fit), single chips loaded with capacitors (squares and dark green fit), single chips loaded with mini sensors (open stars), hybrids on short strip modules (open diamonds), hybrids on long strip modules (open crosses) and hybrids on mixed strip length irradiated modules (red open diamonds). The capacitance values for sensors are taken as the sum of the strip-backplane capacitance and interstrip capacitance (not including next-to-nearest neighbours) as taken from probing measurements of the sensors.}
\label{fig:NoisevsCint}
\end{figure}
Figure~\ref{fig:NoisevsCint} shows a summary of the noise measurements made as part of the ASIC characterisation. Capacitance values for tests performed including sensors are taken as the sum of the inter-strip capacitance and strip-backplane capacitance as measured during sensor probing. In addition, an extra \unit[0.5]{pF} is included for those strips running under the hybrid, taken from a calculation of the expected capacitance increase due to the ground plane running above the strips. Good agreement is seen between prototype measurements and capacitive load measurements performed on full chips. An increase in noise is seen between the capacitor measurements and the short strip and long strip modules. A small increase, at the level of a few percent, is expected from factors such as the bias resistance, metal strip resistance and the inclusion of next-to-nearest neighbour inter-strip capacitance. The leakage current contribution is negligible before irradiation, but it increases the noise by \unit[5-10]{\%} at full fluence. Finally, a significant increase in noise is seen during irradiation; this is corroborated by single chip measurements and is a TID dependent effect, which has led to a redesign of the front-end for the next ASIC generation.
Figure~\ref{fig:LS2} shows the noise and gain of an early barrel module. This module was built as a long strip module but using a short strip \mbox{ATLAS12} sensor. Short strips were then ganged together in such a way that all channels running under the hybrid (stream 0) are long strips. In addition, the strips running away from the hybrid (stream 1) and connected to the two chips at either end of the hybrid were ganged as long strips, while the channels running away from the hybrid on the remaining six chips were left as short strips. This mixed strip module allows the simultaneous characterisation of short and long strip channels. Results in this plot were taken at \unit[1.5]{fC} injection charge during a response curve test using 192 triggers and strobing all channels at once.
Firstly, the increased noise associated with channels connected to long strips can immediately be observed by comparing the two streams on the six central chips. In addition, the extra \unit[30-50]{ENC} due to the presence of the hybrid above the strips can be seen by looking at the two chips at either end, which have long strips connected to both streams but show a slightly increased noise on those channels running under the hybrid. Finally, the gain results demonstrate that the measured gain is independent of the connected load.
\begin{figure}
\centering
\begin{subfigure}{.48\textwidth}
\includegraphics[width=\linewidth]{figure/Noise_LS2.pdf}
\caption{Input noise}
\end{subfigure}
\begin{subfigure}{.48\textwidth}
\includegraphics[width=\linewidth]{figure/Gain_LS2.pdf}
\caption{Gain}
\end{subfigure}
\caption{Noise and gain results for a mixed strip module taken from a response curve test at {\unit[1.5]{fC}} injected charge using 192 triggers and strobing all channels simultaneously. Stream 0 channels run under the hybrid and are ganged long strips whilst stream 1 channels run away from the hybrid. Stream 1 channels on the two chips at either end of the hybrid are connected to ganged long strips whilst the six central chips are connected to short strips. Results shown are a chip-by-chip average whilst the error bars show the RMS spread of noise or gain across each chip.}
\label{fig:LS2}
\end{figure}
\subsection{Noise occupancy results}
\begin{figure}
\centering
\begin{subfigure}{.75\textwidth}
\centering
\includegraphics[width=\linewidth,trim={0 0 0 0.75cm},clip]{figure/NO_0.pdf}
\caption{{\unit[2.5]{cm}} strips under hybrid}
\label{fig:NOresult1}
\end{subfigure}
\begin{subfigure}{.75\textwidth}
\centering
\includegraphics[width=\linewidth,trim={0 0 0 0.75cm},clip]{figure/NO_1.pdf}
\caption{{\unit[5.0]{cm}} strips away from hybrid}
\label{fig:NOresult2}
\end{subfigure}
\caption{Noise occupancy as a function of threshold setting and channel number on a mixed strip length module. The module has been trimmed such that the response of each channel is equalised for the case where {\unit[1]{fC}} of charge is injected, corresponding to a threshold DAC value of approximately 40.}
\label{fig:NOresults}
\end{figure}
Noise occupancy scans were run on a mixed strip length module in a test beam. Strips running under the hybrid were short strips whilst strips running away from the hybrid were ganged together as long strips. Noise occupancy scans without external charge injection are shown in figures~\ref{fig:NOresult1} and~\ref{fig:NOresult2}. These results were taken with the module trimmed to \unit[1]{fC} injected charge. Results are shown as a function of the 8-bit register setting of the threshold.
Chip-by-chip averages across 128 channels per chip are shown in figure~\ref{fig:NO_ERFC}. Threshold settings in DAC have been converted to mV using calibrations from simulation, whilst the conversion from threshold voltage to charge has been taken from the chip-by-chip response curve results obtained with the internal charge injection circuitry using 1024 triggers and strobing 16 channels at a time. The chip-by-chip results are then fitted with the complementary error function, the width of which can be used to extract the input noise of the system.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{figure/Pedestals_ERFC.pdf}
\caption{Noise occupancy against threshold on a mixed strip length module. Results are shown chip-by-chip as an average over the connected channels and separated into long strips and short strips (128 channels of each per chip). The results are fitted with a complementary error function. The {\unit[50]{\%}} noise occupancy point is centred around {\unit[0]{fC}} by construction, as this point is calibrated in the response curve analysis from which the charge calibration is derived.}
\label{fig:NO_ERFC}
\end{figure}
Assuming that the shape of the noise occupancy is Gaussian, the tail of the noise occupancy can also be used to extract noise from this data by plotting the natural logarithm of the noise occupancy versus the square of the threshold. The result of this is shown in figure~\ref{fig:NO_lnOcc}. As expected, the chip-by-chip averages clearly separate into two populations depending on long-strip or short-strip channels. The linear trends observed down to very low noise occupancy demonstrate that, to a very good approximation, the tails of the noise occupancy are indeed Gaussian. In addition, the gradient of linear fits to this data can be used to extract the measured noise of the system.
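This linearity follows from the Gaussian tail of the occupancy: writing the noise occupancy as $\text{NO}(T) = \tfrac{1}{2}\,\mathrm{erfc}\!\left(T/(\sqrt{2}\,\sigma)\right)$ and using the asymptotic form $\mathrm{erfc}(x) \approx e^{-x^2}/(x\sqrt{\pi})$ for $T \gg \sigma$ gives, up to slowly varying terms,
\begin{equation}
\ln \text{NO}(T) \approx -\frac{T^2}{2\sigma^2} + \text{const},
\end{equation}
so the gradient of the linear fit is $-1/(2\sigma^2)$ and the noise is obtained as $\sigma = \sqrt{-1/(2 \times \text{gradient})}$.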
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{figure/Pedestals_lnOcc.pdf}
\caption{The natural logarithm of noise occupancy against threshold squared on a mixed strip length module. The results are fitted with linear functions. Two populations of results can be clearly seen, arising from those channels connected to short strips and those to long strips. The linear agreement shown demonstrates the consistency of the tails of the noise occupancy with a Gaussian shape.}
\label{fig:NO_lnOcc}
\end{figure}
\subsection{Comparison between Noise Occupancy and Three Point Gain}
Three methods by which noise can be extracted from module testing results have been shown above:
\begin{itemize}
\item that extracted from internal charge injection measurements (extracted at \unit[1.5]{fC} from a response curve)
\item that extracted from a complementary error function fit of the noise occupancy data
\item that extracted from a linear fit to the natural logarithm of occupancy versus the square of threshold
\end{itemize}
The latter two, although based on the same data, are driven by different parts of the distributions: the complementary error function extraction is dominated by the core of the noise occupancy S-curve, whilst the linear fit is driven by the shape of the tail of the S-curves.
Figure~\ref{fig:CompareNoise} shows a comparison of the noise extracted from the three methods. For the short strips on the module, all three methods agree very well, whilst in the long strip case the complementary error function analysis tends to return a larger noise than the other two methods, although within the statistical error bars shown the three methods generally agree everywhere.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{figure/NoiseComparison.pdf}
\caption{A comparison of different noise extraction techniques used on a mixed strip length module. Results are shown for short and long strips. The error bars shown arise from the statistical errors on the fits used to extract the results.}
\label{fig:CompareNoise}
\end{figure}
\subsection{Sensor hysteresis}
\begin{figure}
\centering
\begin{subfigure}{.8\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/hysteresis_1.pdf}
\caption{Hysteresis effect of ramping the sensor up and down with varying lengths of time holding the sensor reverse biased at \unit[-400]{V}.}
\label{fig:hysteresis_1}
\end{subfigure}
\begin{subfigure}{.8\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/hysteresis_2.pdf}
\caption{Hysteresis effect of changing the length of time for which the sensor was held unbiased between tests.}
\label{fig:hysteresis_2}
\end{subfigure}
\caption{Input noise versus reverse bias voltage as a function of the bias voltage history of the module. Before these tests, the sensor was held unbiased for at least \unit[24]{hours}. In both cases, the sensor was held at \unit[-400]{V} for four hours. The bias voltage history is shown in the legend on each plot.}
\label{fig:hysteresis}
\end{figure}
Figure~\ref{fig:hysteresis} shows the result of running internal charge injection tests when changing the reverse bias voltage of a short strip module. The applied reverse bias voltage was changed in \unit[25]{V} steps from \unit[-25 to -400]{V}, where the full depletion voltage for this sensor is \unit[-370]{V}. The bias ramp speed was \unit[2]{V/s} and tests were run immediately after each bias voltage was reached. In the first test, the bias voltage was ramped down immediately after ramping up. The time the sensor was held at \unit[-400]{V} was changed from one to eight hours. As can be seen from the plots, the shapes of both the upward and downward ramps depend on the biasing history of the sensor. In the second test, the bias voltage was held at \unit[-400]{V} for four hours, and the length of time the sensor was kept unbiased was varied from zero to eight hours. As can be seen, the shape of the downward ramp is independent of this time, but the upward ramp shape strongly depends on the time the sensor was left unbiased. Note that prior to the first test, the sensor had been unbiased for more than \unit[24]{hours}.
These variations are attributed to the increase in inter-strip capacitance (C$_{\text{int}}$), which dominates the preamplifier input noise, for reverse bias voltages below the set bias voltage. The effect is described in detail in~\cite{thesis_CK}, and is probably caused by localised charge build-up in the surface layers of the sensor. Prolonged biasing of the sensor exacerbates the effect, and hence when ramping down, the noise levels increase compared to those measured when ramping up the bias voltage.
\subsection{EMI pick up studies \label{sec:selectModule:Pickup}}
The effect of induced noise from electromagnetic fields generated by the powerboard components
was studied in more detail. In a first investigation, the module susceptibility to an EMI aggressor was tested as a function of both distance and frequency.
The module was powered by a powerboard glued onto the sensor surface in its nominal position between the two hybrids.
Noise was intentionally induced using an external coil of the same type and dimensions as the one used on powerboards.
The coil was placed with its flux aimed perpendicular to the direction of the signal wire bonds of one of the ABC130 chips, as shown in figure~\ref{fig:EMI:CoilSetup}.
A frequency generator provided the input signal at a power of \unit[-15]{dBm}.
\begin{figure}[t]
\centering
\includegraphics[width=0.6\linewidth]{figure/EMIAggressorCoil.png}
\caption{The external coil used for the EMI pick up studies and its orientation relative to the signal wire bonds.}
\label{fig:EMI:CoilSetup}
\end{figure}
In the first configuration, the coil was placed $\sim$~\unit[1]{cm} from the sides of the ABC130 wire bonds, and the frequency was scanned between 5 and \unit[50]{MHz}.
A three point gain test was run for each frequency step, and the mean chip noise was calculated for the exposed chip.
In addition, prior to turning on the frequency generator, a three-point gain test was run to establish a baseline noise.
The induced noise from the coil was assumed to be uncorrelated with the existing noise sources, and the coil contribution was therefore calculated by subtracting the baseline noise in quadrature:
\begin{equation}
\sigma_{\text{EMI}} = \pm\sqrt{\sigma_{\text{coil}}^{2} - \sigma_{\text{baseline}}^2}
\end{equation}
\noindent
where the sign is positive in cases where the baseline noise was less than the noise measured with the coil and negative otherwise;
the latter is possible in cases where very little noise was induced into the module and, due to statistical fluctuations, the noise measured with the coil was less than the baseline.
As shown in figure~\ref{fig:selectModule:pickup_bandpass}, the increase in noise is most pronounced for frequencies in the range of 10 to \unit[25]{MHz},
and has a strong dependence on proximity to the module.
Noise values are shown separately for even and odd channels, corresponding to inner and outer signal bonds.
For distances beyond~\unit[50]{mm}, the induced noise from the coil falls to zero.
The general shape of the frequency dependence reflects the ABC130 front-end amplifier bandpass.
The top wirebond rows have higher pickup than the bottom ones, indicating either an incomplete screening effect, or dependence on the wirebond loop area.
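The signed quadrature subtraction above can be expressed compactly (a hypothetical helper, not the analysis code used for these results):
\begin{verbatim}
import math

def emi_noise(sigma_coil, sigma_baseline):
    """Quadrature-subtract the baseline; sign marks underfluctuations."""
    diff = sigma_coil**2 - sigma_baseline**2
    return math.copysign(math.sqrt(abs(diff)), diff)

print(emi_noise(700.0, 600.0))  # ~ +361 (units of ENC)
print(emi_noise(590.0, 600.0))  # small negative value
\end{verbatim}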
\begin{figure}[ht]
\begin{subfigure}{.48\linewidth}
\centering
\includegraphics[width=\linewidth]{figure/S19_ChipID_ABC130_M2_Hyb0_HCC13_RC_1_Row_1_NOISEENC_vs_distance.pdf}
\end{subfigure}
\begin{subfigure}{.48\linewidth}
\centering
\includegraphics[width=\linewidth]{figure/M16_ChipID_ABC130_M3_Hyb1_RC_1_Row_1_NOISEENC_vs_distance.pdf}
\end{subfigure}
\caption{Measured noise values as a function of the frequency of applied EMI fields, and distance from the external powerboard coil.}
\label{fig:selectModule:pickup_bandpass}
\end{figure}
The EMI spectrum from the powerboard was measured with an open loop probe and a network analyzer, both for the board with and without the shield box (figure~\ref{fig:EMIattenuation}). The primary emission spectrum has peaks descending with frequency, which are harmonics of the \unit[2]{MHz} driving frequency. The shield box provides an attenuation that grows with frequency, consistent with skin effect properties. The attenuation is direction-dependent: it is about \unit[60]{dB} above and below the powerboard, and about \unit[25]{dB} on the side. Although the \unit[2]{MHz} emission is significant, there is a strong reduction of the emission at the peak of the amplifier's acceptance. The general features of the spectrum have been simulated in SPICE using a model with a buck converter, as shown in figure~\ref{fig:SPICE_EMI}, where the measured EMI noise on a module is superimposed on the simulated power spectrum of the coil.
\begin{figure}
\centering
\begin{subfigure}{.85\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/PB-attenuation.png}
\caption{Measurements of powerboard EMI power spectrum with an open loop probe with and without the shield box.}
\label{fig:EMIattenuation}
\end{subfigure}
\begin{subfigure}{.90\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/ModulePickup_SimulatedPowerPower.png}
\caption{Simulated power spectrum from the coil in the powerboard circuit and the module noise pickup.}
\label{fig:SPICE_EMI}
\end{subfigure}
\caption{Powerboard-induced EMI measurements and simulations as a function of frequency, illustrating the effect of the powerboard shield (a) and the presence of EMI harmonics in the bandpass of the ABC130 amplifiers (b).}
\end{figure}
The measurements confirm the existence of many EMI emission harmonics in the front-end amplifier's bandpass, which can contribute to the readout noise. The strong attenuation provided by the shield box is essential for the unusual placement of the DC-DC converter directly on the module in close proximity to both sensor strips and the front-end circuitry.
\subsection{Effect of powerboard on module noise \label{sec:selectModule:PBEffect} }
During the ABC130 barrel module programme, each module was tested twice: once after hybrids were mounted, by powering hybrids directly, and again after its powerboard had been mounted, by powering hybrids through the powerboard as intended in the detector. While the additional test was performed to attribute causes of electrical failures during testing to either powerboard or hybrids, the repeated measurements allowed for a comparison of the module performance before and after its powerboard was mounted.
Figure~\ref{fig:PBnoise1} shows the ratio of noise measured before and after powerboard attachment for each sensor strip on all of the four strip segments.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figure/Ratios_LBL_12.pdf}
\caption{Strip-by-strip ratios of channel noise with and without attached powerboard, per sensor segment. While the outer sensor strips show a mostly flat distribution of noise ratios, the inner two segments, which are partially covered by the powerboard, show an increased noise after powerboard attachment in the vicinity of the shield box. Hybrid 2 shows a distinct peak next to the shield box, hybrid 1 shows a flat increase next to it.}
\label{fig:PBnoise1}
\end{figure}
A similar pattern was found for most of the produced barrel modules: while the outer sensor segments, which are not covered by the powerboard, showed mostly flat noise distributions, the inner module segments, which are partially covered by the powerboard, consistently showed a noise increase in the vicinity of the shield box.
In order to compare the noise increase after powerboard attachment, test results from all produced modules were combined. Since the absolute ratio depends on the environmental conditions during testing, results from different modules were not combined using the absolute noise ratios. Instead, an average noise ratio was determined for each strip segment in each test using a linear fit. This average was then used to identify, on a strip-by-strip basis, channels whose noise ratio was more than \unit[2.5]{\%} higher than the average ratio for that strip segment. Each channel exceeding this threshold was added to a map, which was then compared to the position of the powerboard on the modules (see figure~\ref{fig:PBnoise2}); a minimal sketch of this selection logic is given below.
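The following sketch illustrates the selection logic described above under stated assumptions; the function name is hypothetical, the linear fit is assumed to be a first-order polynomial across the segment, and this is not the actual analysis code.
\begin{verbatim}
# Minimal sketch of the per-segment excess-channel selection
# (assumed logic, hypothetical names; not the actual analysis code).
import numpy as np

def flag_excess_channels(ratios, threshold=0.025):
    # ratios: numpy array of per-strip noise ratios for one segment
    strips = np.arange(len(ratios))
    # average ratio from a linear (first-order polynomial) fit
    slope, intercept = np.polyfit(strips, ratios, 1)
    average = slope * strips + intercept
    # flag strips more than 2.5% above the fitted average ratio
    return strips[ratios > (1.0 + threshold) * average]
\end{verbatim}
The flagged channel indices from each test can then be accumulated into a map such as the one shown in figure~\ref{fig:PBnoise2}.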
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figure/Comparison_excess.pdf}
\caption{Map of module channels which showed an excess $(\unit[> 2.5]{\%})$ compared to the average noise ratio on that strip segment. Blue, orange and violet indicate different module production sites, with each colour variation corresponding to the contribution from one individual module. Combined results from different module production sites show a distribution of noise increase similar to that of an individual module (see figure~\ref{fig:PBnoise1}). Inner sensor segments show an increased noise around the shield box, while outer segments show only randomly distributed noise.}
\label{fig:PBnoise2}
\end{figure}
The combination of test results from different modules and construction sites confirmed that the observed noise increase was a systematic effect caused by the presence of the powerboard.
Based on these measurements, the powerboard design was modified to increase its shielding properties.
\subsection{Irradiated modules}
\label{subsec:irradModules}
In order to investigate the radiation tolerance of the ABC130 barrel module, a mixed strip length module was irradiated with protons at the CERN PS to a fluence of $\unit[8\times10^{14}]{n_{eq}/cm^2}$, corresponding to the maximum NIEL fluence expected at the end of lifetime in the short-strip region of the ITk strip barrel. The same charge-injection-based measurements were performed on this module; the results can be seen in figure~\ref{fig:irrad}, which shows the noise and gain of the module before and after irradiation. Only five chips are shown due to issues during the assembly process.
As can be seen, the measured noise on the module increases by about $\unit[20]{\%}$. The specification requires that, at end of life, the detector have a charged-particle detection efficiency of at least $\unit[99]{\%}$ and an occupancy from noise hits of no more than $10^{-3}$. This approximately equates to a signal-to-noise ratio (SNR) of at least 10 throughout the life of the experiment. Given the conservative estimate of the charge collection at end of life of $\unit[1.6]{fC}$ ($\unit[10000]{e^-}$) and the measured noise of $\unit[950]{e^-}$ for the short strips on this module, an SNR of 10.5 is achieved and the end-of-life requirement is met. The end-of-life requirement has been further studied in test-beam measurements~\cite{testbeam}.
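For reference, the quoted value follows directly from the ratio of the collected charge to the measured equivalent noise charge:
\begin{equation}
\mathrm{SNR} = \frac{Q_{\mathrm{coll}}}{\mathrm{ENC}} \approx \frac{10000\,e^-}{950\,e^-} \approx 10.5\,.
\end{equation}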
\begin{figure}
\centering
\begin{subfigure}{.48\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/Noise_LS3_unirrad.pdf}
\caption{Noise before irradiation}
\label{fig:irrad2a}
\end{subfigure}
\begin{subfigure}{.48\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/Gain_LS3_unirrad.pdf}
\caption{Gain before irradiation}
\label{fig:irrad2b}
\end{subfigure}
\begin{subfigure}{.48\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/Noise_LS3_irrad.pdf}
\caption{Noise after irradiation}
\label{fig:irrad3a}
\end{subfigure}
\begin{subfigure}{.48\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/Gain_LS3_irrad.pdf}
\caption{Gain after irradiation}
\label{fig:irrad3b}
\end{subfigure}
\caption{Measured noise and gain on a mixed strip length ABC130 barrel module before and after irradiation with protons to $\unit[8\times10^{14}]{n_{eq}/cm^2}$ at the CERN PS. The channels under the hybrid (red) are connected to short strips whilst those running away from the hybrid (black) are connected to long strips formed by ganging together short strips. Measurements before irradiation were performed at room temperature whilst measurements after were performed at $\unit[-15]{^{\circ}C}$.}
\label{fig:irrad}
\end{figure}
\subsection{Bad channel identification}
As described in section~\ref{chartests}, the Three Point Gain test can be used to determine electrical characteristics of module channels and assess their quality. This section investigates different types of channel defects and how their identification can be optimised.
\subsubsection{Electrical Shorts} \label{elshorts}
Due to the scarcity of naturally occurring strip defects in the prototype sensors (see figure~\ref{fig:badchannels}), defects were intentionally introduced into a short-strip module in the form of additional wire bonds on some of the strips. Three types of defects were added, corresponding to different scenarios that could arise:
\begin{itemize}
\item bonds from strip to DC pad (added to three strips) to simulate shorts between the strip and implant (pinholes)
\item bonds between neighbouring strips (added to four pairs of strips) to simulate shorts at the sensor
\item bonds between non-neighbouring strips (added to four pairs of strips) to simulate shorts between ABC130 input channels or signal wirebonds
\end{itemize}
Figure~\ref{defectimages} shows an example of each type of defect.
\begin{figure}
\centering
\includegraphics[scale=0.375]{figure/defects/defect_images.png}
\caption{Examples of the three types of defects that were added to a short-strip module for the study in section~\ref{elshorts}. The image on the left shows a simulated pinhole with a wirebond between AC and DC pads. The images on the middle and right show simulated AC shorts for neighbouring and non-neighbouring channels respectively.}
\label{defectimages}
\end{figure}
The Three Point Gain test was run after these defects were added. The three strips bonded to simulate pinholes all behaved normally. All four pairs bonded to simulate shorts to neighbouring strips showed similar behaviour, in which one channel has a high gain and the other a low or nearly zero gain. The threshold scans in figure~\ref{key19} show an example of this behaviour. Three of the four pairs of channels bonded to simulate shorts between non-neighbouring strips showed similar behaviour. The one pair that behaved differently may have been affected by an issue with the wire bonding.
\begin{figure}
\centering
\includegraphics[scale=0.25]{figure/defects/acshort_scurve_newer.png}
\caption{Examples of threshold scans for each channel of a pair of bonded neighbouring strips, shown in the left and right columns. The scans correspond to injected charges of {\unit[0.5]{fC}} (top), {\unit[1.0]{fC}} (middle), and {\unit[1.5]{fC}} (bottom). The red curves correspond to one of the channels from the pair; the neighbouring channels are shown in blue for reference.}
\label{key19}
\end{figure}
These results indicate that pinholes cannot be identified from Three Point Gain tests, while shorts between strips can be identified from an obvious pattern. Pinholes may, however, become visible in these tests after irradiation, which will require further investigation.
\subsubsection{Unbonded Classification and Bias Voltage}
The motivation for this study was to investigate running a Three Point Gain test at a reverse bias voltage below full depletion in order to identify channels whose signal bonds have become disconnected. Lowering the bias voltage of the sensor should increase the overall noise of the module, making it more distinct from the hybrid-level noise and thereby helping to identify unbonded channels.
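The expected behaviour can be made explicit with the usual approximations, quoted here for illustration only: below full depletion the depleted thickness grows roughly as the square root of the bias voltage, so the strip capacitance, and with it the capacitive contribution to the noise, rises as the voltage is lowered,
\begin{equation}
C_{\mathrm{strip}}(V) \propto \frac{1}{d(V)} \propto \frac{1}{\sqrt{V}} \quad (V < V_{\mathrm{dep}}), \qquad \mathrm{ENC} \approx a + b\,C_{\mathrm{in}},
\end{equation}
where $a$ and $b$ are amplifier-dependent constants. Bonded channels therefore become noisier at low bias, while unbonded channels, which are decoupled from the strip capacitance, do not.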
The classification of channels based on noise measurements is anticipated to be more ambiguous for future ABCstar modules due to a smaller dependence of the noise on the input capacitance. For illustration, this effect was studied for an R0 module with an ABC130 chip set (see figure~\ref{r0histogram}), which features one row of short strips (\unit[1.9]{cm} long)~\cite{R0ref}.
\begin{figure}
\centering
\includegraphics[scale=0.5]{figure/defects/r0histogram.png}
\caption{Histograms of the noise for channels from an R0 module with values from hybrid tests in red and the four other colours corresponding to different strip rows. Some overlap can be seen in the tails of the input noise distributions for the innermost strip row in blue and the hybrid values.}
\label{r0histogram}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.5]{figure/defects/below_plot.png}
\caption{Comparison of the input noise between chip average (blue) and an unbonded channel (red) on the same chip as a function of the applied reverse bias voltage. The difference in terms of the number of standard deviations is shown below.}
\label{key26}
\end{figure}
Three Point Gain tests were run at reverse bias voltages between \unit[-1]{V} and \unit[-400]{V}. The tests were conducted using a short-strip module on which a few channels had been intentionally left unbonded for comparison. An example of the noise difference between bonded and unbonded channels is shown in figure~\ref{key26}. The optimal value appears to be around \unit[-25]{V}, where the noise increases substantially while the results remain reasonable, without the large channel-to-channel noise variations that appear at lower bias voltages.
\subsection{Current increase with Total Ionising Dose (TID bump)}
\label{sec:TIDBUMP}
The \unit[130]{nm} process used to fabricate the ABC130 chip set is known to be sensitive to certain radiation effects~\cite{TIDfaccio}. In particular, NMOS transistors fabricated in this process show an increase in leakage current when exposed to ionising doses of radiation up to approximately \unit[1]{Mrad} (the full lifetime dose is simulated to be \unit[53.2]{Mrad}). Continued exposure to ionising radiation beyond \unit[1]{Mrad} then gradually reduces the leakage current back towards its nominal value. This phenomenon has been named the ``TID bump''.
The size of the effect depends both on the characteristics of the transistor and on the characteristics of the radiation. The use of enclosed-layout transistors in the analogue blocks of the chips, such as those performing signal amplification and discrimination, completely negates the impact of the TID bump for those transistors. However, the impact is considerable for the digital regions of the chips, where the transistors do not use an enclosed layout. For these portions of the chips, it is crucial to understand the impact of the TID bump and its dependence on the environmental conditions: too much leakage current could result in an inability to properly power the chips or to cool the detector.
Chips were tested while being cooled and irradiated at dose rates compatible with those expected during HL-LHC operation: they were cooled to between \unit[-25]{$^{\circ}$C} and \unit[0]{$^{\circ}$C} and irradiated at dose rates ranging from \unit[0.6]{krad/h} to \unit[2.5]{krad/h}, covering the expected range for chips in different positions during nominal HL-LHC operation. The observed increase in leakage currents in these tests ranged between \unit[$\sim30$]{\%} and \unit[$\sim160$]{\%}, with smaller increases at higher temperatures and lower dose rates~\cite{TDRs} (see figure~\ref{fig:TIDbump}).
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{figure/TID_bump.pdf}
\caption{An example of the ``TID bump'': an initial increase in the digital current supplied to an ABC130 chip when it is irradiated; the current reaches a maximum before {\unit[1]{Mrad}} of Total Ionising Dose, then decreases again, approaching its initial value as the irradiation continues. The plot shows the digital current of the chip as it is being irradiated at {\unit[2.5]{krad/h}} and cooled to {\unit[-10]{$^{\circ}$C}}, conditions chosen to be similar to those expected for the ITk strip tracker chips during HL-LHC operation. The irradiation was performed using a Cobalt-60 source.}
\label{fig:TIDbump}
\end{figure}
An empirical function was fit to the data in order to describe the dose rate and
temperature dependence of the bump for use in thermo-electrical models of
the detector.
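As an illustration of such a fit only, the sketch below fits a generic bump-shaped parametrisation to hypothetical current-versus-dose values; the actual empirical function, its parameters, and the measured data are not reproduced here.
\begin{verbatim}
# Illustrative sketch only: generic bump parametrisation and fit,
# with hypothetical placeholder data (dose in Mrad).
import numpy as np
from scipy.optimize import curve_fit

def bump(dose, i0, amp, d_peak):
    # baseline current i0 plus a bump peaking at dose = d_peak
    x = dose / d_peak
    return i0 * (1.0 + amp * x * np.exp(1.0 - x))

dose = np.array([0.0, 0.2, 0.5, 1.0, 2.0, 5.0, 10.0])
current = np.array([0.20, 0.27, 0.31, 0.33, 0.30, 0.24, 0.21])
params, cov = curve_fit(bump, dose, current, p0=[0.2, 0.7, 1.0])
\end{verbatim}
In practice, the fitted parameters would themselves be parametrised as functions of dose rate and temperature for use in the thermo-electrical models.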
Furthermore, tests of multiple chips revealed that the size of the bump depends not only on the environmental conditions: there was also significant variance in the size of the bump measured across different chips tested under the same conditions. Significant variations in the size of the TID bump were seen both between different chips produced in the same batch and between chips produced in different batches.
Ultimately, in order to mitigate the current increase, a strategy of pre-irradiation was chosen: chips undergo a high dose-rate irradiation up to a dose of \unit[$\sim10$]{Mrad}, well beyond the TID bump. Using this method, the chips have already passed the bump, and their leakage current has returned to its pre-irradiation value. A number of tests performed on annealed chips confirmed that they would not undergo a secondary bump after long room temperature annealing periods between the pre-irradiation and operation at the HL-LHC. These tests included up to 14 months with the chip stored at \unit[80]{$^{\circ}$C} and up to 11 months with the chip left powered and running at room temperature. These chips were irradiated to \unit[8]{Mrad} using Co-60 and re-irradiated at room temperature at \unit[0.7]{Mrad/h} using a \unit[3]{kV} tungsten X-ray tube. The effect of this pre-irradiation procedure can be seen in figure~\ref{fig:TIDcomparison}.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{figure/summary_ratio.pdf}
\caption{Comparison of the current increase for non-pre-irradiated and pre-irradiated ASICs with various annealing processes: chips were either left unpowered in an oven at {\unit[80]{$^{\circ}$C}} or powered and read out constantly at room temperature. Independent of the annealing applied, pre-irradiated ASICs showed a current increase of less than {\unit[50]{\%}}, significantly less than the primary TID bump. Results are shown for multiple ASICs for each annealing option; all chips were taken from the same wafer.}
\label{fig:TIDcomparison}
\end{figure}
\section{Conclusion and Outlook}
Within the scope of the ABC130 barrel module programme, about 100 modules were constructed at the ten institutes foreseen to assemble barrel modules for the future ITk strip tracker. While a small number of LS modules was built, the majority of assembled modules were SS modules, as their more complicated assembly and readout were considered more challenging. Their construction allowed assembly tools and test procedures to be developed for future module construction.
The barrel module prototyping programme demonstrated that the established procedures for assembly and quality control ensured the production of modules within the required specifications. The assembly procedures and the developed tooling were found to produce the required module geometries, as evidenced by the outcome of metrological surveys. The electrical testing procedures confirmed that the signal-to-noise ratio required by the \mbox{ATLAS} strip tracker could be achieved for the subsequent module generation. Areas for improvement, such as the increased noise under the powerboard shield box, were identified and corrected in subsequent iterations.
The assembled modules were used to construct complete prototype versions of
higher-level detector structures called staves. Staves consist of a carbon fibre
core with integrated cooling, power, and data I/O infrastructures on both sides, onto which
13 or 14 modules per side are glued and electrically connected~\cite{stave}.
The availability of 100 barrel modules allowed the assembly of one fully loaded
stave, three half loaded staves (where only one side of the stave was fully
populated) and one partially loaded double-sided stave (where both stave sides
were partially populated with modules). The availability of several populated stave sides permitted tests of fully assembled structures as well as system tests, in which potential interactions of neighbouring staves could be studied. The extensive test programme performed on staves~\cite{Staves} goes beyond the scope of this publication.
The scale and results of this prototyping programme demonstrated a high degree of maturity for all involved components, as well as for the module assembly and testing processes. The major driver for the next design evolution was an increase of the trigger rate requirement to the MHz range. To address this challenge, a new readout architecture was necessary, instigating the development of the subsequent generation of readout chips, called star chips. While ABC130 chips were read out in chains of five chips in series, the star chip set was developed to allow the direct readout of each individual ASIC, increasing the bandwidth. A new generation of components based on the star chip design was developed using the findings from the ABC130 barrel module programme and will be developed into modules for the ITk strip tracker.
\section*{Acknowledgements}
Individual authors were supported in part by the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. This study was supported by National Key Programme for S\&T Research and Development (Grant No.: 2016YFA0400101). The work at SCIPP was supported by the Department of Energy, grant DE-SC0010107. This work was supported by the Science and Technology Facilities Council [grant number ST/R002592/1], the Polish Ministry of Science and Higher Education, Grant No.: DIR/WK/2018/04, the Canada Foundation for Innovation and the Natural Sciences and Engineering Research Council of Canada and the Australian Research Council.
\bibliographystyle{unsrt}